Cervical kinematic change after posterior full-endoscopic cervical foraminotomy for disc herniation or foraminal stenosis

Objective: Posterior full-endoscopic cervical foraminotomy (PECF) is a minimally invasive surgical technique for cervical radiculopathy. Because posterior cervical structures such as the facet joint are minimally disrupted, cervical kinematics are minimally altered. However, a larger resection of the facet joint is required for cervical foraminal stenosis (FS) than for disc herniation (DH). The objective was to compare cervical kinematics between patients with FS and DH after PECF.

Methods: Fifty-two consecutive patients (DH, 34 vs. FS, 18) who underwent PECF for single-level radiculopathy were retrospectively reviewed. Clinical parameters (neck disability index, neck pain, and arm pain) and segmental, cervical, and global radiological parameters were compared at postoperative 3, 6, and 12 months, and yearly thereafter. A linear mixed-effects model was used to assess interactions between groups and time. Any occurrence of significant pain was recorded during a mean follow-up period of 45.5 months (range 24-113 months).

Results: Clinical parameters improved after PECF, with no significant differences between groups. Recurrent pain occurred in 6 patients, and surgery (PECF, anterior discectomy and fusion) was performed in 2 patients. The pain-free survival rate was 91% for DH and 83% for FS, with no significant difference between the groups (P = 0.29). Radiological changes did not differ between groups (P > 0.05). Segmental neutral and extension curvature became more lordotic. Cervical curvature became more lordotic on neutral and extension X-rays, and the range of cervical motion increased. The mismatch between the T1 slope and cervical curvature decreased. Disc height did not change, but the index level showed degeneration at postoperative 2 years.

Conclusion: Clinical and radiological outcomes after PECF did not differ between DH and FS patients, and kinematics significantly improved. These findings may be informative in a shared decision-making process.

Introduction

In patients with radiculopathy due to foraminal disc herniation or stenosis, surgery is recommended when non-surgical treatment is not effective [1][2][3][4]. Current surgical options include anterior cervical discectomy and fusion (ACDF), artificial disc replacement, and posterior microforaminotomy [4][5][6][7]. Although ACDF and artificial disc replacement are well-established and popular surgical methods, surgery without instrumentation that preserves cervical motion would be a good alternative to those methods [8,9]. Clinical outcomes were not found to differ between posterior foraminotomy and ACDF during a 5-year follow-up [8]. However, disruption of spinal kinematics and subsequent re-operation are concerns after foraminotomy [10,11]. Although a systematic review in 2016 showed a similar reoperation rate between ACDF and posterior foraminotomy (4% vs. 6%) [12], a study conducted using data from the national Swedish spine register (Swespine) showed that the reoperation rate was significantly higher after posterior foraminotomy than after ACDF (6% vs. 1%, P < 0.01) [8]. Nonetheless, posterior foraminotomy has many potential advantages, such as achieving similar clinical outcomes at lower medical cost and with a lower incidence of adjacent segment disease than ACDF [12,13].
However, concerns remain regarding the unfavorable consequences of partial facetectomy, such as progression of cervical kyphosis, loss of cervical lordosis, and re-operation [10]. ACDF continues to be preferred over foraminotomy as a surgical option, as shown in the Swespine study, in which 3721 of 4368 (85%) patients underwent ACDF, while 647 of 4368 (15%) underwent posterior foraminotomy [8]. Recently, posterior full-endoscopic cervical foraminotomy (PECF) emerged as a minimally invasive surgical technique, and its influence on cervical kinematics may not be as significant as that of open surgery [14][15][16][17][18]. A systematic review in 2019 showed a similar reoperation rate (3.9% vs. 6.9%) and complication rate (7.8% vs. 4%) between ACDF and minimally invasive posterior cervical foraminotomy [6]. However, some problems remain. Despite the minimally invasive nature of PECF, injury to the facet joint and musculature may increase the likelihood of progression to cervical kyphosis or loss of lordosis in patients with foraminal stenosis (FS) compared with patients with disc herniation (DH), because more extensive removal of the facet joint is necessary in patients with FS than in those with DH [9,10,19]. Therefore, the present study was planned to compare cervical kinematics between patients with DH and those with FS after PECF.

Funding: This work was supported by the New Faculty Startup Fund from Seoul National University, by grant no. 04-2021-0540 from the Seoul National University Hospital research fund, and by the Doosan Yonkang foundation (800-20210527). No additional external funding was received for this study. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

Competing interests: The corresponding author (CHK) is a consultant of RIWOspine GmbH. All other authors declare that they have no conflicts of interest concerning the materials/methods used in this study or the findings described in this paper. No benefits in any form have been or will be received from any commercial party related directly or indirectly to the subject of this manuscript. This does not alter our adherence to PLOS ONE policies on sharing data and materials.

Patients

After obtaining permission from the institutional review board, the prospectively collected medical records of patients who underwent PECF between June 2010 and February 2018 were retrospectively reviewed. The prospective collection of clinical and radiological data and the retrospective review of the data were approved by the Institutional Review Board (No. H-0507-509-153 and 2101-080-118, respectively). Informed consent was obtained from all individual participants for the prospective collection of data. However, the requirement for informed consent was waived for the retrospective review because this study involved no more than minimal risk and would not adversely affect the rights and welfare of the patients. This study included patients 1) with single-level unilateral radiculopathy due to cervical DH or FS, 2) with a positive Spurling's test, 3) with disc space narrowing of not more than 50% [20], 4) with complete preoperative clinical and radiological data, and 5) with postoperative follow-up for more than 2 years.
Patients with 1) previous cervical or lumbar spinal surgery, 2) malignancy, inflammatory joint disease, trauma, psychiatric disease, or neuromuscular disease, and 3) ossification of the posterior longitudinal ligament were excluded [15,19,21]. For DH, a foraminal soft disc herniation was confirmed with computed tomography (CT) and magnetic resonance imaging (MRI), without any bony foraminal stenosis. All patients with bony foraminal stenosis confirmed by CT and MRI were classified as FS. Finally, 52 patients (DH, 34 vs. FS, 18) were included in this study.

Surgical techniques

The surgical techniques of PECF were similar to those previously reported [14][15][16][17][22]. PECF was performed in the prone position under general anesthesia. The surgical level was identified with C-arm fluoroscopy, and an 8-mm skin incision was made above the "V-point," which is formed by the lamina, the descending facet, and the ascending facet [14][15][16][17]. The obturator (6.9 mm outer diameter), working tube (8.0 mm outer diameter), and endoscope (4.1-mm working channel, Vertebris®, Richard Wolf GmbH, Knittlingen, Germany) were sequentially introduced through the skin incision [14][15][16][17]. Laminectomy and facetectomy were performed using an endoscopic drill under direct visualization. The size of the bone drilling depended on the size and location of the herniated disc material and the extent of stenosis; it was usually within a radius of 3-4 mm around the V-point for soft disc herniations and 5-6 mm for foraminal stenosis [14][15][16][17][19]. The herniated disc was removed through the axilla or shoulder of the nerve root. Apart from the size of bone drilling and the removal of disc material, there was no difference between the surgical procedures for DH and FS. Decompression and free movement of the nerve root were confirmed at the shoulder/axilla and the superolateral/inferolateral corner of the nerve root. A closed suction drain was inserted through the working tube and the skin was closed. Patients were encouraged to walk on the day of surgery without a neck brace and were discharged the next day without limitation of neck motion [15,19].

Clinical evaluation

Information on weight, height, occupation, smoking status, and diabetes was collected during a preoperative interview with a nurse. A questionnaire including the Neck Disability Index (NDI, out of 50) [23] and numerical rating scales of neck pain (Neck-NRS, out of 10) and arm pain (Arm-NRS, out of 10) was filled out by every patient preoperatively. After surgery, patients were scheduled to visit the clinic at 1, 3, 6, and 12 months and yearly thereafter, and they filled out the same questionnaire at every visit. Occupations were categorized into three categories according to occupational activity (OA): high OA, intermediate OA, and low OA [24]. These data were prospectively recorded in the hospital's electronic medical records system. During follow-up, the re-appearance of significant pain was recorded and defined as an "adverse event" in this study. The pain was first managed with medication for 1-2 weeks, and interventions such as epidural injection were performed by pain physicians if medication did not work. When those measures failed to control pain, surgery was recommended. Patients were followed up for 45.5 ± 20.6 months (range, 24-113 months).
Radiological evaluation

Preoperatively, patients underwent MRI, CT, and plain X-rays, which included cervical lateral neutral, flexion, and extension X-rays and whole-spine anterior-posterior and lateral X-rays. At the follow-up clinic visits, X-rays were taken at 3, 6, and 12 months and yearly thereafter. All X-rays were taken using the same protocol: the patients were asked to stand and look straight ahead for the neutral-position and whole-spine radiographs, and to flex and extend their neck as far as they could tolerate for the flexion and extension X-rays [14,15]. The radiological parameters were evaluated in three aspects: local, regional, and global (Fig 1). Locally, the index-level segmental neutral angle (SA-N), segmental flexion angle (SA-F), segmental extension angle (SA-E), segmental range of motion (S-ROM), anterior disc height (aH), posterior disc height (pH), and cervical degenerative index (CDI) at the surgical level were assessed [20]. The magnification ratio was assessed by measuring the anterior-posterior lengths of the cranial vertebral bodies on plain X-rays and computed tomography (CT), and the ratio was used to calculate the actual aH and pH. Regionally, the C2 to C7 sagittal vertical axis (C27-SVA), T1 slope (T1S), cervical neutral curvature from C2-7 (CA-N), cervical flexion curvature (CA-F), cervical extension curvature (CA-E), difference between the T1 slope and CA-N (T1S-CA), and cervical range of motion (C-ROM) were evaluated. Globally, the C7 sagittal vertical axis (C7-SVA) and T1 pelvic angle (TPA) were evaluated (Fig 1). The measurements and analysis were performed on 150% magnified images using measuring tools in the institution's picture archiving and communication system (Marosis, version 5483, Infinitt Healthcare, Seoul, Korea), which was run in a Microsoft Windows environment (Microsoft Corp., Redmond, WA, USA) [25]. All of the above parameters were measured by an independent researcher (blinded for review). The methods of measurement are described in detail in Fig 1. The CDI was scored 0-3 for each of the following 4 categories: narrowing of the disc space, presence of bony sclerosis, osteophytes, and olisthesis [20]. A CDI of 0 means no degeneration, while a score of 12 indicates severe degeneration [20].

Statistical analysis

The patients were divided into two groups, DH (n = 34) and FS (n = 18), and the variables were summarized using mean (standard deviation) or frequency (proportion). After performing normality tests, these variables were compared between groups using the t-test or chi-square test, as appropriate. The Kaplan-Meier method was used to assess the event-free survival time, and the log-rank test was used for between-group comparisons. A linear mixed-effects model was used to assess the changes in clinical parameters (NDI, Neck-NRS, and Arm-NRS) and radiological parameters. The fixed effects were group, time, the interaction between group and time, age, diabetes, smoking, body mass index (BMI, kg/m²), and sex. The random effect was a random intercept. The interaction between group and time was tested at a 0.01 significance level to control the rate of false-positive interactions due to the number of parameters tested. A post hoc analysis using the Bonferroni method was planned for significant time effects: differences in clinical and radiological parameters between before and after the operation, and between 3 months and the other time points after the operation. A minimal sketch of this modeling approach is given below.
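The original analysis was run in SAS; purely as a hedged illustration, an equivalent specification in R using the lme4, lmerTest, and survival packages might look like the following. All variable and data-frame names (dat, ndi, grp, visit, dm, id, fu_months, event) are hypothetical, not taken from the paper.

# Illustrative R translation of the analysis described above (the study used SAS 9.4).
library(lme4)      # linear mixed-effects models
library(lmerTest)  # p-values for fixed effects
library(survival)  # Kaplan-Meier estimates and log-rank test

# Mixed model: group, follow-up time point (coded as a factor), their interaction,
# and covariates as fixed effects; a per-patient random intercept as the random effect.
fit <- lmer(ndi ~ grp * visit + age + dm + smoking + bmi + sex + (1 | id),
            data = dat)
summary(fit)  # the grp:visit rows test the group-by-time interaction (alpha = 0.01)

# Event-free survival: Kaplan-Meier curves by group, compared with the log-rank test.
km <- survfit(Surv(fu_months, event) ~ grp, data = dat)
survdiff(Surv(fu_months, event) ~ grp, data = dat)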
With this Bonferroni correction, the significance levels for the post-hoc tests were 0.006 and 0.007 for clinical and radiological parameters, respectively. All statistical analyses were performed using SAS® version 9.4 (SAS Institute Inc., Cary, NC, USA).

Results

The characteristics of the patients are described in Table 1. The mean age of the DH group was 47.7 years and that of the FS group was 46.3 years (P = 0.28). C6-7 was the most common surgical level, followed by C5-6, C7-T1, and C4-5; the distribution was not statistically different between groups (P = 0.89). BMI was similar between the groups (P = 0.09), and the proportions of patients with smoking or diabetes mellitus were not significantly different between the groups (P = 1.00 and 0.11, respectively). Most patients (43/52, 83%) worked in jobs with intermediate OA, and the distribution was not significantly different between the groups (P = 0.60). There were no significant differences in clinical and radiological parameters (P > 0.05), except for Arm-NRS, which was significantly higher in the DH patients than in the FS patients (P = 0.002), and T1S, which was higher in the FS patients than in the DH patients (P = 0.01). Table 2 and Fig 2 show the observed mean values (standard deviation) and the mean values with 95% confidence intervals (CIs) adjusted by OA, BMI, age, and sex, respectively. The adjusted means of the NDI and Neck-NRS were significantly different across the time points (P < 0.001) in each group, and the adjusted mean values of NDI and Neck-NRS were lower in DH patients than in FS patients by 1.51 (95% CI: -0.62 to 3.65; P = 0.16) and 0.25 (95% CI: -0.33 to 0.84; P = 0.39), respectively, but these differences were not statistically significant. The NDI decreased significantly at postoperative 1 month (P < 0.006) and was lowest at postoperative 3 and 4 years (Table 2 and Fig 2). Neck-NRS decreased significantly at postoperative 1 month, and there was no further change during the follow-up period (Table 2 and Fig 2). The interaction between group and time was significant for Arm-NRS (P = 0.002), which may have been caused by different patterns of change within the two groups, as the Arm-NRS was higher in the DH group before the operation but slightly higher in the FS group after the operation. The adjusted mean of Arm-NRS decreased significantly after the operation (adjusted P < 0.006) in each group, but there was no further significant change during the follow-up period.

Fig 1. Radiological measurements. a. The Cobb method was used to measure the segmental angle between the superior and inferior endplates of the index disc on neutral (SA-N), flexion (SA-F), and extension (SA-E) X-rays. Segmental range of motion (S-ROM) was calculated by subtracting SA-E from SA-F. The anterior and posterior disc heights were measured on X-rays. The magnification ratio was assessed by measuring the anterior-posterior lengths of the cranial vertebral bodies on plain X-rays and computed tomography, and the ratio was used to calculate the actual anterior disc height (aH) and posterior disc height (pH). b. Cervical lordosis was measured using the Cobb method from the inferior endplate of C2 to the inferior endplate of C7 on neutral (CA-N), flexion (CA-F), and extension (CA-E) X-rays. Cervical range of motion (C-ROM) was calculated by subtracting CA-E from CA-F. The sagittal vertical axis was defined as the horizontal distance between the vertical plumb line from the center of the C2 body and the posterosuperior corner of C7 (C27-SVA). The T1 slope (T1S) was measured between the horizontal line and the extension of the line along the superior endplate of T1. c. The sagittal vertical axis was measured from the center of the C7 vertebral body to the posterosuperior corner of the S1 vertebral body (C7-SVA). The T1 pelvic angle (TPA) was defined as the angle between the line from the centroid of the bicoxofemoral axis to the centroid of T1 and the line from the centroid of the bicoxofemoral axis to the middle of the S1 superior endplate.

Clinical outcomes

The patients in this study did not experience any direct surgery-related complications such as nerve palsy, dysesthesia, or dural tear [26]. Adverse events occurred in 6 patients (12%) during follow-up. In the DH group, 3 patients experienced events at 15, 36, and 43 months, and the events were controlled by an epidural injection at the index level, nucleoplasty at the index level, and epidural injections at the level below, respectively. In the FS group, 3 patients experienced events at 27, 30, and 36 months, and the events were controlled by PECF at the level below, an epidural injection at the index level, and anterior cervical discectomy and fusion at the index level, respectively. Overall, the 5-year event-free survival rate was 80% (95% CI: 66%-95%), and it was not different between groups (P = 0.29) (Fig 3).

Radiological outcomes

Radiological parameters are presented in three aspects: local, regional, and global (Table 3). None of the radiological outcomes showed significant interactions between groups across time points (P > 0.01). Locally, SA-N, SA-F, SA-E, and CDI changed significantly after the operation (P < 0.007), while no significant change was observed in S-ROM, aH, or pH across time points (Fig 4A-4D). The CDI showed that degeneration had progressed at postoperative 2 years (P < 0.007) (Fig 4D). The CDI increased in 6 patients (18%) in the DH group and 5 patients (28%) in the FS group (P = 0.48). The 5-year degeneration-free survival rates were 70% (95% CI: 50%-90%) for the DH group and 60% (95% CI: 30%-90%) for the FS group, without a significant difference between groups (P = 0.32). Regionally, C27-SVA decreased significantly at postoperative 3 months and was maintained throughout the follow-up period (Fig 4E). The regional parameters C27-SVA, CA-N, CA-E, C-ROM, and the T1S-CA mismatch showed a significant change at postoperative 3 months (P < 0.007), but there was no further change thereafter (P > 0.007) (Table 3 and Fig 4E-4I). However, CA-F did not show a significant change across time points (P > 0.007).

Discussion

The purpose of this study was to examine kinematics in patients with DH and with FS after PECF. The clinical improvements after PECF were not significantly different between groups. Radiological parameters were evaluated not just locally, but also regionally and globally. We observed several significant changes after surgery. Locally, SA-N and SA-E became more lordotic after surgery, while disc height did not change. Although degenerative changes occurred at postoperative 2 years, further degenerative changes were not observed thereafter. Regionally, the noticeable changes were a decreased T1S-CA mismatch and an increased C-ROM. Globally, C7-SVA changed toward a neutral posture (Fig 5).
Radiological changes after posterior foraminotomy

Jagannathan et al. analyzed the segmental and cervical angles after posterior open cervical foraminotomy and observed a loss of cervical lordosis in 20% of the patients (30/162), one-third of whom had symptoms [10]. Regardless of this shortcoming, posterior cervical foraminotomy has generally been recognized as a valid surgical procedure for patients with radiculopathy and has shown a reoperation rate similar to that of anterior cervical discectomy and fusion [6,8,10,27]. Recently, PECF has emerged as an alternative to microscopic surgery and showed comparable clinical outcomes in a randomized controlled trial and a systematic review [5,17]. The primary advantage of PECF is its minimally invasive nature, thanks to the high magnification and illumination [19]. Consequently, the size of the foraminotomy and injuries to the posterior structures can both be minimized [19]. These advantages were reflected in improved cervical lordosis after PECF, even in patients with cervical hypolordosis [14,15]. Therefore, we sought to explore whether the advantages of PECF would hold for patients with FS, because a larger foraminotomy is necessary in patients with FS than in patients with DH [9,10,19]. Patients with FS may be more likely to experience cervical kinematic changes and degeneration than patients with DH. The present study was planned to address this point, and it showed that the pathology did not influence radiological or clinical outcomes.

Radiological parameters were assessed from local, regional, and global perspectives. Locally, segmental kinematics were well-maintained: disc height was preserved, the segmental angle became lordotic, and the segmental ROM was maintained throughout the follow-up period. Thus, PECF did not have a significant influence on segmental kinematics. Although PECF is not a major corrective surgery, it has gained interest for its indirect pain-relieving effect on the regional and global scale [28][29][30].

Fig 3. Events and censors are represented with line graphs. Three events occurred in each group, and the event-free survival rates were 91% for disc herniation patients and 83% for foraminal stenosis patients during the follow-up period, without a significant difference between groups (P = 0.29).

Regionally, the patients were able to achieve a more lordotic cervical posture and to extend and move their neck better than before surgery. The gap between the T1 slope and cervical posture was narrowed by around 5°. Consequently, the neck moved closer to the gravity line. Overall, patients' postures became more comfortable after surgery [31][32][33]. The changes in local, regional, and global curvature may be explained as follows [31][33][34][35]. Patients could extend the neck more freely without pain, and this may explain the improved regional cervical curvature. Although there were no statistically significant changes in anterior and posterior disc height (aH and pH, Table 3), the heights were not identical before and after surgery, and such minimal changes accompanying the improved cervical lordosis might have changed the local curvature. Although causal relationships could not be inferred from this study, patients with decreased neck pain were more likely to be able to adopt an upright posture, and this may have influenced the global curvature. We suggest that the indication for PECF is the most important factor for satisfactory outcomes.
As mentioned, PECF was indicated for patients 1) with single-level unilateral radiculopathy due to cervical DH or FS, 2) with a positive Spurling's test, and 3) with disc space narrowing of not more than 50% [20]. Although the shape of the cervical curvature was not specified in this study, cervical kyphosis of more than 10 degrees was not a contraindication to PECF if the curvature was not a structural change, as reflected by findings such as a decrease in disc height of more than 50%, foraminal arthrosis, or spur change at the endplate [15,16,19]. The purpose of PECF was to relieve pain, and the change in curvature was a secondary change after relief of pain. Therefore, PECF is usually indicated for patients with radiculopathy, mild cervical degeneration, and functional curvature change, and it should not be considered a means to correct cervical curvature when the curvature change is structural.

Reoperation

The present study showed that adverse events occurred in 6 patients, and secondary surgery was performed at the index level for 1 patient in the FS group (6%). Lubelski et al. reported a reoperation rate of 6.4% at the index level after posterior open cervical foraminotomy during 2 years of postoperative follow-up [27]. Although PECF is a minimally invasive surgical technique, it is not a regenerative treatment, and degeneration naturally progressed at postoperative 2 years, as shown in this study. Nonetheless, further degeneration did not occur thereafter, and a comparative study with a control group is necessary to determine whether PECF hastened the progression of degeneration [20,36]. In addition, the advantages of PECF compared with open foraminotomy need to be verified by a comparative study. Although many questions remain to be answered, the present study showed an event-free rate of 80%-90% after PECF, and this information could be helpful in a shared decision-making process.

Limitations

The present study tried to compare the kinematics between DH and FS but was underpowered due to its small sample size; a larger number of patients would be necessary to overcome type I or type II errors. Second, kinematic changes were assessed by static flexion and extension X-rays, which cannot show the kinematics between those positions [37]. Third, surgical injury to the facet joint and musculature by PECF may also indirectly counteract the positive effects on cervical curvature obtained by the alleviation of pain [14]. In addition, the size of the foraminotomy was not measured on computed tomography or magnetic resonance imaging, which were not routinely ordered in the absence of symptoms. Therefore, this study could not evaluate kinematic change according to the size of the foraminotomy. Long-term follow-up observations of a large number of patients are required to identify the trade-off between the natural restoration of curvature and the aggravation of curvature due to surgical trauma [14]. Fourth, this study did not compare PECF and open foraminotomy, so the advantages of PECF compared with conventional foraminotomy cannot be evaluated based on the current results. A prospective cohort study or randomized controlled trial would be necessary to compare kinematics between these procedures. Regardless of these shortcomings, the present study at least showed that the underlying pathology may not worsen cervical kinematics if the surgical insult is minimized, as the patients were able to adopt a comfortable posture with economical movements after surgery owing to decreased pain and muscle spasms [15,38].
When indicated, PECF may be an alternative surgical option for motion preservation, even for patients with FS [21].

Conclusions

The clinical and radiological outcomes after PECF were not significantly different between patients with disc herniation and patients with foraminal stenosis. To obtain the best outcome, the indication for PECF should be kept in mind. PECF is usually indicated for patients with radiculopathy, mild cervical degeneration, and functional curvature change. It may not be simple to distinguish functional from structural changes of curvature, but signs of moderate to severe degeneration, such as a decrease in disc height of more than 50%, facet arthrosis, and spur change, may indicate a structural change. In such cases, a secondary change of cervical curvature would not occur, and this should be considered when deciding among surgical techniques. These findings will be informative for surgeons and patients during the shared decision-making process.
Minitips in Frequency-Modulation Atomic Force Microscopy at Liquid-Solid Interfaces

A frequency-modulation atomic force microscope was operated in liquid using sharpened and cone-shaped tips. The topography of mica and alkanethiol monolayers was obtained with subnanometer resolution, regardless of nominal tip radius, which was either 10 or 250 nm. Force-distance curves determined over a hexadecane-thiol interface showed force modulations caused by liquid layers structured at the interface. The amplitude of force modulation and the layer-to-layer distance were completely insensitive to the nominal tip radius. These results are evidence that minitips smaller than the nominal radius are present on the tip body and function as a force probe.

Introduction

Atomic force microscopy (AFM) has been a powerful tool for the investigation of liquid-solid interfaces. It is now possible to observe the structure of interfacial liquid in addition to the topography of a solid. When a liquid is structured over a solid to form liquid layers on the surface, modulations appear in the tip-surface force as a function of tip-surface distance. Liquid layers have been found by AFM on a number of interfaces, as summarized in recent reviews. [1][2][3][4] The latest technical development by Fukuma et al. 5) reduced the noise of frequency-modulation AFM (FM-AFM) to less than 20 fm Hz⁻¹/². Using improved microscopes, layered liquids have been identified over polydiacetylene, 6) mica, [7][8][9] TiO₂, 10) Al₂O₃, 11) graphite, 12) thiol monolayers, 13,14) and a lipid bilayer. 15) The improved force sensitivity allows weak-force detection. With such weak forces, a small tip apex functions as the force probe without collapse. A small tip apex is desirable to improve spatial resolution. The tip apex was shown to play an important role in simulations of AFM at liquid-solid interfaces. 16,17) Experimental knowledge of the tip apex is quite limited, especially for tips scanned in liquids. In the present study, we compared two commercially available silicon cantilevers with sharpened or cone-shaped tips over water-mica and alkane-thiol interfaces. Topography and force-distance curves obtained in water or hexadecane were totally insensitive to the nominal radius of the tips, suggesting the presence and critical role of atomistic minitips on the tip bodies.

Microscope and cantilevers

Topography and force-distance curves were determined in an aqueous KCl solution or neat n-hexadecane at room temperature, using a modified Shimadzu SPM 9600 microscope. The deflection noise of the microscope had been reduced to less than 20 fm Hz⁻¹/². The low deflection noise is critical for the observation of weak tip-surface forces in liquids. The absolute deflection of the cantilevers was estimated by comparing the theoretical amplitude of the cantilever Brownian motion to the deflection sensor output recorded with a spectrum analyzer. Two commercial silicon cantilevers with different tip radii, 10 nm (Nanosensors NCH) and 250 nm (Team Nanotech LHCR250), were used. The shape of the tips was checked after the imaging scans using a scanning electron microscope (JEOL JSM-5610). The nominal spring constants provided by the suppliers, 40 N m⁻¹ for each cantilever, were used for calibrating the deflection sensor sensitivity. NCH cantilevers were coated with aluminum on the back surface (NCH-R) when used in hexadecane and with gold in the aqueous solution. LHCR cantilevers were coated with aluminum on the back surface.
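The Brownian-motion calibration mentioned above rests on the equipartition theorem. As a hedged worked example (assuming room temperature, T ≈ 300 K, and the nominal spring constant; neither value is stated explicitly as a calibration input in the text), the expected RMS thermal deflection of the fundamental mode is

\[
  z_{\mathrm{rms}} = \sqrt{\frac{k_B T}{k}}
  = \sqrt{\frac{(1.38\times10^{-23}\,\mathrm{J\,K^{-1}})(300\,\mathrm{K})}{40\,\mathrm{N\,m^{-1}}}}
  \approx 10\,\mathrm{pm}.
\]

Matching this predicted amplitude to the integrated thermal peak in the measured sensor spectrum yields the deflection sensor sensitivity.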
The quality factor (Q) of the resonance was 5-10. To remove spurious oscillation in low-Q environments, a bandpass filter of Q = 20 was inserted in the feedback loop that regulates the oscillation amplitude to be constant. The resonance oscillation frequency was in the range of 140-190 kHz as measured in liquids.

Substrates and solutions

Muscovite mica (Furu-uchi Chemical) was cleaved with scotch tape. The KCl solution was prepared at a concentration of 1 mol L⁻¹ with KCl (Nakarai, >99.5%) and Millipore water. The dodecanethiol self-assembled monolayer (SAM) was prepared on Au(111) films of 150 nm thickness. Cleaved mica wafers were annealed at 450 °C in a vacuum of 10⁻⁵ Pa and exposed to a gold vapor source. The gold-deposited wafers were soaked in a 1 mM ethanol solution of 1-dodecanethiol (Wako, >98%) for 24 h. The soaked wafers were rinsed several times with pure ethanol and immersed in hexadecane (Wako, >97%).

Figure 1 presents the SEM images of the tips after the scans. There was no sign of tip-surface collisions during the scans. The tip of the NCH-R cantilever remained sharp, as expected from the nominal radius of 10 nm. The tip of the LHCR cantilever retained its cone shape with a 250 nm radius. Figure 2 shows the topography of mica measured in the KCl solution with the two cantilevers. The vertical tip position was regulated to keep the frequency shift (Δf) constant. The repulsive tip-surface force was estimated to be on the order of 0.1 nN using the quantitative relationship developed by Sader and Jarvis. 18) The topography observed with the two tips shows equivalent honeycomb structures with hexagons of 0.5 nm. The honeycomb structure observed agreed with an earlier report. 6) Note that subnanometer resolution was achieved with the cone-shaped LHCR tip. The topographic resolution and contrast of mica were insensitive to the shape of the two tips. It was difficult to deduce a possible contribution of long-range force to the topography observed with the cone-shaped tip.

3.2 Topography obtained with sharpened and cone-shaped tips

Subnanometer resolution was also achieved on a soft, organic surface using the two tips. Figure 3 shows the topography of the dodecanethiol SAM in hexadecane. Protrusions appeared with a hexagonal arrangement of 0.5 nm spacing and are assigned to the methyl head groups of the (√3 × √3)-ordered thiol monolayer. It was difficult to resolve the √3 structure at the thiol monolayer facing water. The hydrophobic property of the thiol monolayer may be the reason for the insufficient resolution. These results show that subnanometer resolution was achieved with the sharpened NCH tip and the cone-shaped LHCR tip. The topography obtained with AFM is generally a convolution of the surface corrugation and the tip. The subnanometer resolution thus suggests a subnanometer-sized apex present on both the sharpened and cone-shaped tips. This suggestion is in line with the minitip assumption frequently made in scanning probe microscopy. Protrusions may be naturally present on a tip body. One such protrusion, the one closest to the surface, exclusively receives the short-ranged repulsive force from the surface in atomic force microscopy. [19][20][21][22]

3.3 Force curves obtained with sharpened and cone-shaped tips

Force-distance curves were measured at the hexadecane-thiol SAM interface using the two cantilevers. The oscillating cantilever was scanned vertically from the liquid toward the surface until Δf exceeded a threshold, +1000 Hz. The vertical scan was limited to the height of the threshold to avoid the tip tapping the surface. Δf was monitored as a function of the vertical coordinate to produce a Δf-distance curve at one lateral coordinate. The tip was then shifted laterally by a fixed amount, and another vertical scan was conducted. By repeating the vertical scan-lateral shift cycle, a two-dimensional Δf distribution was constructed. Liquid hexadecane structured on the thiol monolayer presented an uneven Δf distribution as a function of the vertical distance from the surface. We found, in a recent study, 13) that the Δf distribution was, on the other hand, homogeneous along the lateral coordinates. Fifty Δf-distance curves were recorded in scan-shift cycles in the present study and summed to obtain an averaged curve. The water-mica interface was not favorable for averaging, since the interfacial water was laterally structured, giving heterogeneous Δf curves along the lateral coordinates. 8,9) The Δf curves in hexadecane were converted to force-distance curves and then averaged. The nominal spring constants provided by the suppliers were used in the conversion.
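For reference, the Sader-Jarvis relationship cited above inverts a frequency-shift curve into a force curve. A standard statement of the formula (notation assumed here, not taken from this paper: f₀ is the resonance frequency, a the oscillation amplitude, k the spring constant, and Ω(t) = Δf(t)/f₀) is

\[
  F(z) = 2k \int_{z}^{\infty}
    \left[
      \left( 1 + \frac{a^{1/2}}{8\sqrt{\pi (t - z)}} \right) \Omega(t)
      - \frac{a^{3/2}}{\sqrt{2 (t - z)}} \, \frac{d\Omega}{dt}
    \right] dt .
\]

In practice the integral is evaluated numerically on the discretized Δf curve, with care taken near the integrable singularity at t = z.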
Detailed insight into the applied force was gained by quantitatively comparing the averaged curves. Figure 4 shows force-distance curves obtained with the sharpened and cone-shaped tips. The horizontal axis of the figure represents the relative tip-surface distance, as the two curves were adjusted at the second local maximum. Three local maxima were identified on each of the two curves, with the same peak-to-peak distances and modulation amplitude. The modulation amplitude was insensitive to the tip body radius, which deviates from the Derjaguin approximation. 23) This approximation relates the radius of a spherical tip (R_tip) to the tip-surface force strength (F) and predicts a constant ratio, F/R_tip, in a continuum liquid. The approximation works well with micrometer-sized colloidal tips. Lim et al. 24) pointed out that the modulation amplitude deviated from the prediction of the Derjaguin approximation when they used tips of 15-100 nm radius in organic solvents over graphite. If we applied the approximation to the two tips, the modulation amplitude would be larger by a factor of 25 (250 nm/10 nm) with the cone-shaped tip than with the sharpened tip. This was clearly not the case, as the results in Fig. 4 show. The measured force modulation amplitude was totally insensitive to the tip body radius. This suggests that the effective area loading the force onto the hexadecane is smaller than the nominal radius, which was 10 nm in the present study. Here, we return to the minitip assumption mentioned in §3.2. The thickness of the region occupied by layered hexadecane was 2 nm, as seen in Fig. 4. When the minitip closest to the surface penetrates that region, it receives the force modulated by the structured hexadecane, and the resonance oscillation of the cantilever is accordingly affected. The hexadecane density should be uniform outside of the 2-nm-thick layer. The rest of the tip body and the entire cantilever are surrounded by hexadecane liquid of uniform density, as illustrated in Fig. 5. Hence, the tip body cannot contribute to the amplitude of force modulations caused by the structured hexadecane.

Fluctuating minitip

The amplitude of the force modulations fluctuated in scan-shift cycles to a limited extent. Figure 6(a) shows a cross-sectional presentation of Δf distributions over the hexadecane-thiol interface obtained with the cone-shaped tip. Positive (negative) Δf is shown as bright (dark).
The brightest region at the bottom represents the contour on which Δf exceeded the threshold. Three pairs of dark and bright layers appeared from the surface into the liquid, showing the uneven density distribution of structured hexadecane. This layered feature was occasionally disturbed. One of the disturbances appeared in the region marked by a broken line. The shape of the force curves fluctuated discontinuously, to a limited extent, at the disturbance. Force curves before and after the disturbance are compared in Fig. 6(b). Fifty curves in each of boxes (1) and (2) were averaged to produce curves (1) and (2). The two averaged curves presented the same peak-to-peak distances with different modulation amplitudes. The amplitude from the second minimum to the second maximum was 18 pN in curve (1) and 13 pN in curve (2). The different amplitudes should reflect the different effective areas of the minitip. By accumulating similar events observed in repeated scans, the amplitude from the second minimum to the second maximum was seen to fluctuate in the range of 10-30 pN. This range of force fluctuation is consistent with the minitip assumption, in which only a few atoms are exposed to the liquid. In an earlier study of water on CaCO₃, 25) attractive forces on the order of 10 pN acting between single atomic sites on the sample and the front atoms of the tip were measured.

Consider here the extent of confinement caused by our minitip. When solid walls pinch a liquid, diffusion of the liquid molecules is limited. The number density of the molecules consequently increases. The pressure applied by the walls also enhances the dense packing of the liquid molecules. These effects of confinement appear distinctly in surface force apparatus (SFA) studies. 26) n-Undecane presented layer-to-layer distances of 0.3-0.4 nm when confined by octadecyltriethoxysilane-covered mica wafers. 27) On the other hand, the force curves shown in Figs. 4 and 6 presented longer peak-to-peak distances of 0.6 nm. The longer distance is a sign of less confinement. The two force curves in Fig. 6(b) exhibit the same peak-to-peak distances accompanied by different force amplitudes. The different force amplitudes indicate differently sized minitips. The insensitivity of the peak-to-peak distance to the minitip size may suggest that our minitip was not large enough to cause confinement of the hexadecane. We thus infer that the intrinsic structure of an open liquid-solid interface is observable by atomic force microscopy with 10-pN-order force sensitivity.

Conclusions

Using sharpened and cone-shaped tips, constant-frequency-shift topography and force-distance curves were obtained at hexadecane-thiol SAM and water-mica interfaces. The topography of the SAM and mica was obtained with atomistic resolution regardless of the nominal radius of the tips. The amplitude of force modulation and the layer-to-layer distances of the interfacial hexadecane were totally insensitive to the nominal tip radius. These results reveal that minitips smaller than the nominal radius, 10 nm in the current study, are present on the tip body and function as the force probe.

Fig. 6 caption (fragment): Fifty curves in each of boxes (1) and (2) were averaged and presented in (b). Each curve is laterally adjusted to its force maxima and staggered vertically for ease of viewing. The origin of the relative distance is set to the distance at which Δf of curve (2) exceeded the threshold of +1000 Hz.
Bibliometric Analysis of Emerging Bond Market Research: Performance Insights and Science Mapping

This bibliometric paper investigates the research landscape of the emerging bond market literature, spanning 1993 to 2023 and encompassing a total of 325 research articles. Employing a multifaceted approach, it begins by examining publication trends, core journals, prominent authors, influential articles, and keyword dynamics, providing a comprehensive overview of research dynamics in this domain. Beyond performance analysis, the study ventures into science mapping using co-word analysis to uncover the underlying conceptual structure of the emerging bond market field. The bibliographic data were drawn from Scopus and analyzed using the Bibliometrix R package, providing insights into the current dimensions of emerging bond market studies. This analysis facilitated the identification of five major keyword clusters, i.e., sovereign bonds, the impact of financial crises, the yield curve, corporate bonds, and Islamic bonds, in the emerging bond market space. Based on these themes, the study also suggests avenues for future scholarly exploration in this specialized field.

Introduction

The landscape of portfolio capital flows in emerging bond markets has witnessed substantial changes in the aftermath of the global financial crisis, largely attributed to the low-interest-rate environment prevailing in advanced economies (Garcia-Lopez et al., 2021). This has prompted institutional investors to seek higher yields in riskier assets, driving a surge in investment in emerging market bonds, particularly local currency bonds issued by governments and corporations in emerging nations (Belke & Verheyen, 2014). Figure 1 illustrates these dynamics, showcasing significant jumps in portfolio investment after 2008 and 2015.
Figure 1. Portfolio investment in bonds of emerging (middle-income) economies. Source: World Bank data, retrieved from https://data.worldbank.org/indicator/DT.NFL.BOND.CD?end=2020&locations=XP&start=2001

These emerging market bonds, known for their appealing yield enhancement and diversification benefits, have rapidly become a significant segment of the global bond market, holding a 17% market share in 2020, up from 5% in 2010 (SIFMA, 2021). As academic interest in understanding the dynamics of emerging bond markets grows, researchers continue to delve deeper into the various factors that influence market behavior, risk assessment, and investment strategies. Thus, this manuscript aims to perform a bibliometric analysis to explore the development trends, influential contributors, and conceptual structure of the emerging bond market field. The study seeks to identify key journals, chart significant publications and authors, map the conceptual structure, and propose directions for future research. Bibliometric analysis, known for its objectivity in interpreting vast amounts of unstructured data, employs two major techniques in this study: performance analysis and science mapping (Vogel & Güttel, 2013; Donthu et al., 2021). Performance analysis provides insights into the field's development and prolific research constituents, including authors, institutions, countries, and journals. This may include charting the publication trend and identifying core journals, prominent authors, and influential articles in the field. On the other hand, science mapping explores the structural and dynamic aspects of scientific disciplines by creating visual networks that illustrate how different concepts, authors, or articles are interconnected. This is done using quantitative methods such as citation analysis, co-citation analysis, bibliographic coupling, co-word analysis, and co-authorship analysis. This approach stands in contrast to traditional narrative-based reviews, ensuring a rigorous and unbiased examination of the existing literature (Tranfield et al., 2003).

The use of bibliometric analysis has gained prominence in the social sciences and related fields, with recent applications in business and finance, such as studies on airline revenue management (Raza et al., 2020), recycling behavior (Phulwani et al., 2020), volatility spillover (Chen & Yang, 2021), electronic word of mouth (Donthu et al., 2021), behavioral finance (Kumar & Choudhary, 2023), and sustainable finance (Kumar et al., 2022). These studies highlight the increasing importance of employing bibliometric analysis to explore and map the structures of literature in the social sciences and allied fields.

This review study provides a fresh and expansive perspective on the emerging bond market field. The study uses publication- and citation-based metrics for performance analysis and enrichment techniques such as PageRank and centrality for science mapping, as suggested by Donthu et al. (2021). The study focuses on the following research questions.

• What are the trends in the annual publication volume and growth rate of research articles in the emerging bond market?
• Which articles have been most influential in shaping the field of the emerging bond market, as indicated by citation patterns, and who are the most prolific authors contributing to this field?
• Which journals have been the most prominent outlets for emerging bond market research?
• What constitutes the conceptual structure of the field of the emerging bond market?
• What are the future research directions for emerging bond market research?

The analysis answers these questions by providing publication trends, influential articles, core journals, and prolific authors using performance analysis. Next, the study employs a science mapping tool to identify the conceptual structure using co-word clustering. Lastly, the study uses these identified clusters (themes) to analyze the gaps in the literature and suggest future research directions.

The study's findings can be utilized in a variety of ways. First, researchers in the emerging bond market field may obtain an overview of the publishing trend over time to understand the significant jumps in publications and the associated events. Second, future authors can quickly locate relevant publications (influential articles), prominent research outlets (core journals), and prolific authors. Third, prospective authors can use the conceptual bases revealed through this study to identify the evolution and critical studies of the field. Fourth, the study will assist future researchers by providing future research directions and gaps in the emerging bond market field.

The rest of the paper is organized as follows. Section two describes the methodology, the data retrieval process, and the various bibliometric techniques, including the co-word analysis used in the study. Next, section three discusses the results of the performance and science mapping analyses. Section four suggests future research directions. Finally, section five concludes with the study's key findings and limitations.

Methods and Materials

The study employs the SPAR-4-SLR protocol to ensure a proper procedure for conducting the bibliometric review. The SPAR-4-SLR protocol consists of three major stages, namely assembling, arranging, and assessing (Paul et al., 2021), the details of which are discussed in the following sections.

Assembling

The assembling stage comprises two sub-stages, namely the identification and acquisition of research domain data. In the first sub-stage, the study identified the research questions (performance analysis and science mapping of the emerging bond market literature), the data source (journals), and the source quality (Scopus). Next, to retrieve the bibliographic data, a search query for "emerging*" and "bond market*" was run in the "Title-Abs-Key" field of the Scopus database, resulting in 409 research documents as of 24 November 2023. The study used Scopus due to its better journal coverage and ease of extraction compared with other databases, such as the Web of Science and Google Scholar, especially in the social sciences (Mongeon & Paul-Hus, 2016; Paul et al., 2021; Phulwani et al., 2020). The bibliometric data were collected on 24 November 2023, encompassing all the studies published up to that date.
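As a hedged illustration of this retrieval step (the paper does not state the export options), a Scopus advanced-search string consistent with the description, and its import into R with the bibliometrix package, might look like this:

# Scopus advanced-search string consistent with the query described above
# (the paper reports the Title-Abs-Key operator and 409 initial documents):
#   TITLE-ABS-KEY("emerging*" AND "bond market*")
#
# Assuming the results were exported from Scopus as a CSV file; the file name
# "scopus_export.csv" is hypothetical.
library(bibliometrix)

M <- convert2df(file = "scopus_export.csv", dbsource = "scopus", format = "csv")
dim(M)  # one row per document; 325 remain after the arranging-stage filters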
Arranging

The arranging stage deals with two sub-stages, namely organizing code and purification. The study uses the organizing code in Scopus to filter the search results according to subject area, language, document type, and source type. The review incorporates the subject areas of the emerging bond market, namely "business, management, and accounting," "economics, econometrics, and finance," "decision sciences," "environmental science," "mathematics," "arts and humanities," "computer science," and "social sciences." The interdisciplinary nature of asset class research is exemplified by the diverse range of subject areas contributing to the field, as shown in the study by Gairola and Dey (2023). Subjects such as "business, management, and accounting" and "decision sciences" intersect significantly with traditional financial domains, showcasing the need to include broader subject filters in the bibliometric analysis. Thus, a variety of interdisciplinary subject areas are included to extend beyond the core finance subject filters and embrace influences from other subjects. Next, for purification, documents not in English were removed due to the authors' limited proficiency in languages other than English. Non-journal sources, such as books, book chapters, and conference proceedings, were discarded because they may not have been subjected to rigorous peer review. Editorials were also left out due to their non-peer-reviewed nature. In all, 325 documents remained after the organizing code and the purification of the search results in the arranging stage.

Assessing

With different types of qualitative and quantitative literature review methods available (Knopf, 2006), the present study employs bibliometric analysis to evaluate the emerging bond market. Bibliometric analysis, a sub-form of domain-based systematic review, refers to the quantitative approach to evaluating and studying scientific communications (Donthu et al., 2021; Vogel & Güttel, 2013; Zupic & Čater, 2014). The study implements bibliometric analysis through performance analysis and science mapping using the Bibliometrix R package (Aria & Cuccurullo, 2017). The performance analysis depicts the publication trend, core journals, prominent authors, influential articles, and keyword dynamics, whereas science mapping identifies the conceptual structure of the domain. Furthermore, the study indicates the gaps and potential research directions through article readings to propose the agenda. Finally, with regard to reporting, the study uses a combination of words, figures, and tables as reporting conventions, and it acknowledges its limitations at the end.

Results and Discussion

The results of this study are divided into two parts: 1) performance analysis and 2) science mapping. The performance analysis helps measure the field's academic richness, such as publication trends, core journals, prominent authors, influential articles, and keyword dynamics. In contrast, science mapping assists in investigating the intellectual and conceptual structure of the field.
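Before turning to the results, a minimal bibliometrix sketch of the performance-analysis step, continuing from the data frame M above, is given here; the function names are from the bibliometrix package, while the specific option values are assumptions rather than the authors' settings.

# Performance analysis: publication counts and growth rate, core journals,
# prolific authors, and most-cited documents.
results <- biblioAnalysis(M, sep = ";")
summary(results, k = 10)  # annual production, growth rate, top sources/authors
plot(results, k = 10)     # publication-trend and productivity plots

# Bradford's law clustering of journals into core and peripheral zones.
bradford(M)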
Publication Trend over the Years

Figure 2 presents the publication trend in the emerging bond market literature. The first paper on the emerging bond market was published in a Scopus-indexed journal by Rhee (1993), which discussed the growth of emerging bond markets in Southeast Asia and the obstacles to their development. Following this foundational work, research on emerging bond markets has increased significantly over the last three decades, as indicated by a 12.99% compound annual growth rate in publications. This growth rate significantly exceeds that of option pricing models, which is 6%, indicating that the latter is a well-established domain (Nisha et al., 2024). Compared with the unpredictable yet exponential growth witnessed in Bitcoin research (Jalal et al., 2021) and the market-correlated swings in gold-focused studies (Corbet et al., 2019), emerging bond markets appear as a subject characterized by both strong growth and intellectual vigor. Figure 2 depicts the financial research community's involvement in emerging bond markets amidst the changing dynamics of global financial systems. In addition, significant jumps in publication growth can be seen after 2008 and 2015, due to the global financial crisis and negative real interest rates in advanced economies, respectively.

Figure 2. Publication trend over the years. Source: Authors' calculations

Core journals

Table 1 presents a detailed view of the core journals, quantifying their influence through bibliometric indices such as the h-index, g-index, and m-index, and capturing their scholarly output through total citations (TC) and number of publications (NP). The Journal of International Money and Finance emerges as a pivotal publication, leading the core cluster with the highest h-index, g-index, and m-index and featuring 15 publications, as shown in Table 1. The Emerging Markets Review, while having fewer publications, leads in total citations, emphasizing its critical role in disseminating research with a notable impact, closely followed by the Journal of Banking and Finance and the International Review of Economics and Finance. This concentration of citations and indices in these top journals underscores their significant influence on the literature concerning emerging bond markets. Utilizing Bradford's law, Figure 3 clusters journals in the emerging bond market (Bradford, 1934). The core zone includes 16 journals, contributing 33% of publications.

Prominent authors

Table 2 lists the prominent contributors to emerging bond market research, detailing their key citation- and publication-related metrics for assessing the impact and productivity of researchers. The h-index measures an author's productivity and citation impact by counting the number of publications that have each received at least as many citations as the number of publications. The g-index enhances this by focusing on the most cited papers, calculated by finding the largest number of articles that together have at least g² total citations, thus emphasizing peak academic output. The m-index normalizes the h-index by the number of years the researcher has been active, facilitating comparisons across different career stages.
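These definitions are easy to make concrete in code. A small, hedged R sketch (the function names and the citation vector are hypothetical) computing the h-index and g-index from a vector of per-paper citation counts:

# h-index: largest h such that h papers each have at least h citations.
h_index <- function(cites) {
  s <- sort(cites, decreasing = TRUE)
  sum(s >= seq_along(s))   # count of ranks i with s[i] >= i
}

# g-index: largest g such that the g most-cited papers have at least g^2 citations in total.
g_index <- function(cites) {
  s <- sort(cites, decreasing = TRUE)
  sum(cumsum(s) >= seq_along(s)^2)
}

cites <- c(45, 30, 12, 9, 7, 4, 2, 1)  # hypothetical citation counts
h_index(cites)  # 5: five papers with at least 5 citations each
g_index(cites)  # 8: the top 8 papers together have 110 >= 64 citations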
Within this framework, V. Piljak from the University of Vaasa in Finland stands out as the most prolific author, with the highest h-index of 4 and a g-index of 6, through a compact yet impactful set of six publications, followed closely by J. Moreover, while traditional citation counts are insightful, normalized total citation scores offer a refined measure of influence adjusted for field-specific citation behaviors. Although the Zaremba et al. (2021) article is not the most cited, it holds the highest normalized total citation score of 10.57, reflecting its significant impact. This paper delves into the effects of the COVID-19 pandemic on the term structure of interest rates across international sovereign bond markets, utilizing a comprehensive dataset from both developed and emerging countries. It is closely followed by Banga (2019), with a score of 9.48. Together, these seminal articles significantly shape the discourse on emerging bond markets, providing foundational insights and novel strategies to address global financial challenges effectively.

Science Mapping (Conceptual Mapping)

The conceptual structure, also known as co-word analysis, establishes relationships within the documents through the co-occurrence of keywords present in authors' keywords, keywords plus, titles, or abstracts. This method employs a network or map of these keywords to understand the cognitive and conceptual structure of the domain, as described by Börner et al. (2003). It is also instrumental in suggesting future directions for the research field (Donthu et al., 2021). In this study, authors' keywords were preferred due to their comprehensiveness in representing the articles' content, as suggested by Zhang et al. (2016). The co-occurrence matrix generated using the Bibliometrix R package leads to a network graph, where the nodes are keywords and the edges represent their co-occurrences, with edge weights proportional to the frequency of these occurrences. The network utilizes the Louvain clustering algorithm and the Kamada-Kawai layout to enhance clarity and accuracy in visual representation, as suggested by Blondel et al. (2008) and Zupic and Čater (2014). Co-occurrences are normalized using similarity measures with association strength to ensure meaningful clustering. This method not only clarifies the current research landscape in terms of clusters but also assists in identifying emerging trends and areas that have not been sufficiently explored within the field of emerging bond markets. Naqvi (2019) underscored the dominance of external push factors in shaping EM government policy and capital flows, pointing out the minimal control these governments have over their financial environments. These studies collectively highlighted the nuanced and significant effects of international monetary policies on market volatility, investment decisions, and economic policy in emerging markets.
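Before turning to the individual clusters, the sketch below illustrates the co-word pipeline described above — association strength normalization, Louvain clustering, Kamada-Kawai layout, and the centrality scores (betweenness, closeness, PageRank) cited in the cluster summaries. It assumes networkx 3.x and uses a toy co-occurrence table; the keyword pairs and counts are placeholders, not the study's data.

```python
import networkx as nx

# Placeholder co-occurrence counts: (keyword_a, keyword_b) -> joint frequency.
cooc = {("sovereign bonds", "cds"): 12, ("sovereign bonds", "credit risk"): 7,
        ("cds", "credit risk"): 5, ("yield curve", "term structure"): 9,
        ("yield curve", "macroeconomic factors"): 4}

# Marginal frequency of each keyword, here proxied by the sum of its co-occurrences.
freq = {}
for (a, b), w in cooc.items():
    freq[a] = freq.get(a, 0) + w
    freq[b] = freq.get(b, 0) + w

# Association strength normalization: AS(a, b) = c_ab / (f_a * f_b).
G = nx.Graph()
for (a, b), w in cooc.items():
    G.add_edge(a, b, weight=w / (freq[a] * freq[b]))

clusters = nx.community.louvain_communities(G, weight="weight", seed=42)
layout = nx.kamada_kawai_layout(G, weight="weight")   # node coordinates for plotting
pagerank = nx.pagerank(G, weight="weight")
betweenness = nx.betweenness_centrality(G)
closeness = nx.closeness_centrality(G)
print(clusters)
print(pagerank)
```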
Cluster 3 Yield Curve (Orange): The third cluster mainly deals with the effects of macroeconomic policy on yield curve factors. The major keywords identified in this cluster include yield curve, term structure, macroeconomic factors, liquidity, and interest rates. Term structure is the cluster's most influential and prolific keyword due to its high betweenness, closeness, and PageRank scores. The cluster encompasses 20 papers, indicating a concentrated and significant exploration of this theme. Specifically, the articles under this cluster evaluate the impact of local macroeconomic and financial factors (Mbarek et al., 2019; Paweenawat, 2017; Sowmya & Prasanna, 2018) and global macroeconomic policy (Candelon & Moura, 2023; Cepni et al., 2021; Christensen et al., 2021; Özbek & Talaslı, 2020). For instance, Mbarek et al. (2019) showed that monetary policy shocks in Tunisia significantly affect the short end of the yield curve during economic uncertainty, a finding that aligns with Paweenawat (2017), who demonstrated that the term structure of Thai government bonds provides key information on future interest rates and GDP growth despite market illiquidity. Expanding further, Sowmya and Prasanna (2018) investigated the bi-directional influences between yield curve movements and macroeconomic factors across nine Asian markets, highlighting how policy rates and inflation impact short-term rates. This interplay between domestic and global influences is further explored by Candelon and Moura (2023) and Cepni et al. (2021), who noted how global uncertainties and macroeconomic conditions impact yield curves in emerging markets, emphasizing the substantial role of international policies. Additionally, Christensen et al. (2021) discussed how foreign investments influence financial stability in Mexican sovereign bonds through changes in liquidity premiums. Similarly, Özbek and Talaslı (2020) examined the role of domestic and international factors in determining term premia across various emerging markets. Together, these studies shed light on the complex interactions of local and global policies on yield curves, providing critical insights for shaping future monetary strategies and understanding their implications for market stability in emerging economies.

Cluster 4 Corporate Bonds (Violet): The fourth cluster contains keywords that focus on corporate bond risk modeling and consists of 10 research papers. This cluster comprises only two major keywords: corporate bonds and China. China has the highest closeness and PageRank scores of 0.5 and 0.0503, respectively, highlighting its importance in this cluster and its influential impact on other clusters. This cluster explores the recovery rates of defaulted corporate bonds in emerging markets through studies like Mili et al. (2018), who found that firm characteristics significantly influence these rates, particularly during financial crises. Furthermore, Lin and Milhaupt (2017) assessed China's corporate bond market, identifying how a state-centric network has driven its growth despite institutional weaknesses, impacting its functionality and its interconnectedness with China's shadow banking system.
Collectively, these research efforts provide a detailed understanding of how both domestic conditions and wider systemic influences affect the dynamics of the corporate bond market in emerging economies. By highlighting the unique challenges and characteristics of emerging markets, particularly China, this cluster provides invaluable insights into managing risks and understanding the complex interdependencies that influence corporate bonds globally. These contributions are crucial for policymakers, investors, and researchers who navigate or study the complexities of corporate finance in volatile and politically intricate markets.

Cluster 5 Islamic bonds (Grey): The fifth and final cluster pertains to Islamic bonds (Sukuk); it comprises two major keywords, namely Islamic finance and Sukuk, and includes five papers. Both keywords have the same PageRank and closeness scores of 0.0344 and 1, respectively, highlighting their prestigious and influential nature within the cluster. The research within this cluster investigates the financial integration between Sukuk and conventional bonds on a global scale, as well as their co-movement with other asset classes and its underlying determinants. Bhuiyan et al. (2018) utilized wavelet coherence and multivariate GARCH analyses to demonstrate that Sukuk offers significant international diversification benefits by examining volatilities and correlations with bonds from emerging markets. In another study, Bhuiyan et al. (2019) further identified bidirectional causality between the Malaysian Sukuk and various Asian bond markets, although interactions with China's bond market were notably limited. Furthermore, Hassan et al. (2018) found that Sukuk exhibits lower volatility and greater stability during market shocks compared to conventional bonds, with correlations notably strengthening during economic downturns, underscoring its resilience as a financial instrument.

Future Research Directions

Notably, a thorough review of articles within each keyword cluster revealed numerous observations, highlighting potential avenues for future research that could significantly enrich the field of the emerging bond market. Reflecting on these gaps, the study formulated 15 research questions. These are summarized in Table 5, which organizes them across six thematic areas identified as key future research directions for the emerging bond market field. To develop these research questions, a comprehensive literature review was undertaken, involving a detailed analysis of all papers associated with each co-word cluster. Questions were either directly extracted from the papers where they were explicitly stated or inspired by the identified gaps and emerging trends. Furthermore, Table 5 includes a 'Supporting Source' column, which cites the specific research papers from which the questions were derived or that provide the foundational context for the newly developed inquiries. This methodological approach ensures that each question is both firmly grounded in and contributes to advancing the scholarly discourse within these thematic areas. Building on this foundation, the following sections delve into the gaps the study identified and discuss the future research questions that emerged. This exploration aims to bridge the current knowledge voids and guide further investigations into the dynamic landscape of the emerging bond market.
Price discovery in the sovereign bond market

Price discovery related to the sovereign bond market is becoming a more significant area to examine as investors' exposure to sovereign debt markets grows. Although the general issue of relative efficiency between CDS and bond markets has been thoroughly explored, historical findings in the sovereign sector have been ambiguous, in contrast to corporate bonds (Raja et al., 2020). Another interesting issue in the CDS and sovereign bond markets is their level of integration and its impact on the market's volatility as a whole. Some studies have examined this issue (see, e.g., Li & Scrimgeour, 2021), but the sample used was limited to only three emerging markets. Thus, extending these studies to include a larger country sample and more benchmark instruments is important. In addition, the impact of microeconomic policies on sovereign market instruments has not yet been studied systematically (Mosley et al., 2020).

Understanding the yield curve dynamics through a macroeconomic perspective

Given global investors' "search for yield" behavior, a large amount of capital has flowed into EM economies via debt instruments (Cepni et al., 2021), making the region vulnerable to macroeconomic shocks. Researchers in these markets have studied the effect of global and local macroeconomic factors on the term structure but have yet to incorporate important factors such as illiquidity, quantitative easing, the global financial cycle, and foreign holdings (Ahi et al., 2018; Christensen et al., 2021). Increasing the sample size would also help improve the generalizability of the results (Candelon & Moura, 2023). Furthermore, future research should evaluate the performance of benchmark yield curve models such as Nelson-Siegel and Nelson-Siegel-Svensson in different portfolio optimization strategies (a sketch of the Nelson-Siegel curve is given after this section).

Examining the spillover effects of financial contagion

Although studies examining the volatility and return spillover aspects of financial contagion have garnered the attention of various academics, the underlying factors causing these spillovers have not been studied extensively (Tsang et al., 2021). These factors include balance sheet adjustments by major central banks, trade interlinkages, credit flows, unconventional monetary policy, and interbank linkages. It is also necessary to evaluate their time-varying nature during good and bad periods. Another aspect of the spillover effects of financial contagion for future research is quantifying these effects over regional markets and economies (MacDonald, 2017). The extent of the spillover effects on GDP growth rates, changes in local monetary policies, and asset market returns in absolute terms should also be explored.
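As a concrete reference for the yield-curve direction above, the following minimal Python sketch fits the standard four-parameter Nelson-Siegel curve to a handful of observed yields; the maturities, yields, and starting values are invented for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def nelson_siegel(tau, beta0, beta1, beta2, lam):
    """Nelson-Siegel yield at maturity tau (years):
    y(tau) = beta0 + beta1 * slope(tau) + beta2 * (slope(tau) - exp(-tau/lam)),
    where slope(tau) = (1 - exp(-tau/lam)) / (tau/lam)."""
    x = tau / lam
    slope = (1.0 - np.exp(-x)) / x
    return beta0 + beta1 * slope + beta2 * (slope - np.exp(-x))

# Invented sample: maturities in years and observed yields in percent.
maturities = np.array([0.25, 1.0, 2.0, 5.0, 10.0, 20.0])
yields = np.array([3.1, 3.4, 3.8, 4.3, 4.7, 4.9])

params, _ = curve_fit(nelson_siegel, maturities, yields,
                      p0=[4.5, -1.5, 1.0, 2.0], maxfev=10000)
print("beta0, beta1, beta2, lambda =", params)
```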
Credit risk modeling in the corporate bond market of emerging markets

Ample studies exist on the trends and determinants of corporate bond market development, but limited literature is available on other subfields of the corporate bond market, such as credit risk modeling. This includes studies on the recovery rates of defaulted corporate bonds in emerging markets and their determinants (Mili et al., 2018). Thus, future studies can include country- and firm-specific variables with a bigger sample of emerging countries. One major country-specific variable could be the influence of the political environment on the corporate bond market in emerging economies. However, most studies on this aspect concern the Chinese corporate bond market (see Lin & Milhaupt, 2017; Schweizer et al., 2021; Walker et al., 2021); there is hardly any study on other emerging markets of Asia and Latin America.

Integration of Sukuk with other developed and emerging bond markets

Islamic bonds (Sukuk) are receiving much attention from potential emerging market investors, but there are hardly any conclusive studies on the co-movement and lead-lag relationship between Sukuk, developed, and emerging market bonds in the short and long run (Bhuiyan et al., 2019). It would also be meaningful to study the financial integration between Sukuk and traditional bonds for diversification purposes (Bhuiyan et al., 2018). There is also a dearth of studies examining the behavior of Sukuk in terms of dynamic correlations and volatility (Hassan et al., 2018). Future research should therefore incorporate a broader data sample, a longer time span, and corporate Sukuk.

Conclusion and Limitations

This study employs bibliometric analysis using the Bibliometrix R package to analyze the performance and examine the intellectual structure of emerging bond market literature. Examining 325 articles, the publication trend reveals a 12.99% annual growth rate with a significant jump after 2015. Out of 170 sources, 16 have been identified as core journals, contributing approximately 33% to the field. Noteworthy journals include the Journal of International Money and Finance, Emerging Market Review, and Emerging Markets Finance and Trade. Piljak emerges as a prominent author in the field with the highest number of publications. Sovereign bonds, yield curve, and contagion dominate the keywords, highlighting local currency bond market and integration themes. The top 20 influential articles in the emerging bond market domain highlight critical insights into the dynamics of green, Islamic, and sovereign bond markets during crises, with Banga (2019), which studies the barriers facing green bond markets in developing countries, being the most cited. Notably, the Zaremba et al. (2021) article, which has the highest normalized total citation score of 10.57, delves into the impact of the COVID-19 pandemic on international sovereign bond markets, emphasizing its significant scholarly influence.
Next, the study employs science mapping to identify the conceptual structure and future research directions for the field. Through co-word analysis, five major keyword clusters have been identified within the domain of emerging bond markets: 1) sovereign bonds, which focus on price discovery and market integration, prominently featuring keywords like "sovereign bonds" and "CDS" that underscore their central role in knowledge dissemination and influence across clusters; 2) the impact of financial crises on EM bonds, exploring the effects of crises on EM bond markets through keywords like "financial crisis" and "contagion," highlighting the critical nature of these terms in assessing market volatility and capital flows; 3) the yield curve, dealing with the influence of macroeconomic policies on yield curve dynamics, where "yield curve" emerges as a pivotal term due to its significant influence shaped by local and global policy drivers; 4) corporate bonds, focusing on risk modeling and political impacts, particularly in China, underlining the importance of corporate bonds and their geopolitical influences; and 5) Islamic bonds (Sukuk), which examine the integration and co-movement between Sukuk and conventional bonds, indicating the profound impact of "Islamic finance" and "Sukuk" within this financial segment. Together, these clusters reveal diverse aspects and dynamics within the bond market, providing a comprehensive framework for understanding the various influences and interactions in this field. The study identifies limited keywords within clusters related to key research themes such as price discovery, yield curve dynamics, contagion spillovers, corporate bond credit risk modeling, and Sukuk integration. Based on these findings, the study highlights unexplored areas in these themes and suggests several questions for future research directions. These questions aim to further investigate the gaps and extend the understanding of how these key areas influence and interact within the broader context of emerging bond markets.

Even though this bibliometric review provides a detailed analysis of the field's academic richness, intellectual bases, and future research themes, the study suffers from some drawbacks. First, the study data are confined to the correctness and thoroughness of articles found in the Scopus database. Thus, it would be interesting to see a bibliometric review using different data acquisition platforms, such as the Web of Science, and a comparative analysis of their results with this review article. Second, other bibliometric analysis tools, such as co-citation analysis and country collaboration, could also be employed in the future.

Figure 4 depicts the co-occurrence network of keywords within the field, and all the clusters are closely related. In contrast, Table 4 presents a detailed look at the results.

Figure 4. Co-word network. Source: Authors' Calculations.

Table 1. Core journals in the emerging bond market. Note: TC = Total Citations, NP = Number of Publications.
Table 3. Influential articles.

A 2016 article, with 112 citations, explores the resilience of Islamic financial markets during global financial crises, suggesting a protective decoupling effect that shields these markets from broader economic shocks. This notion supports the idea that Islamic bonds could serve as a stabilizing factor in investment portfolios during turbulent times. Lastly, Kenourgios and Padhi's (2012) study, which gathered 105 citations, provides a detailed analysis of contagion effects across financial crises in the stock and bond markets of emerging economies, offering crucial insights into the dynamics of market responses and the significant role of stock and bond markets in transmitting shocks during the subprime crisis.

Table 4. Keyword cluster.

Five major keyword clusters, namely sovereign bonds (red cluster), financial crisis (blue cluster), the yield curve (orange cluster), corporate bonds (violet cluster), and Islamic bonds (grey cluster), have been identified. The summaries of each cluster are presented next. The first cluster relates to the sovereign bond market segment of emerging financial markets. The most prominent keywords in this cluster are sovereign bonds and CDS; sovereign bonds has the highest betweenness and closeness scores of 37.2857 and 0.05882, respectively, which highlight its high knowledge dissemination and influential potential toward other clusters in the network. Sovereign bonds also has the highest PageRank score of 0.0781, which signifies its prominent nature in the cluster. Other significant keywords in the sovereign bonds cluster include CDS, credit risk, volatility, and price discovery. This cluster comprises a total of 24 papers, reflecting a focused yet substantial body of research within this sub-domain. This research cluster explores the dynamics of price discovery in sovereign bond markets, highlighting studies like those by Aktug et al. (2012) and Hassan et al. (2015), which examine the intricate interactions between CDS and bond markets in response to credit risk information. Li and Scrimgeour (2021) carry this forward by assessing the impact of CDS-bond deviations on market volatility, noting increased risks during high volatility periods. The second cluster comprises 26 papers and explores the transmission of volatility and returns during financial crises, with a strong emphasis on the effects of major central bank policies on EM economies. The studies include Apostolou and Beirne (2019), who found that EM bond markets are highly sensitive to balance sheet adjustments by the Federal Reserve and the European Central Bank. Azis et al. (2021) further revealed that during crises like the 2008 financial crisis and the COVID-19 pandemic, U.S. monetary policy has a pronounced impact on EM bond and equity markets. Similarly, Kearns et al. (2023) observed a growing influence of the European Central Bank on bond yields, in contrast to the steady impact of the Federal Reserve, while MacDonald (2017) identified how capital market frictions affect responses to unconventional asset purchases in EM markets. Additionally, the integration of sovereign bonds with other asset markets is also investigated (Qin et al., 2023). Balli et al. (2020) investigate spillover effects from developed to emerging markets, while Dimic et al.
(2021) look at how global uncertainties affect stock-bond correlations. These studies suggest that global factors often dominate over local influences, as supported by Inaba (2021), who finds that sovereign bond returns are cyclically dependent on global factors across 41 economies. Further research by Khalid and Ahmad (2023) and Qin et al. (2023) highlights regional disparities in market integration and suggests that expanding the geographical scope and incorporating diverse financial instruments could enrich understanding. Collectively, these studies provide deep insights into how sovereign bonds interact with global financial markets, offering a foundation for future research on the impact of economic policies and financial stability on these markets.
Quasi-3D Thermal Simulation of Integrated Circuit Systems in Packages

The problem of thermal modeling of modern three-dimensional (3D) integrated circuit (IC) systems in packages (SiPs) is discussed. An effective quasi-3D (Q3D) approach to thermal design is proposed, taking into account the specific character of 3D IC stacked multilayer constructions. The fully-3D heat transfer equation for the global multilayer construction is reduced to a set of coupled two-dimensional (2D) equations for the separate construction layers. As a result, computational difficulties, processor time, and RAM volume are significantly reduced, while accuracy can be preserved. A software tool, Overheat-3D-IC, was developed on the basis of the generalized Q3D package numerical model. For the first time, the global 3D thermal performances across the modern integrated circuit / through-silicon via / ball grid array (IC-TSV-BGA) and multi-chip (MC)-embedded printed circuit board (PCB) packages were simulated. A ten-times decrease in central processing unit (CPU) time was achieved as compared with the 3D solutions obtained by commercial universal 3D simulators, while sufficient accuracy was preserved. The simulation error of maximal temperature T_MAX determination for different types of packages was not more than 10-20%.

Introduction

The general trend in the progress of electronic devices and systems is to increase functionality, operation speed, power capacity, and heat dissipation capability, while at the same time reducing size and weight. Flip-chip BGA packages, which couple flip-chip interconnections with a heat spreader attachment, have demonstrated an effective package solution for higher pin count and superior heat dissipation (see Figure 1) [6]. Various types of BGA-like packages have been proposed, as follows: high performance (HP), extra performance (XP), plastic (P), multi-chip (MC), and others. These BGAs were often soldered to moderately complex flexible printed circuit boards (FPCBs) using surface mount technology (SMT) assembly [1,6]. However, around 2005, the thick film ceramic technology based on the BGA format was no longer capable of supporting the rapidly increasing part count and required vertical interconnects. Three-dimensional (3D) packages have been implemented to meet this need. Internal stacking modules (ISM) are packages stacked and molded within the base package. They have often been used for mobile phone chip sets. For current applications, package-on-package (PoP) is very popular (see Figure 2) [2]. Both ISM and PoP provide adequate connection between dies. However, the number of connections between dies in the 3D stack is limited. In addition, because the connections between different dies go through the substrate, the parasitic load for these connections is high and an appreciable portion of power is consumed by the connection [7]. In stacked IC-TSV-BGA packages, through-silicon vias are used for direct vertical connections of the silicon dies. The sizes of the TSVs are significantly smaller than those of the wire bond pads (ISM) or solder bumps (PoP), and therefore the parasitic load and input-output (IO) power could be reduced [3]. The comparison between PoP and 3D TSV chip packaging is presented in Figure 3 [4].
Embedded die packaging is the next step in the further miniaturization and increased functionality of most electronic systems. For systems with signal frequencies on the order of several GHz and more, much shorter and impedance-matched interconnections are required. This could be achieved using the chip embedding technology based on chip-on-flex organic substrates with high-density build-up layers and microvias, equipped on both sides with surface mount passive components and active chips in packages (see Figure 4) [5,8].
On the basis of the presented review, it is seen that the technologies keep to the miniaturization trend and advance from a 2D to a 3D system-in-package technology. The miniaturization of modern LSI circuits leads to an increase in power dissipation density, which causes increased heating. The heat drain from electron device active areas is the main factor limiting device functionality and reliability. The packages are the key elements determining the effectiveness of heat dissipation in electronic components and systems. Therefore, the analysis of their thermal modes is of great interest, in particular, the analysis of heat transfer in 3D integrated systems.

State of the Art

In this section, we analyze the thermal management solutions that have been obtained using computational studies for different types of chip packaging.

BGA Packages

In [1], the commercial FloTHERM 3.2 version [9] was used to model and simulate the HP- and XP-BGA packages. The sub-modeling technique was used: the global model of the package module was divided into local models. The flip-chip die, heat spreader (metal lid), and solder balls were modeled with 3D elements; the flip-chip bumps and heat source were modeled with 2D elements. Convection-free conditions were taken into consideration.
The 2D and 3D temperature maps for the total package and its partial elements were not presented. In [10], thermal modeling of the global 3D constructions of BGA and XP-BGA packages was performed. For comparison, two software tools were used: the universal 3D simulator COSMOS and the Overheat BGA program, which is based on the quasi-3D package model. The Pentium 4 CPU time for the XP-BGA was 3 h and 15 min; accordingly, it was shown that the quasi-3D models reduce the CPU time by an order of magnitude. In [11], a simple construction of a power small outline package (PSOP) combined with a heat-spreading mass (copper slug) was simulated using FloTHERM software. Recommendations for package thermal regime optimization were developed. Unfortunately, the CPU time and the comparison with experimental data were not cited.

3D-IC-TSV Packages

In [12], the Cadence general multistep conception of the design of 3D ICs with TSVs was proposed. Within this conception, thermal analysis is needed to ensure that hot spots and thermal leakages are below specified limits. In [3], the commercial software ABAQUS was used to perform thermo-mechanical simulation and analysis of a Xilinx IC-TSV-BGA stack module comprised of an FPGA IC, an analog IC, and a transceiver. The sub-modeling approach was used to divide the global model into local sub-models. The sub-models ensured convergence of the solution and appropriate accuracy for partial details. The full 3D temperature distribution and 2D temperature cross-sectional views were not presented. The methodology for stitching the sub-models together and forming the global package model was not discussed. In [21], a numerical 3D model of a two-die 300 mW package with and without TSVs was built using ANSYS. Temperature and heat flux maps were presented. To simplify the solutions, homogenization and sub-modeling approaches were applied. The COMSOL software was used for thermal simulation of two- and three-layer stacked middle-power [14] and high-power [15] ICs. The special effects of TSVs, i.e., the heat flow detouring around the TSV, Cu pistoning in through-silicon holes, and the thermo-mechanical stress caused by thermal heating, were investigated in [3,13,21]. Complete thermal modeling of the global IC-TSV-BGA multilayer construction, without dividing it into local models, was performed in [16] using the quasi-3D approach. The full 3D temperature distribution and 2D temperature maps of all the device structure layers were obtained and analyzed. In all the works mentioned above, the 3D-IC-TSV modeling procedure was iterated several times until the module offered satisfactory performance. Therefore, different thermal model reduction techniques were used to produce lighter models. In [19], the Green function-based analytical spectral method was used, assuming homogeneous thermal conductance of the vias. A significant (3-100 times) speed-up over the FDM-based thermal simulator COMSOL was achieved. In several works, a combination of analytical and numerical methods was used for thermal modeling of 3D ICs. In [17], a simple analytical model was proposed to estimate the temperature distribution in the active layers of IC chips. ANSYS was used for 2D general construction simulation, assuming uniform heat generation in a homogeneous medium with constant properties to avoid a convergence problem. In [20], the analytical model assumed heat flow only in the vertical direction and neglected heat spreading in the device plane.
Each active die was associated with two significant thermal resistances, i.e., silicon-SOI substrate and metal-SiO2. A numerical solution for the simplified model was used to analyze the role of TSVs in heat dissipation.

Embedded Die Packages

The general "design to manufacturing" conception of embedded PCBs was developed in the HERMES project (high density integration in embedding chips for reduced size modules and electronic systems) [22]. A multistep workflow with a consistent model library was proposed. One of the steps was thermo-mechanical modeling to identify the high-stress areas within components at different PCBs. In [23], in addition to the general design workflow, the specific features of the power PCB embedded technology were discussed. In both publications [22,23], only the results of stress simulation were presented. The thermal models were not discussed, and the results of temperature and heat flow distributions in the embedded die board structures were not presented. Unfortunately, we did not find publications in which the thermal modeling problem of multi-chip embedded circuit boards was considered completely. Summarizing the review cited above, we can conclude that the complex problem of heat transfer in 3D packages has been numerically solved using the following universal 3D simulation tools: FloTHERM [1,11], COMSOL [14], ANSYS [24], MSC/PATRAN [25], COSMOS [26], and others. However, the assessment of sensitivity to various geometric and material parameters for large finite-difference or finite-element models requires long processor times and large RAM volumes. Correct simplification of a 3D thermal model is an effective means of obtaining an accurate solution with acceptable processor time. Two ways lead in this direction: a sub-modeling technique (the global model of the package is divided into local models of its functional blocks) and a quasi-3D approach (the 3D problem is reduced to a system of coupled 2D equations for the set of device structure layers). In this paper, the quasi-3D approach is introduced as an effective way to solve the problem of thermal simulation of modern 3D-IC-TSV and multi-chip embedded circuit board packages.

Quasi-3D Numerical Model of IC Packages

The temperature distribution in the global construction of a 3D IC module is described by the Joule heat transfer 3D partial differential equation:

\[ \frac{\partial}{\partial x}\left(\lambda \frac{\partial T}{\partial x}\right) + \frac{\partial}{\partial y}\left(\lambda \frac{\partial T}{\partial y}\right) + \frac{\partial}{\partial z}\left(\lambda \frac{\partial T}{\partial z}\right) + P(x, y, z) = 0, \qquad (1) \]

where λ is the thermal conductivity and P is the power density. In the multilayer structure (Figure 5), the temperature distribution along the z-axis in each structural layer can be considered linear, since the layer thickness is much smaller than its horizontal dimensions, i.e., L_X, L_Y ≫ L_Z. Due to this, the 3D problem can be reduced to a system of 2D equations on the horizontal surfaces of the layers [10].
The system of 2D equations describes the temperature distribution on the top surface of the package, T_1(x,y); on the surfaces of the package inner layers, T_ξ(x,y), ξ = 2, ..., N; and on the surface of the PCB, T_{N+1}(x,y). With convective heat transfer on the top surface of the package, and with the vertical heat flux between neighboring layers expressed through the layer thicknesses, these equations take the following form:

\[ \frac{\partial}{\partial x}\left(\lambda_1 z_1 \frac{\partial T_1}{\partial x}\right) + \frac{\partial}{\partial y}\left(\lambda_1 z_1 \frac{\partial T_1}{\partial y}\right) - \alpha\,(T_1 - T_{AMB}) - \frac{\lambda_1}{z_1}(T_1 - T_2) = 0, \qquad (2) \]

and, for the inner layers ξ = 2, ..., N,

\[ \frac{\partial}{\partial x}\left(\lambda_\xi z_\xi \frac{\partial T_\xi}{\partial x}\right) + \frac{\partial}{\partial y}\left(\lambda_\xi z_\xi \frac{\partial T_\xi}{\partial y}\right) + \frac{\lambda_{\xi-1}}{z_{\xi-1}}(T_{\xi-1} - T_\xi) - \frac{\lambda_\xi}{z_\xi}(T_\xi - T_{\xi+1}) + \begin{cases} P(x,y), & \text{for active layers with a power source} \\ 0, & \text{for passive layers} \end{cases} = 0, \qquad (3) \]

where T_ξ(x,y) is the layer temperature; T_AMB is the ambient temperature; P is the power density on the die surface; α is the convective heat transfer coefficient; λ_ξ and z_ξ are the thermal conductivity coefficient and thickness of the package structural layer ξ = 1, 2, ..., N; N is the number of package layers; and X_S, Y_S are the package horizontal sizes. On the PCB surfaces, the temperature is assumed to be constant and equal to the ambient temperature, T_{N+1}(x,y) ≡ T_AMB, or other heat exchange conditions are established; for example, a coefficient of convective heat exchange can be set. Appropriate boundary conditions for Equations (2) and (3) are established on the side surfaces of the package. The system of partial differential Equations (2) and (3) is solved by the finite difference method. A non-uniform difference grid is generated automatically. The system of linear algebraic equations is solved by the method of successive over-relaxation. A software tool guided by 3D IC chip package thermal simulation was developed. The maximum number of structural layers is 20, and the maximum difference grid size is 700 × 700 nodes. The software is able to simulate thermal processes in different types of BGA packages, such as 3D integrated IC-TSV-BGA, multi-chip stack embedded PCB, and others. The input data are the following: 1. Package structural parameters, i.e., the number of layers, type of layer, sizes, and physical parameters of each layer; 2. The powers or power densities of the active dies; 3. Computational parameters, i.e., difference network sizes M_X × M_Y and accuracy of computations. The output data are the following: 1. The temperature distribution plots T_ξ(i,j) in the x,y plane; 2. Average T_AV and maximal T_MAX values of the layer temperatures. The CPU time for a typical BGA package thermal simulation is about 30 min on an IBM PC with an Intel Core i7. For comparison, the process of simulation using the universal fully-3D simulator ANSYS requires 330 min of CPU time.
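To illustrate the numerical core of this scheme, the sketch below solves a single uncoupled layer equation — in-plane conduction with a convective loss term and a heated die patch, a simplified form of Eq. (2) — on a uniform grid by successive over-relaxation. The grid size, material values, and the restriction to one layer with fixed-temperature edges are simplifying assumptions for illustration, not the Overheat-3D-IC implementation.

```python
import numpy as np

# Single-layer simplification of Eq. (2):
#   lam*z1*Laplacian(T) - alpha*(T - T_amb) + P = 0,
# discretized by finite differences and solved by SOR.
# All numbers are illustrative assumptions, not values from the paper.
N = 61                      # uniform grid nodes per side (the paper uses a non-uniform grid)
L = 0.035                   # layer side length, m (a 35 mm package footprint)
h = L / (N - 1)             # grid spacing
lam, z1 = 150.0, 3e-4       # thermal conductivity W/(m K), layer thickness m
alpha, T_amb = 20.0, 25.0   # convection coefficient W/(m^2 K), ambient temperature in C

P = np.zeros((N, N))        # surface power density, W/m^2
P[24:37, 24:37] = 2.0e5     # heated die patch in the middle of the layer

T = np.full((N, N), T_amb)  # edges stay at T_amb (Dirichlet side condition)
omega = 1.8                 # SOR relaxation factor, 1 < omega < 2

for sweep in range(5000):
    max_delta = 0.0
    for i in range(1, N - 1):
        for j in range(1, N - 1):
            nb = T[i - 1, j] + T[i + 1, j] + T[i, j - 1] + T[i, j + 1]
            # Discrete balance solved for the node temperature:
            t_gs = (lam * z1 * nb + h * h * (alpha * T_amb + P[i, j])) \
                   / (4.0 * lam * z1 + h * h * alpha)
            delta = omega * (t_gs - T[i, j])
            T[i, j] += delta
            max_delta = max(max_delta, abs(delta))
    if max_delta < 1e-4:
        break

print(f"T_max = {T.max():.1f} C after {sweep + 1} SOR sweeps")
```

The pure-Python loops are kept for clarity; a production solver would vectorize the sweep or use a compiled kernel, and would couple all N layers through the vertical flux terms of Eq. (3).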
Simulation Results

The results of using the Q3D package models for the thermal simulation of different types of BGA packages were presented in our previous work [10]. Next, the Q3D modeling results for modern-generation chip packages, i.e., stacked IC-TSV-BGAs and multi-chip embedded circuit boards, which are expected to take hold in the industry and become mainstream technologies, are presented [11].

Stacked IC-TSV-BGA Module

The 3D module under test is shown in Figures 6 and 7. It is similar to the Xilinx module [3], which consists of three active dies (17, 18, 19 in Figure 6) placed on a passive silicon interposer. The interposer consists of TSVs, metal layers for die-to-die connections, microbumps for connecting the dies with the interposer, and C4 bumps for connecting the interposer with the package substrate. The package sizes are 35 × 35 × 3.25 mm. The total power is 20 W. The thermal conductivities of the package construction materials are shown in Table 1. It is interesting to analyze the critical temperatures for a multi-chip TSV stack, i.e., the maximal temperatures of the semiconductor dies and of the interposer with built-in copper TSVs. To simplify the thermal analysis of the package structure in Figure 6a, we established at the PCB surface the condition T_PCB = T_AMB, neglecting the heat flow caused by free convection. In Figure 6b,c, the temperature distributions on the surfaces of the active dies with powers of 8 W (Dies A) and 4 W (Die B) and on the upper surface of the passive silicon interposer are shown. In Figure 7, the temperature distribution along the vertical line L (see Figure 6a) is shown. The value T_MAX = 145 °C observed at the surface of the middle die in Figure 6b is critical because it is very close to the established upper limit of 150 °C for semiconductor ICs. The thermal regime of the middle die must be improved to decrease T_MAX.
The copper TSVs integrated into the silicon interposer are sensitive to temperature. A primary reliability concern for 3D integration is Cu pistoning as a result of the mismatch between the coefficients of thermal expansion of Cu and Si. Therefore, the temperature of the silicon interposer is an important factor. It is seen in Figure 6c that the T_MAX of the interposer is about 106 °C, which means that stress and degradation of the TSVs cannot appear. It is necessary to note that the complete thermal solution for the global IC-TSV-BGA construction was obtained [12].

Multi-Chip Stack Embedding Package

Embedded die modules have enabled continued reduction of electronic packaging size while at the same time improving performance. The wafer and board level device embedded (WABE) technology is used to embed dies in multi-layer flexible PCBs [27]. Three structures of embedded modules were examined. In Figure 8, the temperature distribution in the structure of a single-layer package with a 38 × 38 mm² total area is shown. Analogous pictures are shown in Figure 9 for the two-layer package with a 38 × 20 mm² area, and in Figure 10 for the three-layer package with a 20 × 20 mm² area. The power of each die is 10 W. The series of Figures 8-10 illustrates two facts. Firstly, the stacking technique reduces the total device area in the horizontal plane, twice for the stacked module in Figure 9 and thrice in Figure 10. Secondly, the price of the area reduction is a rapid T_MAX increase of the active dies, i.e., 44, 55, and 72 °C for the constructions presented in Figures 8-10, accordingly. The largest value of 72 °C, for the structure shown in Figure 10, correlates well with the experimental value of 85 °C taken for embedded die module reliability testing in [11,27].
The complete solution of the 3D thermal performance of a modern multi-chip embedded module fabricated by WABE technology was developed for the first time.

Validation of the Q3D Model

The validation of the Q3D model was carried out in the following two ways: (1) comparison with simulated results obtained using standard fully-3D FEM simulators and (2) comparison with measured characteristic temperatures or thermal resistances, i.e., junction to case (Θ_JC) and junction to board (Θ_JB), for different types of packages.

Comparison with Results Obtained Using Standard Fully-3D FEM Simulators

For thermal characterization, the TSV-based 3D stacked IC module presented in Figure 11a was selected [14]. The complete model consists of three silicon layers, each with the size 1 × 1 × 0.1 mm; 16 TSVs in the form of a 4 × 4 matrix placed on each of the silicon layers; and 64 copper bumps divided into four groups, 4 × 16. The bottom layer of bumps is in contact with the FR4 circuit board, and the top layer of bumps is in contact with the heat sink. The silicon die in the model is partitioned into four 2 × 2 matrices of power grids, as shown in Figure 11b. Each grid represents a different function block. The 3D view of the temperature distribution for the three-layer chip in the package obtained using the COMSOL 4.1 software [28] is shown in Figure 12 [14]. This module was simulated using the Overheat-3D-IC software tool with the developed quasi-3D model. The temperature distributions on the surfaces of the upper and bottom silicon layers with TSVs are presented in Figure 13. They are in good agreement with the temperature distribution presented in Figure 12. The comparison with the simulated results obtained using the COMSOL software is presented in Table 2. It is seen that the Q3D thermal model is valid and gives a solution very close to the complete numerical solution obtained using a standard fully-3D FEM simulator.
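The package-level comparisons that follow rest on the standard definition of thermal resistance between the junction and a reference point; a minimal sketch, with invented measurement values:

```python
def theta(t_junction, t_ref, power):
    """Thermal resistance (K/W) between junction and a reference point:
    Theta = (T_junction - T_ref) / P_dissipated."""
    return (t_junction - t_ref) / power

# Invented example values for a 1 W test chip.
t_j, t_amb, t_case, t_board, p = 58.0, 25.0, 41.0, 46.0, 1.0
print("Theta_JA =", theta(t_j, t_amb, p))    # junction-to-ambient
print("Theta_JC =", theta(t_j, t_case, p))   # junction-to-case
print("Theta_JB =", theta(t_j, t_board, p))  # junction-to-board
```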
In an analogous way, the simulation results obtained using FloTHERM for the Analog Devices power small outline package (PSOP), with sizes 15.9 × 11 × 3.15 mm and a total power of 2 W, working at an ambient temperature T_AMB = 85 °C, were reproduced [11]. The maximal difference in the internal module temperature distributions obtained by the two different tools in the range of 85-130 °C was not more than 5-6 °C. These examples confirm that the developed Q3D thermal model is valid and describes the temperature distribution in different types of 3D IC packages adequately with respect to the fully-3D model.

The Standard BGA Package

A standard 17 × 17 mm² BGA package was used, with an 8.2 × 8.2 mm² die, a three-row peripheral ball array, 156 perimeter balls, and 16 central thermal balls; the chip power dissipation was 1 W [26]. The comparison of the measured and simulated results for the thermal resistance junction to ambient Θ_JA is shown in Table 3, showing that good agreement was achieved. Because the Xilinx FSGD2104 package was used as an element of the global TSV-IC-BGA module, it was selected as the device under test. The simulated and measured thermal resistances Θ_JB and Θ_JC of the package FSGD2104, which is used for field-programmable gate arrays (FPGA) [26], are presented in Table 4. It is seen that the proposed quasi-3D thermal model of the package provides results with reasonable accuracy.

High-Performance Flip-Chip BGA (HP-fcBGA)

High-performance flip-chip BGA (HP-fcBGA) is a popular package solution for higher pin count and superior heat dissipation. In [1], the measurement results for the thermal mode of these packages, as well as the results of their simulation using the FloTHERM software tool, were presented. We have compared the measured and FloTHERM-simulated thermal resistance junction to ambient Θ_JA with this resistance simulated using our software Overheat-3D-IC. It is seen in Figure 14 that our simulation results are in good agreement with this experiment.

Conclusions

The quasi-3D approach for thermal modeling of 3D integrated circuit systems in packages (SiPs) was developed.
It takes into account the following specific attributes of modern SiP constructions: (a) 3D integration of ICs and board; (b) a large number of thinned layers of different materials; (c) vertical z-axis interconnections. The classic heat-transfer equation for a 3D multilayer structure is reduced to a set of coupled 2D equations for the separate construction layers. As a result, the computational difficulties, processor time, and RAM volume are greatly reduced, while accuracy is preserved. The software tool Overheat-3D-IC was developed on the basis of the generalized Q3D package numerical model. Two modern types of 3D SiPs, i.e., IC-TSV-BGA and MC-embedded PCB, were analyzed using Overheat-3D-IC and the universal 3D simulator COMSOL. For the IC-TSV-BGA, the CPU times were 30 min and 330 min, respectively. The difference in the T_MAX determination was not more than 10%. The complete 3D thermal solution for the global IC-TSV-BGA construction was obtained. The heating problems for temperature-sensitive IC chips and TSVs were discussed using the set of 2D temperature maps. We can confirm that the Q3D analysis is more effective than the sub-modeling analysis used in [3]. The complete 3D thermal simulation of the MC-embedded PCB module fabricated by the novel WABE technology was carried out for the first time. It was shown that the multi-chip stack embedding technology drastically reduces the horizontal area of the module and, at the same time, increases the maximal die temperatures in equivalent proportion. Validation of the Q3D model was carried out. The simulated and measured values of the thermal resistances Θ_JA, Θ_JB, and Θ_JC and the maximal temperature T_MAX were compared for different types of packages. The simulation error was 10-20%. In particular, the quasi-3D model is applicable to modern 3D IC packages and also to the widely used packages of the flip-chip BGA series.

Conflicts of Interest: The authors declare no conflict of interest.
8,162
2020-06-12T00:00:00.000
[ "Engineering" ]
Thermal Spraying of Ultra-High Temperature Ceramics: A Review on Processing Routes and Performance
Ultra-high temperature ceramics (UHTCs) are materials defined as having melting points over 3000 °C that withstand temperatures beyond 2000 °C without losing functionality. As service environments become even more extreme, such materials will be needed for the next generation of aeronautic vehicles. Whether it is atmospheric re-entry or sustained hypersonic flight, materials with resistance to extreme temperature will be in demand. Due to the size and shape limitations encountered by current processing methods of bulk UHTCs, research on UHTC coatings, specifically thermal spray UHTC coatings, is accelerating. This paper first presents a general summary of UHTC properties, followed by a comprehensive summary of the processing routes and microstructures of current UHTC thermal spray coatings. Then, a detailed review of the oxidation and ablation resistance of UHTC thermal spray coatings is outlined. Finally, potential avenues for the development of new UHTC coating compositions are explored.
Introduction
Ultra-high temperature ceramics (UHTCs) are materials typified by melting points higher than 3000 °C and stability above 2000 °C. This group of ceramics is made up of carbides, borides and some nitrides of group four and five transition metals (Ti, Zr, Hf, V, Nb and Ta); they present strong covalent bonds, which are responsible for their elevated stability at high temperatures. UHTCs combine stability at extreme temperatures with high hardness, thermal conductivity and elastic modulus, good wear resistance and a low coefficient of thermal expansion. Due to the combination of properties UHTCs possess, they have been under investigation for some time for use in extreme aerospace applications, where, inevitably, materials are required to operate at extreme temperatures in oxidizing environments. These applications include rocket propulsion components, leading edges, control surfaces and nose cones for hypersonic flight and atmospheric re-entry craft (Ref 1-5). During sustained hypersonic flight and atmospheric re-entry, operating temperatures can be as high as 2200 °C (Ref 6, 7). With the modern proliferation of private spaceflight companies utilizing reusable craft and the desire to develop hypersonic flight technology for military and commercial purposes, UHTCs have remained materials of significant scientific interest (Ref 8). While much research in UHTCs has focused on sintered bulk materials, UHTC coatings have also been investigated. UHTC coatings have the advantage of being near net shape, while the size and shape of bulk UHTCs are limited by the processing routes needed to densify them (Ref 9, 10). Using current processing methods, such as spark plasma sintering or hot pressing, such high temperatures and pressures are needed to densify UHTCs that only small, simply shaped components can be fabricated. UHTC coatings have been used to reduce wear in machine parts and bearings, provide oxidation resistance for C- or SiC-based composites, provide corrosion resistance and act as diffusion barriers. Coatings can be deposited in numerous ways; for example, UHTC coatings have been produced using vapor deposition methods such as physical vapor deposition (PVD) and chemical vapor deposition (CVD) (Ref 14, 16-20).
While vapor deposition techniques have been used to form UHTC coatings and have the advantage of creating dense coatings at temperatures below the melting points of UHTCs, these processes can be limited by coating thickness (~20 µm), deposition efficiency and the size of the area that can be coated (Ref 21). In order to deposit thick UHTC coatings, thermal spray methods have to be used; however, the extreme melting points and the potential for oxidation pose some problems. This review will focus on thermal spraying of UHTC borides and carbides, specifically TiB2, ZrB2, HfB2, TiC, ZrC, HfC and TaC. The use of UHTCs in cermet (ceramic with a metallic binder) coatings is beyond the scope of this work; however, ceramic-ceramic composites will be discussed. The first section will give a general overview of the physical, mechanical and thermodynamic properties of these bulk UHTCs. The following section will give a brief introduction to the various thermal spray processes used to deposit UHTC coatings and how the parameters used within these processes affect the microstructure and properties of UHTC coatings. Of the properties discussed, particular attention will be paid to the high-temperature performance of UHTC coatings; the effect of a range of particle reinforcements on the oxidation and ablation resistance of UHTC composite coatings will also be examined. Finally, pathways for the next generation of UHTC coatings will be discussed.
Physical, Mechanical and Thermodynamic Properties of UHTCs
UHTC Borides
As early as the 1960s, at the height of the space race, UHTCs (specifically ZrB2 and HfB2) were investigated as solutions for the extreme temperatures encountered in first-generation spacecraft by Kaufman and Clougherty (Ref 22) at the United States Air Force Materials Laboratory. At the same time, in the Soviet Union, similar work was conducted by Samsonov at what is now the Frantsevich Institute for Problems in Materials Science in Kiev (Ref 23, 24). Owing to their excellent thermal and mechanical properties (especially high hardness, high modulus, high thermal conductivity and low thermal expansion coefficient), UHTC materials were found to be of interest for heat shields, rockets and structural components in these early spacecraft. More recently, these compounds have become subject to increased research for wear-resistant applications such as ball bearings, machine tools and engine valves (Ref 25). Given the success of Kaufman and Clougherty in characterizing the high-temperature properties of UHTC borides, much work on UHTCs over the subsequent years was focused on these compounds. Fahrenholtz et al. (Ref 26) provided a detailed summary of the properties of ZrB2 and HfB2, while work by Munro (Ref 27) provides similar information for TiB2. Key physical, mechanical and thermal properties for these materials are outlined in Table 1, where the high melting temperature and hardness can be appreciated (Fig. 1). After the value of the UHTCs' unique combination of properties had been determined, in the 1970s researchers began studies in an effort to understand the oxidation behavior of these materials, with much of the early work in this area again emanating from the Frantsevich Institute in Kiev and the USA (Ref 31, 32). UHTC borides undergo stoichiometric oxidation according to the reaction shown in Eq 1, where M is a group four or five transition metal (Ref 33, 34). At temperatures below 1200 °C, UHTC borides form a protective liquid B2O3 layer.
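For reference, the stoichiometric diboride oxidation that Eq 1 refers to has the standard textbook form below, shown for a group four metal M, which forms the dioxide (the group five metals Nb and Ta form the pentoxide M2O5 instead). This is a reconstruction from general boride oxidation chemistry, not a formula copied from Ref 33, 34:

$$\mathrm{MB_2} + \tfrac{5}{2}\,\mathrm{O_2} \longrightarrow \mathrm{MO_2} + \mathrm{B_2O_3}$$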
Oxygen diffusion through this protective liquid limits further oxidation. At higher temperatures, the B2O3 evaporates, leaving a nonprotective, porous metal oxide skeleton and leading to rapid oxidation. Due to the higher melting points and low vapor pressures of Zr and Hf oxides (2715 and 2758 °C, respectively), ZrB2 and HfB2 have greater high-temperature resistance than other UHTC borides (Ref 35). To further increase the oxidation resistance of these materials, the addition of silicon carbide (SiC) or other silicon-containing compounds (such as MoSi2 or TaSi2) creates a borosilicate glass outer layer which is stable up to temperatures of ~1600 °C (Ref 33).
UHTC Carbides
Like UHTC borides, the UHTC carbides were investigated in the 1960s by NASA and various defence agencies, and this work continued through to the 1990s and 2000s for use at high temperatures (Ref 36-39). ZrC has been investigated for various nuclear fuel applications (Ref 40). Carbides, in general, are renowned for their excellent hardness at high temperatures; in fact, Miyoshi and Hara (Ref 41) showed that even at 800 °C TiC maintained a microhardness of ~1700 HV (~17 GPa). Due to their high hot hardness, UHTC carbides have also been used in cutting tool applications (Ref 42, 43). Key physical, mechanical and thermal properties for the UHTC carbides covered in this review are listed in Table 2. As with the UHTC borides, the hardness and melting points stand out as being extreme. Compared to UHTC borides, the carbides have lower thermal conductivities, meaning that despite having higher melting temperatures, they are less attractive for use in heat shield applications at ultra-high temperatures. Although UHTC carbides have lower elastic moduli than borides at room temperature, they maintain their strength at elevated temperatures (>1000 °C) better than the borides. This means carbides are preferred in applications where higher thermal and mechanical loads are encountered (Ref 44). Unlike borides, the UHTC carbides are stable across a range of stoichiometries, as can be seen in the phase diagrams in Fig. 3 (Ref 53, 54). TiC, ZrC and HfC are all stable between ~37.5 and 50 at.% C, while TaC is stable between ~47.5 and 50 at.% C. This range of stable stoichiometries means UHTC carbides have potentially tailorable physical and mechanical properties. As can be appreciated from the data in Table 2 and the phase diagram in Fig. 3, HfC and TaC have some of the highest melting points of all materials. UHTCs will generally oxidize following the reaction in Eq 2 (the standard carbide oxidation MC + 3/2 O2 → MO2 + CO), where M is Ti, Zr or Hf, and Eq 3 (2TaC + 7/2 O2 → Ta2O5 + 2CO), where M is Ta. In environments with low oxygen pressure, carbon may remain un-oxidized. Oxidation of these compounds can be affected by a number of variables, such as chemical composition (it can be seen from the phase diagrams in the previous section that these carbides are not line compounds and can present a variety of stoichiometries), grain size and porosity.
Thermal Spraying of UHTCs
As described previously, the current processing routes for bulk UHTCs, such as spark plasma sintering and hot pressing, limit the size and shapes of components that can be produced. Thermal spraying techniques are already widely used in many industries to coat large areas relatively quickly. This section of the review will focus on the thermal spray processes used in research and their effect on the microstructure, mechanical properties, wear resistance, oxidation and ablation resistance of UHTC coatings.
UHTC Boride Coatings
Deposition and Microstructure of UHTC Boride Coatings
Atmospheric Plasma Spraying
Arguably the most versatile thermal spray process is atmospheric plasma spraying (APS). APS uses radio frequency or, more commonly, direct current arcs to ionize process gases, creating a plasma jet. As these unstable plasma ions reform into their gaseous states, a large amount of thermal energy is released, creating extremely high temperatures, up to 14,000 K, within the plasma jet. The primary process gas typically used in APS is argon, with hydrogen, nitrogen, helium or a combination thereof being used as secondary gases to modify the properties of the thermal plasma. Feedstock particles are injected into the gas stream, where particle velocities can be between 20 and 500 m/s depending on the size of the particle (Ref 63). [A table fragment appeared here, listing coefficients of thermal expansion of 6.7, 7.7, 6.6 and 6.3 × 10⁻⁶ K⁻¹ and thermal conductivities of 21, 24, 30 and 22 W m⁻¹ K⁻¹.] Phase analysis showed the boride phases to be dominant, but with some Zr/HfO2. The influence of spraying power on ZrB2 coatings has been studied by Hu et al., with results shown in Fig. 4. At 95 kW, the residual stresses in the coating caused a certain degree of peeling; hence, 75 kW was found to be the optimum spray power. No difference in phase composition was reported for the different coatings, with ZrB2 being the main phase detected, but ZrO2 and ZrO were also present. Conversely, Feng et al. found that when depositing a ZrB2-SiC coating at 30, 75 and 97 kW, all the coatings were highly porous (58, 43 and 53% porosity, respectively) regardless of spray power. The coatings deposited at the two higher powers showed a higher degree of fully melted feedstock. ZrO2 was also detected in the coating deposited at 97 kW, while it was not present in the other two coatings. The particle size of powder feedstocks typically utilized in HVOF thermal spraying and APS is limited to between 10 and 100 µm. Using powders of this size ensures the powder particles have enough momentum upon injection to penetrate the middle of the jet, where the highest temperatures are to be found, yet are small enough to melt completely in a very short period of time (Ref 74). Using nano- and submicron-scale feedstocks can lead to reduced splat size, reduced porosity and improved properties. To get around the particle-size limitation, a technique called suspension thermal spraying has been developed, in which small particles (<10 µm) are suspended in a liquid, which can flow through the feed system and has sufficient momentum to penetrate the high-temperature region of the flame. Using suspension plasma spraying (SPS), Yvenou et al. (Ref 75) deposited a TiB2 feedstock with a median particle size of 1.4 µm. XRD results showed no oxide phases present in the coating; however, porosity was high, as particles were not melting within the plasma plume.
Plasma Spraying in Inert Atmospheres
As discussed in the previous section, when using APS to spray boride-based feedstocks, many researchers have reported the presence of oxide phases in the deposited coatings. One study used APS and CAPS systems to spray ZrB2 powder. After the powder had been sprayed into water (to retain the feedstock as powder after spraying) via both systems, XRD diffractograms showed the APS technique to produce large peak intensities for ZrO2 phases, indicating a high degree of oxidation during the spraying process.
Comparatively, XRD analysis of the powder sprayed by CAPS showed large peak intensities for ZrB2 phases, while only some ZrO phase was detected. Depending on the spraying parameters used, the microhardness of the coatings deposited using CAPS was in the range of 9.8 to 15.7 GPa, with microhardness generally increasing with the power of the torch and the pressure inside the spraying vessel. The use of the CAPS system also ensured that the coating microstructures were all dense with minimal porosity. Similarly, Rietveld refinement was used by Kahl et al. to identify and quantify the phases present in APS and CAPS ZrB2 coatings. Using the CAPS system with an argon atmosphere reduced the amount of total oxide phases by 45.7 wt.% compared to the APS coating, and the average hardness of the coating was increased from 14.0 to 18 GPa. Another study compared an APS coating to one produced by VPS using a ZrB2 + 20 vol.% MoSi2 composite feedstock. XRD diffractograms of the two coatings showed the presence of a ZrO2 phase in the coating deposited by APS; the VPS coating showed no oxide phase. The microstructure of the APS coating showed interconnected porosity; meanwhile, the VPS coating had smaller, closed porosity. The porosity was measured as 9.3 and 6.8%, respectively. Like CAPS, ZrB2-based coatings deposited using VPS show no oxidation of the feedstock during spraying; however, these studies measured porosity in the coatings to be as high as ~10% (Ref 85, 86). A comparison between LPPS and HPPS ZrB2-based coatings was made by Bartuli et al. (Ref 87). Characterization of single splats showed distinct morphologies for each process, as shown in Fig. 5. The splats deposited using HPPS show a disc-like morphology, while the LPPS splats have a branched structure, indicating particles were fully molten when they impacted the substrate. The difference in morphology was due to the higher particle velocities achieved in LPPS, which created splashing as the particles impinged on the substrate. The authors suggest that the splats created by HPPS would offer improved cohesive and adhesive strength.
Shrouded Plasma Spraying
In an effort to maintain the inert atmosphere of VPS and CAPS while reducing the cost, some researchers have utilized a technique called shrouded plasma spraying to spray ZrB2-based coatings (Ref 88-90). Instead of the expensive vacuum and furnace systems required in CAPS and VPS, shrouded plasma spraying creates a contained or un-contained Ar or N2 curtain via an attachment on the end of the plasma torch, limiting the interaction between air and particles within the plasma jet. A detailed study on the effect of various shroud gas flow rates was conducted by Torabi et al. (Ref 90). This work found that increasing the Ar flow rate from 0 l/min (unshrouded) to 30 l/min and finally 150 l/min reduced the ZrO2 phase content from 41.6 wt.% to 14.5 wt.% to 4.8 wt.%, respectively. Increasing the shroud gas flow also altered the microstructure and splat morphology of the coatings. The unshrouded coating featured many un-melted particles and had a porous microstructure, while increasing the shroud gas flow led to a combination of fully melted splats and partially melted particles, as shown in Fig. 6, as well as less porous microstructures (Fig. 7 and 8).
Reactive Plasma Spraying
Some researchers have combined self-propagating high-temperature synthesis (SHS, where the constituent elements of a compound are reacted together at high temperatures) or reduction reactions with thermal spraying techniques in what is known as reactive plasma spraying (RPS). During RPS, reactions between precursor particles inside the plasma jet create the desired coating material in situ. One of the B4C-containing feedstocks resulted in the highest relative peak intensity of the ZrB2 phase; however, both the 15 and 30 wt.% feedstocks showed the presence of residual B4C. The ZrB2 coating had a microhardness of 1.6 GPa, much lower than a ZrO2 coating sprayed using similar parameters. The low hardness is linked to the highly porous coating microstructures; the authors suggest two reasons for this: un-molten ZrB2 particles, a consequence of the ZrB2 particles being formed in situ and having a short residence time in the high-temperature plasma jet, or the boron carbide reduction reaction continuing after the coating has been deposited, releasing gases. In SHS, the reaction between constituent elements, in this case Ti and B, becomes thermodynamically favorable in inert atmospheres. SHS relies on the ability of these highly exothermic reactions to be self-sustaining and, therefore, energetically efficient (Ref 94). The use of LPPS eliminated oxidation of the feedstock, though the coating had a high degree of porosity; meanwhile, the APS coating had improved density due to the use of Cr as a binder. In terms of composition, the coating produced using APS was made up of TiB2 and TiN phases, with Ti2O3 and TiO2 as well. Comparatively, the LPPS coating was mainly comprised of TiC0.3N0.7 and TiB2 with no oxide phases (the authors suggested residual N remained in the atmosphere despite the low-pressure vacuum). Microhardness values for the LPPS coating were measured to be 4.9 GPa, with the low hardness being attributed to the level of porosity in the coating; the corresponding value for the APS coating was 7.1 GPa.
High Velocity Oxy Fuel Thermal Spraying
High velocity oxy fuel (HVOF) thermal spraying is a form of flame spraying whereby a gas or liquid fuel (for example, hydrogen, kerosene, acetylene, propylene or natural gas) is ignited in the presence of oxygen. This creates a high-temperature, highly pressurized mixture of gases within the combustion chamber, into which the feedstock is injected either radially or axially. The feedstock is heated to a molten or semi-molten state within the hot gas stream. A small-diameter nozzle accelerates the particles and gas stream to supersonic velocities and directs them towards the substrate. In HVOF thermal spraying, particle velocities can reach 1000 m/s with jet temperatures of approximately 3000 K (Ref 63). Coatings produced by HVOF thermal spraying typically present a lower amount of oxidized phases than coatings produced by plasma spray, since the temperatures are lower and the particle velocities are higher. The high impact velocity means HVOF thermal spraying can create coatings with higher densities than other thermal spray processes. Attempting to prevent oxidation of the feedstock, Cheng et al. used an HVOF thermal spray system to produce a ZrB2 + 20 vol.% SiC + 10 vol.% MoSi2 composite coating (Ref 95). XRD of the coating showed no oxide phases. This could be due to the hydrogen/oxygen ratio used in the combustion, where excess hydrogen (3:1 as opposed to the stoichiometric 2:1) created a reducing flame (Ref 96).
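The reducing-flame point follows directly from the combustion stoichiometry of hydrogen, which consumes H2 and O2 in a 2:1 molar ratio; a 3:1 feed therefore leaves unburned hydrogen in the jet to scavenge oxygen before it can reach the feedstock. This is standard combustion chemistry rather than an equation taken from Ref 95 or 96:

$$2\,\mathrm{H_2} + \mathrm{O_2} \longrightarrow 2\,\mathrm{H_2O}$$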
The surface of the coating showed poorly bonded particles, indicating the feedstock was not fully melted during spraying. Table 3 outlines the spraying systems and parameters used in the APS, CAPS, VPS and HVOF thermal spraying studies discussed in this section. Despite the various feedstocks, spraying systems and spraying parameters employed, what is clear is that obtaining a dense, oxide-free diboride coating is very difficult without using vacuums or controlled atmospheres.
High Temperature Properties of UHTC Boride Coatings
Many researchers have attempted to characterize the oxidation mechanisms of boride coatings over the years, with one of the earliest studies being a TGA analysis of an LPPS coating by Bartuli et al. Other works (Ref 78, 85) reported that the same mechanism could be applied to VPS ZrB2-MoSi2 and ZrB2-Si coatings, with a thick, protective SiO2 layer being detected after 6 hours at 1773 K. In comparison, a ZrB2-MoSi2 coating deposited by APS was found to fail totally after 6 hours; the authors suggested this failure was due to increased porosity within the as-sprayed APS coating, meaning a continuous SiO2 protective layer could not form. The poor oxidation resistance of APS coatings was further characterized in work by Feng et al. (Ref 66). In this study, three ZrB2-SiC coatings were deposited using various plasma spray parameters and equipment. Oxidation products were detected after 9 hours at 873 K, with the authors suggesting complete evaporation of B2O3 due to its vapor pressure; after oxidation at 1273 K, the coatings had failed totally. The addition of AlN to a ZrB2-SiC coating was investigated by Grigoriev et al. (Ref 67). The coating was subjected to a thermocycling test where the sample was heated to ~2273 K, held for 2 min and then allowed to air cool for 10 min; this was repeated for 15 cycles. The addition of AlN drastically altered the oxidation mechanism of the coating. The authors reported the formation of an Al2SiO5-based solid solution around spheroidal ZrO2 particles; on top of this, a protective SiO2-Al2O3 solid solution layer formed, which acted as an effective barrier to the diffusion of O2. The authors suggested this coating showed excellent stability above 2173 K and offered more protection than typical UHTC coatings. One area where ZrB2-based coatings have been researched heavily over recent years is the protection of carbon-based composites from high-temperature oxidation (Ref 99). These composites are ideal for use as high-temperature structural components for atmospheric re-entry vehicles due to their excellent high-temperature mechanical properties. In use, these components will undergo thermochemical ablation due to oxidation at very high temperatures (>1800 °C) and high gas flow rates. However, carbon-based composites will oxidize readily at temperatures above 500 °C; thus, protective, oxidation-resistant coatings are required. Due to the high melting points of their oxides (2700 and 2800 °C, respectively), Zr- and Hf-based ultra-high temperature ceramics have been the main focus of research, as any liquid phases will be removed by the high gas flow rates, reducing the protection of the underlying component. As explained previously, the addition of SiC and other Si-containing ceramics to ZrB2 improves the oxidation resistance of the composite.
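The ablation performance discussed in the following sections is quantified by mass and linear ablation rates (mg/s and µm/s). As a reading aid, here is a minimal sketch of how such rates are computed from before/after test measurements; the function name and all input values are hypothetical, not taken from any cited study.

def ablation_rates(m0_mg, m1_mg, d0_um, d1_um, t_s):
    # Mass ablation rate [mg/s] and linear ablation rate [um/s] over a test
    # lasting t_s seconds; negative values would indicate a net gain in mass
    # or thickness (e.g. from oxide scale growth).
    return (m0_mg - m1_mg) / t_s, (d0_um - d1_um) / t_s

# e.g. a coating losing 30 mg of mass and 75 um of thickness in a 60 s exposure:
mass_rate, linear_rate = ablation_rates(500.0, 470.0, 300.0, 225.0, 60.0)
print(f"{mass_rate:.2f} mg/s, {linear_rate:.2f} um/s")  # 0.50 mg/s, 1.25 um/s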
ZrB2-SiC composite coatings have been produced using thermal spraying, and these coatings have been explored for use in protecting graphite, carbon/carbon (C/C) and carbon/silicon carbide (C/SiC) composites. Shrouding greatly reduced the ablation rates of the coatings: the coating deposited using no shroud had a mass ablation rate of 1857 mg/s, while increasing the shroud gas flow rate to 150 l/min reduced the ablation rate to 39.3 mg/s. As the shroud gas flow rate was increased, the oxide phase content and porosity of the coating were reduced, leading to the greater ablation resistance. The mechanism of ablation from this study is shown in Fig. 10; note how the SiC interlayer also oxidizes and liquid SiO2 fills the pores created by the oxidation of ZrB2. Using LPPS as the deposition method, Wang et al. (Ref 86) found that the addition of TaSi2 to a ZrB2-SiC composite could effectively reduce the ablation rate. The reasons for the reduction in ablation rate were twofold: the addition of TaSi2 produced a denser coating, and, during ablation, a higher fraction of the protective glassy SiO2 phase was produced, which could fill any pores in the oxide scale and prevent subsequent oxidation. A summary of the ablation tests conducted on UHTC boride coatings is shown in Table 4; where possible, the heat flux, surface temperatures and ablation rates have been reported.
Tribology and Wear of UHTC Boride Coatings
The tribology of bulk UHTC borides has been researched widely (Ref 101-106).
UHTC Carbide Coatings
Deposition and Microstructure of UHTC Carbide Coatings
Atmospheric Plasma Spraying
As with the boride coatings discussed earlier, due to the extreme melting points of UHTC carbides, plasma spraying is the most popular deposition technique. In the 1980s and 1990s, APS TiC coatings were investigated to protect nuclear fusion device components from thermal shock (Ref 107-112). Some of these early coatings suffered from high porosity, oxidation and decarburization (Ref 110, 113, 114). More recently, a detailed characterization of a TiC APS coating was carried out by Hong et al. (Ref 68, 115). The phases present in the coating were quantified as 87 wt.% TiC, 9 wt.% TiO2 (rutile) and 4 wt.% TiO. Porosity was measured at 8.0%. As with previous studies, the as-sprayed surface showed melted and un-melted particles, while the microstructure was largely dense and well bonded, with some microcracks caused by stresses upon cooling. Hardness and elastic modulus were also measured for this coating, at 7.7 GPa and 189.7 GPa, respectively. The authors suggested that these mechanical properties were lower than reported for bulk ceramics because of porosity levels, inter-splat strength and phase composition. Mahade et al. (Ref 116) deposited a TiC feedstock with a median particle size of 2.21 µm using SPS. The XRD diffractogram of the coating showed the main phases to be titanium oxycarbide (TiC0.1O0.9), TiC and Ti2O3, with smaller peak intensities of TiO2 (both anatase and rutile). The as-sprayed surface of the coating showed very fine (~3 µm) melted splats and some un-melted particles. The microstructure revealed uniformly distributed porosity, a few un-melted particles and good adhesion between splats; see Fig. 11. When depositing ZrC coatings with APS, researchers have typically found a small degree of oxidation, with ZrC forming monoclinic and tetragonal ZrO2 with small peak intensities relative to ZrC when characterized with XRD (Ref 117-120).
Generally, decarburization has been minimal; however, other works have found more severe oxidation of ZrC, with relatively large peak intensities of ZrO2 and other oxidation products detected (Ref 121, 122). Interestingly, in a study by Wu et al. (Ref 123), XRD detected small peak intensities of cubic ZrO2. Cubic ZrO2 is formed above 2370 °C, whereas between 1170 and 2370 °C tetragonal is the stable phase (monoclinic being formed below 1170 °C). The presence of this phase could indicate that higher temperatures were achieved in the plasma plume using this set of parameters compared to the other studies. The coating microstructures produced in all these studies are similar, with the surface showing a combination of melted and un-melted splats and the cross-sectional microstructure appearing fairly dense with minimal pores; a typical example from Wu et al. is shown in Fig. 12. Fewer studies have investigated APS of HfC coatings, but the results were similar (Ref 124-126). During spraying, some oxidation of HfC was reported, the microstructures of the coatings were dense, and the as-sprayed surfaces showed some melted and un-melted splats.
Controlled Atmosphere and Vacuum Plasma Spraying
As with thermal spraying of most non-oxide ceramics, researchers have turned to spraying in inert atmospheres or vacuums to protect the feedstock from oxidation. A comparison between APS and VPS ZrC coatings has also been made. Compared to agglomerated powder prepared by spray drying (SD), with the use of IPS a higher degree of melting was observed on the as-sprayed coating surface, porosity was reduced from 10.7 to 4.6%, and deposition efficiency was increased. In an early study, Varacelle et al. (Ref 133) investigated the effect of three VPS parameters on TiC coatings, specifically arc current, primary gas flow and secondary gas flow, using a Taguchi-style design of experiment. The lowest porosity (0.49%) and highest hardness (9.4 GPa) were found in the coating deposited using the highest power-to-gas-flow ratio, meaning high spray powers and relatively low primary gas flows led to a greater degree of melting of the TiC feedstock, better deposition efficiency and less porosity. In another early study, the effect of Ar and N2 atmospheres on CAPS TiC coatings was investigated (Ref 134). Minimal differences were noted between the two atmospheres: the microstructures appeared similar, the hardness of the coatings was similar (12.5 GPa for Ar and 12.75 GPa for N2), and decarburization was minimal in both cases. This led the authors to believe that, when spraying TiC in a controlled atmosphere, the cheaper N2 gas could be used. Suspension spraying under vacuum has been used to deposit an HfC feedstock with a median particle size of 7.08 µm. Due to the density of HfC, the powder had to be further crushed to ~200 nm particle size in order to make a stable suspension. Using a suspension with 20 wt.% solid loading, a coating of ~50 µm was produced. Despite spraying in a vacuum, the XRD diffractogram of the coating presented large relative peak intensities for HfO2, which was attributed to oxygen present in the ethanol in which the HfC particles were suspended. VPS has also been used to deposit TaC and TaC-based composite coatings, with researchers noting the formation of secondary phases. HVOF thermal spray has also been used to deposit a TiC coating, this time using a suspension of TiC particles between 2 and 3 µm in size in water. This study used three water-based suspensions: one comprised of 20 wt.% of TiC powder, the second containing 20 wt.
% milled TiC powder, and the final containing 20 wt.% of the powder with an added dispersant and the pH adjusted in an effort to make a more stable suspension. During spraying, all the feedstocks experienced significant oxidation; XRD diffractograms identified the main phases present in all of the coatings as TiO2 (rutile and anatase) and TiC. SEM images of the as-sprayed surface also showed a combination of melted and un-melted particles. The microstructure was mainly dense, with some carbide pullout and microcracking. The coating produced from the first suspension (TiC powder and water) had the lowest porosity, 1.9%, and the highest hardness, 5.2 GPa. Table 5 outlines the spraying systems and parameters used in the APS, CAPS, VPS and HVOF thermal spraying studies discussed in this section.
High Temperature Properties of UHTC Carbide Coatings
As with UHTC boride coatings, one area where UHTC carbide coatings have potential applications is in the protection of carbon-containing composites. Thus, the high-temperature properties, namely the ablation resistance, of these carbide coatings have been widely researched. The behavior of ZrC coatings when subjected to ablation by an oxyacetylene torch has been studied by Wu et al. Despite different deposition methods, the mechanism of ablation described by the authors was largely similar. In all cases, the only phase detected after ablation testing was monoclinic ZrO2. At high temperatures, ZrO2 will have a tetragonal or even cubic crystal structure, but upon cooling it will transition to the monoclinic phase; the volume change associated with this phase change resulted in the formation of cracks after testing, while escaping CO and CO2 gases from the oxidation process created pores. While the mechanism reported in these studies was similar, interestingly, some of the results were different. A comparison between the ablation resistance of VPS- and APS-deposited ZrC coatings was made by Hu et al. (Ref 97). The VPS coating offered better protection against ablation due to its less porous microstructure and lower oxidation during spraying, allowing a dense ZrO2 layer to be formed during ablation. In order to improve the ablation resistance of ZrC coatings, many researchers have focused on additions of other materials to form composites. Similar to UHTC boride coatings, Si-containing materials such as SiC and MoSi2 are common additives to carbide composite coatings, as they form a protective SiO2 layer at high temperature. In one study by Jia et al., due to the extreme temperatures, the ZrC became molten and exposed the ZrC-SiC layer below, causing volatilization of SiO and the formation of many pores on the surface of the coating. A more thorough investigation into the mechanism by which SiC addition can improve the ablation resistance of ZrC-based coatings was conducted by Jia et al. (Ref 120). In this work, a ZrC composite coating containing 20 vol.% SiC was subjected to ablation testing at three temperatures under a heat flux of 2.4 MW/m². At 2011 K, a glassy SiO2 phase was formed, encapsulating the ZrO2 and protecting the structure from further oxidation. When the temperature was increased to 2378 K, the SiO2 evaporated, leaving behind a porous, unprotective ZrO2 coating; the linear ablation rate increased to 2.5 µm/s, and the mass ablation rate was 0.49 mg/s.
However, as the temperature was increased further to 2543 K, the authors suggested the temperature was high enough for the composite oxide ZrO2-SiO2 to be semi-molten, even as SiO2 was evaporated. The semi-molten phase offers protection from further oxidation and is viscous enough not to be removed mechanically by the gas stream. In one final experiment, the authors increased the heat flux to 4.2 MW/m²; the coating failed completely under the increased heat flux. The pre-treatment of a ZrC-SiC feedstock by induction plasma spheroidization (IPS) was examined by Pan et al. (Ref 132). A coating made with this feedstock showed lower consumption during ablation testing compared to a coating produced with a spray-dried (SD) agglomerated feedstock. The authors suggested that this was due to the reduced porosity in the coating produced with the IPS-treated feedstock, allowing a dense, protective oxide scale to form. Liu et al. (Ref 127) compared the ablation resistance of ZrC-SiC, ZrC-MoSi2 and multilayer ZrC-SiC/ZrC-MoSi2 coatings. Both single-layer coatings were found to offer insufficient protection. While a protective, liquid SiO2 layer was formed, which filled pores and bonded ZrO2 on the surface of the ZrC-SiC coating, this caused a layer underneath to become porous, as active oxidation of SiC caused SiO to diffuse towards the surface of the coating. The authors believed this would lead to weakened adhesion between the oxidized coating layers and any remaining material beneath, eventually causing failure of the coating. As for the MoSi2-containing coating, the build-up of the oxidation product MoO3, which, unlike the other oxidation products CO and CO2, was unable to pass through the ZrO2 layer, created a bubble which, when the pressure was high enough, burst and ruptured the coating. In comparison with the single-layer coatings, the multilayer coating performed very well. The outer ZrC-SiC layer was able to form protective SiO2, which prevented the formation of destructive MoO3 in the ZrC-MoSi2 inner layer. Oxidation of the inner layer produced Si, which was able to diffuse upwards, oxidize and eliminate the porous lower layer seen in the ZrC-SiC coating. Diagrams for all three of these ablation mechanisms are shown in Fig. 17. In another work looking at ZrC-MoSi2 coatings, by reducing the heat flux from 3.01 to 1.94 MW/m², the authors suggested that MoSi2 could be a suitable additive for ablation-resistant coatings (Ref 147): the rate of SiO2 evaporation from the surface was lower than the rate of formation of SiO2 from the oxidation of MoSi2. A stable SiO2 layer in turn would prevent the formation of the destructive MoO3 species, preserving the coating. As MoSi2 content was increased from 0 to 20 to 40 vol.%, the mass ablation rate reduced from -2.80 to -0.92 to -0.68 mg/s, respectively. While some researchers have had success using SiC-containing composites, these are limited by how rapidly they can be depleted when active oxidation of SiC occurs and SiO2 vaporizes, leaving behind a porous structure. Instead of SiC, recent research has focused on the addition of other compounds. Studies (Ref 125, 126) found that a single-phase HfC coating was not enough to protect against ablation. Similar to the behavior of ZrO2, the authors reported that when the HfC was oxidized, the HfO2 became porous and loose, allowing oxygen to diffuse into the coating. In these studies, the authors added 10, 20 and 30 vol.
% TaC to HfC coatings. Under ablation, the coatings oxidized to form liquid Ta2O5 and solid HfO2 and Hf6Ta2O17. At 10 vol.% TaC addition, the Ta2O5 was able to seal any cracks and pores on the oxide surface, as shown in Fig. 18. As the TaC content was increased, a composite Ta2O5-HfO2 liquid oxide was formed (Ref 144). Both works found that TaC coatings with SiC additions provided the best protection from ablation. Single-phase TaC coatings oxidized to liquid Ta2O5, which was removed by the shearing effect of the gas flow. When SiC was added, a Ta2O5-SiO2 mixed oxide was formed, which had a higher density and could withstand erosion. A summary of the ablation tests conducted on UHTC carbide coatings is shown in Table 6; where possible, the heat flux, surface temperatures and ablation rates have been reported.
Tribology and Wear of UHTC Carbide Coatings
Due to having the highest hardness of the UHTC carbides, TiC is the most widely researched for wear-resistant applications. In fact, it is the only thermal spray coating material of all the UHTC carbides to have had its tribological properties investigated thoroughly. Hong et al. (Ref 68) prepared a TiC coating using APS, which was subjected to wear tests under 20 and 50 N loads against a WC-Co ball. These gave COFs of 0.53 and 0.49 and wear rates of 0.07 × 10⁻⁵ and 2.42 × 10⁻⁵ mm³ N⁻¹ m⁻¹, respectively; the wear mechanisms were described as fatigue and tribo-oxidation under both sets of conditions. The TiC coating showed much lower wear rates under both loads than a TiB2 coating tested under the same conditions. In a further study, the authors tested the same TiC coating against a range of different ball materials under a 50 N load; specifically, WC-Co, 304 stainless steel and Si3N4 balls were used (Ref 115). Against the steel ball, the coating showed a low wear rate of 2.55 × 10⁻⁶ mm³ N⁻¹ m⁻¹ due to the relative softness of the ball. A COF of 0.65 was attributed to the wear debris of the coating acting as an abrasive and ploughing the softer steel ball; some evidence of adhesive wear was also detected. When tested with the Si3N4 ball, a low COF of 0.46 and a wear rate of 9.76 × 10⁻⁶ mm³ N⁻¹ m⁻¹ were reported, due to the oxidation of the ball to form SiO2; the fluctuation of the COF was high, however, due to spallation of this oxide. Due to the high hardness of the WC-Co ball, the wear rate against it was much higher (2.42 × 10⁻⁵ mm³ N⁻¹ m⁻¹). The tribological properties of VPS TiC coatings against WC-Co balls were also tested by Guo et al. (Ref 135). After testing under loads of 20 and 50 N, the authors found that the addition of Mo to the coating reduced the wear rate and COF under both conditions. The added ductility of the Mo also helped change the wear mechanism from particle pullout and fatigue wear to abrasive wear. An SPS TiC coating was deposited by Mahade et al. (Ref 116); SPS allows the deposition of feedstocks with extremely fine particle sizes, potentially improving wear resistance by reducing splat and pore size. The coating was subjected to a sliding wear test against a WC-Co pin under 5 kgf, which resulted in a wear volume of 0.2129 mm³. A TiC-Cr2O3 coated material has also been tested, at loads of 0.5, 1 and 2 kg; the wear rates, however, were high, which was attributed to poor bonding between the TiC and Cr2O3.
Reinforced UHTC Coatings
As explored in previous sections, a range of particle reinforcements (SiC, MoSi2, etc.)
added to UHTC coatings has already been investigated by researchers, with the primary aim of improving high-temperature performance. As with many ceramics, however, UHTCs suffer from intrinsic brittleness, which can limit their application. Research into sintered UHTCs over the years has covered various toughening mechanisms that can be incorporated into a UHTC composite, largely focused on continuous fiber reinforcement with C or SiC fibers (Ref 150).
High Entropy UHTC Coatings
Borrowing from previous work on high-entropy alloys (HEAs) and high-entropy ceramics (HECs), high-entropy ultra-high temperature ceramics (HE-UHTCs) have garnered significant interest over the last five years (Ref 164, 165). Early work showed that, up to 1200 °C, the weight gain was much lower than for some of the constituent borides, for example TiB2 and ZrB2. While HEC and HE-UHTC thermal spray coatings are yet to be developed, HEA coatings have been deposited, using a variety of thermal spray processes, to provide wear, corrosion and oxidation resistance (Ref 169).
Summary
As the next generation of spacecraft and hypersonic flight applications is developed, UHTCs will become materials of great importance because of their high melting points and good mechanical properties. Due to the limitations of current processing methods, only small, simply shaped bulk UHTC components can be formed. To alleviate this problem, UHTC coatings can be employed, and as C- and SiC-based composites become more widely used as structural components in aeronautics, protective coatings will be required to protect them from the most extreme of environments. While much work on UHTC coatings has been done outside of the public domain, close collaboration with industrial partners must be sought for future research. Due to the applications UHTC coatings are suited to, this will help produce viable processing conditions that can be achieved on an industrial scale and testing procedures that will represent expected service environments. This paper has presented a detailed review of UHTC coatings produced by various thermal spray processes. Because of the ultra-high melting temperatures, plasma-based thermal spray techniques have been found to be the most popular for depositing UHTC coatings due to the temperatures which can be reached within the plasma plume itself. To prevent oxidation of UHTC feedstocks, spray systems have often been contained within inert atmospheres or vacuums. While successful at eliminating oxide phases within the coatings, such setups remain expensive. To this end, shrouded plasma spray systems have shown promise as a lower cost alternative; however, further development is needed to deposit completely oxide-free coatings. The oxidation and ablation resistance of UHTC-based coatings has been widely reported, and the mechanisms are largely understood. Various UHTC composite coatings have been investigated as a means to improve oxidation and ablation resistance, and composite coatings with Si-containing materials (such as SiC and MoSi2) have proved to be particularly effective at this. Despite widespread research on the tribology of bulk UHTCs, investigations into the wear resistance of UHTC thermal spray coatings have been sporadic.
For example, thin film TiB2, TiC and …
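For reference, the wear data quoted in the tribology sections above are specific wear rates (mm³ N⁻¹ m⁻¹), i.e., wear volume normalized by normal load and sliding distance. A minimal sketch of that calculation follows; the function name and all input values are hypothetical, not taken from any cited study.

def specific_wear_rate(wear_volume_mm3, load_n, distance_m):
    # Archard-style specific wear rate in mm^3 N^-1 m^-1:
    # wear volume divided by normal load and sliding distance.
    return wear_volume_mm3 / (load_n * distance_m)

# e.g. a 0.21 mm^3 wear scar after sliding 500 m under a 50 N load:
print(f"{specific_wear_rate(0.21, 50.0, 500.0):.2e} mm^3/(N m)")  # ~8.40e-06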
10,477.6
2022-04-01T00:00:00.000
[ "Materials Science", "Engineering" ]
Inhibiting Endocannabinoid Hydrolysis as Emerging Analgesic Strategy Targeting a Spectrum of Ion Channels Implicated in Migraine Pain
Migraine is a disabling neurovascular disorder characterized by severe pain, for which efficient treatments are still limited. Endocannabinoids, the endogenous painkillers, have emerged, as an alternative to plant cannabis, as promising analgesics against migraine pain. In this thematic review, we discuss how inhibition of the main endocannabinoid-degrading enzymes, monoacylglycerol lipase (MAGL) and fatty acid amide hydrolase (FAAH), could raise the levels of endocannabinoids (endoCBs) such as 2-AG and anandamide in order to alleviate migraine pain. We describe here: (i) migraine pain signaling pathways, which could serve as specific targets for antinociception; (ii) the divergent distribution of MAGL and FAAH activities in the key regions of the PNS and CNS implicated in migraine pain signaling; (iii) the complexity of the anti-nociceptive effects of endoCBs mediated by cannabinoid receptors and through direct modulation of ion channels in nociceptive neurons; and (iv) the spectrum of emerging potent MAGL and FAAH inhibitors which efficiently increase endoCB levels. The specific distribution and homeostasis of endoCBs in the main regions of the nociceptive system and their generation 'on demand', along with the recent availability of MAGL and FAAH inhibitors, suggest new perspectives for endoCB-mediated analgesia in migraine pain.
Introduction: Migraine Pain Signaling Pathways as Target for Antinociception
Migraine is a primary headache disorder in which one of the worst symptoms is severe throbbing pain [1]. The molecular mechanisms underlying migraine pain are still mostly unknown, but current evidence supports the involvement of both central and peripheral mechanisms in this common neurological disorder [2,3]. It is widely accepted that migraine pain originates from the meninges in the trigeminovascular complex, composed of nociceptive Aδ- and C-fibers projecting from the trigeminal ganglion (TG) and innervating the local vasculature and connective tissues in the meninges (Figure 1) [4]. These local trigeminal nerve terminals can release the neuropeptide calcitonin gene-related peptide (CGRP), which plays a central role in migraine pain and represents an important target for anti-migraine interventions [5]. In addition, there is a release of histamine, serotonin and cytokines from mast cells, ATP and nitric oxide from endothelial cells, and substance P and acetylcholine from the peripheral nerve fibers [6-10]. These pro-nociceptive events can be initiated by different triggers. Among them are mechanical forces coming from pulsating intracranial vessels, which can activate mechanosensitive Piezo1/2 receptors expressed in the meningeal afferents, and degranulation of multiple meningeal mast cells, which can be initiated by stress or by cortical spreading depression (CSD) [11-15]. Moreover, the release of CGRP and the degranulation of mast cells could be induced by antidromic spiking, which comes from central to peripheral nerve endings in the meninges [12]. Most of these local pro-inflammatory molecules can directly activate and sensitize meningeal peripheral nerve endings, making them highly susceptible to chemical and mechanical stimuli [13].
These structures can interact with each other via chemical or mechanical communications, forming a vicious circle which promotes and supports neuroinflammation, activation and sensitization of nociceptors. Nociceptive signalling (red arrows) can be initiated by mechanical forces from pulsating dural vessels, by CSD- or stress-induced degranulation of mast cells, or by antidromic spiking directed to the meninges and associated with the release of several neuropeptides including CGRP. Migraine-related nociceptive signalling is transmitted from the meninges through the brainstem (zoomed down in grey box) trigeminal nucleus caudalis (TNC) and thalamus (purple) to the higher pain centres in the cortex, performing the function of pain perception. Opposite to the ascending nociceptive signalling, the descending inhibitory control of the brainstem provides the anti-nociceptive function (dark grey arrow). A migraine attack can start with cortical spreading depression (CSD), a phenomenon typical for migraine with aura, with massive depolarization of neurons and glial cells slowly propagating along the cortex. Together, these pro-inflammatory and pro-nociceptive molecules released by interacting nerve fibers, vessels and immune cells form a sort of vicious circle, which further promotes the sustained state of inflammation and the persistent activation and sensitization of nociceptors [14].
Blocking the release of CGRP represents one of several possible mechanisms to disrupt this pro-nociceptive vicious circle. Likewise, this positive pro-nociceptive loop can be broken by the stabilization of local mast cells, which form a neuro-immune synapse with trigeminal nerve endings [10]. Apart from the important role of peripheral meningeal afferents in the initiation of migraine pain, there are studies proposing a pro-nociceptive role for the somas of trigeminal neurons located in the ganglion, which cross-talk with the surrounding satellite glial cells [16,17]. Interestingly, the release of CGRP from meningeal fibers and from the somas of neurons in the trigeminal ganglion can be differently sensitive to the inhibitory action of anti-migraine drugs, such as the agonists of the serotonin 5-HT1 receptor [18]. Together, these data suggest that, at the periphery, there are two distinct triggering zones for migraine pain (Figure 1). However, despite the fundamental role of the peripheral structures, long-lasting headache also involves central mechanisms, which finally result in central sensitization [2,3]. Such a broad view can better explain the whole spectrum of phenomena typical for migraine, which in many senses is similar to other diffuse chronic pain conditions [19]. In the CNS, the brainstem trigeminal nucleus caudalis (TNC) collects and further transmits the incoming nociceptive signals from the meninges to the thalamus (Figure 1) and then to the anterior cingulate cortex (ACC), amygdala and insular cortex, the structures related to the emotional perception of migraine pain [20]. On the other hand, the descending anti-nociceptive control of the brainstem can counterbalance and eventually block the nociceptive traffic from the periphery (Figure 1), keeping the 'gates' for pain signaling closed in normal conditions but, probably, opening them during a migraine attack [10]. The early involvement of cortical areas in migraine pathology takes place in the less frequent form of migraine with aura, which typically starts with the development of CSD (Figure 1), a wave of strong depolarization of cortical neurons and glial cells [21]. This is an example of one of the key migraine events where the origin of the attack is localized within the CNS. Brain oedema, associated with CSD [21], can mechanically compress the meningeal tissues, facilitating the activation of mechanosensitive Piezo1/2 channels in local nerve fibers [11]. From the therapeutic perspective, CSD represents a target for damping down the harmful hyperexcitable neuronal state associated with elevated glutamate release [22,23]. To summarize, migraine pain is initiated and supported by interactions between the peripheral meningeal nociceptive system, the brainstem network and the central pain centers [24]. Thus, migraine pain can potentially be blocked at different levels by targeting distinct structures and receptor systems specifically expressed within these structures. A deeper knowledge of the location and the leading mechanism of the multicomponent migraine pain may give a chance to block pain most efficiently, in a personalized manner, in a given migraine patient. Figure 1 illustrates the pain-triggering peripheral zones and several relay stations for pain generation and transmission, finally culminating in the CNS. For migraine, which is heterogeneous in nature, acute and prophylactic pharmacotherapy [3,25] may work differently in distinct patients according to the prevailing involvement of distinct pain-related structures.
The clearest example, which requires a specific approach, is migraine with aura, where the main aim of preventive therapy is the reduction of cortical hyperexcitability. Currently, the field of personalized medicine is under active development, and effective treatments such as new types of 5-HT1 agonists, CGRP receptor inhibitors, the recently approved anti-CGRP monoclonal antibodies and botulinum neurotoxin serotype A (reviewed in [5,26]) suggest a spectrum of promising therapeutic strategies. However, despite clear progress with these innovative approaches, many patients still remain untreated [26], demonstrating a need for more innovative types of migraine therapy. Apart from the synthetic anti-migraine drugs mentioned above, an alternative strategy could be to enhance the efficiency of endogenous protective mechanisms inhibiting pain. For this aim, the natural anti-nociceptive drive mediated by serotonergic and noradrenergic agents, the endogenous opioid system, or other endogenous molecules and inhibitory neuronal networks can be employed [10]. Relying on this strategy, in this review we aimed to show the promising perspectives of engaging the endogenous endocannabinoid system (ECS) in order to inhibit migraine pain at its origin sites or at key points of transmission of nociceptive signals to the higher pain centers.
Main Components of the ECS as a Target for Analgesia
In general, the ECS works as a homeostatic regulator in essentially all organ systems to control many physiological processes, including nociception [27]. The ECS is composed of the primary endoCBs 2-arachidonoyl glycerol (2-AG) and N-arachidonoyl ethanolamide (alias anandamide, AEA) and their synthetic enzymes diacylglycerol lipase (DAGL) and NAPE-specific phospholipase D (NAPE-PLD), respectively. There are also the endoCB-degrading enzymes monoacylglycerol lipase (MAGL) and fatty acid amide hydrolase (FAAH), and at least two G-protein-coupled receptors, CB1 and CB2, mediating the signaling induced by endoCBs [28]. Figure 2A shows the main steps in the synthesis and degradation of endoCBs. The primary endoCB 2-AG is produced locally, on demand, according to the intensity of neuronal activity, from membrane lipid precursors as a result of activation of phospholipase C (PLC) in cells that also express DAGL [29-31]. DAGL converts the PLC product diacylglycerol (DAG) into 2-AG or another monoacylglycerol, called 2-oleoylglycerol (2-OG) [29]. 2-AG is degraded by enzymatic hydrolysis into glycerol and free arachidonic acid by several enzymes: primarily by the membrane-attached presynaptic MAGL (Figure 2A), but also by the recently identified alpha-beta hydrolase domain proteins (ABHD6 and ABHD12) [29,32,33]. Instead, AEA and other N-acyl ethanolamines (NAEs), such as palmitoylethanolamide (PEA) and oleoylethanolamide (OEA), are synthesized from N-acyl-phosphatidylethanolamine (NAPE) by NAPE-PLD [29,34]. AEA, like the other NAEs, is hydrolyzed by FAAH, which is also a membrane-bound enzyme (Figure 2A) [29], and by N-acylethanolamine-hydrolyzing acid amidase (NAAA), which is typically more active in peripheral tissues [35].
The ECS is composed of the primary endoCBs 2-arachidonoyl glycerol (2-AG) and N-arachidonoyl ethanolamide (anandamide, AEA), together with their synthetic enzymes diacylglycerol lipase (DAGL) and NAPE-specific phospholipase D (NAPE-PLD), respectively. There are also the endoCB-degrading enzymes monoacylglycerol lipase (MAGL) and fatty acid amide hydrolase (FAAH), and at least two G-protein-coupled receptors, CB1 and CB2, mediating the signaling induced by endoCBs [28]. Figure 2A shows the main steps in the synthesis and degradation of endoCBs. The primary endoCB 2-AG is produced locally, on demand, according to the intensity of neuronal activity, from membrane lipid precursors as a result of activation of phospholipase C (PLC) in cells that also express DAGL [29-31]. DAGL converts the PLC product diacylglycerol (DAG) into 2-AG or another monoacylglycerol, called 2-oleoylglycerol (2-OG) [29]. 2-AG is degraded by enzymatic hydrolysis into glycerol and free arachidonic acid by several enzymes, primarily by the membrane-attached presynaptic MAGL (Figure 2A), but also by the recently identified alpha-beta hydrolase domain proteins (ABHD6 and ABHD12) [29,32,33]. In contrast, AEA and other N-acyl ethanolamines (NAEs), such as palmitoylethanolamide (PEA) and oleoylethanolamide (OEA), are synthesized from N-acyl-phosphatidylethanolamine (NAPE) by NAPE-PLD [29,34]. AEA, like the other NAEs, is hydrolyzed by FAAH, which is also a membrane-bound enzyme (Figure 2A) [29], and by N-acylethanolamine-hydrolyzing acid amidase (NAAA), which is typically more active in peripheral tissues [35].

As summarized in Figure 2B, MAGL, in contrast to FAAH, is the prevalent endoCB-hydrolysing enzyme in the trigeminal ganglion, whereas in the brain both MAGL and FAAH are highly active. Despite the highly active state of both FAAH and MAGL in the brain, the basal level of 2-AG there is much higher than that of AEA, owing to its higher synthesis. In contrast, in the trigeminal ganglion, the level of AEA appears to be high due to lower FAAH activity. Apart from enzymatic degradation, extracellular endoCB levels are kept physiologically low, presumably by uptake processes whose nature remains not fully resolved [29]. Indeed, AEA sequestration has been associated with different mechanisms mediated by fatty acid binding proteins (FABPs) [36], heat shock proteins [37] and sterol carrier protein 2 [38] located in lipid rafts [39], or by bidirectional membrane transporters [40]. It is under investigation whether similar mechanisms also regulate 2-AG uptake and/or sequestration [41].

The ECS performs several vital functions in both the CNS and the periphery, including the modulation of excitability and neurotransmission via presynaptic CB1 receptors and the regulation of the immune system, mainly through CB2 receptors. Recently, the ECS has been considered one of the main targets for achieving analgesia in chronic pain [42]. This type of analgesia could be a desirable alternative to opioids, which produce effective pain relief but at the expense of several serious side effects, including psychotropicity, tolerance and addiction [43]. Thus, a range of cannabis-related chemical tools has emerged recently, including phytocannabinoids, synthetic cannabinoids and endoCBs [44]. Among them, endoCBs are especially attractive, as they are naturally produced locally and 'on demand' in the key regions of the nociceptive system and, due to their physiological properties, have fewer side effects than plant cannabinoids.
Some studies have already revealed that enhanced levels of 2-AG and AEA in certain areas of the nervous system, obtained after inhibition of their respective degrading enzymes MAGL and FAAH, produced analgesic effects almost free of side effects [45]. A more detailed description of MAGL- and FAAH-targeted analgesia via endoCBs is presented in Sections 3 and 4.

MAGL and FAAH Activity in Migraine-Related Areas of the Nervous System

The endoCB-degrading enzymes MAGL and FAAH are expressed in structures related to pain origin, nociceptive transmission and the perception of pain (Figure 2B) [46,47]. However, the relative activity of these two enzymes, the major factor determining the functional role of 2-AG and AEA as endogenous analgesics, is not equal in the PNS and CNS. As shown in Figure 2B, the endoCB-hydrolysing enzymes MAGL and FAAH are differentially active in the trigeminal ganglion, which is a part of the peripheral nociceptive system, and in the brain areas where pain is finally perceived [47]. Indeed, based on the activity-based protein profiling (ABPP) method, which identifies active serine hydrolases including MAGL and FAAH, we found that, in the trigeminal ganglion, the MAGL activity is much higher than that of FAAH (Figure 2B) [47]. Accordingly, the level of endoCBs at the periphery is expected to be unbalanced in favor of accumulated AEA, while the amount of 2-AG should be basally low due to active degradation by MAGL. Notably, this imbalance could be changed by the inhibition of MAGL activity. Thus, in the trigeminal ganglion, the MAGL/2-AG axis is a highly tunable target for pharmacological interventions aiming to reduce peripheral mechanisms of migraine pain through enhanced levels of endoCBs.

In contrast to the peripheral trigeminal nociceptive system, FAAH and MAGL activities are comparable at the cortical level (Figure 2B) [47]. Thus, in the CNS, the dual inhibition of these two endoCB-degrading enzymes could be an attractive option for reducing the central transmission of migraine-related pain signalling. There is, however, clear evidence that, in the CNS, the level of 2-AG is much higher than that of AEA [48], suggesting a leading role of 2-AG in the 'natural' modulation of pain processing in the brain. Indeed, high 2-AG synthesis can be achieved in the brain after increased neuronal activity, through the subsequent enhancement of phospholipase C (PLC) and diacylglycerol lipase (DAGL) activities along with the rise of calcium in neurons and astroglia, making the synthesis of 2-AG greater than that of AEA [49]. Notably, even similar levels of endoCBs at the same location do not predict equal activity: for instance, AEA is a partial agonist at CB1/CB2 receptors, while 2-AG is a full agonist at both receptor types [50].
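To make the logic of enzyme-targeted analgesia concrete, the toy model below treats 2-AG turnover as first-order synthesis balanced by MAGL-mediated degradation; all rate constants, the inhibition parameter and the function name are illustrative assumptions, not measured values from the studies cited above.

import numpy as np  # only used if extended to time courses

def two_ag_steady_state(synthesis_rate, k_magl, magl_inhibition):
    """Steady-state 2-AG level in a toy one-compartment model.

    d[2-AG]/dt = synthesis_rate - k_magl * (1 - magl_inhibition) * [2-AG]
    At steady state the level equals synthesis divided by the effective
    degradation rate; magl_inhibition is the fraction of MAGL blocked.
    """
    k_eff = k_magl * (1.0 - magl_inhibition)
    return synthesis_rate / k_eff

# Illustrative numbers only: baseline vs. 90% MAGL inhibition.
baseline = two_ag_steady_state(synthesis_rate=1.0, k_magl=2.0, magl_inhibition=0.0)
inhibited = two_ag_steady_state(synthesis_rate=1.0, k_magl=2.0, magl_inhibition=0.9)
print(f"2-AG level rises {inhibited / baseline:.0f}-fold")  # 10-fold in this toy

The same algebra makes clear why the effect of an inhibitor is strongest where the targeted enzyme dominates clearance, as MAGL does in the trigeminal ganglion.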
The inhibition of MAGL, the main 2-AG degrading enzyme at the periphery (Figure 2) [47], represents a potential mechanism for blocking the early events in the transmission of migraine pain. However, the sustained nociceptive signalling in the meningeal trigeminovascular system could also be modulated by AEA acting on local immune cells [51]. Thus, the dura mater is enriched with mast cells [6,52], whose degranulation can trigger a nociceptive cascade of signalling in trigeminal afferents via the release of serotonin [8,10,53]. Notably, one of the analogs of AEA, methanandamide, inhibits the degranulation of dural mast cells through CB2 receptors [53], supporting the notion that these immune cells might also be a target for raised endoCBs, in particular AEA. Therefore, various treatments promoting 2-AG and AEA signalling in the local environment surrounding meningeal afferents can potentially reduce the generation and transmission of pain to the second-order brainstem neurons [54]. In conclusion, in addition to the evident role of 2-AG, there are data showing a role of FAAH/AEA-mediated signaling as a target for peripheral analgesia.

To summarize, endoCBs, with their specific receptors and their synthesizing and degrading enzymes, are widely but not equally expressed in structures involved in migraine pain generation, transmission and perception [47,55,56]. Thus, the selective enhancement of 2-AG and AEA via MAGL and FAAH inhibition, respectively, can provide a beneficial reduction of pain triggering, pain transmission and the excessive cortical excitability underlying migraine pathophysiology.

Distribution of CB1 and CB2 Receptors and Retrograde endoCB Signaling in the Nociceptive System

According to the traditional view, endoCBs mediate their physiological effects via two main inhibitory Gi/o-protein-coupled cannabinoid receptors, CB1 and CB2 [28]. Both in the CNS and at the periphery, the modulation of neurotransmission is mainly mediated by neuronal presynaptic CB1 receptors [54]. CB1 receptors are especially abundant in central neuronal networks [57]. In contrast to CB1, CB2 receptors are widely present in immune cells, which are enriched in the meninges, as well as in microglia, but they are also found in brainstem neurons [45,58,59]. It is important that, unlike adenosine, which selectively blocks the release of glutamate but not of GABA [60], the activation of CB1 receptors inhibits transmitter release from both GABAergic and glutamatergic neurons [61-64]. Figure 3 shows that, in the primary nociceptive afferents, activation of CB1 by endoCBs results in the inhibition of CGRP release from peripheral terminals, while in the central processes, endoCBs block glutamate release, which mediates the transmission of nociceptive signals to the second-order neurons in the TNC [65]. Thus, the combination of these two inhibitory effects on secretion provides added value for endoCB-mediated anti-nociception.

Within the CNS, endoCBs are produced locally at the postsynaptic membranes, from where they are released and travel trans-synaptically, in a retrograde manner, to activate presynaptic CB1 receptors. Indeed, depolarization-induced suppression of transmitter release in excitatory and inhibitory synapses (DSE/DSI), mediated by retrograde endoCB signaling, is a well-studied phenomenon in the CNS [64,66,67]. Notably, in the phenomenon of DSE, the role of 2-AG is much more important than that of AEA [68], consistent with its leading role in the control of synaptic transmission. The anti-nociceptive potential of cannabinoid CB1 receptors is well established [69,70]. In the synapse coupling the primary afferent with the second-order nociceptive neuron (Figure 3), glutamate, via metabotropic mGluR receptors, enhances the activity of phospholipase C (PLC), which, in turn, stimulates 2-AG synthesis by DAGL from the precursor molecule diacylglycerol (DAG) [71]. Calcium influx, promoted mainly by postsynaptic NMDA receptors, further supports 2-AG and AEA synthesis from the membrane lipid precursors [72].
Together, these concerted actions represent an efficient endogenous negative feedback mechanism limiting pain signal transmission in a use-dependent manner. Notably, the performance of this mechanism of autoinhibition also critically depends on the activity of MAGL and FAAH, which limits the levels of both endoCBs. In addition to signaling via neuronal CB1 receptors, at the spinal and supraspinal levels of the CNS, endoCBs can suppress pain by acting via glial CB2 receptors [54].

Figure 3 (caption). Nociceptive spiking, Ca2+-dependent CGRP release in the peripheral nerve terminal (left) and glutamate release in the central nerve terminal (right) are the main targets for endoCBs leading to pain inhibition. In the peripheral nerve terminal, the activation of CB1 receptors by endoCBs results in the inhibition of voltage-gated calcium channels (VGCC), resulting in reduced CGRP release. The CB1-mediated opening of potassium channels reduces excitability and diminishes nociceptive spiking. AEA also acts as a direct agonist of TRPV1 receptors, thus opposing peripheral anti-nociception via the CB1 mechanism.
Peripheral terminals also express mechanosensitive TRPM3 and Piezo ion channels (in the red box), which can potentially be modulated by endoCBs through modifications of the lipidic environment. In the central nerve terminal, glutamate release stimulates endoCB synthesis via postsynaptic Ca2+ influx through NMDA receptors and PLC enhancement following mGluR activation. EndoCBs retrogradely reaching presynaptic terminals reduce glutamate release by blocking VGCC. The action of endoCBs is mediated by CB1 receptors, but they can also work as allosteric modulators, directly targeting sodium channels and thus further affecting the generation and propagation of nociceptive spikes. Plus (+) and minus (−) symbols indicate the enhancement or inhibition of ion channels by endoCBs, respectively.

At the molecular level (Figure 3), the activation of presynaptic CB1 receptors, operating via inhibitory Gi/o-proteins, blocks presynaptic voltage-gated calcium channels and thereby inhibits the release of glutamate as well as CGRP from the presynaptic neuron [73]. Moreover, the activation of CB1 receptors has been linked to the opening of inwardly rectifying potassium channels [74]. These channels contribute to the maintenance of the resting membrane potential, and their activation should reduce neuronal excitability as an additional anti-nociceptive mechanism (Figure 3). CB1 receptor activation also leads to decreased cAMP levels and to PKA inhibition [75], thus reducing neuronal sensitization. Together, these numerous complementary mechanisms determine the multicomponent anti-nociceptive effect of endoCBs.

Pro-Nociceptive Effects of EndoCBs via TRPV1 Receptors

In addition to interacting with the canonical inhibitory CB1 and CB2 receptors, endoCBs are able to engage non-cannabinoid receptor-mediated neuromodulation. For instance, AEA has been reported to activate, although at high concentrations, the transient receptor potential vanilloid receptor (TRPV1), which may trigger CGRP release and promote nociceptive signaling (Figure 3) [76,77]. Thus, the TRPV1 receptor, which forms a calcium-permeable ion channel, can function as an ionotropic cannabinoid receptor under both physiological and pathological conditions [50]. In the context of migraine, TRPV1 receptors are highly expressed in nociceptive meningeal afferents [78]. These receptors are also detected in other migraine-related areas such as the spinal cord, thalamus, cerebellum, cortex and limbic system [79,80]. Notably, while the action of AEA via CB1 receptors is anti-nociceptive, due to the reduced release of glutamate as well as of substance P and CGRP (Figure 3) [81], the final functional outcome of interactions between AEA and TRPV1 receptors in vivo remains unclear. Interestingly, endoCB-mediated CB1 activation can decrease the sensitivity of TRPV1 receptors [46], thus potentially reducing pain [82]. Nevertheless, as higher AEA concentrations can be achieved locally after a complete inhibition of FAAH, the resulting AEA interaction with TRPV1 receptors should be taken into consideration when planning treatment options based on raised levels of both endoCBs.

Modulation of Nociception by EndoCBs via the Membrane Lipid Environment and Direct Interaction with Ion Channels

Meningeal afferents in the trigeminovascular system express many pain-related ion channels.
In addition to the well-established interaction of AEA with TRPV1 receptors, there are potentially more molecular targets for AEA and 2-AG among the plethora of ion channels shaping nociceptive signaling in meningeal C- and Aδ-fibers. Thus, nociceptive spike generation and propagation depend primarily on sodium channels, whose expression profile is specific to C- and Aδ-fibers [83,84]. Nociceptors also widely express ATP-gated P2X receptors [85], the recently discovered mechanosensitive Piezo1/2 channels [11,86,87] and sex hormone-sensitive TRPM3 receptors [88]. The activity of most of these transmembrane channels, primarily of the gigantic mechanosensitive Piezo proteins, largely depends on the profile of membrane lipids, in particular on the level of phosphatidylinositol 4,5-bisphosphate (PIP2) [89] and specific fatty acids [90]. Mechanosensitive channels are of special interest in the context of migraine, as this disorder is associated with symptoms such as allodynia, mechanical hyperalgesia and pulsating pain [11,86]. Given the lipid nature of endoCBs and their link to the lipid profile of the membrane, in particular their transformation to arachidonic acid (AA), it is likely that ECS activity can modulate mechanosensitive ion channels through this non-canonical signaling. If proven, such modulation of mechanosensitive TRPM3 and Piezo receptors by endoCBs via membrane lipids, analogous to the AA-mediated control of mechanosensitive K2P channels [91], could be a novel mechanism of neuromodulation that deserves further exploration.

Apart from the lipid environment of the ion channels, endoCBs can potentially serve as allosteric modulators, directly targeting ion channels to deliver diverse functional effects [27,92]. Of key importance for the generation and propagation of nociceptive spikes is the ability of endoCBs to affect certain subtypes of potassium and sodium channels, either via CB1 receptors or independently of CB1 activity (Figure 3). In line with this anti-nociceptive mechanism, cannabidiol (CBD), one of the key phytocannabinoids, acts as an inhibitor of NaV channels [93]. Whether endoCBs mediate a similar direct effect to dampen nociceptive action potentials in trigeminal afferents is poorly explored. However, it has been found that AEA can prevent the activity of NaV and L-type calcium channels in rat ventricular myocytes [94]. Consistent with a direct action on ion channels, 2-AG has been found to decrease sodium currents in frog parathyroid cells that lack CB1 and CB2 receptors [95,96]. Potassium channels form a large family of membrane proteins with different properties directed, in general, toward stabilizing the membrane potential and limiting or preventing spike generation. The typical coupling of CB1 receptors to the opening of inwardly rectifying potassium channels (Figure 3) has recently been extended by evidence that endoCBs act on potassium channels through mechanisms other than cannabinoid receptors. Thus, the recent review by Lin [27] combined data demonstrating that BK, IA, KATP and TASK-1 potassium channels can be targets for cannabinoid receptor-independent modulation. In trigeminal neurons, AEA did not affect the P2X3 receptor, but it downmodulated the inhibitory GABAA receptors, which operate via the opening of chloride channels to prevent excitation [97].
The latter might indicate that, in the brainstem or in other parts of the CNS, accumulation of AEA might be associated with reduced GABAergic inhibition, adding more complexity to the action of endoCBs in central synapses. Further investigation into the molecular mechanisms underlying the direct and indirect interactions between endoCBs and ion channels is needed to improve the efficiency and selectivity of endoCB-based therapies [27].

Current Approaches to Treat Migraine Pain and the Need for New Treatment Options

In the clinical setting, modern medications directed against migraine pain can abort a migraine attack when it starts, but their use is often associated with side effects and, eventually, can result in medication overuse symptoms [98,99]. Frequently administered acute migraine treatments such as triptans, ditans and opioids still have numerous side effects [10,99,100]. In most chronic migraine patients, an alternative preventive treatment is needed, including β-blockers [101], anticonvulsants [102,103] and calcium channel blockers, which are also effective in targeting aura symptoms [104]. Innovative preventive strategies for the management of migraine are continually under development, both in clinical trials and in preclinical research. New, already approved options include CGRP antagonists and CGRP antibodies [105,106], as well as drugs targeting serotonin receptor subtypes [10,107]. In the meantime, the ECS is already being discussed as an additional approach to modulating chronic pain [55,108]. Based on recently established data on the activity of endoCB-hydrolyzing enzymes in migraine-related areas of the PNS and CNS [47], the possibility of engaging the ECS for the treatment of migraine pain is now gaining stronger support.

Preventing Endocannabinoid Hydrolysis as a Novel Analgesic Strategy

The selective enhancement of 2-AG and AEA levels in tissues can be achieved by administration of MAGL or FAAH inhibitors, respectively. Efficient and specific MAGL and FAAH inhibition should prevent 2-AG and AEA hydrolysis, thereby increasing their levels in the nervous system and other migraine-related tissues. The raised levels of endoCBs can provide a multitude of anti-nociceptive effects counteracting the key events in migraine pathogenesis discussed above. An additional anti-nociceptive benefit of inhibiting AEA and 2-AG hydrolysis relies on the fact that it diminishes the levels of their degradation product AA and its pro-nociceptive downstream products, such as PGE2, as well as the endovanilloids hydroxyeicosatetraenoic acid (HETE) and hydroperoxyeicosatetraenoic acid (HPETE), the lipid agonists of TRPV1 receptors. It should also be noted that the activity of MAGL and FAAH can be changed by oxidative stress and during neuroinflammation [109,110], conditions which contribute to migraine pathology.

There is continuous, ongoing progress in the development of pharmacological agents which can serve as specific FAAH or MAGL inhibitors, as well as of a small group of dual inhibitors targeting both enzymes. The spectrum of recently established inhibitors is shown in Table 1. The first reported FAAH inhibitors, oleoyl and arachidonoyl derivatives of trifluoromethyl ketones and fluorophosphonates, were structurally similar to the natural substrates, giving a relatively strong but very unspecific effect due to the inhibition of several different hydrolases [111].
Later, more effective and potent FAAH inhibitors were developed, including the reversible compound OL-135 [112-114] and the irreversible URB597 [112,113,115,116] and PF3845 [112,117], all of which have analgesic effects (Table 1). In particular, the FAAH inhibitor OL-135 was efficient in a rat model of neuropathic pain, increasing AEA levels in the whole brain and in the spinal cord [114]. Its anti-nociceptive effect was likely based on a dual activity, targeting CB1 receptors as well as promoting the desensitization of TRPV1 ion channels [118]. PF3845 also reduced pain and mechanical allodynia in a model of inflammatory pain [119,120]. General FAAH inhibition by URB597, as well as peripheral FAAH inhibition by URB937, reduced migraine-related, nitroglycerin (NTG)-induced trigeminal hyperalgesia (Table 1) [121,122]. These encouraging results increased the interest in developing FAAH inhibitors as analgesic drugs and stimulated the exploration of even more efficient and selective inhibitors. Other recently published potent FAAH inhibitors include JNJ-1661010, AKU-009, AKU-010 [123] and JZP327A [124], which have not yet been tested in migraine pathophysiology.

In past years, FAAH inhibitors were considered more attractive because of their high selectivity and availability [45,112]. However, MAGL inhibitors have acquired importance because of their higher relative potency and the important role of the MAGL substrate 2-AG [111]. The high expectations are also related to the elevated activity of MAGL in certain areas of the nociceptive system [47] and to the leading functional role of 2-AG signaling, among the endoCBs, in the brain. Interestingly, in one recent report, 2-AG was proposed to be degraded by both MAGL and FAAH [125]. However, in contrast to the inhibition of MAGL, it seems that FAAH inhibition is not able to increase 2-AG levels in the brain [126]. The latter is further supported by in vitro studies [28].

Among the first reported MAGL inhibitors was N-arachidonoyl maleimide (NAM), which produced an irreversible effect with low specificity [127], along with the non-selective MAGL inhibitors methyl arachidonoyl fluorophosphonate (MAFP) and arachidonoyl trifluoromethyl ketone (Table 1) [111,127,128]. The majority of MAGL inhibitors reported thus far lack high specificity, also affecting other hydrolases [111]. Focusing on more selective MAGL inhibitors that could be used for migraine pain treatment, URB602 [129] and JZL184 [130] are able to reduce trigeminal hyperalgesia in rat NTG models of migraine (Table 1) [131]. Both the well-studied inhibitor JZL184 and KML29 were shown to be highly specific for MAGL [130], and both induce important analgesic and anti-allodynic effects in vivo (Table 1) [45,132-134]. In particular, JZL184 had a strong behavioral and peripheral anti-nociceptive effect in the formalin pain model [135,136] and in other neuropathies [137]. Another MAGL inhibitor, MJN110 (more potent than JZL184 in MAGL inhibition) [138], was highly effective in attenuating mechanical allodynia and thermal hyperalgesia in neuropathic pain models (Table 1) [139]; however, MJN110 has never been tested in migraine pain models. Interestingly, FAAH can often be partially inhibited by many MAGL inhibitors [45,111].
This multiple targeting, typical of MAGL inhibitors, could represent an advantage, since it has been hypothesized that the dual inhibition of MAGL and FAAH could be more effective than the complete inhibition of only one of these enzymes [140]. It was recently reported that the specific MAGL inhibitor JJKK-048 has a very high potency in vitro (IC50 < 0.4 nM) [141]; therefore, it might be considered a potential drug candidate for migraine pain treatment [47]. Given the relatively high FAAH activity in the brain (Figure 2), FAAH inhibition appears suited primarily to the CNS, where it increases the level of AEA in order to activate neuronal CB1 receptors, which are highly expressed in the brain and spinal cord [142]. Instead, MAGL inhibitors, by increasing the levels of 2-AG, a full agonist of CB1 and CB2 receptors, are able to achieve anti-nociceptive effects both in the central and in the peripheral nervous system [45,58,59]. A powerful tool for targeting both MAGL and FAAH, in either the trigeminovascular system or the CNS, is the recently developed dual MAGL/FAAH inhibitor AKU-005, which shows high activity even at nanomolar concentrations (IC50 0.2-1.1 nM) [141]. Consistent with the concept of dual inhibition, the well-established dual MAGL/FAAH inhibitor JZL195 has already demonstrated its ability to relieve inflammatory pain and reduce trigeminal hyperalgesia [137,143-145]. Finally, it should be noted that the full inhibition of both key endoCB-degrading enzymes can potentially be associated with so-called cannabimimetic effects, including catalepsy, hypothermia and hypomotility; a desirable aim is therefore a pattern of MAGL and FAAH inhibition that provides a sufficient level of analgesia without such side effects [134].
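As a quick numerical aside, the degree of enzyme block implied by an IC50 value can be estimated with a standard one-site inhibition curve; in the snippet below, the IC50 of 0.4 nM is taken from the upper bound quoted above for JJKK-048, while the concentrations and the Hill coefficient of 1 are purely illustrative assumptions.

def fraction_inhibited(conc_nm: float, ic50_nm: float, hill: float = 1.0) -> float:
    """One-site inhibition curve: fraction of enzyme blocked at a given
    inhibitor concentration (both in nM), assuming a Hill coefficient."""
    return conc_nm**hill / (conc_nm**hill + ic50_nm**hill)

# Assumed IC50 = 0.4 nM (upper bound quoted for JJKK-048); concentrations illustrative.
for conc in (0.4, 4.0, 40.0):
    print(f"{conc:5.1f} nM -> {100 * fraction_inhibited(conc, 0.4):5.1f}% inhibition")

At ten times the IC50 the toy curve already gives roughly 90% inhibition, which illustrates why near-complete block, and hence the cannabimimetic side effects discussed above, is easy to reach with such potent compounds.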
ECS as a Target for Treating Migraine with Aura?

Because of its specific mechanisms related to the generation of CSD, which is linked to neuronal hyperexcitability [146], migraine with aura needs particular tools to reduce the hyperexcitable state of the cortex. The ability of cannabinoids to reduce the release of glutamate suggests that the activation of the ECS may modulate this type of migraine-related event. Although not sufficiently explored, this field of research remains controversial. Thus, one study revealed that neither AEA nor the CB1/2 agonist WIN 55,212-2 affects the characteristics of CSD elicited by high potassium application [97]. Another study showed, however, that WIN 55,212-2 inhibited the amplitude, duration and velocity of CSD propagation, while JWH 133, a CB2 receptor agonist, was devoid of any effect on this phenomenon [147], highlighting the leading role of CB1-mediated signaling in the control of the neuronal mechanisms underlying CSD. The latter results suggest that CSD might be sensitive to CB1 activation, which fits with the role of these receptors in reducing glutamate release from presynaptic sources, as described in the previous sections of this review. There are also studies describing functional interactions between CB1 and NMDA receptors [148], which play a key role in CSD generation and propagation [149]. Likewise, there is a report on a functional interaction between endoCBs and the activity of kynurenic acid, an endogenous NMDA receptor antagonist [150]. However, whether the recently developed endoCB hydrolase inhibitors are also effective in counteracting CSD hyperexcitability in migraine with aura remains unexplored. Therefore, based on our recent findings of the high activity of both MAGL and FAAH in the highly excitable occipital cortex [47], there is an attractive possibility to test whether CSD could be reduced by the dual inhibition of MAGL and FAAH. If proven, this could extend the therapeutic potential of MAGL/FAAH inhibition to migraine with aura.

Table 1 (note). Inhibitor potencies are defined by IC50 values measured in rat brain membranes (OL-135, URB597, URB937, JZL184, URB602, KML29, MJN110, JZL195), in the Colo cell line (PF3845) and in rat cerebellar membranes (JJKK-048, AKU-005) [141].

Conclusions

Migraine pain is a common and disabling condition which often remains intractable; despite the huge number of patients debilitated by migraine pain, an effective therapy free of side effects is still lacking. Several recent studies suggest endoCBs as a promising new treatment for migraine pain, given the overlap between the ECS and the key regions of the nociceptive system at most stages of pain signal generation, transmission and perception. Therapeutically optimal levels of the endoCBs AEA and 2-AG, aiming to provide analgesia while minimizing the unwanted cannabimimetic effects, can be achieved by administration of the emerging potent MAGL and/or FAAH inhibitors. The strength of this therapy relies on the specificity and selectivity of the compounds, confining their anti-nociceptive effects to sites where endoCBs can be efficiently mobilized in proportion to the local neuro-immune activity. This field of research needs further investigation, which now becomes possible by combining various modern methods, including highly sensitive ABPP assays to evaluate the activities of endoCB hydrolases and their sensitivity to inhibition, LC/MS to determine endoCB levels in specific tissues, and electrophysiological tools together with behavioral testing in animals. The identification of novel treatments acting specifically on druggable molecular targets in the brain and in the peripheral meningeal trigeminovascular nociceptive system suggests a promising approach to controlling migraine pain while ultimately limiting the undesired side effects of new treatments.

Conflicts of Interest: The authors declare that they have no conflicts of interest.
Black holes in presence of cosmological constant: Second order in 1/D

We have extended the results of arXiv:1704.06076 up to second subleading order in an expansion around large dimension D. Unlike the previous case, there are non-trivial metric corrections at this order. Due to our 'background-covariant' formalism, the dependence on the Ricci and Riemann curvature tensors of the background is manifest here. The gravity system is dual to a dynamical membrane coupled with a velocity field. The dual membrane is embedded in a smooth background geometry that also satisfies the Einstein equation in the presence of a cosmological constant. We explicitly compute the corrections to the equation governing the membrane dynamics. Our results match earlier derivations in the appropriate limits. We calculate the spectrum of QNMs from our membrane equations and match them against similar results derived from gravity.

Recently it has been shown that, in a large number of dimensions, black hole solutions simplify considerably. The effect of the black hole is essentially confined to a parametrically thin region around its event horizon, whose thickness is proportional to the inverse of the number of dimensions. Further, the spectrum of linearized fluctuations (quasi-normal modes, or QNMs) develops a large gap proportional to the number of dimensions. In [2] the authors have shown how one can formulate an autonomous nonlinear theory of the low-lying modes. These modes combine to form a dynamical black hole solution of the Einstein equation which can be determined in an expansion in inverse powers of D. In [1] the authors extended the calculation of [2] (which was for pure Einstein gravity) to solutions in the presence of a cosmological constant and, in general, for any asymptotic background, provided it is a solution of the gravity equation. The method used in [1] has manifest background covariance, but the calculation was done only up to the first subleading order in 1/D.

In this note, we extend the calculation of [1] to the second subleading order. The key motivation is two-fold. Firstly, from the results of [1] we know that at the first subleading order the background curvature does not appear explicitly in any of the equations or the solution. However, it should appear explicitly at the second subleading order (which, very roughly speaking, captures the effect of two derivatives on the background). Secondly, from the experience of the 'flat space computation', it is expected that at this order we should see entropy production from a dynamical black hole. However, in this note we shall confine ourselves to the computation of the membrane equation of motion and the metric correction up to the second subleading order in the 1/D expansion. We leave the study of entropy production for the future. As a consistency check of our results, we shall linearize our membrane equation and compare the spectrum with that of the low-lying QNMs (already determined in [16]). We shall find a perfect match up to the relevant order.

The organization of this note is as follows. In section (2) we describe the basic set-up of our problem in terms of equations and also give the final results for the corrections to the metric and the membrane equations. Next, in section (3), we give a sketch of the computation, which turns out to be quite tedious in this case. Many of the details are collected in the appendices. In section (4) we perform several checks.
Some of them concern the internal consistency of our set of equations (see subsection 4.1), and the rest concern the calculation of the linearized spectrum of our membrane around different static backgrounds, matched against the known results for QNMs (see subsection 4.2). Finally, in section (5) we discuss future directions.

Set up and final result

In this section we briefly define the basic set-up of our problem in terms of equations. It is essentially an extension of section 2 of [1], so we shall be very brief here. We are dealing with pure gravity in the presence of a cosmological constant, with action

S = \int d^D X \sqrt{-G} (R - 2\Lambda)   (2.1)

where the dimension (denoted D) dependence of \Lambda is parametrized as in (2.2). Varying (2.1) with respect to the metric we get the equation of motion

E_{AB} \equiv R_{AB} - \frac{R}{2} G_{AB} + \Lambda G_{AB} = 0   (2.3)

Our aim is to solve these equations perturbatively, as a series in inverse powers of D. Schematically, our solution takes the form

G_{AB} = G^{(0)}_{AB} + \frac{1}{D} G^{(1)}_{AB} + \frac{1}{D^2} G^{(2)}_{AB} + \cdots   (2.4)

We take our starting ansatz G^{(0)}_{AB} to be

G^{(0)}_{AB} = g_{AB} + \psi^{-D} O_A O_B   (2.5)

Here g_{AB} is the background metric, which could be any smooth solution of the starting equation (2.1), and O \equiv O_A dX^A is a one-form that is null with respect to the background metric g_{AB}. It turns out that this starting solution has an event horizon, given by the null hypersurface S: \psi = 1. We define the function \psi so that \psi = 1 is the horizon to all orders in the 1/D expansion. Further, \psi satisfies the following equation (which we shall refer to as 'subsidiary condition-1'):

\bar\nabla^2 \psi^{-D} = 0   (2.6)

We can always determine \psi explicitly in an expansion in 1/D by solving equation (2.6) with the initial condition that \psi = 1 coincides with the horizon [6]. We fix the normalization of O_A by demanding that the inner product between O_A and the unit normal n_A to the \psi = constant surfaces (viewed as hypersurfaces embedded in the background g_{AB}) is always one; in terms of equations, this implies

n^A O_A = 1   (2.7)

Note that, using the above normalization, we can define a unit-normalized velocity field u_A,

u_A = n_A - O_A   (2.8)

It turns out that u_A is the null generator of the \psi = 1 hypersurface (viewed as a null hypersurface embedded in G^{(0)}_{AB}). However, the normalization alone cannot fix all components of the null one-form O_A everywhere. We fix this ambiguity by demanding that O_A satisfies the geodesic constraint (2.9), which we shall refer to as 'subsidiary condition-2'. Equation (2.9) can again be solved in an expansion in 1/D, provided we have an unambiguous initial condition to all orders; we fix this condition by demanding that u_A, as defined in (2.8), is the null generator of the horizon to all orders [6]. We shall determine the metric corrections in terms of the well-defined \psi and O_A fields and their derivatives.
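For orientation, the large-D bookkeeping behind the \psi^{-D} factor can be made explicit; the scaled variable R = D(\psi - 1) used throughout the rest of this note is the standard convention of this formalism and is introduced here as an assumption:

\begin{aligned}
\psi^{-D} = e^{-D\ln\psi}
          = e^{-D\ln\left(1+\frac{R}{D}\right)}
          = e^{-R}\left(1 + \frac{R^2}{2D} + \mathcal{O}\!\left(\frac{1}{D^2}\right)\right)
\end{aligned}

Hence the horizon region has thickness of order 1/D in \psi, derivatives across the membrane are enhanced by a factor of D, and all metric functions below are naturally functions of R with exponentially decaying asymptotics.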
Solution at the first subleading order

As mentioned in the introduction, G^{(1)}_{AB}, the metric correction at first subleading order, has already been determined [1]. For convenience, we quote the first-order result here. It turns out that the Einstein equations can be solved provided the extrinsic curvature of the \psi = 1 hypersurface (viewed as a hypersurface embedded in the background) and the velocity field u_A together satisfy the constraint equations (2.10) on the horizon. These constraint equations can be written as equations intrinsic to the membrane. Here \hat g_{\mu\nu} denotes the induced metric on the membrane (the \psi = 1 hypersurface) and \hat\nabla is the covariant derivative with respect to \hat g_{\mu\nu}. The velocity field u_\mu is the pull-back of the bulk velocity field u_A, and K_{\mu\nu} is the pull-back of the extrinsic curvature of the membrane onto the hypersurface, with trace K. For every solution of the above constraint equations we can determine G^{(1)}_{AB}; it turns out that G^{(1)}_{AB} simply vanishes, given our choice of subsidiary conditions. In this note our goal is to find the corrections to equation (2.10) at the next order in 1/D. Before getting into any details of the computation, we first present our final result.

Final Result: Metric and membrane equation at second subleading order

In this subsection we present the subleading correction to the membrane equation (2.10) and the solution for G^{(2)}_{AB}. The metric correction takes the form given in (2.11)-(2.15). The space-time form of the extrinsic curvature is built out of u_\mu, and K_{\mu\nu} is defined in (2.12), where X^M denotes the coordinates of the full space-time and y^\mu denotes coordinates on the membrane. Here \bar R_{ABCD} is the Riemann tensor of the background metric g_{AB}, and \bar\nabla denotes the background covariant derivative, acting in the usual way on any general tensor W_{A_1 A_2 \cdots A_n}; the metric functions involve integrals of the schematic form \int_0^x \frac{y\, e^y}{e^y - 1}\, dy. As we can see, our solution is parametrized by the shape of the constant-\psi hypersurfaces (encoded in their extrinsic curvature K_{AB}) together with the velocity field u_A. However, because of our subsidiary conditions, if we know K_{AB} and u_A along one constant-\psi hypersurface, they are determined everywhere else. In this sense, the real data for our class of solutions are to be provided only along one hypersurface, the most natural choice of which is the horizon, i.e. the \psi = 1 hypersurface. As we have mentioned before, we cannot choose an arbitrary shape of the membrane and velocity field as our initial data: the metric presented above solves the Einstein equation (2.3) only if the data satisfy a constraint, namely equation (2.10) with its subleading corrections. This leads to the corrected membrane equation (2.17) at this order.

Sketch of the computation

Though the computation needed to determine the second-order metric correction is tedious, conceptually it is a straightforward extension of what was done in [1]. Therefore, in this section we omit most of the derivations and mention only those steps that differ from [1]. We follow the same conventions as in [1]; in particular, our choice of gauge is also the same, namely O^A G^{(n)}_{AB} = 0. With this gauge choice, the second-order correction can be parametrized as in (3.1), where s_n, [v_n]_A and [t_n]_{AB} are independent scalar, vector and tensor structures constructed out of the membrane data. We obtain a set of coupled, ordinary but inhomogeneous differential equations for the unknown functions appearing in (3.1). The boundary conditions for these differential equations are set by the following physical conditions.
1. The surface (\psi = 1), i.e. (R = 0), is the event horizon and therefore a null hypersurface to all orders.
2. u_A is the null generator of this event horizon to all orders.
3. The bulk metric G_{AB} approaches g_{AB} as R \to \infty, to all orders.
These conditions translate into the constraints (3.2) on the unknown functions.
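In summary, the boundary data imposed sector by sector below amount to the following (the scalar and trace entries are our inference by analogy with the tensor and vector sectors):

\begin{aligned}
&\text{tensor:} && t_n(R=0)\ \text{finite}, && \lim_{R\to\infty} t_n(R) = 0,\\
&\text{vector:} && v_n(R=0) = 0, && \lim_{R\to\infty} v_n(R) = 0,\\
&\text{scalar/trace:} && f_n,\ h_n\ \text{regular at } R=0, && \lim_{R\to\infty} f_n(R) = \lim_{R\to\infty} h_n(R) = 0.
\end{aligned}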
The homogeneous part H_{AB} (i.e., the part that acts as a differential operator on the space of unknown functions appearing in G^{(2)}_{AB}) is universal: it has the same form as in the first-order calculation, and we do not need to recalculate it. For convenience, we quote the results for the homogeneous part as derived in [1] in equations (3.3)-(3.4); here, for any R-dependent function, X'(R) denotes dX(R)/dR. The 'source' parts of these equations are determined by evaluating the Einstein equation on the first-order corrected metric. By construction, the order O(D^2) and order O(D) pieces of these equations vanish, and the first non-zero contribution, relevant for the computation of this note, is of O(1). From the above discussion it follows that the key part of the computation is to determine the source term, which we denote by S_{AB}. Since G^{(1)}_{AB} vanishes, here too, just as in [1], the source is given by E_{AB} evaluated on G^{(0)}_{AB}; the complication lies in the fact that the calculation has to be carried out up to order O(1). Here we present the final result for the source; see appendix A for the details. For convenience, we decompose S_{AB} into its different components, whose explicit expressions are given in (3.6)-(3.9); see equation (2.13) for the definitions of s_1, s_2, v_C and t_{AB}, and recall that \bar\nabla denotes the background covariant derivative acting on any general tensor W_{A_1 A_2 \cdots A_n}.

The final set of coupled differential equations that we have to solve is simply

H_{AB} + S_{AB} = 0   (3.10)

As explained in [1], the homogeneous part H_{AB} can be decoupled by taking its appropriate projections along different directions. Similar projections applied to S_{AB} generate the sources for the scalar, vector, tensor and trace sectors. However, just as in the first-order calculation, there is an 'integrability' condition. Note that H^{(1)} and H^{(V1)}_C vanish at R = 0 (to see the vanishing of H^{(1)} at R = 0, we use the fact that v_n(R) vanishes at R = 0 as a consequence of our boundary condition; see equation (3.2)). Hence, consistency demands that S^{(1)} and S^{(V1)}_C should also vanish at R = 0. In other words, this set of equations can be solved consistently only if, on the horizon, the velocity field u_A and the extrinsic curvature of the \psi = 1 membrane (viewed as a hypersurface embedded in the background) together satisfy the corresponding equations. By an appropriate pull-back, these equations can be recast as intrinsic equations on the hypersurface, and they generate the next-order correction to the constraint equation (2.10); we have described them in equations (2.17).

Once the constraint equations are satisfied, only two scalar structures (s_1 and s_2), one vector structure (v_C) and one tensor structure (t_{AB}) appear in the source S_{AB}. So altogether we have 6 unknown functions to solve for (2 in the scalar sector, 2 in the trace sector, 1 in the vector sector and 1 in the tensor sector).

The decoupled ODEs for the different unknown metric functions are as follows.

Tensor sector: the explicit form of the equation is (3.12). We can integrate this equation; after imposing t(R = 0) = finite and \lim_{R\to\infty} t(R) = 0, we find the result presented in the first equation of (2.15).

Vector sector: the explicit form of the equation is (3.13). After imposing v(R = 0) = 0 and \lim_{R\to\infty} v(R) = 0, we find the result presented in the second equation of (2.15).
Trace sector: the equation for the h_n(R) is simply given by

-\frac{N^2}{2} \sum_n h_n''(R)\, s_n = 0   (3.14)

Integrating this differential equation with the boundary conditions (3.2), we find that the correction in the trace sector vanishes, i.e., h_n(R) = 0.

Scalar sector: the equations for f_1(R) and f_2(R) are given in (3.15); integrating them with the boundary conditions (3.2) yields the expressions quoted in (2.15).
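The vanishing of h_n follows from elementary integration; spelling out the step, with the decay condition of (3.2) assumed for the trace functions:

\begin{aligned}
&\text{Since the } s_n \text{ are independent structures, (3.14) forces } h_n''(R) = 0,\\
&\text{so } h_n(R) = c_{1,n} + c_{2,n} R;\quad \lim_{R\to\infty} h_n(R) = 0 \text{ gives } c_{2,n} = 0 \text{ and then } c_{1,n} = 0.
\end{aligned}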
Checks

In this section we perform several checks on our calculation. Roughly, the checks are of two types. The first concerns the internal consistency of our solutions and system of equations, i.e., verifying that if we simply substitute our solution into the system of equations (3.10), each and every component vanishes up to corrections of order O(1/D); the details are presented in subsection 4.1. The second type of check consists of taking various limits and matching our results against previously known answers. One trivial check in this category, applied at every stage of our computation, is to match with the known results in the asymptotically flat case [4] by setting the cosmological constant \Lambda to zero. The corrected constraint equations (2.17) manifestly match equations (4.5) and (4.12) of [4], respectively, once \Lambda is set to zero. At this stage it is difficult to match the two metrics, even after setting \Lambda to zero, since our subsidiary conditions differ from those of [4]; we leave this for the future. The other significant check we have performed is matching the spectrum of linearized fluctuations derived from our constraint equations against the quasi-normal modes already calculated in [16]; subsection 4.2 gives the details of this computation.

Check for internal consistency

In this subsection we explicitly verify that our solution for the metric, together with the membrane equations constraining the membrane data, satisfies equation (3.10), i.e., that each of its components vanishes up to corrections of order O(1/D). Let E_{AB} denote the LHS of equation (3.10). From the list of decoupled ODEs (see the discussion below equation (3.10)), it is clear that 4 of the 7 independent components of E_{AB} are automatically satisfied, since we solved for the metric functions by integrating them. The corresponding homogeneous pieces vanish at \psi = 1, and the membrane equations ensure that the same is true for the source. As explained in [1], if we regard 'the variation of the metric as we go away from the horizon' as 'dynamics', then the membrane equations play the role of 'constraint equations', whereas the equations we solved to determine the metric corrections are the 'dynamical' ones. Now, in any theory of gravity it is enough to solve the 'dynamical equations' everywhere and the constraint equations only along one constant 'time slice' (in our case, a constant-\psi slice); gauge invariance then ensures that the full set of equations is solved everywhere [38]. This theorem guarantees that the remaining three independent components of E_{AB} must vanish, provided we have solved the equations correctly. Therefore, the fact that these components do vanish on our solution is an important consistency check of our whole procedure and of the final answer. Computationally it turns out to be quite non-trivial; in fact, we had to take help from Mathematica to prove it.

Vanishing of E^{(2)}: From equation (3.4) it follows that H^{(2)} reduces as required; here we have used the fact that the metric correction in the trace sector (i.e., h_n(R)) vanishes. We have also used equation (3.16) for the divergence of v_C, and the last three equations of (2.15) for the expressions of f_n(R) and v(R). From equation (3.6) we see that H^{(2)} is exactly the minus of S^{(2)}, as required.

Vanishing of E^{(tr)}: This follows trivially from (3.6) and (3.4), as both S^{(tr)} and H^{(tr)} vanish at this order.

Vanishing of the remaining component: This, too, should vanish on our solution, and the computation checks that this is true. In the second line one uses an identity derived in Appendix B.1, and in the last line the first and second equations of (2.15) for the expressions of v(R) and t(R).

Quasinormal modes for the Schwarzschild black hole in a background AdS/dS spacetime

Now, as a check of our membrane equations, we calculate the light quasinormal mode frequencies for the Schwarzschild black hole in an AdS/dS background. As expected, we find that the frequencies of the light quasinormal modes match exactly those derived in [16] from a gravitational analysis. As before, we follow [1] for the computation. Many steps and arguments are exactly the same as in [1]; for such steps we simply refer to [1] or quote them in the appendix, and here we present only those parts of the computation that extend what was done in [1]. We write the background AdS/dS in global coordinates as in (4.4), and the Schwarzschild black hole in this coordinate system as in (4.5), where r_0 is an arbitrary constant; the position of the horizon is fixed by (4.6). From now on we choose r_H = 1, or in other words r_0 is set accordingly, for convenience; we will later reinstate the factors of r_0 by dimensional analysis.
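For orientation, a standard form of the global (A)dS background and of the Schwarzschild-(A)dS solution consistent with the discussion above is the following, where the sign convention \lambda > 0 for dS and \lambda < 0 for AdS is our assumption rather than a quotation:

\begin{aligned}
ds^2_{\text{bg}} &= -(1 - \lambda r^2)\, dt^2 + \frac{dr^2}{1 - \lambda r^2} + r^2\, d\Omega^2_{D-2},\\
ds^2_{\text{BH}} &= -\Big(1 - \lambda r^2 - \big(\tfrac{r_0}{r}\big)^{D-3}\Big)\, dt^2
 + \frac{dr^2}{1 - \lambda r^2 - (r_0/r)^{D-3}} + r^2\, d\Omega^2_{D-2}.
\end{aligned}

In these conventions the horizon satisfies 1 - \lambda r_H^2 = (r_0/r_H)^{D-3}, so the choice r_H = 1 fixes r_0^{D-3} = 1 - \lambda.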
A small fluctuation around a static black hole corresponds to a small fluctuation around a spherical membrane along with a small fluctuation in the velocity field, which is purely in the time direction at zeroth order. We work up to linear order in the amplitude of fluctuations, which we denote by ǫ. We denote the angular coordinates along the (D − 2)-dimensional sphere by a, and the coordinates \mu on the membrane worldvolume comprise the time t and the angles a. The induced metric on the membrane worldvolume (viewed as a hypersurface embedded in the background metric (4.4)), up to linear order in ǫ, is given in (4.9) (we denote its components by g^{(ind)}_{\mu\nu}). The membrane equations are (4.11) and (4.12), in which the covariant derivative with respect to the metric (4.9) is denoted by \hat\nabla; the extrinsic curvature of the membrane is denoted by K_{\mu\nu}, its trace by K, and the projector orthogonal to u_\mu by P_{\mu\nu}. It turns out that E^{tot}_t vanishes at linear order in ǫ. Using (C.14) and (C.15), we evaluate the vector membrane equation in the angular directions, obtaining (4.13), where we have neglected terms of order O(1/D^2) or higher; we denote the covariant derivative with respect to the unit-sphere metric in D − 2 dimensions by \hat\nabla_a. Similarly, we evaluate the membrane equation (4.11), obtaining (4.14). We choose to divide the fluctuation \delta u_a into two parts, as in (4.15) (see section (5) of [3]). Substituting (4.15) into (4.14), we find (4.17). Now we reinstate the factors of r_H and expand the shape fluctuations as in (4.18), where the Y_{lm} are the scalar spherical harmonics on S^{D-2}, for which

\hat\nabla^2 Y_{lm} = -l(l + D - 3)\, Y_{lm}   (4.19)

We then substitute (4.18) into (4.17) and solve for the scalar QNM frequencies; up to the required order, the answer (4.20) agrees with the corresponding results given in equations (D.3), (D.4) of [16]. Similarly, we now calculate the vector QNM frequencies. Note that we have already solved (4.17), so the \delta r and \Phi terms in (4.13) drop out and we are left with (4.21). We expand the \delta v_a fluctuations as in (4.22), where the Y^{lm}_a are the vector spherical harmonics on S^{D-2}, satisfying (4.23). We substitute (4.22) into (4.21) and solve for the vector QNM frequencies; up to the required order, the answer (4.24) agrees with the corresponding result given in equation (D.2) of [16].

Quasinormal modes for the AdS Schwarzschild black brane

Now we repeat the above analysis for a uniform planar membrane in AdS. This membrane corresponds to the AdS Schwarzschild black brane, with horizon topology R^{D-2} × R, in the Poincaré patch. Here we consider membrane fluctuations in time and in all the D − 2 spatial brane directions. The background metric in Poincaré patch coordinates is

ds^2 = -r^2 dt^2 + \frac{dr^2}{r^2} + r^2 dx_a dx^a   (4.25)

where we have set the AdS radius L = 1, i.e. \Lambda = (D − 1)(D − 2). For convenience we use the notation of (4.26) in this section, in particular the large parameter n. We consider a uniform planar membrane located at \hat r = r_0, and we find it convenient to perform the rescaling (4.27); with this rescaling, the background metric (4.25) becomes (4.28), in which r = 1 is the location of the uniform membrane. We take the time dependence of the shape and velocity fluctuations to be of the form (4.29); this choice means that the new coordinates in (4.28) are all dimensionless. We consider fluctuations around the uniform planar membrane as in (4.29), where ǫ is the amplitude of fluctuations, and we work up to linear order in ǫ. Up to linear order, the induced metric on the membrane worldvolume becomes (4.30), and u^\mu g^{(ind)}_{\mu\nu} u^\nu = −1 implies (4.31). The covariant derivative with respect to the induced metric (4.30) is denoted by \hat\nabla and that with respect to the background metric (4.28) by \nabla; K_{\mu\nu} and K are defined in the same way as in the previous subsection. We now again consider the membrane equations (4.11) and (4.12). Substituting equations (4.29) and (4.31) into the LHS of (4.12) (see appendix (D) for details), we find that E^{tot}_t is of order O(ǫ^2), while the 'a' components of the equation become

E^{tot}_a = ǫ \Big[ -\partial_t \delta u_a - \frac{\partial_t^2 \delta u_a}{n} + \frac{\partial^2 \delta u_a}{n} + \frac{\partial_t^4 \delta u_a}{n^3} - \frac{2\, \partial_t^2 \partial^2 \delta u_a}{n^3} + \frac{\partial^2 \partial^2 \delta u_a}{n^3} + \frac{2\, \partial_t^2 \delta u_a}{n^2} - \frac{2\, \partial^2 \delta u_a}{n^2} + \cdots \Big]   (4.32)

Similarly, the expansion of equation (4.11) to linear order in the fluctuations leads to

\hat\nabla \cdot u = 0 = ǫ\, \partial_a \delta u_a + ǫ (n - 1)\, \partial_t \delta r   (4.33)

Now, to find the scalar QNM frequencies, the relevant equations are (4.33) and \partial^a E^{tot}_a. Computing \partial^a E^{tot}_a and substituting (4.33), we get (4.34). We consider the plane-wave expansion of the shape fluctuations,

\delta r = \delta r_0\, e^{-i\omega t} e^{i k_a x^a}   (4.35)

We then substitute (4.35) into (4.34) and solve for the scalar QNM frequencies (4.36), where we take k \sim O(\sqrt{n}). It turns out, as at first order, that the orders of the temporal and spatial frequencies are related by a factor of 1/\sqrt{n}. This can be seen from equation (4.33), where there is a relative factor of (n − 1) between the divergence of the velocity fluctuations and the shape fluctuations, so the temporal and spatial frequencies cannot both be of the same order. Here we demand that the temporal frequency is of order O(1), with no restriction on the spatial frequencies; such a scaling is consistent with the present 1/D expansion. See [1] and [5] for a detailed explanation.
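The step 'insert the plane-wave ansatz and expand the roots in 1/n' is mechanical and can be scripted. As an aside, the sketch below does this for a toy dispersion relation with the same schematic structure (quadratic in \omega, with one gapped and one slowly damped root); its coefficients are invented for illustration and are not those of equation (4.34).

import sympy as sp

w, k, eps = sp.symbols('omega k epsilon')

# Toy dispersion relation, quadratic in omega; epsilon plays the role of 1/n.
# The coefficients are illustrative only, not those of the actual (4.34).
dispersion = sp.Eq(w**2 + sp.I*w*(1 + eps*k**2) - eps*k**2, 0)

for i, root in enumerate(sp.solve(dispersion, w), start=1):
    # Expand each root in powers of epsilon = 1/n, keeping the first correction,
    # mirroring how the 1/n-corrected QNM frequencies are extracted.
    print(f"omega_{i} =", sp.simplify(sp.series(root, eps, 0, 2).removeO()))

For this toy equation the two roots come out as a gapped mode, \omega \approx -i, and a hydrodynamic-like mode, \omega \approx -i k^2/n, which mirrors the qualitative structure of the black brane spectrum discussed here.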
Hence we can write the most general solution of (4.34) as

\delta r = \delta r^0_1\, e^{-i\omega_1 t} e^{i k_a x^a} + \delta r^0_2\, e^{-i\omega_2 t} e^{i k_a x^a},   (4.37)

where \omega_1 and \omega_2 are given in (4.38). Similarly, we can write down the most general solution of (4.33) and (4.32) (note that there is only one vector QNM frequency), as in (4.39), where V^1_a and V^2_a are vectors along k_a and v_a is any vector satisfying v_a k^a = 0. Substituting (4.39) into (4.33) and (4.32) and solving, we find (4.40). Thus, we see that there is no subleading correction to \omega_v. Collecting the results for the light QNM frequencies gives (4.41). Up to the required order, the answers (4.41) agree with the corresponding results given in equations (4.23), (4.24), and (4.25) of [16].

Future directions

In this note we have found new dynamical black-hole-type solutions of the Einstein equations in the presence of a cosmological constant, in an expansion in inverse powers of the dimension, carried out to second subleading order. The spacetimes determined here necessarily possess an event horizon. The dynamics of the horizon can be mapped to the dynamics of a velocity field on a dynamical membrane embedded in the asymptotic background. We have determined the equations of this dual dynamics of the membrane and the velocity field, also in an expansion in 1/D.

There are several directions in which one could proceed from here. As mentioned in the introduction, one of our key motivations for this second subleading calculation is to gain insight into entropy production, which is expected to take place only at this order. Calculating this entropy production, along with the effective stress tensor for the membrane (see [6] for the stress tensor at first order), could be one immediate project.

As a check, we have matched the spectrum of quasinormal modes; this tests the equation of motion of the membrane. Another important check would be to match the metric with the large-dimension limit of known black hole solutions. Beyond serving as a check on our results, this exercise could also hint at exact but non-trivial solutions of our membrane equations, which might lead to techniques for solving the membrane equations analytically. It would also be interesting to see how these solutions compare with other perturbative techniques for solving the Einstein equations, namely the derivative expansion and the correspondence with fluid dynamics (along these lines, see [7] for a detailed comparison between the improved large-D membrane formalism and fluid gravity).

Acknowledgments

It is a great pleasure to thank Shiraz Minwalla for initiating discussions on this topic and for his numerous suggestions throughout the course of this work. We would also like to thank Bidisha Chakrabarty and Arunabha Saha for collaboration at the initial stage, and Suman Kundu and Poulomi Nandi for illuminating discussions. P.B. would like to acknowledge the hospitality of ICTS, HRI, and SINP while this work was in progress. Y.D. would like to acknowledge the hospitality of ICTS, IIT-Kanpur, and IISER-Pune while this work was in progress. The work of Y.D. was supported by the Infosys Endowment for the study of the Quantum Structure of Spacetime, as well as an Indo-Israel (UGC/ISF) grant. We would also like to acknowledge our debt to the people of India for their steady and generous support of research in the basic sciences.

A Calculation of the sources S_AB

In this appendix we give the details of the calculation of S_AB.
As mentioned before, the source is given by E_AB evaluated on G^(0). We follow Appendix B of [1] for the computation. The first step is to decompose the source in the following way. First we present the calculation of \delta R_AB. As previously, in this case also Term-2 = Term-3 = Term-4 = 0, and we need to calculate only Term-1.

Let us note the presence of the K(\nabla\cdot u) term in \delta R_AB|non-lin. From the membrane equation at first subleading order, it follows that this term is of order O(1) on the \psi = 1 hypersurface. This is somewhat 'anomalous', since naive order counting suggests that the term should be of order O(D^2), and this may no longer hold once we move away from the membrane. Any generic term that is of order O(1) when evaluated on the \psi = 1 hypersurface receives corrections of order O(1/D) (or further suppressed) as one moves away from \psi = 1; this is the reason we could ignore all the implicit \psi dependence in the sources while integrating the ODEs. From the above discussion, however, such reasoning does not work for K(\nabla\cdot u) (or, in fact, for any such 'anomalous' term). Below we examine this term in more detail.

We can expand (\nabla\cdot u) in \psi - 1 = R/D as in (A.12), where E_A is given in equation (3.8). In the second line we have used two identities (to prove them we used Mathematica, version 11). Clearly, the second and third terms in the last line of equation (A.12) (which encode the value of (\nabla\cdot u) off the membrane) can contribute to \delta R_AB|non-lin. at order O(1). Substituting (A.12) into equation (A.11), we find the corrected expression.

Next we calculate the terms in the Ricci tensor that are linear in the metric correction. Similarly, we get T_2 by interchanging the A and B indices. Adding T_1, T_2, and T_3, we get the expression for \delta R_AB|lin., given in (A.19).

Now we decompose the source as described in (3.5). Note that a general symmetric 2-index tensor C_AB decomposes as in (A.20). Using (A.20), we first decompose each of the tensor structures appearing in (A.19).

We shall massage the above expression for \delta R^(S2) a little more. Note the presence of the K(\nabla\cdot u) term in \delta R^(S2). From the discussion just below equation (A.11), it is clear that we need the expansion of \nabla\cdot u in \psi - 1, which is given by (A.12). Substituting equation (A.31) into equation (A.30), we find (A.32). It turns out to be possible to rewrite the last three lines of equation (A.32) in terms of the already-defined scalar structure s_1, plus a few extra terms that can be expressed as functions of the membrane equation. We have used Mathematica (version 11) for this purpose; more precisely, Mathematica has been used to rearrange \delta R^(S2) on the R = 0 hypersurface, while away from the membrane the calculation is relatively less tedious and can be done by hand. This type of rewriting helps make the consistency of the set of coupled ODEs manifest (see Section 4.1). On \psi = 1, i.e. on R = 0, \delta R^(S2) becomes the expression below. Let us continue with the derivation of the remaining components of the source.

For the Mathematica computation we do have to choose a specific background and coordinate system. Since we have an independent proof that the final answer is background-covariant, such a choice does not imply any loss of generality. However, we need to perform an appropriate 'geometrization' of the answer obtained from Mathematica, so that we can write it in the desired background-covariant form; see [3] and [4] for the details of this procedure.
In the last line of the above expression for \delta R^(S2) we have used the two identities (A.38) and (A.39) (see Appendices B.4 and B.5 for their derivations), where E_A is the membrane equation at subleading order (see equation (3.8)) and v_A is the vector structure defined here.

Note the simplification of \delta R^(V1): naively this term is of higher order; however, because of the membrane equation at first subleading order, it is of O(1) on the \psi = 1 hypersurface. Away from the hypersurface this may not be the case, and we have to expand the first line around \psi = 1 and keep at least the first term in the expansion; this is what has been done in the second line of equation (A.37). In the final step we have rewritten \delta R^(V1)_lin in terms of the already-defined vector structure v_A, plus terms proportional to the membrane equation.

The remaining components of S_AB are easy to compute without any further subtlety. In deriving (A.42), we have used an identity that follows from the subsidiary condition.

B Some identities

In this appendix we prove some of the identities used to compute the metric correction.

B.1 Derivation of the identity (4.3)

After a bit of straightforward calculation, each of the above terms can be evaluated explicitly.

B.2 Derivation of the scalar structure s_2 (3.16)

The scalar structure s_2 is defined as in (3.16). Using the relations above, we get the final expression.

B.3 Derivation of the identity (A.9)

In the last line we have used the relation below, together with one further identity in the final step.

B.4 Derivation of the identity (A.38)

Here we have used three auxiliary relations; the third follows from the fact that the leading-order membrane equation holds. In the last line we have used (B.23); using (B.23) in (B.22) we obtain the first term. For the second term we use the identity (B.26), whose derivation is somewhat lengthy and which we skip. Combining the two terms, we finally get (A.38).

B.5 Derivation of the identity (A.39)

We can divide the LHS of (A.39) as in (B.29), where W is what we get by subtracting P^C_A \nabla^2 u_C - P^C_A \nabla^2 n_C from the LHS of equation (B.29). First we simplify W: in the first line and in the last line we have used auxiliary identities, together with the divergence of the leading-order vector membrane equation. Adding (B.32), (B.33), and (B.36), we get (B.40); putting (B.41) into (B.40) gives the intermediate result. As mentioned before, the derivation of P^C_B (n\cdot\nabla)[(n\cdot\nabla)u_C] is lengthy, so we use the result quoted in (B.26). Using (B.26) for P^C_B (n\cdot\nabla)[(n\cdot\nabla)u_C], we obtain the final expression, where in the last step we have used one further identity.

C QNM for the AdS/dS Schwarzschild black hole: details of the calculation

In this appendix we present several computational details, following [3] and [1]. The steps are tedious but a straightforward extension of what has been done in [1].
The non-zero Christoffel symbols of the metric (4.4) are listed first (denoting the metric on the unit sphere by \bar g_{ab}, its Christoffel symbols by \bar\Gamma^a_{bc}, and the covariant derivative with respect to \bar g_{ab} by \bar\nabla_a). The normal to the membrane evaluates accordingly, with, in particular,

\nabla_a n_t = (-\epsilon\,\partial_t \bar\nabla_a \delta r)(\ldots),  \nabla_a n_r = (\epsilon\,\bar\nabla_a \delta r)(\ldots).

The projector P^B_A = \delta^B_A - n_A n^B evaluates to

P^r_r = 0,  P^t_t = 1,  P^a_b = \delta^a_b,  P^t_a = 0,  P^a_t = 0,  P^r_t = -\epsilon\,\partial_t \delta r,  P^r_a = \epsilon\,\bar\nabla_a \delta r,  P^a_r = (\ldots).

The spacetime form of the extrinsic curvature, K_AB = \Pi^C_A \nabla_C n_B, evaluates accordingly, with corrections proportional to \epsilon\,\bar\nabla_a \delta r. The non-zero Christoffel symbols of the metric (4.9) are listed next, and the projector P^\mu_\nu \equiv \delta^\mu_\nu + u^\mu u_\nu evaluates accordingly.

C.1 Computation of K_{\mu\nu}

We define K_{\mu\nu} as the pullback of the extrinsic curvature K_MN (a spacetime tensor) onto the membrane surface, as in (C.9), where we denote the spacetime coordinates (r, t, \theta^a) by X^M and the coordinates on the membrane worldvolume (t, \theta^a) by y^\mu. The extrinsic curvature K_AB is defined as above. Equation (C.9), evaluated up to linear order for the QNM calculation, implies

K_{\mu\nu} = \epsilon(\partial_\mu \delta r) K_{r\nu} + \epsilon(\partial_\nu \delta r) K_{r\mu} + K_{\mu\nu} + O(\epsilon^2).   (C.11)

From (C.5) we see that K_rN = O(\epsilon); using this fact together with (C.11) gives us (C.12). The trace of the extrinsic curvature (C.12) evaluates to (C.13).

C.2 Computation of the terms relevant for the membrane equation

Here we report the terms needed to evaluate the membrane equation up to linear order; the relevant terms at leading and subleading order evaluate to the expressions used in the main text. First, for convenience, rewrite the membrane equation (4.12) as (C.16). We then see that for a uniform membrane configuration with spherical symmetry E_a vanishes, and hence E_a ~ O(\epsilon) in the presence of fluctuations; moreover, P^a_t ~ O(\epsilon). Hence we see from (C.16) that E^{tot}_t is identically zero at linear order. Similarly, because P^t_a = O(\epsilon), only the O(\epsilon^0) pieces of E_t are relevant for evaluating E^{tot}_a at linear order; this is why subsection C.2 evaluates only those terms in E_\mu that matter for the linearized analysis. Substituting the expressions derived in subsection C.2 into the linearized vector membrane equation in the angular directions, we finally get (4.13).

D QNM for the AdS Schwarzschild black brane: details of the calculation

As in the previous appendix, here we provide the details of the computation required to determine the QNM frequencies of the AdS Schwarzschild black brane. The non-zero Christoffel symbols of the background metric (4.28) are

\Gamma^r_{rr} = -\frac{1}{r},  \Gamma^r_{ab} = -r^3 \delta_{ab},  \Gamma^a_{rb} = \frac{1}{r}\delta^a_b,  \Gamma^r_{tt} = r^3,  \Gamma^t_{rt} = \frac{1}{r}.   (D.1)

The normal to the membrane evaluates to

n_r = \frac{1}{r},  n_a = -\frac{\epsilon\,\partial_a \delta r}{r},  n_t = -\frac{\epsilon\,\partial_t \delta r}{r}.   (D.2)

The non-zero components of \nabla_M n_N evaluate to

\nabla_r n_r = 0,  \nabla_r n_t = \frac{2\epsilon\,\partial_t \delta r}{r^2},  \nabla_t n_r = \frac{\epsilon\,\partial_t \delta r}{r^2},  \nabla_t n_t = -\frac{\epsilon\,\partial_t^2 \delta r}{r} - r^2,  \nabla_r n_a = \frac{2\epsilon\,\partial_a \delta r}{r^2},  \nabla_a n_r = \frac{\epsilon\,\partial_a \delta r}{r^2},  \nabla_t n_a = -\frac{\epsilon\,\partial_t \partial_a \delta r}{r},  \nabla_a n_t = -\frac{\epsilon\,\partial_t \partial_a \delta r}{r},  \nabla_a n_b = -\frac{\epsilon\,\partial_a \partial_b \delta r}{r} + r^2 \delta_{ab}.   (D.3)

The projector P^B_A = \delta^B_A - n_A n^B evaluates to

P^r_r = 0,  P^t_t = 1,  P^a_b = \delta^a_b,  P^a_t = 0,  P^t_a = 0,  P^r_t = \epsilon\,\partial_t \delta r,  P^t_r = -\frac{\epsilon\,\partial_t \delta r}{r^4},  P^r_a = \epsilon\,\partial_a \delta r,  P^a_r = \frac{\epsilon\,\partial_a \delta r}{r^4}.   (D.4)

D.1 Computation of K_{\mu\nu}

As before, K_{\mu\nu} is defined as the pullback of the spacetime form of the extrinsic curvature K_MN onto the membrane worldvolume.
Doing so, we find that the non-zero components of K_{\mu\nu} evaluate to

K_{tt} = -\epsilon\,\partial_t^2 \delta r - (1 + 2\epsilon\,\delta r),  K_{ta} = -\epsilon\,\partial_t \partial_a \delta r,  K_{ab} = -\epsilon\,\partial_a \partial_b \delta r + (1 + 2\epsilon\,\delta r)\,\delta_{ab}.   (D.9)

The trace of K_{\mu\nu} evaluates to

K = n + \epsilon\,\partial_t^2 \delta r - \epsilon\,\partial_a \partial^a \delta r,   (D.10)

where the index a in (D.10) is raised with \delta^{ab}.

D.2 Computation of the terms relevant for the membrane equation

At leading order the relevant terms evaluate to

u^\nu K_{\nu a} = -\epsilon\,\partial_t \partial_a \delta r + \epsilon\,\delta u_a,
u^\nu \hat\nabla_\nu u_t = O(\epsilon),
u^\nu \hat\nabla_\nu u_a = \epsilon\,\partial_t \delta u_a + \epsilon\,\partial_a \delta r,
\hat\nabla_t K = O(\epsilon),
\hat\nabla_a K = \epsilon\,\partial_a \partial_t^2 \delta r - \epsilon\,\partial_a \partial^2 \delta r,
\hat\nabla^2 u_t = O(\epsilon),
\hat\nabla^2 u_a = -\epsilon\,\partial_t^2 \delta u_a + \epsilon\,\partial^2 \delta u_a.   (D.11)

Since P^a_t ~ O(\epsilon) and also E_b ~ O(\epsilon), E^{tot}_t vanishes up to linear order. Note that P^t_a ~ O(\epsilon); hence only the O(\epsilon^0) pieces of E_t contribute when we evaluate E^{tot}_a up to linear order. Keeping these facts in mind, we calculated only the relevant terms in subsection D.2. Substituting the expressions derived in subsection D.2 into the linearized vector membrane equation in the angular directions, we finally get (4.32).
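As an independent sanity check of the background data quoted in (D.1), the Christoffel symbols of the Poincaré-patch metric ds^2 = -r^2 dt^2 + dr^2/r^2 + r^2 dx_a dx^a can be recomputed symbolically. A minimal sympy sketch (our own check, keeping a single brane direction x for brevity; the other x^a directions work identically):

```python
import sympy as sp

# Verify the Christoffel symbols quoted in (D.1) for the background metric
# ds^2 = -r^2 dt^2 + dr^2/r^2 + r^2 dx^2 (AdS radius L = 1).
t, r, x = sp.symbols('t r x', positive=True)
coords = [t, r, x]
g = sp.diag(-r**2, 1/r**2, r**2)
ginv = g.inv()

def christoffel(mu, nu, rho):
    # Gamma^mu_{nu rho} = (1/2) g^{mu s} (d_nu g_{s rho} + d_rho g_{s nu} - d_s g_{nu rho})
    return sp.simplify(sum(
        sp.Rational(1, 2) * ginv[mu, s] * (
            sp.diff(g[s, rho], coords[nu]) + sp.diff(g[s, nu], coords[rho])
            - sp.diff(g[nu, rho], coords[s]))
        for s in range(3)))

assert sp.simplify(christoffel(1, 1, 1) - (-1/r)) == 0   # Gamma^r_rr = -1/r
assert sp.simplify(christoffel(1, 0, 0) - r**3) == 0     # Gamma^r_tt = r^3
assert sp.simplify(christoffel(0, 1, 0) - 1/r) == 0      # Gamma^t_rt = 1/r
assert sp.simplify(christoffel(1, 2, 2) - (-r**3)) == 0  # Gamma^r_xx = -r^3
assert sp.simplify(christoffel(2, 1, 2) - 1/r) == 0      # Gamma^x_rx = 1/r
```

All five assertions pass, reproducing (D.1) component by component.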
Selection of Embedding Dimension and Delay Time in Phase Space Reconstruction via Symbolic Dynamics

The modeling and prediction of chaotic time series require proper reconstruction of the state space from the available data in order to successfully estimate invariant properties of the embedded attractor. Thus, one must choose an appropriate time delay τ* and embedding dimension p for phase space reconstruction. The value of τ* can be estimated from mutual information, but this method is computationally rather cumbersome. Additionally, some researchers have recommended that τ* be chosen in dependence on the embedding dimension p, through an appropriate value of the time delay window τ_w = (p − 1)τ*, the optimal time delay for independence of the time series. The C-C method, based on the correlation integral, is simpler than mutual information and has been proposed for selecting τ_w and τ* optimally. In this paper, we suggest a simple method for estimating τ* and τ_w based on symbolic analysis and symbolic entropy. As in the C-C method, τ* is estimated as the first locally optimal time delay, and τ_w as the time delay for independence of the time series. The method is applied to several chaotic time series that serve as standard benchmarks for such techniques. The numerical simulations for these systems verify that the proposed symbolic-based method is useful for practitioners and, for the studied models, performs better than the C-C method in choosing the time delay and embedding dimension. In addition, the method is applied to EEG data in order to study and compare some dynamic characteristics of brain activity under epileptic episodes.

Introduction

The theory of state space reconstruction suggested by Packard, Takens et al. [1,2] is the basis for data-driven analysis and prediction of chaotic systems. Takens' theorem [2] shows that the strange attractor of a chaotic system can be properly recovered from a single projection of the dynamical system. The fundamental reconstruction theorem of Takens establishes a sufficient (but not necessary) condition p ≥ 2d + 1, where d is the fractal dimension of the underlying chaotic attractor and p stands for the embedding dimension used for phase space reconstruction. Nevertheless, no condition is given regarding the time delay.

A popular method for state space reconstruction is the method of delays. It consists of embedding the observed scalar time series {X_t}_{t∈I} in a p-dimensional space, X^τ_p(t) = (X_t, X_{t+τ}, …, X_{t+(p−1)τ}) for t ∈ I, where τ is the time delay for the reconstruction, p is the embedding dimension, and I is a set of time indexes of cardinality T. Notice that the number of points inserted into the p-dimensional space is M = T − (p − 1)τ, and all dynamic properties, such as dependencies, periodicity, and complexity changes, can be extracted from it. That is, there is a diffeomorphism from the orbits of the chaotic attractor in the reconstructed space R^p to the original system.

The selection of the parameters p and τ* is a challenge. An improper choice can result in a spurious indication of a nonlinear complex structure when the system is in fact linear. Although the specialized literature provides different methods for selecting the parameters for state space reconstruction, none of them is superior to the others in all respects.
In general, the optimal strategy for parameter selection will depend on the time series and on a complexity measure (e.g., Lyapunov exponents or correlation dimension). There are two different approaches to the selection of the parameters p and τ*. The first approach selects p and τ* independently of each other; examples are the G-P algorithm for the selection of p proposed by Albano et al. [3] and different proposals for the selection of the time delay τ* based on mutual information [4], autocorrelation and higher-order correlations [5], the filling factor [6], the wavering product [7], average displacement (AD) [8], and multiple autocorrelation [9]. The second approach considers that the parameters p and τ* are closely related when the time series under consideration is noisy and of finite length. A great number of experiments indicate that p and τ* are related to the time delay for independence of the time series through τ_w = (p − 1)τ*. Therefore, a bad selection of the parameters will directly impact the equivalence between the original system and the reconstructed phase space. Thus, some authors are in favor of selecting p and τ* jointly, as in the small-window solution [10], the C-C method [11], and automated embedding [12].

Many researchers consider the second approach (joint selection) more reasonable than the first (independent selection) in engineering practice. They argue that the estimation of mutual information is computationally rather cumbersome, whereas the autocorrelation function only accounts for linear dependence and therefore does not properly treat the presence of nonlinearities. The C-C method suggested by Kim et al. [11], which provides the delay time τ* and the embedding dimension p simultaneously by using the correlation integral, is the most popular; it has the advantages of low complexity and robustness in finite samples [13].

In the present paper, we propose a new method for selecting p and τ* based on symbolic dynamics and information theory. Symbolic dynamics studies dynamical systems on the basis of the symbol sequences obtained from a suitable partition of the state space. The basic idea behind symbolic dynamics is to divide the phase space into a finite number of regions and label each region with an alphabetical letter; in this regard, symbolic dynamics is a coarse-grained description of dynamics. Even though coarse-grained methods lose a certain amount of detailed information, some essential features of the dynamics may be kept, e.g., periodicities and dependencies, among others. Symbolic dynamics has been used for the investigation of nonlinear dynamical systems (central references for the interested reader are [14,15,16,17,18]; for an overview, see Hao and Zheng [19]). In general terms, there is broad agreement that symbolization can increase the efficiency of finding, and even quantifying, information for characterizing and recognizing temporal patterns (see [20] for a review of experimental data). The process of symbolizing a time series is based on the method of delay-time coordinates, introduced by Takens, used to carry out the phase space reconstruction.
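The method of delays itself is straightforward to implement. A minimal numpy sketch (our own illustration), which also makes explicit the count M = T − (p − 1)τ of reconstructed points:

```python
import numpy as np

# Minimal delay embedding ("method of delays"): embeds a scalar series in a
# p-dimensional space with delay tau. The number of reconstructed points is
# M = T - (p - 1) * tau, as stated above.
def delay_embed(x, p, tau):
    x = np.asarray(x)
    M = len(x) - (p - 1) * tau
    return np.column_stack([x[i * tau : i * tau + M] for i in range(p)])

# Example: T = 1000 samples, p = 3, tau = 12 -> 976 reconstructed vectors.
x = np.sin(0.1 * np.arange(1000))
print(delay_embed(x, p=3, tau=12).shape)   # (976, 3)
```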
Since the methods of state space reconstruction rely, to some extent, on detecting delays for which there is some sort of dependence (linear or nonlinear), and since symbolic dynamics has been used as a statistical tool to detect the presence of dependence in time series [21], symbolic dynamics is a suitable tool for selecting the optimal state space reconstruction parameters of chaotic time series. Thus, we select p and τ* by translating the problem into symbolic dynamics, and we then use an entropy measure associated with the symbol space (symbolic entropy) as a tool for parameter selection. On the one hand, we have compared the performance of the proposed method with other available methods, and the results seem to be in favor of this proposal. On the other hand, from an empirical point of view, we have applied it to EEG data, which allows for understanding some dynamic characteristics of brain activity under epileptic episodes.

The rest of the paper is structured as follows. In Section 2, we introduce the basic concepts of symbolic analysis, and we also provide a symbolization procedure that works for the estimation of the parameters for phase space reconstruction. In Section 3, we show the performance of the symbolic method in estimating phase space reconstruction parameters and compare it with the well-known mutual information based methods and the C-C method. In Section 4, the new techniques presented in this paper are applied to a real EEG database obtained from the University of Bonn, well studied for understanding epileptic phenomena. Finally, Section 5 presents conclusions.

Definitions and Symbolization Procedure

In this section, we introduce some definitions and basic notation referring to symbolic dynamics. Let {X_t}_{t∈I} be a real-valued time series. We will use symbolic analysis to study the state space reconstruction parameters associated with it. Symbolic analysis, in our context, is a coarse-grained approach to the study of time series: it consists of embedding the time series in a p-dimensional space, constructing a partition of this p-dimensional space, and labeling each set of the partition with a symbol, so that all p-dimensional vectors belonging to the same set of the partition are labeled with the same symbol. Afterwards, with information theory based measures, we study the distribution of the symbols, which helps us in the estimation of the parameters for state space reconstruction.

More concretely, in mathematical terms, given a positive integer p ≥ 2 and a time delay τ, the time series is embedded in a p-dimensional space as X^τ_p(t) = (X_t, X_{t+τ}, …, X_{t+(p−1)τ}). The parameter p is usually known as the embedding dimension, and X^τ_p(t) as a (p, τ)-history. Next, given a positive real number ε, and in order to provide a partition of R^p, we define for any element v = (v_1, v_2, …, v_p) ∈ R^p the indicator functions δ_ij(v), where δ_ij(v) = 1 if and only if |v_i| and |v_j| are both either smaller or greater than ε, and δ_ij(v) = 0 otherwise. Let Γ_p be the set, of cardinality 2^(p−1), formed by the vectors of length p − 1 with entries in {0, 1}. Then we can define a map f : R^p → Γ_p by f(v) = (δ_12(v), δ_13(v), …, δ_1p(v)). The map f defines an equivalence relation in R^p such that v ∼ w if and only if f(v) = f(w). Therefore, this equivalence relation provides a partition of R^p into 2^(p−1) disjoint sets, each labeled with an element of Γ_p. The elements of Γ_p are called symbols, and f the symbolization map.
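The symbolization map f is easy to implement. The sketch below (our own illustration) follows the definition above, recording for each history whether |v_1| and |v_{i+1}| fall on the same side of the threshold ε, and estimates the symbol distribution by relative frequencies; it reproduces the worked example given below.

```python
import numpy as np
from collections import Counter

# Symbolization map f: the i-th entry of the symbol records whether |v_1|
# and |v_{i+1}| fall on the same side of the threshold eps
# (1 = agreement, 0 = disagreement), as in the worked example below.
def symbolize(history, eps):
    big = [abs(v) > eps for v in history]
    return tuple(int(b == big[0]) for b in big[1:])

def symbol_distribution(x, p, tau, eps):
    T = len(x)
    M = T - (p - 1) * tau
    symbols = [symbolize([x[t + i * tau] for i in range(p)], eps) for t in range(M)]
    counts = Counter(symbols)
    return {s: c / M for s, c in counts.items()}

# Matches the worked example: (1, -7, -12) with eps = 3 maps to (0, 0).
print(symbolize((1, -7, -12), eps=3))   # (0, 0)
```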
In general, if π ∈ Γ_p is a symbol and v ∈ R^p is such that f(v) = π, we say that v is of π-type. Next, we are interested in applying the symbolization map f to the (p, τ)-histories: f(X^τ_p(t)) is a vector (symbol) whose i-th entry provides information on whether |X_t| and |X_{t+iτ}| are both either smaller or greater than ε. We then extract information on the dynamics of the time series {X_t}_{t∈I} by using information theory based measures on its associated symbol distribution. More concretely, we can estimate the probability of a symbol π ∈ Γ_p by its relative frequency,

p(π) = #{t : f(X^τ_p(t)) = π} / M,

where M = T − (p − 1)τ is the number of (p, τ)-histories.

Now, under this setting, given a time delay τ and an embedding dimension p ≥ 2, we can define the symbolic entropy of a time process {X_t}_{t∈I} as the Shannon entropy [22] of the 2^(p−1) distinct symbols:

h(p, τ) = −Σ_{π∈Γ_p} p(π) ln p(π).

The symbolic entropy h(p, τ) is the information contained in comparing the (p, τ)-histories generated by the time process. Notice that 0 ≤ h(p, τ) ≤ ln(2^(p−1)), where the lower bound is attained when only one symbol occurs, and the upper bound for a completely random system (an i.i.d. temporal sequence) in which all possible symbols appear with the same probability. Then, if τ = τ* is an optimal time delay, the dependence between X_t and X_{t+kτ*} vanishes for every positive integer k, and hence the symbolic entropy associated with the time series {X_t} should be maximal. Therefore, in order to select the optimal time delay τ*, we select the first τ at which h(p, τ) attains a local maximum.

With respect to the optimal embedding window τ_w = (p − 1)τ*, it can be associated with the mean orbital period P_w of low-dimensional chaotic systems that show pseudo-periodicity; that is, P_w can be considered the time dependence of the chaotic time series. Although chaotic systems oscillate without periodicity, low-dimensional chaotic systems show pseudo-periodicity, and the mean orbital period can naturally be associated with the mean time between two consecutive visits to a Poincaré section [23]. For a time series with mean orbital period P_w, all points at times that are multiples of P_w lie in the same Poincaré section in phase space. Therefore, a local minimum of the symbolic entropy h(p, τ) is reached for τ = kP_w, and thus τ_w is selected as the first local minimum of h(p, τ).

To finish this section, we illustrate the symbolization procedure with an easy example. Let {X_t}_{t∈I} be a finite time series of length T = 7 with X_1 = 1, X_2 = −7, X_3 = −12, X_5 = −1, X_6 = 9, X_7 = 14 (and |X_4| > 3), and assume that ε = 3, τ = 1, and p = 3. Then the symbol set is Γ_3 = {(0,0), (0,1), (1,0), (1,1)}. Under this setting, we can construct five (p, τ)-histories X^1_3(1), …, X^1_3(5), with X^1_3(1) = (1, −7, −12) and X^1_3(5) = (−1, 9, 14). The symbolization map f associates each (p, τ)-history with a symbol. Concretely, f(X^1_3(1)) = f((1, −7, −12)) = (0, 0), because the first entry of the history, 1, is smaller in absolute value than ε = 3 while the second and the third are both greater than ε = 3, and hence each agreement indicator that defines the symbolization map takes the value 0. Similarly, we find that X^1_3(4) is of (0,1)-type and X^1_3(5) is of (0,0)-type. Thus, we estimate the symbol distribution by the relative frequencies p((0,0)) = 2/5 and p((0,1)) = p((1,0)) = p((1,1)) = 1/5, and the entropy associated with them is h(3,1) = −(2/5) ln(2/5) − 3·(1/5) ln(1/5) ≈ 1.33.

Selection of p and ε for Finite Sample Sizes

When determining the parameters of phase space reconstruction of a finite chaotic time series using the symbolic entropy, one needs to select the values of p and ε in advance; the sample size T also plays an important role. In [21], some general criteria are recommended for selecting the embedding dimension p and the sample size T in order to compute the symbolic entropy.
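Before turning to those criteria, the τ*/τ_w selection rule just described can be summarized in code. A minimal sketch (our own illustration), building on symbol_distribution() from the previous sketch, which also includes the finite-sample rule for p discussed next:

```python
import numpy as np

# tau* = first local maximum of h(p, tau); tau_w = first local minimum,
# i.e. one mean orbital period. Uses symbol_distribution() defined earlier.
def symbolic_entropy(x, p, tau, eps):
    probs = symbol_distribution(x, p, tau, eps)
    return -sum(q * np.log(q) for q in probs.values())

def select_delays(x, p, eps, tau_max=200):
    # h[t] corresponds to tau = t + 1
    h = np.array([symbolic_entropy(x, p, tau, eps) for tau in range(1, tau_max + 1)])
    tau_star = next(t for t in range(1, len(h) - 1)
                    if h[t] >= h[t - 1] and h[t] >= h[t + 1]) + 1
    tau_w = next(t for t in range(1, len(h) - 1)
                 if h[t] <= h[t - 1] and h[t] <= h[t + 1]) + 1
    return tau_star, tau_w

# Finite-sample rule for p: the largest p with 5 * 2**(p-1) <= T.
def max_embedding_dim(T):
    p = 2
    while 5 * 2 ** p <= T:
        p += 1
    return p
```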
Returning to those criteria: first, the sample size T should be much larger than the number of symbols 2^(p−1) of the symbolization map f. Second, from a statistical point of view, data sets should contain at least five times the number of possible events or symbols. Thus, the embedding dimension is taken as the largest positive integer p that satisfies 5·2^(p−1) ≤ T.

To select ε, we propose a data-driven method based on symbolic entropy. In particular, we partially rely on the methodology described in [24], based on the construction of peak detection functions (FPs). The selected ε will be the largest one that locally maximizes the absolute value of a peak function FP(i, x_i), where FP associates values to the symbolic entropy of a time series. More concretely, the peak function allows selecting the time delay τ for which the value h(p, τ) is maximal (respectively, minimal) in the neighborhood (τ − k, τ + k). As stated in [24], values of k in the range [3, 5] are usually suitable. Notice that, by construction, 0 < ε < max{X_t}. The selected parameter, namely ε*, will then be the value in the interval (0, max{X_t}) satisfying this maximization condition.

Simulation Analysis

The following examples illustrate the performance of the proposed symbolic method when estimating the time delay τ* and the embedding dimension p for phase space reconstruction of a chaotic time series. The aim of this set of simulations is, first, to evaluate empirically the performance of the new symbolic procedure in selecting the "correct" parameters; second, we aim to compare the symbolic method with other competitive available methods that were discussed in the introductory section and that are fully documented in the bibliographical references of this paper. To this end, we extract univariate time series {X_t} of length T = 3000 from five chaotic systems that have been extensively studied. In all cases, we embed the time series in a six-dimensional space, that is, p = 6.

To evaluate the performance of the novel symbolic method, we compare it with other available selection methods: the C-C method, the Nearest Neighbor method, and the method based on the first minimum of the autocorrelation function (FAC); the selection parameters of the last two methods are based on the mutual information (MI) criterion. The scientific literature has shown that the C-C method performs well when used for selecting time delays and embedding dimensions; it can thus be thought of as a natural competitor, and it is worth comparing performance against it. For this reason, we compare results for several well-known dynamical systems. In order to compare and evaluate the performance of each method, we use the parameters selected by each method to reconstruct the attractor and estimate two complexity measures of each system. These two measures are theoretically known for each system and are therefore used as a basis of comparison: a final user will prefer reconstruction parameters that lead to estimates as close as possible to the theoretical ones. Accordingly, we use the following systems to conduct the comparisons:

• Lorenz system [25]:

ẋ = a(y − x),  ẏ = cx − y − xz,  ż = xy − bz.

The time series was obtained by projecting the x-coordinate of the system defined by the parameters a = 16, c = 45.92, b = 4, with an integration step of 0.01 and initial conditions x_0 = −1, y_0 = 0, and z_0 = 1.
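A sketch of how this test series can be generated (our own illustration, using scipy and the Lorenz equations in the form given above):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Generate the Lorenz test series: a = 16, c = 45.92, b = 4, step 0.01,
# initial conditions (-1, 0, 1); the x-coordinate is the observed series.
a, b, c = 16.0, 4.0, 45.92

def lorenz(t, s):
    x, y, z = s
    return [a * (y - x), c * x - y - x * z, x * y - b * z]

t_eval = np.arange(0, 30, 0.01)                   # 3000 samples
sol = solve_ivp(lorenz, (0, 30), [-1.0, 0.0, 1.0], t_eval=t_eval,
                rtol=1e-9, atol=1e-9)
x_series = sol.y[0]                               # x-coordinate projection
```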
The computed optimal ratio is ε* = 1.2σ_x, where σ_x is the standard deviation of the chaotic time series under consideration. For this optimal ratio, Figure 1 illustrates the normalized symbolic entropy h(6, τ)/6 as a function of the time delay τ for the Lorenz system. Clearly, we observe that the first local maximum is attained at τ* = 12 and the minimum at τ_w = 46. An estimated value of the embedding dimension p can then be computed by solving τ_w = (p − 1)τ*, giving an approximate value of p = 5. For the mutual information method, the optimal time delay was τ* = 11, while for the C-C method the estimated parameters were τ* = 10, τ_w = 100, and p = 11. Notice that the optimal time delays τ* estimated by the three methods are quite close to each other, while the estimated time delay window τ_w differs strongly between the C-C and symbolic methods.

• Rossler system [26]:

ẋ = −y − z,  ẏ = x + dy,  ż = e + z(x − f).

The time series was obtained by projecting the x-coordinate of the system defined by the parameters d = 0.15, e = 0.2, and f = 10, with an integration step of 0.05 and initial conditions x_0 = −1, y_0 = 0, and z_0 = 1. The computed optimal ratio is ε* = 0.4σ_x, where σ_x is the standard deviation of the chaotic time series under consideration. Figure 2 shows the normalized symbolic entropy h(6, τ)/6 as a function of the time delay τ for the Rossler system. It can be seen that the parameters selected by the symbolic method are τ* = 18 and τ_w = 121, and consequently the estimated value of p is 8. The estimated time delay for the mutual information method is τ* = 20. For the C-C method, the estimated parameters are τ* = 17 and τ_w = 191. Again, the optimal time delays τ* estimated by the three methods are very similar, while the time delay window τ_w estimated by the C-C method differs greatly from the one estimated with symbolic entropy.

• Duffing system [27]:

The time series was obtained by projecting the x-coordinate of the system defined by the parameters g = 0.05, k = 0.25, l = 7.5, and v = 1, with an integration step of 0.05 and initial conditions x_0 = −1, y_0 = 0, and z_0 = 1. The computed optimal ratio is ε* = 0.275σ_x, where σ_x is the standard deviation of the chaotic time series under consideration. Figure 3 shows the normalized symbolic entropy h(6, τ)/6 as a function of the time delay τ for the Duffing system. The estimated optimal time delay and time delay window with the symbolic method are τ* = 14 and τ_w = 126, respectively; the estimated embedding dimension is then p = 10. As in the previous examples, the estimated time delays for the mutual information method (τ* = 12) and for the C-C method (τ* = 12) are fairly close to the one estimated by the symbolic method. Again, the time delay window estimated by the C-C method, τ_w = 161, is far from the one estimated with the symbolic method.

These first three models are well known and well studied and have served as a basis of comparison for new techniques with aims similar to those of this paper. In order to complete the analysis, we have also considered the following two models, the Mackey-Glass model and the Chen model.

• Mackey-Glass system [28]:

The time series was obtained by fixing the parameters a = 0.2, b = 0.1, c = 10, with a delay of 17 and initial conditions x(t < 0) = 0 and x(t = 0) = 1.2. The first 2000 iterations were discarded. The computed optimal ratio is ε* = 0.79σ_x, where σ_x is the standard deviation of the chaotic time series under consideration.
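Unlike the previous systems, the Mackey-Glass model is a delay differential equation, so its series must be generated with a history buffer. A minimal Euler-step sketch (our own illustration; the step size and lengths are arbitrary choices, and we read the garbled parameter list above as a = 0.2, b = 0.1, c = 10, delay 17):

```python
import numpy as np

# Euler integration of dx/dt = a*x(t-d)/(1 + x(t-d)**c) - b*x(t),
# with history x(t < 0) = 0 and x(0) = 1.2.
a, b, c, d, dt = 0.2, 0.1, 10.0, 17.0, 0.1
lag = int(d / dt)
n_steps, discard = 50000, 20000

x = np.zeros(n_steps)
x[0] = 1.2
for i in range(n_steps - 1):
    x_lag = x[i - lag] if i >= lag else 0.0      # x(t < 0) = 0
    x[i + 1] = x[i] + dt * (a * x_lag / (1 + x_lag ** c) - b * x[i])
series = x[discard:]                              # drop the transient
```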
The estimated optimal time delay and time delay window with the symbolic method are τ* = 13 and τ_w = 49, respectively. Figure 4 shows the normalized symbolic entropy h(6, τ)/6 as a function of the time delay τ for the Mackey-Glass system. The estimated embedding dimension is then p = 5. In this case, the estimated time delays for the mutual information method (τ* = 12) and for the C-C method (τ* = 14) are fairly close to the one estimated by the symbolic method (τ* = 13). Again, the time delay window estimated by the C-C method, τ_w = 166, is far from the one estimated with the symbolic method (τ_w = 49).

• Chen system [29]:

The time series was obtained by projecting the x-coordinate of the system defined by the parameters a = 35, b = 3, c = 28, with an integration step of 0.01 and initial conditions x_0 = −1, y_0 = 0, and z_0 = 1. The first 2000 iterations were discarded. The computed optimal ratio is ε* = 0.89σ_x, where σ_x is the standard deviation of the chaotic time series under consideration. The estimated optimal time delay and time delay window with the symbolic method are τ* = 11 and τ_w = 60, respectively. Figure 5 shows the normalized symbolic entropy h(6, τ)/6 as a function of the time delay τ for the Chen system. The estimated embedding dimension is then p = 6. As in the previous examples, the estimated time delays for the mutual information method (τ* = 10) and for the C-C method (τ* = 9) are fairly close to the one estimated by the symbolic method. Again, the time delay window estimated by the C-C method, τ_w = 104, is far from the one estimated with the symbolic method (τ_w = 60).

Table 1 summarizes, for each method, the estimated parameters for phase space reconstruction of the five systems; bold is reserved for the results obtained with the new selection method. In order to check whether the symbolic method is reliable when estimating the parameters τ*, τ_w, and p, we compute, based on these estimates, two complexity measures for each system that require these parameters: the largest Lyapunov exponent (LLE) [30], which is a measure of the complexity of the time process, and the correlation dimension D [31], which is a measure of the dimension of the space occupied by the chaotic attractor. For the computation of these two geometric invariants, the time delay τ* and the embedding dimension p are essential parameters, and a bad selection of them would produce a large bias in LLE and D. The largest Lyapunov exponents of the five systems have been computed in [27,32,33,34,35] and the correlation dimension D in [32,33,36,37]. Furthermore, we have completed the study by increasing the sample size to 10,000 observations. Tables 2 and 3 show the values of LLE and D based on the parameters τ*, τ_w, and p estimated with the symbolic and C-C methods, together with the reference values.

Table 2. Largest Lyapunov exponent LLE based on the estimation of the phase space parameters τ*, τ_w, and p with the C-C method and the symbolic method, together with the reference true value. Values in parentheses report the estimated LLE for series of 10,000 observations.

The estimated values of the largest Lyapunov exponent and the correlation dimension based on the symbolic method can be observed in Tables 2 and 3, respectively.
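For reference, LLE estimates of the kind reported in Table 2 are typically obtained by tracking the divergence of initially close trajectories in the reconstructed space. A minimal Rosenstein-style sketch (our own illustration, reusing delay_embed() from above; not the authors' exact procedure):

```python
import numpy as np

# Rosenstein-style LLE estimate: for each reconstructed point, find its
# nearest neighbor (excluding temporally close points), track the average
# log-separation over a horizon, and fit the slope of the divergence curve.
def lle_rosenstein(x, p, tau, dt=1.0, theiler=50, horizon=100):
    Y = delay_embed(x, p, tau)
    M = len(Y) - horizon
    dists = np.linalg.norm(Y[:M, None, :] - Y[None, :M, :], axis=2)
    for i in range(M):                       # Theiler window: mask neighbors in time
        lo, hi = max(0, i - theiler), min(M, i + theiler + 1)
        dists[i, lo:hi] = np.inf
    nn = dists.argmin(axis=1)
    div = np.zeros(horizon)
    for k in range(horizon):
        sep = np.linalg.norm(Y[np.arange(M) + k] - Y[nn + k], axis=1)
        div[k] = np.mean(np.log(sep + 1e-12))
    k = np.arange(horizon)
    return np.polyfit(k * dt, div, 1)[0]     # slope = LLE estimate
```

In practice the fit is restricted to the initial linear region of the divergence curve; fitting over the whole horizon, as in this sketch, is a simplification.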
Importantly, these symbolic-based estimates are very close to their reference (theoretical) values regardless of the sample size, which indicates the good behavior of the new method for reconstructing the dynamics of the system. On the other hand, we asked whether the symbolic method is competitive with its main competitor, the C-C method. The estimated values of the Lyapunov exponents are clearly in favor of the symbolic method: for all five systems, its estimates are closer to the theoretical reference values than those of the C-C method. Similar conclusions can be drawn from the results for the correlation dimension: the symbolic-based estimated dimensions are closer to the true value than the C-C estimates, regardless of the system studied. Methods based on nearest neighbors and the autocorrelation function are also reported, and the results show that the symbolic-based method has better empirical behavior than these as well. All of these results could be explained by a wrong selection of the delay time window τ_w by the C-C method, as stated in [23,38,39,40].

EEG Dynamics under Epileptic Activity

The electroencephalogram (EEG) records the spontaneous bioelectrical activity produced by the central nervous system; it can therefore be understood as a representative signal containing information about the activity of the brain. EEG is widely used in clinical practice and in neurophysiological research. The shape of the waves may contain useful information about the state of the brain, and the EEG does include abundant information about the state and changes of the neural system. The dynamics of brain activity are considered to be of a nonlinear nature; accordingly, EEG signals are studied by means of nonlinear dynamic tools. Indeed, a large body of studies has reported that the EEG derives from chaotic systems [41,42,43,44].

In this section of the paper, we apply the symbolic-based approach to the reconstruction of dynamics generated by empirical EEG recordings from a public dataset of the University of Bonn [41]. Epilepsy is characterized by recurring seizures in which abnormal electrical activity in the brain causes loss of consciousness or whole-body convulsions. From this point of view, our results contribute to the empirical analysis of the role of nonlinear dynamics in epileptology. The Bonn University EEG database comprises five types of EEG signals (recordings from healthy volunteers with eyes open and closed, from epilepsy patients in the epileptogenic zone during a seizure-free interval and in the opposite brain zone, and from epilepsy patients during epileptic seizures).

To conduct this empirical analysis, we first use Theiler's method of surrogate data to distinguish between linearity and nonlinearity: the null hypothesis of linearity is tested against nonlinearity [45]; chaos cannot come from a linear signal. Second, we test for chaoticity against pure stochasticity. Linear signals are expected to be of a stochastic nature, while nonlinear signals can come from either a stochastic process or a purely chaotic one; the statistical test for chaos [46] tests the null hypothesis of chaos against the alternative of a stochastic process. We also estimate the correlation dimension using Theiler's approach in order to exclude temporally correlated states from the correlation integral estimation [47]. Table 4 collects the outcomes of all these procedures.
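A minimal sketch of the phase-randomized surrogate construction underlying this kind of linearity test (our own illustration): surrogates preserve the linear autocorrelation (power spectrum) of the recording while destroying any nonlinear structure, and a test statistic is then compared between the data and the surrogate ensemble.

```python
import numpy as np

# Phase-randomized surrogate: keep Fourier amplitudes (hence the power
# spectrum and linear autocorrelation), randomize the phases.
def phase_randomized_surrogate(x, rng):
    X = np.fft.rfft(np.asarray(x, dtype=float))
    phases = rng.uniform(0, 2 * np.pi, len(X))
    phases[0] = 0.0                          # keep the mean
    if len(x) % 2 == 0:
        phases[-1] = 0.0                     # keep the Nyquist component real
    return np.fft.irfft(np.abs(X) * np.exp(1j * phases), n=len(x))

rng = np.random.default_rng(0)
# surrogates = [phase_randomized_surrogate(eeg, rng) for _ in range(99)]
```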
The results indicate, first, that the brain's activity is of a nonlinear nature for a healthy person with open eyes and for recordings of epileptic patients, whether or not they are under seizure activity, whenever the measurement is taken in the epileptogenic zone. The test for chaos applied to the nonlinear signals leads to the conclusion that only the nonlinear dynamics found for epileptic patients are statistically compatible with chaotic dynamics, while the dynamics are nonlinear stochastic for a healthy person with open eyes. Finally, the estimated correlation dimensions show how the (correlation) dimension is reduced as the process moves from stochastic to chaotic, as expected. These results support the nonlinear deterministic structure of brain dynamics related to epileptic activity, as reported earlier in [48,49]. Our estimates of the correlation dimensions are in line with previous studies [50] on the same dataset, although with different parameter configurations. The conclusion in this regard is that epileptic seizures are emergent states with reduced dimensionality compared to non-epileptic activity. This is in line with common clinical knowledge, which establishes that healthy systems evolve with time and have a higher adaptive capability, resulting in higher complexity; conversely, alterations in structural components and/or decreased functional capability of a subsystem cause dysfunction of the regulatory mechanism of the overall system, resulting in a loss of complexity, as indicated in [51,52].

Conclusions

In this paper, we have introduced a new method, based on symbolic dynamics, for the estimation of the phase space reconstruction parameters τ*, τ_w, and p. In the simulation analysis, we applied the symbolic method to choose the phase space reconstruction parameters for time series generated by several dynamical models that are well studied and commonly used to evaluate reconstruction methodologies. The values found for τ* agree well with those found by the mutual information and C-C methods; the values found for τ_w do not agree with those estimated by the C-C method. For this reason, and in order to clarify which method for selecting phase space reconstruction parameters is more reliable, we used the selected parameters in the computation of two complexity measures, namely the largest Lyapunov exponent (LLE) and the correlation dimension (D). The results indicate that the parameters estimated by the symbolic method produce a closer approximation to the reference (theoretical) values of LLE and D than those of the C-C method. Finally, the proposed method was used to study the dynamics of brain activity under epilepsy by means of real EEG signals. The empirical results suggest that epileptic patients show chaotic dynamics in their EEG signals. Furthermore, our results are statistically significant and therefore hint at the potential of symbolic-based tools for distinguishing healthy and epileptic subjects.
Isolation, characterization, and genetic manipulation of cold-tolerant, manganese-oxidizing Pseudomonas sp. strains

ABSTRACT Manganese-oxidizing bacteria (MnOB) produce Mn oxide minerals that can be used by humans for bioremediation, but their purpose for the bacterium is less clear. This study describes the isolation and characterization of cold-tolerant MnOB strains isolated from a compost pile in Morris, Minnesota, USA: Pseudomonas sp. MS-1 and DSV-1. The strains were preliminarily identified as members of the species Pseudomonas psychrophila by 16S rRNA analysis and a multi-locus phylogenetic study using a database of 88 genomes from the Pseudomonas genus. However, the average nucleotide identity between these strains and the P. psychrophila CF149 type strain was less than 93%; thus, the two strains are members of a novel species that diverged from P. psychrophila. DSV-1 and MS-1 are cold tolerant; both grow at 4°C but faster at 24°C. Unlike the mesophilic MnOB P. putida GB-1, both strains are capable of robustly oxidizing Mn at low temperatures. Both the DSV-1 and MS-1 genomes contain homologs of several Mn oxidation genes found in P. putida GB-1 (mnxG, mcoA, mnxS1, mnxS2, and mnxR). Random mutagenesis by transposon insertion was successfully performed in both strains and identified genes involved in Mn oxidation similar to those found in P. putida GB-1. Our results show that MnOB can be isolated from compost, supporting a role for Mn oxidation in plant waste degradation. The novel isolates Pseudomonas spp. DSV-1 and MS-1 can both oxidize Mn at low temperature and likely employ mechanisms and regulation similar to those of P. putida GB-1.

IMPORTANCE Biogenic Mn oxides have high sorptive capacity and are strong oxidants. These two characteristics make these oxides, and the microbes that make them, attractive tools for the bioremediation of wastewater and contaminated environments. Identifying MnOB that can be used for bioremediation is an active area of research. As cold-tolerant MnOB, Pseudomonas sp. DSV-1 and MS-1 have the potential to expand the environmental conditions in which biogenic Mn oxide bioremediation can be performed. The similarity of these organisms to the well-characterized MnOB P. putida GB-1 and the ability to manipulate their genomes raise the possibility of modifying them to improve their bioremediation ability.

to degrade complex organics to digestible byproducts; the minerals themselves can serve as reservoirs of organic carbon (8,9). Work from Yu and Leadbetter has shown that some species of bacteria can derive energy directly from the thermodynamically favorable oxidation of Mn (10).
In the environment, Mn oxidation may play a role in the breakdown of plant material. The concentration and redox state of Mn in leaf litter strongly correlate with the rate of litter decomposition (11), and the ability of forest ecosystems to store carbon is negatively correlated with Mn concentration (12). A significant component of plant litter is the cell wall component lignin. Lignin is a large, three-dimensional polymer of phenylpropanoid subunits; its large size and irregular structure make it especially difficult to degrade enzymatically (13). However, several species of fungi and bacteria are capable of lignin degradation (13-15). These organisms employ both laccase enzymes and a variety of heme-containing peroxidase enzymes, including Mn peroxidase. A major mechanism by which lignin-degrading enzymes work is the production of soluble Mn(III) species via an oxidation reaction (15,16).

Biogenic Mn oxides (BMO) and Mn-oxidizing bacteria (MnOB) are actively being investigated for their possible applications in bioremediation due to the highly reactive and sorptive nature of the BMO. BMO generated by E. coli cells genetically modified to express a non-blue laccase from Bacillus sp. GZB have been shown to degrade the endocrine disruptor bisphenol A (17). BMO from the naturally Mn-oxidizing strain Pseudomonas sp. QJX-1 can degrade the herbicide glyphosate, and the bacteria can use the resulting breakdown products as a carbon, phosphate, or nitrogen source (18). Oxidation of pollutants is not the only mechanism of bioremediation by BMO: they have also been shown to remove arsenic from wastewater through precipitation of metal arsenates or adsorption on ferromanganese minerals (19). Breakdown of 17α-ethinylestradiol (EE2) by BMO was increased 15-fold by the presence of the MnOB Pseudomonas putida MnB1 (20). Thus, optimal bioremediation may require living MnOB, not just the oxides they produce, making it important to identify MnOB that can thrive under a variety of growth conditions.

One of the best-studied MnOB is Pseudomonas putida GB-1. This gram-negative gamma-proteobacterium has been shown to possess three genes encoding Mn oxidase enzymes that each appear to oxidize Mn(II) to Mn(IV). Two of the oxidases belong to the multi-copper oxidase family of enzymes, encoded by the genes mnxG and mcoA (21). The third oxidase, MopA, is an animal heme peroxidase (22). Mn oxidation in this species is also dependent on a two-component regulatory pathway comprising two sensor kinases, MnxS1 and MnxS2, and a σ54-dependent response regulator, MnxR (23). Regulation of Mn oxidation in this species appears to be linked to the switch between motile and biofilm lifestyles, since deletion of the regulatory gene fleQ results in altered Mn oxidation (22,24).
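As an aside on the growth comparisons reported below: doubling times of the kind quoted for these strains are conventionally obtained from a log-linear fit to exponential-phase optical density readings. A minimal sketch with made-up OD600 values (purely illustrative; not data from this study):

```python
import numpy as np

# Doubling time from exponential-phase OD600: fit log2(OD) vs time;
# the doubling time is the reciprocal of the slope.
hours = np.array([0, 2, 4, 6, 8])
od600 = np.array([0.05, 0.09, 0.17, 0.33, 0.62])   # hypothetical readings

slope = np.polyfit(hours, np.log2(od600), 1)[0]    # doublings per hour
print(f"doubling time = {1 / slope:.1f} h")        # ~2.2 h for these numbers
```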
If a physiological function of Mn oxidation is the breakdown of recalcitrant organic carbon (ROC) for use as a food source, MnOB would be predicted to be found in areas with high concentrations of ROC, such as a compost pile. Sampling a compost pile on the campus of the University of Minnesota, Morris successfully resulted in the isolation of two MnOB strains, Pseudomonas sp. DSV-1 and MS-1. Both strains exhibit cold-tolerant growth and manganese oxidation down to 4°C. Genome sequencing shows that the two strains are very similar and carry genes identified as important for Mn oxidation in the well-characterized MnOB Pseudomonas putida GB-1. We further demonstrate that the two strains can be genetically manipulated, illustrating the possibility of using these strains for bioremediation and for studies of the evolution of Mn oxidation in the pseudomonads.

Isolation and identification of DSV-1 and MS-1

To test the prediction that Mn oxidation allows bacteria to degrade plant matter, samples were taken from a compost pile consisting of approximately 75% plant material and 25% food waste (Troy Otsby, personal communication). Three samples were taken: from the surface of the pile and at depths of ~15 cm and ~30 cm. From an initial set of ~10 Mn-oxidizing candidate species, four isolates were purified, and isolates MS-1 and DSV-1 were chosen for further characterization. At 24°C on solid media, both develop the brown colony color indicative of Mn oxidation (Fig. 1). The presence of Mn oxides was confirmed using leucoberbelin blue [LBB; data not shown, (25,26)]. This oxidation behavior is similar to that of the well-characterized MnOB P. putida GB-1 (GB-1, Fig. 1). To tentatively identify the isolates, their 16S rRNA gene was amplified by colony PCR and sequenced. The resulting sequences were 100% identical to each other, and the best match in GenBank was to Pseudomonas psychrophila type strain E-3 (27) (NR_028619.1, 99% coverage, 99.66% identity).

Phylogenetic tree of the Pseudomonas genus

To further investigate the relationship of DSV-1 and MS-1 to other pseudomonads, multi-locus sequence alignment was performed to construct a phylogenetic tree comparing 88 species of the Pseudomonas genus and the new isolates (Table 1). The species chosen represent members of each of the major clades in the genus: the fluorescens, aeruginosa, and pertucinogena lineages (28). MS-1 and DSV-1 group with P. psychrophila and P. fragi in the P. fragi group within the fluorescens lineage (Fig. 2). This lineage contains not only other known MnOB, P. putida GB-1, P. entomophila L48, and P. fluorescens PfO-1, but also species not known to oxidize Mn, including P. syringae pv. tomato str. DC3000 (21).

Complete genome sequence of MS-1 and DSV-1

Using a combination of Illumina and Nanopore sequencing approaches, complete genome sequences were generated for MS-1 (NCBI accession #: JAYMYF000000000) and DSV-1 (NCBI accession #: JAYMYG000000000). The MS-1 genome is smaller than that of DSV-1 (5.3 vs 5.7 Mb), with fewer predicted genes (Table 2). Genetic relatedness and species attribution can be determined using the genome-wide average nucleotide identity (gANI) and alignment fraction (AF) metrics (29-31). Using tools available at the Integrated Microbial Genomes and Microbiomes website of the Joint Genome Institute (https://img.jgi.doe.gov/) (32), it was determined that the DSV-1 and MS-1 pairwise gANI is 99.1%, with an AF of 88.6% (Table 3), supporting their identification as two strains within the same species. However, their best match to a P.
psychrophila strain was P. psychrophila CF149, with a gANI of 92.6%-92.8% and an AF of 85.8%-87.4% (Table 5). Because a gANI of >95% is commonly used to assign strains to the same species, DSV-1 and MS-1 have diverged sufficiently from P. psychrophila to be considered members of a different species.

Identification of putative Mn oxidation genes

Using the genome sequences, it was possible to identify orthologs of Mn oxidation genes from the well-characterized MnOB P. putida GB-1. This organism encodes three separate Mn oxidase enzymes in its genome: mnxG, mcoA, and mopA (21,22). MS-1 and DSV-1 both carry homologs of mnxG and mcoA but not mopA (Table 4). Three genes predicted to encode parts of a two-component regulatory pathway, mnxS1, mnxS2, and mnxR, are also essential for Mn oxidation in P. putida GB-1 (23). Each of these genes is present in both DSV-1 and MS-1 (Table 4).

The spacing between the genes and their orientation on the P. putida GB-1 chromosome suggest that mnxG and mcoA are each the first gene of an operon, while mnxS1, mnxS2, and mnxR form a third putative operon (23) (Fig. 3). All six of the putative mnxG operon genes are found in both DSV-1 and MS-1, in the same orientation and organization on the chromosome (Table 4; Fig. 3). The putative mcoA operon contains five genes in P. putida GB-1 (Fig. 3); however, only mcoA and the gene immediately downstream, a predicted SCO1/SenC copper chaperone, are conserved in DSV-1 and MS-1 (Table 4; Fig. 3). The genome organization of mnxS1/S2/R is also somewhat conserved between GB-1, MS-1, and DSV-1, except that the mcoA SCO1/SenC gene pair of the putative mcoA operon is located in the space between mnxS1 and mnxS2 in both MS-1 and DSV-1 (Fig. 3).

Pseudomonas sp. MS-1 and DSV-1 growth at low temperature

Because of the close relationship of DSV-1 and MS-1 to P. psychrophila (Fig. 2), a psychrophilic species that can grow at temperatures as low as −1°C (27), we compared the ability of DSV-1 and MS-1 to grow at low temperature with that of the model MnOB P. putida GB-1. At 24°C, GB-1 grew somewhat faster than either strain (Table 5), although it reached a lower final optical density (Fig. 4). Growth slowed for all three strains at 14°C (Fig. 4), with GB-1 still doubling at a slightly faster rate than MS-1 or DSV-1 (Table 5). However, at 4°C, GB-1 grew very slowly, with roughly a 24-h doubling time (Fig. 4). Both MS-1 and DSV-1 grew detectably at this low temperature, with doubling times of 7.5 and 8.1 h, respectively (Table 5). Therefore, MS-1 and DSV-1 are capable of growth at low temperature but grow more slowly than at more moderate temperatures. During growth at 4°C, the optical density of the MS-1 culture dropped dramatically once the culture reached stationary phase. This could suggest a defect in survival at low temperature for this strain. However, the Mn-oxidizing MS-1 4°C cultures began to form aggregates once they reached stationary phase (data not shown), so much of this decrease in optical density may be due to aggregation.

Pseudomonas MS-1 and DSV-1 oxidize Mn at low temperature

The growth curve experiments were performed in the presence of reduced Mn, so it was possible to observe that all three strains accumulated Mn oxides during the course of the experiment (data not shown). To verify this observation, each strain was incubated on solid Lept media at 24°C, 14°C, and 4°C (Fig.
Pseudomonas MS-1 and DSV-1 oxidize Mn at low temperature
The growth curve experiments were performed in the presence of reduced Mn, so it was possible to observe that all three strains accumulated Mn oxides during the course of the experiment (data not shown). To verify this observation, each strain was incubated on solid Lept media at 24°C, 14°C, and 4°C (Fig. 5). After 5 days at 24°C, all three strains had grown and oxidized Mn, as seen by the brown colony color. At 14°C, again all three strains oxidized Mn, with GB-1 producing a lighter brown color than MS-1 or DSV-1. At 4°C, both MS-1 and DSV-1 produced brown colonies, but GB-1 produced barely detectable growth. After 10 months, GB-1 had still failed to form detectable Mn oxides, while DSV-1 and MS-1 continued to grow and accumulate Mn oxides.

Genetic manipulation of DSV-1 and MS-1
The ability to genetically manipulate DSV-1 and MS-1 would make it possible to use these strains to investigate low-temperature Mn oxidation and to generate strains optimized for bioremediation at low temperature. As a first step, we screened both strains for antibiotic sensitivity. DSV-1 and MS-1 are both resistant to ampicillin and penicillin but sensitive to gentamicin and kanamycin (data not shown). The selective medium Pseudomonas Isolation Agar (PIA; CRITERION, Hardy Diagnostics) is often used to isolate Pseudomonas strains from environmental samples and during triparental mating (23). The basis of this selection is the presence of the broad-spectrum antimicrobial drug triclosan, which inhibits fatty acid synthesis. Pseudomonas spp. are naturally resistant to triclosan due to the presence of the FabV alternative fatty acid synthesis enzyme (33). However, neither DSV-1 nor MS-1 possesses a fabV homolog in its genome, and neither strain can grow on PIA (data not shown).

Conjugation with E. coli is routinely used to introduce foreign DNA into P. putida GB-1 (22, 23). To demonstrate that conjugation can be used to move plasmids into DSV-1 and MS-1, we performed triparental conjugations to move the plasmids pBBR1MCS-5 and pUCP22 (Table 7) into both strains. These plasmids both carry aacC1, a gentamicin-resistance marker gene; successful transfer of each plasmid into DSV-1 and MS-1 was detected by the ability of the conjugants to grow on media containing gentamicin (data not shown). P. putida GB-1 can be made chemically competent and induced to take up plasmids by heat shock transformation (23); a similar approach was successful with MS-1 but has not yet been tried with DSV-1. To demonstrate the ability to generate mutations in the DSV-1 and MS-1 genomes, we conjugated into each strain the plasmid pRL27, which carries the Tn5 transposon with a Kan R resistance marker (Table 7). This plasmid has an oriR6K origin of replication and therefore requires the presence of the pir gene on the chromosome in order to be maintained as a plasmid (34). Since DSV-1 and MS-1 lack the pir gene, the only way to obtain Kan R colonies after conjugation is if the Tn5 transposon carrying Kan R has transposed into the chromosome. The ability to isolate Kan R colonies after conjugation into DSV-1 and MS-1 (data not shown) therefore demonstrates that these strains can be manipulated by insertion of Tn5 into the chromosome.

After kanamycin-resistant colonies were successfully isolated, they were screened for their Mn oxidation phenotype, and 13 mutant isolates with altered Mn oxidation activity were identified (11 in DSV-1 and 2 in MS-1). Mapping the sites of insertion revealed that several different genes had been targeted by transposition of Tn5 (Table 6). The oxidation phenotypes ranged from a slight increase (KG271) to a slight decrease (KG274) to no oxidation (KG272, 277, and 278; Fig. 6A).
In the non-oxidizing strain KG278, the transposon was inserted into the gene rpoN (Table 6), which encodes the alternative sigma factor σ 54. To verify that the oxidation defect of KG278 is due to the rpoN::Tn5 mutation, complementation was performed using a plasmid carrying the P. putida GB-1 rpoN gene (pKG228, Table 7). Complementation was successful; pKG228 restored Mn oxidation to the rpoN::Tn5 mutant (Fig. 6B).

DISCUSSION
In this work, novel MnOB were isolated from compost, supporting a possible role for Mn oxidation in the breakdown of complex organic molecules. Both MS-1 and DSV-1 were shown to grow and oxidize Mn at low temperature. Complete genome sequences and phylogenetic characterization showed these strains to be closely related to Pseudomonas psychrophila but genetically distinct. Therefore, they have been named Pseudomonas sp. DSV-1 and MS-1. Both strains are amenable to genetic manipulation and carry, in their genomes, genes homologous to those previously identified as important for Mn oxidation in P. putida GB-1.

Low-temperature growth
DSV-1 and MS-1 both grow well at 4°C but grow faster at 24°C (Fig. 4; Table 5). Psychrophilic organisms are commonly defined as those that grow best at temperatures below 20°C and thus are confined to environments that are continuously cold. Conversely, psychrotrophic or psychrotolerant organisms grow best at 20°C or above but also grow well at temperatures below 20°C (39). Given this definition, DSV-1 and MS-1 are best described as psychrotolerant. Cold-tolerant species have previously been identified in the genus Pseudomonas from environments as diverse as Antarctic sea ice and spoiled food (40, 41). For example, Pseudomonas psychrophila HA-4 was isolated by its ability to degrade the antibiotic sulfamethoxazole at low temperature (42), and Pseudomonas fragi strains were isolated from the leaves of cold-adapted plants (43). MS-1 and DSV-1 are closely related to P. fragi and P. psychrophila (Fig. 2). Thus, the cold tolerance of Pseudomonas strains isolated from a compost pile located outdoors in Minnesota in winter was not unexpected.

Low-temperature Mn oxidation
While MnOB have generally been characterized as mesophiles (35, 44), Mn oxidation at low temperature has been observed before. Brevibacillus brevis MO1 has been shown to oxidize Mn at 4°C, but not to the same extent as it does at 37°C (45). Arthrobacter sp. NI-2 normally oxidizes Mn at 30°C; a mutation in this strain allows it to oxidize at 10°C (46). The dormant spores of Bacillus sp. SG-1 are capable of producing Mn oxides over a very wide range of temperatures, from 0°C to 80°C (47). Pseudomonas sp. MOB-449 grows well and exhibits its maximum Mn oxidation capacity at 18°C (48). At this low temperature, Mn stimulates biofilm growth and expression of the c-type cytochrome biosynthesis enzyme CcmE, leading to the proposal that Mn oxidation supplements the cell's energy needs (49). Thus, while Pseudomonas sp. DSV-1 and MS-1 are not the only MnOB capable of low-temperature Mn oxidation identified so far, they are the first characterized that actively grow and robustly oxidize Mn at temperatures as low as 4°C.

Conservation of Mn oxidation mechanism
Many of the genes identified as playing a role in Mn oxidation in P. putida GB-1 are also present in DSV-1 and MS-1. Each strain has orthologs of the Mn oxidase genes mnxG and mcoA but lacks a clear ortholog of mopA (Table 4). This suggests that Mn oxidation in these strains depends on the multi-copper oxidases MnxG and McoA but not on the heme peroxidase MopA. DSV-1 and MS-1 also carry orthologs of the Mnx two-component regulatory pathway comprising MnxS1, MnxS2, and MnxR. MnxR in P. putida GB-1 is required for Mn oxidation and is predicted to be a σ 54-dependent transcription factor, based on its domain composition (23). The MnxR orthologs in MS-1 and DSV-1 are also predicted to contain σ 54 interaction domains. This suggests that the expression of Mn oxidation genes is driven by RNA polymerase containing σ 54 in all three strains. Supporting this conclusion, a Tn5 insertion in the predicted rpoN gene of MS-1 resulted in a strain completely defective for Mn oxidation when assayed on solid media (Fig. 6) and in liquid culture (data not shown). This oxidation defect could be complemented with the GB-1 rpoN gene, reinforcing the conclusion that Mn oxidation in this strain is σ 54 dependent.

Previous work has shown that Mn oxidation in P. putida GB-1 can be disrupted by Tn5 insertions in genes encoding components of the TCA cycle, including the succinate dehydrogenase complex (sdhABC), lipoate acetyltransferase (aceA), and isocitrate dehydrogenase (icd) (50). Insertion of Tn5 into the fumarate hydratase class I gene of DSV-1 resulted in moderately decreased Mn oxidation (KG274, Fig. 6); fumarate hydratase catalyzes the conversion of fumarate to malate in the TCA cycle. KG266-269 all have Tn5 inserted in a gene for a predicted thiol-disulfide isomerase (Table 6). In Bradyrhizobium japonicum, a similar protein called TlpA is involved in cytochrome c oxidase maturation (51). In DSV-1, the gene lies in a putative operon between the dsbD and dsbG genes, raising the possibility of polar effects on these neighboring genes. In Shewanella oneidensis, DsbD facilitates the transfer of electrons to the protein CcmG during the cytochrome c maturation (CCM) process (52). In P. putida MnB1, the CCM genes ccmA, E, and F have previously been identified as playing a role in Mn oxidation (50), and CcmE has been implicated in low-temperature Mn oxidation in Pseudomonas sp. MOB-449 (49). Thus, the function and regulation of Mn oxidation in the new isolates are likely similar to those in other Mn-oxidizing pseudomonads.

Low-temperature bioremediation
There are many potential applications for MnOB and biogenic Mn oxides in bioremediation. Cold-tolerant bacteria and their enzymes are also valuable tools for bioremediation and other industrial applications (42, 57, 58). Pseudomonas sp. MS-1 and DSV-1 therefore expand the conditions under which MnOB can be used for bioremediation, owing to their ability to form Mn oxides at low temperature. Our preliminary results suggest the two strains differ in the effect of temperature on their ability to accumulate oxidized Mn. As judged by the intensity of the brown oxides formed, MS-1 robustly formed Mn oxides at all three temperatures tested, while DSV-1 formed oxides best at the intermediate temperature of 14°C (Fig. 5). MS-1 also tolerates growth at temperatures above 24°C better than DSV-1 (data not shown), which suggests that this strain will be the better target for bioremediation applications.
At cold temperatures, bacteria experience stress due to decreased membrane fluidity, decreased enzyme activity, an altered redox state, and the increased stability of RNA and DNA structures, which interferes with replication and gene expression (59-61). The MS-1 and DSV-1 genomes are very similar to one another (Tables 2 and 3); comparing these genomes may make it possible to determine the genetic basis for their differences in oxidation and temperature sensitivity phenotypes. Preliminary characterization of cold shock genes in GB-1, DSV-1, and MS-1 (Table 2 and data not shown) failed to reveal a genetic basis for the cold tolerance of DSV-1 and MS-1, since all three genomes possess six putative cold shock protein genes (cspA). Both strains can be made to take up foreign DNA by conjugation and transformation; they can express foreign genes from plasmids and can have their genomes mutated with a transposon. The apparent conservation of Mn oxidation and its regulation between the new isolates and the well-characterized MnOB P. putida GB-1 will guide future efforts to generate cold-tolerant strains optimized for Mn oxidation under various conditions.

Media and culture conditions
Strains and plasmids used in this study are listed in Table 7. Pseudomonas strains were grown in LB or Lept liquid and solid media made according to the procedure of reference (25). Strains were grown at 24°C, 14°C, or 4°C. Escherichia coli strains were grown in LB medium at 37°C. The following concentrations of antibiotics were used: ampicillin (100 µg/mL), gentamicin (50 µg/mL), and kanamycin (30 µg/mL). For oxidation assays, MnCl2 was added to Lept medium at a final concentration of 100 µM. Phosphate-buffered saline (PBS) was made according to standard protocols (62).

Sample collection
Samples for cultivation were collected from a compost pile on the University of Minnesota, Morris campus that is composed of a 3:1 ratio of plant material to food waste (Ostby, personal communication). Samples were taken in February 2019 using sterile plastic 50 mL tubes. The tubes were opened and immediately used to scoop material from the compost surface or from ~15 cm or ~30 cm below the surface. After collection, the tubes were sealed, immediately transported back to the lab, and stored at 4°C. Next, 1 g of sample was incubated in PBS pH 7.3 for 5 min at room temperature with shaking. The PBS/compost mixture was allowed to settle for 10-15 min, and then 100 µL of the supernatant was spread onto Lept plates. After incubation at room temperature for 7 days, thousands of colonies were visible, including a subset of brown, putative Mn-oxidizing colonies. Mn oxidation was confirmed using a leucoberbelin blue spot test (25). LBB-positive colonies were selected and subcultured onto fresh Lept plates. After several rounds of re-streaking, DSV-1 and MS-1 were shown to be pure via microscopic observation.

Identification of isolates by 16S rRNA sequencing
To obtain 16S amplicons from our bacterial samples, colony PCR was performed using iProof High-Fidelity DNA Polymerase with the following reagent concentrations: 200 µM dNTP mix, 1 µM forward primer, 1 µM reverse primer, 0.5 U of iProof High-Fidelity DNA Polymerase per 50 µL reaction, 10 µL of 5× iProof HF Buffer per 50 µL reaction, and 1 µL of overnight culture in NB broth as the DNA template source. Primers 8F and 519R (Table 8) were used to generate an ~500 bp amplicon. Reaction conditions were an initial denaturation at 98°C for 3 min, followed by 25 cycles of 98°C for 30 s, 55°C for 1 min, and 72°C for 1 min, followed by 72°C for 5 min.
PCR amplicons were purified using the DNA Clean & Concentrator-5 kit according to the manufacturer's instructions (Zymo Research, Irvine, CA). The concentration of the DNA samples was determined using a Qubit 3.0 Fluorometer (Invitrogen, Carlsbad, CA). Then, 50 ng of the ~500 bp amplicons and 200 ng of the ~1,500 bp amplicons were submitted for sequencing by the University of Minnesota Genomics Center (http://genomics.umn.edu/), along with 6.4 pmol of the appropriate primers (Table 8). Short amplicons were sequenced using 8F and 519R, while the long amplicons were sequenced with 8F, 519R, 1492R, 533F, and CDR (Table 8).

Amplicons of the 16S SSU gene were conjoined using GeneStudio (https://sourceforge.net/projects/genestudio/) to produce a consensus sequence of 1,467 bp. This consensus sequence was then used to query the 16S ribosomal RNA (bacteria and archaea) database using BLASTN (63, 64).

DNA extraction and genome sequencing
Cultures were grown on solid R2A medium, and a single colony was transferred to 10 mL of tryptic soy broth and grown for 48 h with shaking. Five milliliters of each culture were then centrifuged for 10 min at 2,000 × g in a swinging bucket rotor, and the supernatant was removed. The cell pellets were resuspended in 0.5 mL of sterile PBS pH 7.4 (Gibco-Thermo Fisher, Waltham, MA), and DNA was extracted using the QIAamp UCP Pathogen Kit (QIAGEN, Germantown, MD) following the standard protocol, with the final elution in molecular biology grade water. Purified DNA was quantified using a Qubit 4 fluorometer with the dsDNA HS Assay (Invitrogen-Thermo Fisher, Waltham, MA). Illumina sequencing was performed using the Nextera DNA Flex Library prep following the standard protocol and sequenced with a 600-cycle MiSeq v3 Reagent Kit (Illumina, San Diego, CA). Long-read sequencing was performed using a 1D2 R9.2 Sequencing Kit on an Oxford Nanopore MinION sequencer (Oxford Nanopore, New York, NY). Read coverage was approximately 120× for Illumina sequencing and 30× for Nanopore sequencing. Illumina reads were trimmed for quality, and adapters were removed, using Trimmomatic V0.39 (65). Illumina and Nanopore reads were then used to assemble the genomes using the Unicycler assembly pipeline V0.4.8 (66) with SPAdes V3.13.0 (67).

Generation of the Pseudomonas phylogenetic tree
Assembled genomes for 88 Pseudomonas species were downloaded from NCBI, and four housekeeping genes (16S rRNA, rpoB, rpoD, and gyrB) were extracted from each assembled genome [as in reference (28)]. These four genes were concatenated and aligned, with Cellvibrio japonicus as an outgroup, using default parameters in MAFFT [version 7; (68)]. This alignment was used to build a phylogenetic tree with RAxML [v.8.2.11; (69)]. The rapid bootstrapping and search for the best-scoring ML tree approach was used with 1,000 replicates, the input was partitioned by gene, and a GTR Gamma nucleotide model was implemented. All of the above took place within the Geneious Prime (v.2019.1.1) interface.
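For illustration, the concatenation step of this multi-locus analysis can be sketched in Python with Biopython; the per-gene FASTA layout and file names here are assumptions, not the actual files used.

```python
from Bio import SeqIO  # Biopython

GENES = ["16S", "rpoB", "rpoD", "gyrB"]

def concatenate_loci(fasta_by_gene):
    """Return {taxon_id: concatenated sequence across the four loci}.
    Assumes each per-gene FASTA holds one record per taxon, keyed by a
    shared identifier."""
    concatenated = {}
    for gene in GENES:
        for record in SeqIO.parse(fasta_by_gene[gene], "fasta"):
            concatenated[record.id] = concatenated.get(record.id, "") + str(record.seq)
    return concatenated

seqs = concatenate_loci({g: f"{g}.fasta" for g in GENES})
with open("concatenated.fasta", "w") as out:
    for taxon, seq in seqs.items():
        out.write(f">{taxon}\n{seq}\n")
# The resulting file is what would be aligned with MAFFT and passed to RAxML.
```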
Growth curves, growth rates, and doubling times
Growth rates and doubling times were calculated from spectrophotometry readings at a wavelength of 600 nm. Cultures of GB-1, DSV-1, and MS-1 were grown overnight in 5 mL of Lept media with continuous agitation at 240 rpm at 24°C. Subcultures of each strain were prepared in triplicate by diluting the overnight cultures 100-fold into 50 mL of Lept media. These subcultures were then grown at 24°C, 14°C, and 4°C with continuous agitation at 240 rpm. Samples (1 mL) were taken periodically to determine the optical density using a spectrophotometer.

Transposon mutagenesis
The plasmid carrying the Tn5 transposon, pRL27, was moved into DSV-1 and MS-1 by triparental conjugation (23), and transconjugants were selected by plating on LB containing 30 µg/mL kanamycin and 100 µg/mL ampicillin. Colonies were replica plated onto solid Lept media to screen for variations in the manganese oxidation phenotype. Selected MS-1 and DSV-1 mutants were streaked for single colonies on Lept media and compared to the wild type to confirm the variation in their manganese-oxidizing capabilities.

Mapping sites of transposon insertion
Some Tn5 insertion sites were mapped according to the protocol of reference (24), with the following exceptions. Genomic DNA was isolated using the Wizard Genomic DNA Purification Kit (Promega, Madison, WI). Five micrograms of purified gDNA were digested with BamHI in a 50 µL reaction overnight at 37°C. The digested DNA was ethanol precipitated, and 100 ng was ligated using T4 DNA ligase (New England BioLabs, Ipswich, MA) in a 20 µL reaction overnight at room temperature. The ligation reactions were then transformed into commercially prepared competent E. coli GT115 cells (InvivoGen CHEMICOMP GT115, Fisher Scientific). LB agar with kanamycin was used to select E. coli cells transformed with a plasmid containing Tn5 and the flanking BamHI fragment of the genome. Plasmids were purified from Kan R colonies using the QIAprep Spin Miniprep Kit (Qiagen, Valencia, CA). Purified plasmids were sequenced at Functional Biosciences (https://functionalbio.com/) using primers tpnRL17-1 and tpnRL13-2 (Table 8). The interrupted genes were identified using a BLAST search against the relevant genome database on the Integrated Microbial Genomes website (http://img.jgi.doe.gov/) (70).
The remaining Tn5 insertion sites were mapped using an arbitrary PCR approach (55). Genomic DNA was prepared as above. Three first-round reactions were performed for each mutant, each using tpnRL17-1 as the forward primer and ARB1, ARB2, or ARB3 as the reverse primer (Table 8). Each 25 µL reaction contained 1 µL of genomic DNA as template, 1× Promega GoTaq G2 Hot Start Green Master Mix, and primers at a final concentration of 0.8 µM. PCR conditions were as follows: 1 cycle of 95°C for 5 min; 6 cycles of 94°C for 30 s, 30°C for 30 s, and 72°C for 2 min; 30 cycles of 94°C for 30 s, 45°C for 30 s, and 72°C for 2 min; and 72°C for 5 min, followed by storage at 4°C. One microliter of this reaction was used as template in a second PCR with ARB4 and tnp5IR-2R as primers (0.8 µM final concentration) and 1× Promega GoTaq G2 Hot Start Green Master Mix in a total volume of 30 µL. PCR conditions were as follows: 1 cycle of 95°C for 5 min; 30 cycles of 94°C for 30 s, 55°C for 30 s, and 72°C for 2 min; and 72°C for 5 min, followed by storage at 4°C. Reactions were separated on a 1% low-melt agarose gel; the prominent band from each reaction was excised using a razor blade and stored at 4°C. One microliter of liquid from the excised band was used as template for a third round of PCR with the same primers and conditions as the second PCR. The DNA from both the excised gel band and the third PCR was cleaned using the GeneJET Gel Extraction and DNA Cleanup Micro Kit (Thermo Scientific) and sent to Functional Biosciences (https://functionalbio.com/) for sequencing with primer tnp5IR-2R (Table 8).

Construction of the rpoN plasmid
The rpoN gene was PCR amplified from Pseudomonas putida GB-1 using primers rpoN_1F and rpoN_2R (Table 8) with a high-fidelity DNA polymerase (Phusion Hot Start high-fidelity DNA polymerase). The resulting PCR product was cloned into pJET1.2/blunt (CloneJET PCR Cloning Kit; Fermentas, Glen Burnie, MD). The gene was subsequently subcloned into the broad-host-range plasmid pUCP22 (Table 7) using the EcoRI and XbaI restriction enzyme recognition sites engineered into the amplification primers, and the presence of the insert in the resulting plasmid was confirmed by restriction digest. The gene inserted into pUCP22 is expressed constitutively from the plasmid-borne promoter P lac.

Transformation of Pseudomonas sp. MS-1 and derivatives
Pseudomonas sp. MS-1 and the MS-1 rpoN::Tn5 mutant were made competent as follows. Bacteria were grown overnight in LB, then diluted 25-fold into fresh LB and grown at room temperature for 4 h. Then, 2 mL of cells were pelleted by centrifugation at 12,000 × g for 1 min and washed with 1 mL of ice-cold 0.1 M CaCl2. Cells were then pelleted and resuspended in 1 mL of ice-cold 0.1 M CaCl2 and incubated on ice for 30 min. Finally, cells were pelleted and resuspended in 100 µL of ice-cold 0.1 M CaCl2. Transformation was performed by adding 2 µL of plasmid to the cells and incubating on ice for 30 min. Next, cells were heat shocked for 90 s at 37°C and returned to ice for 2 min. Then, 400 µL of SOC medium was added to each transformation, which was then incubated at room temperature with shaking for 1 h. Finally, the entire transformation was plated onto LB Gm plates and incubated at room temperature.

FIG 2 Phylogenetic tree of the genus Pseudomonas. A total of 88 species of the genus Pseudomonas are represented by this tree, which uses a concatenated sequence of the 16S rRNA gene, rpoB, rpoD, and gyrB to construct proposed evolutionary relationships. Known MnOB are highlighted in red.
FIG 3 Conservation of putative Mn oxidation operons. Arrows represent predicted genes. Numbers below the arrows represent the number of base pairs (bp) between predicted genes; numbers above the arrows are the length of the predicted protein product in amino acids (aa). Written within the arrows are the gene names and/or IMG gene ID# for the gene. Genes in the putative mnxG operon are red, genes in the mcoA operon are green, and those in the mnx two-component regulatory pathway are blue. GB-1, Pseudomonas putida GB-1; MS-1, Pseudomonas sp. MS-1; DSV-1, Pseudomonas sp. DSV-1.

FIG 4 Growth of P. putida GB-1 and Pseudomonas sp. strains DSV-1 and MS-1 at various temperatures. Data points represent the average of three replicates; error bars are the standard deviation. (A) 24°C, (B) 14°C, and (C) 4°C. After 80 h of growth at 4°C, cellular aggregation in the MS-1 culture made it difficult to measure OD600.

TABLE 1 Accession numbers of strains used in the phylogenetic tree
TABLE 3 Average nucleotide identity and alignment fraction
TABLE 4 Putative Mn oxidation gene orthologs in DSV-1 and MS-1
TABLE 5 Growth rates and doubling times
TABLE 6 Tn5 mutations in DSV-1 and MS-1 (a, cytochrome c maturation)
TABLE 7 Plasmids and strains (Amp R, ampicillin resistance; Kan R, kanamycin resistance; Gm R, gentamicin resistance)
TABLE 8 Primer list
Stamping Tool Conditions Diagnosis: A Deep Metric Learning Approach

Stamping processes remain crucial in manufacturing; therefore, diagnosing the condition of stamping tools is critical. One of the challenges in diagnosing stamping tool conditions is that, traditionally, the tools need to be visually checked and the production processes thus need to be halted. With the development of Industry 4.0, intelligent monitoring systems have been developed that use accelerometers and algorithms to classify the wear of stamping tools. Although several deep learning models, such as convolutional neural network (CNN), autoencoder (AE), and recurrent neural network (RNN) models, have demonstrated promising results for classifying complex signals, including accelerometer signals, the practicality of those methods is restricted by their limited flexibility for adding new classes and their low accuracy when only a few samples per class are available. In this study, we applied deep metric learning (DML) methods to overcome these problems. DML extracts meaningful features by using feature extraction modules to map inputs into embedding features. We compared the probability method, the contrastive method, and a triplet network to determine which method was most suitable for our case. The experimental results revealed that, compared with the other models, a triplet network can be trained more effectively with limited training data. The triplet network also demonstrated the best results of the compared methods on noisy test data. Finally, when tested with unseen classes, the triplet network and the probability method demonstrated similar results.

Introduction
The metal stamping process remains one of the most common processes in manufacturing and is still being used by major industries, including the automotive, aerospace, and consumer appliance industries [1]. Therefore, the stamping process must be monitored and diagnosed to ensure that every product meets the required quality standards. A crucial component requiring diagnosis is the tool die, the quality of which can greatly affect the outcome of a product. One of the challenges in diagnosing stamping tool conditions is that, traditionally, the tools need to be visually checked and the production processes thus need to be halted. Following the trend of Industry 4.0, automation in stamping processes has triggered the use of online intelligent condition monitoring systems, which are crucial for improving the productivity and availability of production systems. Today's advanced sensor technology captures numerous mechanical properties, such as vibration, strain, and displacement, to monitor the condition of manufacturing processes [2]. However, acquiring the mechanical properties from the tool is only the beginning of diagnosing its condition: these data need to be analyzed and processed before the tool condition can be diagnosed. The advancement of Industry 4.0 has also accelerated research and development in machine learning, which is extremely helpful for analyzing the nonlinear data used to monitor stamping processes. Traditional signal processing and conventional machine learning methods have been employed in several studies on stamping processes and tool diagnosis.

Training the feature extractor f under various loss functions and distance boundaries thus becomes the clear purpose of metric learning.
Because deep learning models can be trained to learn linear or nonlinear problems, they can be used to map data points into a feature space, and the weights and biases in deep learning architectures can be trained using various loss functions incorporating the distance metric (5). The two most common architectures used for DML applications are Siamese neural networks and triplet networks.

Siamese Neural Network
A Siamese neural network [30] uses a single FEM, but the FEM is used to map two data inputs into a feature space. The term "Siamese" reflects the shared nature of the network. Figure 1 presents the architecture of a Siamese neural network, in which two samples are fed into a network where two identical CNNs act as the FEM; the samples are then transformed into a feature space. After a feature representation is created, several methods can be used to train the CNN, namely the probability method [38] and the contrastive method [33].

Probability Method
Suppose we already have feature representations of input (x1, x2) extracted using the two FEMs. If the FEM is denoted as a function f, then applying the distance metric (5) to the embeddings yields function (6):

d(x1, x2) = ||f(x1) − f(x2)||  (6)

The output of the distance metric is then converted into the probability that the two samples belong to the same class. This probability can be computed using the sigmoid function (7):

p(x1, x2) = σ(d(x1, x2))  (7)

Let t = y(x1, x2) be the binary label for inputs x1 and x2, with t = 1 if x1 and x2 are from the same class and t = 0 otherwise. Because the output is a probability, regularized cross-entropy is used as the loss function (8):

L(x1, x2) = −t log p(x1, x2) − (1 − t) log(1 − p(x1, x2))  (8)
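A minimal PyTorch sketch of the probability method follows. The stand-in feature extractor, layer sizes, and the learned weighting of component-wise distances ahead of the sigmoid are illustrative assumptions, not the exact configuration used in this study.

```python
import torch
import torch.nn as nn

class SiameseProbability(nn.Module):
    def __init__(self, fem, embed_dim):
        super().__init__()
        self.fem = fem                        # shared feature extractor f(.)
        self.score = nn.Linear(embed_dim, 1)  # learned weighting of |f(x1)-f(x2)|

    def forward(self, x1, x2):
        d = torch.abs(self.fem(x1) - self.fem(x2))        # component-wise distance
        return torch.sigmoid(self.score(d)).squeeze(-1)   # P(same class), as in (7)

fem = nn.Sequential(nn.Linear(128, 32), nn.ReLU(), nn.Linear(32, 16))
model = SiameseProbability(fem, embed_dim=16)
loss_fn = nn.BCELoss()                                    # cross-entropy, as in (8)

x1, x2 = torch.randn(8, 128), torch.randn(8, 128)         # a batch of pairs
t = torch.randint(0, 2, (8,)).float()                     # t = 1 if same class
loss = loss_fn(model(x1, x2), t)
loss.backward()
```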
Contrastive Method
The contrastive method minimizes the metric distance between inputs of the same class and dissociates inputs of different classes. It still uses the distance metric function (6), but instead of using the distance to activate another function, it uses the distance directly in the loss function. The contrastive loss (9) forces a positive pair toward zero distance and pushes a negative pair apart by a margin α:

L(x1, x2) = t · d(x1, x2)² + (1 − t) · max(0, α − d(x1, x2))²  (9)

where α is the margin applied when the inputs are from different classes.

Triplet Network
Figure 2 presents a triplet network architecture. In this architecture, three identical CNNs are used as three FEMs; therefore, the weights, biases, and other parameters of the three CNNs are identical. A triplet datum Xt is used as input; it contains three sets of samples, namely anchor samples xa, positive samples xp, and negative samples xn. The xa and xp samples are from the same class, whereas the negative samples xn are from a different class than the xa samples. The purpose of triplet learning is to train the FEM (CNN) so that it maps positive pairs close together and negative pairs far apart in a pseudometric space (Figure 3).

The FEMs in the triplet training phase map each data input into an embedding f(x) ∈ Rn, a representation in a Euclidean space of n dimensions. With (5), a distance metric can be calculated for the positive pair, as in (10), and for the negative pair, as in (11):

d_pos = ||f(xa) − f(xp)||²  (10)
d_neg = ||f(xa) − f(xn)||²  (11)

According to [39], the loss function for a triplet network using the positive- and negative-pair distances is

L = max(d_pos − d_neg + α, 0)  (12)

where α is the margin added to the negative-pair distance. This margin maintains a separation between the positive and negative groups, enabling the loss to push the negative group over the margin and away from the positive group.
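The two losses defined in (9) and (10)-(12) translate directly into code; the following PyTorch sketch uses the standard forms, with illustrative margin values.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(f1, f2, t, margin=1.0):
    """Eq. (9): t=1 for same-class pairs, t=0 for different-class pairs."""
    d = F.pairwise_distance(f1, f2)
    return (t * d.pow(2) + (1 - t) * F.relu(margin - d).pow(2)).mean()

def triplet_loss(fa, fp, fn, margin=0.2):
    """Eq. (12): anchor/positive/negative embeddings, hinge on the distance gap."""
    d_pos = (fa - fp).pow(2).sum(dim=1)   # eq. (10)
    d_neg = (fa - fn).pow(2).sum(dim=1)   # eq. (11)
    return F.relu(d_pos - d_neg + margin).mean()
```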
However, "easy" and "hard" triplets must first be defined. An easy triplet (13) already fulfills the equation, and the model exerts less effort on learning. However, hard triplets (14) place the negative pair closer to the anchor than they place the positive pair, creating difficulty for the model in terms of learning. Another type of triplet is a semihard triplet (15), in which the value of the negative pair is not smaller than that of the positive pair but falls between positive and negative both with and without a margin: Figure 4 illustrates the differences between the types of triplets. Therein, the triplets are compared in terms of the distance between the negative and anchor. Each triplet yields a different effect on model training; that is, if the training batch contains an excessive number of easy triplets, then the model dose not learn effectively. However, an excessive number of hard triplets would generate a high loss and assign excessively high weights to mislabeled data. Schroff et al. [34] also proposed online triplet mining, in which sets of triplets are generated before training. This method requires less effort, but may generate only easy or only hard triplets, which would necessitate the time-consuming process of manual data processing. Online triplet mining feeds a batch of training data, generates triplets by using all the samples in the batch, and then calculates the loss from every batch. Using this approach would increase the number of easy, hard, and semihard triplets included in every training batch. . Easy, hard and semihard triplet illustration, a represent represents an anchor sample and p represents a positive sample; a hard triplet selects a negative in the hard triplet region; a semihard triplet selects a negative sample in semihard (α) region, and an easy triplet selects a negative sample in easy region. Hard Triplet Soft Margin Hermans et al. [40] proposed a soft margin to replace a hinge function α +∘ inside the triplet loss function (12), which is used to avoid overcorrection with the softplus function ln(1 + exp (∘)) for which practical implementation is expressed as 1 . They argue that samples from the same class can be beneficial for their case. The softplus function offers slow (exponential), rather than abrupt decay, using only margin α. Dataset The data set used in the current study was extensively used in our previous study [29]. It contains progressive stamping die vibration signals acquired using an accelerometer with a sampling rate of 25.6 kHz and an axis parallel to the stroke direction of the stamping machine. The stamping machine used (LCP-60H, Ingyu Machinery, Taiwan) had a capacity of 60 tons and an automatic sheet metal feeder. The sheet material used was SPCC steel with a thickness of 1.5 mm. Three locations on the tool die as illustrated in Figure 5 were examined for two degrees of wear: mild and heavy. One set of healthy-condition samples was used as a reference. In total, seven classes of wear were included in the stamping tool condition data set explained in Table 1. . Easy, hard and semihard triplet illustration, a represent represents an anchor sample and p represents a positive sample; a hard triplet selects a negative in the hard triplet region; a semihard triplet selects a negative sample in semihard (α) region, and an easy triplet selects a negative sample in easy region. Hard Triplet Soft Margin Hermans et al. 
Hard Triplet Soft Margin
Hermans et al. [40] proposed a soft margin to replace the hinge function [α + •]+ inside the triplet loss function (12); it avoids overcorrection by using the softplus function ln(1 + exp(•)), whose numerically stable implementation is expressed as log1p(exp(•)). They argue that continuing to pull samples of the same class together can be beneficial in their case. The softplus function decays slowly (exponentially) rather than cutting off abruptly at the margin α.

Dataset
The data set used in the current study was used extensively in our previous study [29]. It contains progressive stamping die vibration signals acquired using an accelerometer with a sampling rate of 25.6 kHz and an axis parallel to the stroke direction of the stamping machine. The stamping machine used (LCP-60H, Ingyu Machinery, Taiwan) had a capacity of 60 tons and an automatic sheet metal feeder. The sheet material used was SPCC steel with a thickness of 1.5 mm. Three locations on the tool die, as illustrated in Figure 5, were examined for two degrees of wear: mild and heavy. One set of healthy-condition samples was used as a reference. In total, seven classes of wear were included in the stamping tool condition data set, as explained in Table 1.

Data preprocessing was conducted for each vibration sample. First, each data sample was converted from the time domain to the frequency domain (frequencies up to 12.8 kHz). Second, each converted sample was normalized. The data transformation is presented in Figure 6.
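A sketch of this preprocessing in Python follows; the paper specifies only a time-to-frequency conversion and normalization, so the magnitude FFT and min-max scaling used here are assumptions.

```python
import numpy as np

def preprocess(signal, fs=25600.0):
    """Convert a time-domain vibration sample to a normalized magnitude
    spectrum (one-sided, 0 to fs/2 = 12.8 kHz)."""
    spectrum = np.abs(np.fft.rfft(signal))
    lo, hi = spectrum.min(), spectrum.max()
    return (spectrum - lo) / (hi - lo + 1e-12)   # min-max normalization
```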
One-Shot K-Way Testing
For every test data sample xi ∈ X, a support set S = {xy}, y = 1, ..., K (16), consisting of K test samples was created, in which exactly one sample, xj ∈ X, was of the same class as xi; xj was placed at a random position inside S. Because every DML configuration is different, the probability method used with a Siamese neural network classifies each test sample with the probabilistic function (7), in which the highest value indicates the support sample most similar to the test sample x, as in (17):

C(x) = argmax over xy ∈ S of p(x, xy)  (17)

where y is the distinct label of each data sample in support set S. The accuracy is then calculated over the given test data set X ∈ R^(N×D), where N is the size of the test data set and D is the dimension of each data point xi, as in (18):

accuracy = (1/N) Σ from i=1 to N of 1[C(xi) = class of xi]  (18)

For a Siamese neural network with contrastive loss and for the triplet network, the smaller the distance between two embeddings, the more similar the samples are. Therefore, substituting the distance metric (5) into (17) yields (19), and the accuracy (20) is computed in the same way as in (18):

C(x) = argmin over xy ∈ S of d(x, xy)  (19)
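A minimal sketch of a single one-shot K-way trial follows; embed is a stand-in for the trained FEM, and for the probability method an argmax over p replaces the argmin over distance.

```python
import numpy as np

def one_shot_trial(embed, query, support, true_index):
    """Return True if the support sample nearest to the query (in
    embedding space) is the one sharing the query's class."""
    q = embed(query)
    dists = [np.linalg.norm(q - embed(s)) for s in support]
    return int(np.argmin(dists)) == true_index

# One-shot K-way accuracy is the fraction of trials returning True.
```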
1D CNN Architecture
Zhang et al. [38] proposed a CNN with a wide first-layer kernel (WDCNN) to extract features from roller bearings. In their study, they used a Siamese network with a probability output. Their argument for using a wide first-layer kernel was that a small kernel could be disturbed by high-frequency noise. In this study, we use a 1D CNN with normal kernel sizes instead of a WDCNN, since our problem does not use time-based input. Figure 7 shows our proposed architecture.

Model Performance According to the Number of Training Samples
The five models were evaluated using different numbers of training samples to simulate the lack of training data observed in real-world stamping process scenarios. Each class was evaluated according to three sample sets, namely 100, 180, and 280 (all data) samples. These sets were then divided into training and test sets containing 60% and 40% of the samples, respectively. Each class sample set was randomly sampled five times, and each random sample was trained and tested four times. In total, every class sample set underwent 20 training processes, each of which generated a new model. This procedure was intended to mitigate randomness and is illustrated in Figure 9.
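The subsampling-and-repetition protocol can be sketched as follows; train_and_test stands in for the full training and one-shot evaluation pipeline, and the pooled 60/40 split is our reading of the procedure, since the text does not state whether the split is stratified per class.

```python
import numpy as np

def evaluate(X, y, samples_per_class, train_and_test,
             n_subsamples=5, n_repeats=4, train_frac=0.6, seed=0):
    """X: (N, D) samples; y: (N,) class labels. train_and_test stands in
    for the full pipeline and returns a one-shot accuracy in [0, 1]."""
    rng = np.random.default_rng(seed)
    accs = []
    for _ in range(n_subsamples):
        # draw the per-class sample budget, then split 60/40
        idx = np.concatenate([
            rng.choice(np.flatnonzero(y == c), samples_per_class, replace=False)
            for c in np.unique(y)])
        rng.shuffle(idx)
        n_train = int(train_frac * len(idx))
        for _ in range(n_repeats):
            accs.append(train_and_test(X[idx[:n_train]], y[idx[:n_train]],
                                       X[idx[n_train:]], y[idx[n_train:]]))
    return float(np.mean(accs)), float(np.std(accs))
```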
Figure 10 presents the results for each loss function; the x-axis of Figure 10 represents the total number of samples in the training and test sets. One-shot ten-way testing was conducted to evaluate the test set. As illustrated in Figure 10, the triplet loss function yielded the most favorable results, with greater than 99% accuracy for the hard, semihard, and hard-soft-margin batches. The binary cross-entropy loss function yielded the second-best results, with accuracy increasing as the number of training samples increased. The contrastive max-margin function yielded the least favorable results, with 95.56% accuracy when training was conducted with all available samples. Figure 10 also presents the standard deviation for each calculation; the triplet loss function yielded the highest accuracy and exhibited the lowest standard deviation, which decreased as the number of training samples increased. All loss functions exhibited high standard deviations when trained using the lowest number of training samples, with the contrastive max-margin and binary cross-entropy functions exhibiting the highest standard deviations.

To determine the efficacy of each loss function in enabling the feature extractor (FE) to distinguish between different classes, embedding projections were produced for every FE (Figure 11). Each model was trained and tested using all available samples, and the results supported the results presented in Figure 10. Compared with the untrained FE, all models trained using the loss functions exhibited some degree of improvement, although the results varied. In particular, the max-margin loss function provided the least distinguished groupings for each class in comparison with the other loss functions; that is, the class groupings appeared scattered. The max-margin loss function was also the least accurate when trained and tested with all available samples (Figure 10). In contrast, the binary cross-entropy loss function provided much better separated embeddings than the max-margin loss function, grouping the samples distinctly and yielding a 2.81% increase in accuracy over the max-margin loss function. The triplet loss function exhibited the most favorable results, with small variations in accuracy among the different batch strategies.
Model Performance under Noised Test Samples
In this experiment, we evaluated the robustness of each method to the ever-changing conditions of mechanical environments by adding Gaussian noise to the test sets. The signal-to-noise ratio (21) measures the power of a signal relative to the power of the noise applied to it; in our case, we applied a noise power higher than the signal power (−2 dB and −4 dB, respectively) to simulate an environment with high-noise conditions:

SNR_dB = 10 log10(P_signal / P_noise)  (21)

As in the previous experiment, we used 100, 180, and 280 (all data) samples per class for the training and test sets (Figure 12).
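The noise injection implied by (21) can be sketched as follows: the Gaussian noise variance is chosen so that the resulting SNR matches the requested level, where a negative dB value means the noise power exceeds the signal power.

```python
import numpy as np

def add_noise(signal, snr_db, rng=None):
    """Add Gaussian noise scaled so that eq. (21) gives snr_db;
    snr_db = -2 or -4 reproduces the noise levels used here."""
    rng = rng or np.random.default_rng()
    p_signal = np.mean(signal ** 2)
    p_noise = p_signal / (10 ** (snr_db / 10.0))
    return signal + rng.normal(0.0, np.sqrt(p_noise), size=signal.shape)
```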
The results (Figure 13) indicate the accuracy of each loss function. In general, for all loss functions, the accuracy increased and the standard deviation decreased as the number of training samples increased. The triplet loss function exhibited the most favorable results for the −2 dB signal-to-noise ratio (Figure 13a), with the semihard, hard, and hard-soft-margin batch strategies achieving accuracies of 96.0%, 95.94%, and 95.75%, respectively, for the highest number of training samples and 93.86%, 94.60%, and 94.64% for the lowest number of training samples.

Figure 13. Accuracy of the max-margin, binary cross-entropy, and triplet semihard, hard, and hard-soft-margin losses when tested with the −2 dB (a) and −4 dB (b) signal-to-noise ratio test sets.

The binary cross-entropy function did not exhibit an increase in accuracy between the 100 and 180 samples-per-class sets, and it exhibited a high standard deviation with the lowest number of training samples per class, even though it achieved higher accuracy with 60 training samples per class (81.03%) than with 108, indicating that the model had low precision. The max-margin loss function achieved the lowest accuracy (72.14%), even with the highest number of training samples per class. However, with the lower numbers of training samples per class (60 and 108), it did not exhibit a high standard deviation, despite the FEM not being able to extract the most meaningful features.

For the −4 dB signal-to-noise ratio test set, the triplet loss function exhibited a drop in accuracy of 9-10% compared with its accuracy on the −2 dB test set (Figure 13a). The binary cross-entropy loss function again exhibited a high standard deviation in accuracy when trained with 60 samples per class, and its accuracy dropped to 57.21%. The max-margin loss function yielded the lowest accuracy of the compared loss functions, and at this low signal-to-noise ratio (−4 dB), it exhibited a high standard deviation in accuracy when tested 20 times for each training set.

Performance under New Classes
In this experiment, we evaluated a simulated scenario in which a new class must be recognized by the model without retraining. We evaluated all loss functions using test sets that contained classes unseen by the model during training. The unseen classes were randomly chosen, and the percentages of unseen classes in the test sets were 20% and 40% of the total number of samples in each set, respectively. Additionally, no noise was added to the training or test sets, as illustrated in Figure 14. Notably, when minibatches were generated for the samples in the test sets, the unseen classes were used as anchors and employed for the target samples.
The results (Figure 15a) revealed the accuracy of each loss function tested using the test set with 20% unseen classes. The triplet loss and binary cross-entropy functions achieved similar accuracies, 80.44% and 80.31%, respectively. However, these accuracies were achieved with 60 and 108 training samples per class, not 168 samples. We suspect that the model was able to generalize the training samples, but was not fully able to recognize the unseen class. In addition, even when it was trained with a higher number of training samples, the FE still was unable to learn essential features; the high number of samples in the test class resulted in low accuracy for 280 samples per class because the model had to recognize more unseen classes. For the test set with 40% unseen classes (Figure 15b), the FE exhibited a lower ability to extract meaningful features when tested with a high number of test samples; moreover, even though the standard deviation decreased with increases in the number of training samples, the accuracy also decreased.

Figure 15. Accuracy of the max-margin, binary cross-entropy, and triplet semihard, hard, and hard-soft-margin losses with test sets containing 20% (a) and 40% (b) unseen classes.
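Scoring such a mixed test set separates two questions: accuracy on the seen classes and the rate at which truly novel samples are flagged. The small, hypothetical helper below (ours, not the paper's) makes that distinction explicit; it can consume the predictions of the matcher sketched earlier.

    def open_set_scores(pred, true, unseen="unseen"):
        """Split performance into accuracy on seen classes and the
        detection rate on truly novel samples."""
        seen = [(p, t) for p, t in zip(pred, true) if t != unseen]
        novel = [p for p, t in zip(pred, true) if t == unseen]
        seen_acc = sum(p == t for p, t in seen) / max(len(seen), 1)
        novel_rate = sum(p == unseen for p in novel) / max(len(novel), 1)
        return seen_acc, novel_rate

    # Example with made-up labels; "unseen" marks a class withheld from training.
    true = ["wear", "crack", "unseen", "wear", "normal"]
    pred = ["wear", "crack", "unseen", "crack", "normal"]
    print(open_set_scores(pred, true))  # (0.75, 1.0)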
Conclusions
This study presents a stamping tool condition diagnosis method based on DML. Several DML methods were compared to determine which one was the most suitable for stamping tool condition diagnosis. The probability method employs binary cross-entropy, the contrastive method employs contrastive max-margin loss, and the triplet network method employs three batch-generation strategies (semihard, hard, and hard-soft-margin). The main contributions of this study are as follows. First, we compared the methods across several types of evaluations. Second, we evaluated the methods by using various numbers of training samples, and the results revealed that the triplet network was the most accurate, followed by the probability and the contrastive methods. Third, we evaluated the methods by using a noised test data set, and in this experiment the triplet network also demonstrated the most favorable results, followed by the probability and contrastive methods. Finally, we evaluated each method in terms of its ability to recognize new classes. The triplet and probability methods, which achieved similar results, exhibited the best performance, followed by the contrastive method. In general, the triplet network provided the most favorable results overall and was the most suitable for stamping tool condition diagnosis. However, when subjected to new classes, triplet networks may not be able to provide sufficient accuracy when used with the number of data samples employed in the present study. This problem may be mitigated with additional data.

Data Availability Statement: The data presented in this study are available on request from the corresponding author. The data are not publicly available because further studies will be carried out using the same data.

Conflicts of Interest: The authors declare no conflict of interest.
9,087.4
2021-07-28T00:00:00.000
[ "Engineering", "Computer Science" ]
The sup-norm problem beyond the newform
Abstract In this paper we take up the classical sup-norm problem for automorphic forms and view it from a new angle. Given a twist minimal automorphic representation $\pi$ we consider a special small $\mathrm{GL}_2(\mathbb{Z}_p)$-type V in $\pi$ and prove global sup-norm bounds for an average over an orthonormal basis of V. We achieve a non-trivial saving when the dimension of V grows.

Introduction
It is a classical problem in analysis and mathematical physics, more precisely Quantum Chaos, to bound the L^∞-norm of certain eigenfunctions on manifolds. In the most basic situation one considers a Riemann surface X of finite volume and eigenfunctions φ of the Laplace-Beltrami operator ∆_X. A sup-norm bound in the spectral aspect is then an estimate of the form

∥φ∥_∞ ≪_ε (1 + |t_φ|)^{1/2−δ+ε} ∥φ∥_2,    (1)

where λ_φ = 1/4 + t_φ² is the Laplace-Beltrami eigenvalue of φ. The local bound corresponds to δ = 0 and is known in great generality. The sup-norm problem asks for improved bounds featuring some δ > 0.

The sup-norm problem has only been solved for very special surfaces X and is hopeless in general. Indeed, there is a well-known obstruction to the sup-norm problem coming from large eigenspaces V_λ, given by the inequality

sup_{0 ≠ φ ∈ V_λ} ∥φ∥_∞² / ∥φ∥_2² ≥ dim(V_λ)/Vol(X).

This observation is enough to establish the well-known fact that the local bound (i.e. (1) with δ = 0) cannot be improved for the sphere X = S². So far we have only described the most basic version of the sup-norm problem, which is already very interesting on its own. In addition it admits many variations which have been studied throughout the years. An example of such a variation is the so-called level aspect, where the base manifold changes in some convenient family X₁, X₂, … and one keeps track of this change in the sup-norm bound (1) using a suitable parameter called the level. Another generalisation that should be mentioned allows X to be a manifold of higher dimension and rank.

Essentially any progress that has been made towards the sup-norm problem as introduced above relies on the arithmeticity of X. The basic idea introduced in the monumental paper [14] is to employ additional symmetries (in the form of Hecke operators) to build a spectral projector that is sharper than the one constructed with only the Laplace-Beltrami operator at hand. Morally this might be thought of as forcing a multiplicity one situation even if the Laplace-Beltrami eigenspaces cannot be rigorously controlled. The result of this method is a bound as in (1) with δ = 1/12 for compact quotients X = Γ\H constructed from maximal orders in quaternion algebras.

Since its appearance the method from [14] has been tweaked, modified and generalised; see for example [1,6,7,20,22] and the references within. Much work is concerned with congruence quotients X = Γ₀(N)\H on which so-called Hecke-Maaß newforms are considered. Since these newforms enjoy a nice multiplicity one property, they are natural candidates for the sup-norm problem. In this note we go beyond the case of newforms and consider situations where the dimension of the underlying p-adic representation grows. In other words, we solve the sup-norm problem in the dimension aspect. This aspect is a new facet of the sup-norm problem which seems extremely interesting and is not yet well studied. While our result is the first in the p-adic setting, it is preceded only by [5], where an archimedean version of this aspect is discussed.
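The eigenspace obstruction quoted above follows from a short kernel argument. The LaTeX-rendered derivation below is our reconstruction of this standard fact, included for completeness; it is not text from the paper.

    % Let \{\phi_1,\dots,\phi_n\} be an orthonormal basis of V_\lambda, n = \dim V_\lambda.
    % The basis-independent function F(x) = \sum_i |\phi_i(x)|^2 integrates to n over X,
    % so F(x_0) \ge n/\mathrm{Vol}(X) for some x_0.
    % Taking \phi = \sum_i \overline{\phi_i(x_0)}\,\phi_i gives
    \[
      \sup_{0 \neq \phi \in V_\lambda} \frac{\|\phi\|_\infty^2}{\|\phi\|_2^2}
      \;\ge\; \frac{|\phi(x_0)|^2}{\|\phi\|_2^2}
      \;=\; F(x_0)
      \;\ge\; \frac{\dim V_\lambda}{\mathrm{Vol}(X)}.
    \]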
To explain our result and its connection to the work of Blomer, Harcos, Maga and Milićević it will be most convenient to leave the classical world of Hecke-Maaß newforms behind and work in the language of automorphic forms and automorphic representations. The sup-norm problem we will consider is connected to small GL₂(Z_p)-types in cuspidal automorphic representations π, where p > 3 is prime. Comparing this to the recent work [5], we are replacing the archimedean place ∞ by a finite place p, and the minimal U(2)-type of some automorphic representation by a suitably chosen GL₂(Z_p)-type. Note that in order to afford interesting K-types at the archimedean place it is necessary to work over fields admitting complex places or in higher rank. In the p-adic world we already meet interesting cases when working with automorphic forms for GL₂ over Q.

1.1. Set-up and main result. Before we continue our discussion we need to fix some notation. Let G(R) = GL₂(R) for some ring R and let A be the adele ring over Q. We will be working with cuspidal automorphic representations π of G(A) with unitary central character ω_π. Abusing notation we will write π ⊂ L²₀(G(Q)\G(A), ω_π), assuming that π acts on an irreducible subspace of cuspidal automorphic forms by right translation. Given a compact subgroup H we write π^H for the space of H-invariant elements in π.

Set K_∞ = SO(2) and K_l = GL₂(Z_l) for primes l. Combining these we get the compact subgroup K = ∏_v K_v ⊂ G(A). Given a prime p > 3 and m > 0 we consider the smaller compact subgroup K(p^m). Note that K(p^m) is normal and of finite index in K. Throughout we restrict ourselves to the situation where π is unramified (i.e. spherical) away from p. In particular it is spherical at ∞ and one associates the spectral parameter t_π. Set T = 1 + |t_π|. We set V = π^{K(p^{m_π})}, write d for its dimension, and observe that π|_K endows V with the structure of a K-module. Given an orthonormal basis φ₁, …, φ_d of V we form the averaged function Φ. We are concerned with the sup-norm of Φ(g) and obtain the following theorem, which is a close analogue of [5, Theorem 1].

Theorem 1.1. Let p > 3 be prime and suppose π is twist minimal. In the notation above we have

∥Φ∥_∞ ≪_ε T^{1/2+ε} d^{5/12+ε}.

If the (arithmetic) conductor of π is a perfect square (i.e. the exponent-conductor of the p-component π_p of π is even) or the p-component π_p of π is not supercuspidal, then we have the better bound

∥Φ∥_∞ ≪_ε T^{1/2+ε} d^{1/3+ε}.    (2)

While in the spectral aspect (i.e. the T-aspect in our statement) we only recover the local bound, the key feature of our theorem is the sub-local exponent in the dimension aspect d. Given the obstruction to the sup-norm problem coming from growing eigenspaces, the aspect under consideration may seem counterintuitive. However, we are letting the dimension of the eigenspace vary in a controlled manner and manage to show that one can still achieve a considerable power saving in d on average over any orthonormal basis.
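Two displayed definitions appear to have been lost from the set-up above. The block below records the most plausible reconstruction: the congruence subgroup is presumed principal at p (in line with Remark 1.2 below), and the averaged function is presumed to satisfy ∥Φ∥₂² = d (the normalisation used in the discussion that follows). Both formulas are our assumptions, not verbatim from the paper.

    % Presumed definition of the congruence subgroup (assumption):
    \[
      K(p^m) \;=\; K_\infty \times \prod_{l \neq p} K_l \times
      \{\, g \in \mathrm{GL}_2(\mathbb{Z}_p) \;:\; g \equiv 1 \bmod p^m \,\}.
    \]
    % Presumed definition of the averaged function (assumption):
    \[
      \Phi(g) \;=\; \Big(\sum_{i=1}^{d} |\phi_i(g)|^2\Big)^{1/2},
      \qquad \{\phi_1,\dots,\phi_d\} \text{ an orthonormal basis of } V,
      \qquad \|\Phi\|_2^2 = d.
    \]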
Note that the sup-norm bound given in the theorem holds globally. Thus, unlike the one in [5, Theorem 1], no restriction to a compact domain is necessary here. As usual when proving global sup-norm bounds, the argument consists of two steps. First, a bound via the Whittaker expansion takes care of the regions close to the cusps. This part of the argument is fairly standard but requires some new computations of ramified Whittaker vectors. Second, a bound obtained from the amplified pre-trace inequality is used to handle the bulk. At this point it becomes crucial that we are only treating the average function Φ. Indeed, this allows us to identify the test function on the geometric side as a character of a finite group. The analysis of this character is carried out in Lemma 4.4 below and relies on character tables given in [15]. This is the only place where the assumption p > 3 is used.

To end this section let us briefly discuss the numerology of the exponents in the d-aspect. For simplicity we restrict this discussion to the cases in which our result gives the strong bound (2). Let us start by talking about the local (not to say trivial) bound in the bulk. To obtain this we can follow Marshall's strategy (see [18]), which leads to the following. Let F be any cuspidal automorphic form so that the translates F(·k), k ∈ K, generate an irreducible K-module W_F. Then choosing certain K-matrix-coefficients as test functions in the pre-trace inequality yields a bound of the shape (3). Applying this to Φ, upon noting that ∥Φ∥₂² = d, recovers the local bound. (The same bound can also be obtained from the Whittaker expansion coupled with a suitable generating domain.) Thus amplification allows us to improve the exponent from the local bound by 1/6, which should be a familiar exponent. More suggestively, one could say that Theorem 1.1 implies ∥φ_i∥_∞ ≪ d^{1/3} on average. Note that if p^{m_π} agrees with the arithmetic conductor p^{n_π} of π, then this result is not very interesting. Indeed, in this case we can generate the elements φ₁, …, φ_d in V directly from the newform φ° in π. By now there are very good bounds for this newform (and thus also for the φ_i's) known in the literature; see [22] if m_π = n_π = 1, or [9] in general. However, in the remaining cases (since π is assumed to be twist minimal, these correspond to the situation where π is supercuspidal at p) our result provides new information in the sup-norm problem. Indeed, one can still generate V from a translate of the newform φ°. (This is precisely the strategy used in [18,20] to derive local bounds for the newform of arbitrary level using (3).) Translated into the level aspect, our result now essentially says that the sup-norm of the φ_i's is bounded by p^{(1/3)⌈n_π/2⌉} on average. To the best of our knowledge this cannot be derived from any known sup-norm results on the newform φ°.
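The bookkeeping behind the claimed 1/6 saving can be spelled out. The lines below only redo arithmetic already stated in the text, under the assumption Φ(g)² = Σᵢ |φᵢ(g)|² recorded earlier.

    % Local bound in the bulk, and the amplified improvement by 1/6:
    \[
      \|\Phi\|_\infty \ll (Td)^{1/2}
      \quad\rightsquigarrow\quad
      \|\Phi\|_\infty \ll T^{1/2}\, d^{\frac12-\frac16} = T^{1/2}\, d^{1/3}.
    \]
    % Since |\phi_i(g)|^2 \le \Phi(g)^2 pointwise, each basis vector satisfies
    \[
      \|\phi_i\|_\infty \le \|\Phi\|_\infty \ll d^{1/3} \qquad (T \text{ fixed}),
    \]
    % and with d \asymp p^{\lceil n_\pi/2 \rceil} this reads
    % p^{\frac13 \lceil n_\pi/2 \rceil} in the level aspect, as quoted above.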
Finally we want to compare our result to the guiding archimedean example [5, Theorem 1]. Recall that we need to replace the K-module V by some irreducible U(2)-representation W. This representation W will occur as the minimal U(2)-type in some cuspidal automorphic π of G(A_{Q(i)}). Note that if dim_C W ≍ l, we can think of π (or rather π_∞) as having spectral density ≍ l². This explains the local bounds, where Φ is constructed as an average over some suitable basis of W, similar to our construction above. As a result of an amplification process the authors of [5] arrive at their final bound. Our notation suggests that in the result from [5] the number l² plays the role of our d. This can be explained via the spectral density of π_∞ and respectively π_p. Indeed, while in the archimedean situation the spectral density is roughly l², in our case the spectral density is linearly related to d. Thus in both cases the square root of the spectral density seems to determine the trivial bound. (This is only reasonable because we are considering minimal, or close to minimal, K-types in both cases.) Note that the quality of the saving, 1/6 in the p-adic versus 1/12 in the archimedean case, comes from slightly different behaviour of the spectral transform.

Finally, let us remark that if the exponent-conductor of π_p is odd and π_p is supercuspidal, then our bounds for the spectral transform, which in this case are linked to certain badly-behaved characters of GL₂ over finite rings, are comparable to those used in [5]. This explains that in this case we have matching numerology and obtain only a saving of 1/12 in the final exponent. Translated to the level aspect, our result states that on average the φ_i's are bounded by p^{5(n_π+1)/24}. Bounds of this quality are known for newforms only in the compact setting; see [13].

Remark 1.2. Questions of this type should be even more interesting when considered in higher rank. The reason is that in higher rank the analogously defined small K-types cannot be generated from translates of the newform. For example, if one considers a depth-zero supercuspidal representation π_p of GL₃(Q_p), then it has (arithmetic) conductor p³, and the space of vectors invariant under the principal congruence subgroup modulo p in GL₃(Z_p) is non-zero. However, it seems impossible to find a translate of the newform that generates this space; indeed, this would mean finding a suitable g ∈ GL₃(Q_p). Nevertheless, the question treated in this paper still makes sense in this setting, and trying to answer it is work in progress. We would also like to thank the anonymous referee for pointing out an oversight in the amplification argument, which has now been fixed.

Preliminary considerations
In this section we are putting in some ground work on which the following sections will rely. Recall that π is a cuspidal automorphic representation. Since we are assuming that π_v is unramified for v ≠ p, the (arithmetic) conductor of π is p^{n_π} for n_π ∈ N ∪ {0}. When n_π = 0 we have d = 1 and our theorem reduces to the local bound in the spectral aspect, so that without loss of generality we can assume n_π ≥ 1 throughout. By Flath's factorisation theorem we can fix an isomorphism π ≅ ⊗_v π_v. Note that also the central character of π factors as ω_π = ∏_v ω_{π_v}, where ω_{π_v} is the central character of π_v. For v ≠ p we can fix a spherical (i.e. K_v-fixed) vector, which is unique up to scaling.
Recall that φ_i, i = 1, …, d, forms an orthonormal basis of V = π^{K(p^{m_π})}. Thus there are local vectors φ_p^{(i)} so that we can identify each φ_i with a pure tensor whose component at p is φ_p^{(i)}. Since the spherical functions at the remaining places are well understood, much of our work boils down to understanding properties of an orthonormal basis φ_p^{(1)}, …, φ_p^{(d)} of π_p^{K_p(m_π)}. This is a purely local problem, which we investigate in the following subsection.

2.1. Local considerations. We now focus on properties of the local representation π_p. We start by recalling the classification of local representations, but before we do so we need some more notation. Given a (quasi-)character χ : Q_p^× → C^× we write a(χ) for the (exponent-)conductor. Further, we write I₀(p) for an Iwahori subgroup and put K'_p = N_{G(Q_p)}(I₀(p)). We also need the standard filtration of congruence subgroups K_p(m). Given two (quasi-)characters one forms the associated induced representation; if this representation is irreducible, then we denote the so-obtained representation by χ₁ ⊞ χ₂. We write St for the Steinberg representation, which we may identify with the unique irreducible subspace of the corresponding induced representation. We are now ready to recite the following well-known classification.

Lemma 2.1. The representation π_p falls into one of the following three cases: a principal series representation χ₁ ⊞ χ₂ (Case 1); a twist of the Steinberg representation (Case 2); or a supercuspidal representation (Case 3). In each case we can write π_p = χ · π'_p for a (quasi-)character χ : Q_p^× → C^× and some twist-minimal representation π'_p of conductor n'_π; in the supercuspidal case, π'_p is constructed in one of two ways (Case 3.1 and Case 3.2).

With this classification at hand we continue to study the subspace V in more detail.

Lemma 2.2. The space V is irreducible as a K_p-module, the invariant m_π equals ⌈n_π/2⌉, and the dimension d of V is of size p^{m_π}(1 ± p^{-1}).

Proof. This is not new and we only have to assemble the pieces appropriately. Let us proceed case by case. First, if π_p is in Case 1, then twist-minimality implies that χ₂ (or similarly χ₁) is unramified. Thus we have n_π = a(χ₁), and the results on d and m_π follow from [19, Proposition 4.3]. Irreducibility can be seen by direct computation. Second, if π_p is in Case 2 and twist minimal, then π_p = St and n_π = 1. The results on d and m_π follow again from [19, Proposition 4.3]; in this case irreducibility follows from [8, Theorem 1]. Finally, if π_p belongs to Case 3, then the full statement is given in [17, Theorem 3.5]. (See also [19, Lemma 4.5, Corollary 4.7] for the computation of m_π and d.)

2.2. A generating domain. We now switch to the global picture again and aim to produce a suitable set F ⊂ G(A) which reduces our problem to studying the supremum S(Φ, F) of Φ over F. Let F be the standard fundamental domain for SL₂(Z)\H, which we identify with a subset of GL₂(R) by identifying z = x + iy with n(x)a(y), where n(x) = (1 x; 0 1) and a(y) = (y 0; 0 1). We further view F as a subset of G(A) by identifying it with its image under the usual embedding G(R) → G(A). The same series of identifications allows us to write Φ(z) for z ∈ H. We claim that it suffices to bound S(Φ^(k), F) uniformly in k ∈ K.

Proof. First we take g ∈ G(A) and observe that by strong approximation we can write g = γ z g' k with γ ∈ G(Q), z ∈ Z(A), g' ∈ F and k ∈ K, so that Φ(g) = Φ(g'k) by automorphy and the action of Z via a unitary character. However, we now observe that if φ₁, …, φ_d forms an orthonormal basis of V, then so does π(k)φ₁, …, π(k)φ_d (recall that K(p^{m_π}) is normal in K). Let us write Φ^(k) for the average constructed from the latter basis. We thus have Φ(g) ≤ S(Φ^(k), F) ≤ A by assumption.

The Whittaker bound
We will now start the process of deriving a first bound for Φ which will be valid (high) up in the cusp. This is done by estimating Φ using the Whittaker expansions of the φ_i's. Throughout we will be working with an arbitrary orthogonal basis φ₁, …, φ_d and consider only g ∈ F.
3.1. Reduction to a local problem. Let φ = φ_i for some i = 1, …, d. The global Whittaker period W_φ is given by the usual integral of φ against the standard character ψ_A of Q\A, which factors as a product of local characters ψ_v with ψ_l unramified for all primes l. Note that W_φ(·) is right K(p^{m_π})-invariant and transforms with respect to ψ_A when acted on by N(A) from the left. Thus a standard trick shows that W_φ(a(q)g_∞) = 0 unless 0 ≠ q ∈ p^{−m_π}Z. Indeed, for any x with n(x) ∈ K(p^{m_π}), one computes that, if W_φ(a(q)g_∞) ≠ 0, then ψ_A(xq) = 1 for all such x. This gives precisely the condition q ∈ p^{−m_π}Z. This observation leads to the Whittaker expansion of φ.

We need to exploit the factorisation of the Whittaker function W_φ. To do so we first observe that we have the factorisation of Whittaker models W(π, ψ_A) ≅ ⊗_v W(π_v, ψ_v). Using the factorisation of φ, we will now determine distinguished elements in the local Whittaker models as follows. Starting at v = ∞, we take W_∞ to be the spherical Whittaker function attached to the spectral parameter t_π of π_∞, normalised to have norm one, where dy is the usual Lebesgue measure. We turn towards the finite places v ≠ p given by some prime l ≠ p. The spherical Whittaker function in W(π_v, ψ_v) is then given by the standard formula in terms of the Hecke eigenvalues λ_π(n). Finally we turn towards v = p. Here we write W_p^{(i)} for the image of φ_p^{(i)} in the Whittaker model W(π_p, ψ_p), where dy is the Haar measure of Q_p normalised so that Vol(Z_p, dy) = 1. With these choices made, there are constants c_i relating W_{φ_i} to the product of the local data. As shown in [16, Section 4], the absolute values of these constants are controlled by the partial L-function L^{{p,∞}}(s, π ⊗ π). Note that we choose the global measure on Z(A)G(Q)\G(A) to be the Tamagawa measure. In particular, the absolute value is independent of i, and using [12] we get the required estimate. Combining everything, we end up with a Whittaker expansion for Φ valid for x ∈ R and y ∈ R₊. Here ρ ∈ {0, 1} depends on whether φ₁, …, φ_d are even or odd.

Let v₁, …, v_d be an orthogonal basis of π_p^{K_p(m_π)}. We fix a Whittaker functional, and thus an embedding of π_p into its Whittaker model, and define the corresponding local average S_{π_p}. Note that S_{π_p} is well defined, as it is independent of the choice of Whittaker functional and of the choice of basis v₁, …, v_d.

Lemma 3.1. For any orthonormal basis φ₁, …, φ_d we have a bound for Φ(g) in terms of S_{π_p}(a(y)), where g = n(x)a(y) ∈ F.

Proof. To simplify notation we define coefficients a(t) and b_i(t) when t = np^{k−m_π} for k ∈ N₀ and (n, p) = 1, and set a(t) = 0 = b_i(t) otherwise. The Whittaker expansion now reads neatly as a sum over these coefficients, and with this at hand we can estimate directly. The claim follows by inserting the definitions of a(t) and b_i(t).

Before we can estimate this expression we need to investigate the size of the local average S_{π_p}(a(y)). This is the content of the following subsection.

3.2. Computing the local averages. The computation of S_{π_p}(a(y)) involves a case study, and each case will be treated using different techniques. Combining all possible cases will lead to the bound (4); see Lemmas 3.3, 3.6 and 3.7 below.

3.2.1. The Steinberg representation. Let V denote the relevant induced space, with dual space V∨ and the invariant bilinear pairing between them. Then π = St can be identified as the unique irreducible generic sub-quotient of V∨. Next we choose a basis v₀, …, v_p of V^{K_p(1)}. (In an analogous way one constructs the dual basis v₀∨, …, v_p∨ in (V∨)^{K_p(1)}.) This is done as follows: we first construct v_p, and put γ_i = wn(i) for i = 0, …, p − 1. For consistency of the indices we put γ_p = 1, so that we can identify v_i with the translate of v_p by γ_i^{-1}. This is the desired basis. Now there is an (up to scaling) unique ψ_p-Whittaker functional Λ : V → C. We will first consider the related average S_V. Note that also this is independent of the choice of the particular basis v₀, …, v_p,
as long as one considers the corresponding dual basis of V∨. We will write the superscript "st" for the stable integral, as defined in the literature. Knowing the exact shape of the v_i's, we can compute these integrals. First, we observe that a simple change of variables produces the factor Vol(K_p(1), dk). The case i = p is somewhat special and will be treated later. For now let us assume 0 ≤ i < p. To take advantage of the support of v_p we have to investigate the relevant inner integral; in view of the Iwahori factorisation of k, we find that n(x) ∈ N(Q_p) ∩ K_p(1) is necessary for the integral to be non-zero.

We turn towards i = p, so that γ_p = 1. Further, we replace y by yp^{-1} and consider y ∈ Z_p. Recall that every k ∈ K_p(1) can be written as k = t_k n_k n̄_k ∈ B(Z_p)N(pZ_p)ᵗN(pZ_p) by using the Iwahori factorisation. Note that the integrand only depends on n_k; therefore we start by discussing a suitable measure on K_p(1), which we decompose using the Iwahori factorisation. If we write ṽ_p for the re-normalisation of v_p with ṽ_p(1) = 1, then, after a change of variables in the x-integral, a simple matrix computation together with the transformation behaviour of ṽ_p reduces the computation to two elementary integrals, both of which can be evaluated quite easily. Note that the result can be negative, but for non-unitary representations there is no expectation for these integrals to be non-negative. Combining the computations above and swapping back to y ∈ p^{-1}Z_p leads us to the following result.

Lemma 3.2. In the notation above we have an explicit formula for S_V(y).

We will obtain the desired estimate (Lemma 3.3) by relating S_St(a(y)) to S_V.

Proof. Recall that the definitions of S_{π_p} and S_V are independent of the choice of the underlying basis. Thus we can choose an orthogonal basis w₁, …, w_p of π_p^{K_p(1)}. Viewing π as an invariant subspace of V, we can assume that the w_i's are in V. Choose w₀ ∈ V^{K_p(1)} in the annihilator of w₁, …, w_p, and let w₀∨ ∈ (V∨)^{K_p(1)} be the dual element; then, after renormalising, we can compare the two averages. However, since there is a unique Whittaker functional on V∨, which descends to the unique Whittaker functional on π when viewed as a sub-quotient, we must have W_{w₀∨}(a(y)) = 0. (The unique invariant subspace is non-generic.) Thus S_V(y) = S_{π_p}(a(y)) and the desired estimate follows directly from the previous lemma.

3.2.2. Twist-minimal principal series. Turning to this case, we assume that π_p = χ₁ ⊞ χ₂ with χ₂ unramified; without loss of generality we can assume that χ₂(p) = 1. (If we assume that π is unitary, then it is tempered, so that the relevant parameter lies in iR.) Now we can choose a basis in the induced picture essentially as above, but we need to find a suitable decomposition of B(Z_p)\K_p/K_p(n_π) (since the Bruhat decomposition does not hold in G(Z_p/p^mZ_p) if m > 1). First we define a suitable vector v₀. From this element we can construct a basis of π_p^{K_p(m_π)} as in the Steinberg case. Indeed, we fix a system of representatives {γ_j} for B(Z_p)\K_p/K_p(m_π) and set v_j = π_p(γ_j^{-1})v₀. In order to explicate this basis we need to compute a suitable coset decomposition for B(Z_p)\K_p/K_p(m_π). This is the content of the following lemma.

Lemma 3.4. We have an explicit set of representatives for B(Z_p)\K_p/K_p(m_π).

Proof. Take g = (a b; c d) ∈ K_p and set i = v_p(c). We treat several cases distinguished by the value of i. First, if i = m_π, then g lands in the corresponding cell. Second, for 1 ≤ i < m_π, we proceed as follows.
By right multiplication with elements in K_p(m_π) we can view (c, d) as an element of p^iZ_p^×/p^{m_π}Z_p. The critical contribution is given by the matrices with i = 0, which we can write out explicitly.

Given v ∈ π^{K_p(n_π)}, we can compute the Jacquet integral as follows. Without loss of generality assume v_p(y) ≥ −n_π, since otherwise the Whittaker function W_v vanishes for trivial reasons. Note that wn(a) = γ_{0,a}. Now we have a closer look at the remaining integral. Since the v(1)-contribution is easily computed, we arrive at the following lemma.

Lemma 3.5. For v_p(y) ≥ −m_π we have an explicit formula for W_v(a(y)).

This supplies us with the necessary ingredients to show the required estimate for S_{π_p} (Lemma 3.6).

Proof. Note that since all v_j's are translates of v₀, their Whittaker norms all coincide. So it suffices to compute one of these norms, and this is easily done. Next we observe that one can choose representatives so that v_j = π_p(γ_{i,a}^{-1})v₀ for some i = i(j) and a = a(j). In particular, we can sort the terms of the sum S_{π_p}(a(y)) according to this i; call the resulting partial sums S_i(y). Applying the previous lemma with v = π_p(γ_{i,a}^{-1})v₀ and taking the support properties of v₀ into account provides us with nice formulae for W_{π_p(γ_{i,a}^{-1})v₀}(a(y)). As soon as we can show that S_i(y) ≪ |y|_p for all i, we are done. We start with i = 0, where we have an explicit formula; in this range we need to bound the integrals G_l(z, χ), which are incomplete Gauß sums in the sense that one sums only over a specific congruence class. These sums were essentially computed in the proof of [2, Lemma 5.8]. With this at hand we can easily evaluate S_i for i ≤ ⌊m_π/2⌋. Finally, consider i = m_π. Here it suffices to compute the remaining integral, and some basic Gauß sum evaluations give its value. Inserting this above concludes the proof, since it implies S_{m_π}(y) = δ_{v_p(y)>0}|y|_p.

3.2.3. Supercuspidal representations. Let X_k be the set of characters χ of (Z/p^kZ)^×. For χ ∈ X_k and m ∈ Z we consider the functions ξ_χ^{(m)}. Given any representation π_p, we write K_{ψ_p}(π_p) for the corresponding ψ_p-Kirillov model. Note that this model contains the Schwartz functions, so that ξ_χ^{(m)} ∈ K_{ψ_p}(π_p), and by construction of the Kirillov model the Whittaker values of these functions can be read off directly. This suffices to compute S_{π_p}(a(y)) for supercuspidal representations π_p.

Lemma 3.7. Suppose π_p is a twist-minimal supercuspidal representation with (exponent-)conductor n_π. Then S_{π_p}(a(y)) can be evaluated explicitly when n_π = 2m_π, and in general we have the bound (4).

Proof. We start with the case n_π = 2m_π. By [19, Lemma 4.4] we find that a basis for π^{K_p(m_π)} in the Kirillov model is given by the functions ξ_χ^{(m)} with χ ∈ X_{m_π}. Note that we already took advantage of twist-minimality, using that n_{χπ} = n_π for all χ ∈ X_{m_π}. Our computations above show that this basis is orthonormal (with respect to the Whittaker inner product), so we can compute the average directly. We turn towards the second case, where n_π is odd. Then we get an analogous orthonormal basis, and it is again easy to compute the desired quantity. The result follows directly.

Conclusion. We can now give a decent bound for Φ(z) using the Whittaker expansion. We will use the bound (4) and follow the standard procedure.

Lemma 3.8. We have a bound for Φ(x + iy) near the cusp.

Proof. Inserting (4) into Lemma 3.1 and estimating the remaining n-sum as, for example, in [22] or [20] yields the desired result.
A bound via the pre-trace formula
The next bound will be derived from the pre-trace inequality. We start by discussing the local test functions. At the archimedean place we closely follow [20, Section 3.5] and fix f_∞ so that it satisfies: (1) f_∞(g) = 0 unless g ∈ G(R)⁺ and u(g) ≤ 1; (2) f̂_∞(σ) > 0 for all irreducible spherical unitary principal series representations σ of G(R); and (3) a suitable lower bound for f̂_∞ at the spectral parameter of π_∞. (The final property is not really necessary because we are ignoring the spectral aspect for now.) Note that f̂ is the spherical transform (also called the Selberg/Harish-Chandra transform) of f, and u(g) is the point-pair invariant on group level.

Further, we define the unramified part of the test function, f_ur, as a linear combination of normalised r-th Hecke operators κ_r supported on a set of primes S (to be determined). This implements the usual amplification procedure. Finally we define the global test functions f^(i).

Let σ denote the irreducible representation of GL₂(Z/p^{m_π}Z) through which the irreducible K_p-module π^{K_p(m_π)} factors. This is a representation of a finite group and we write χ_σ for its character. Finally we define the coefficients y_r by linearising the convolutions of Hecke operators in the definition of f_ur. This can be compared to the analogous expression in [20, Section 7].

The following pre-trace inequality provides the transition to the counting problem.

Lemma 4.2. For z ∈ F we have the amplified pre-trace inequality for Φ.

Proof. We start by considering the spectral expansion of the automorphic kernel k_{f^(i)} associated to the self-adjoint operator R(f^(i)) and dropping all terms except φ_i. The latter is possible by positivity. We then sum this inequality over i. Now we write γ̄ for the image of γ in GL₂(Z/p^{m_π}Z). Note that this is well defined as long as γ ∈ K_p, and for such γ the test function on the geometric side is expressed through χ_σ(γ̄). The rest of the argument is standard and can, for example, be found in [20].

By the choice of f_∞ we can already eliminate the archimedean influence from the right-hand side. (Note that we are not aiming to amplify in the T-aspect.)

Corollary 4.3. For z ∈ F we have a bound for Φ(z)² in terms of the counts ♯M_z(r, g), for M_z(r, g) = {A ∈ M₂(Z) : det(A) = r, A ≡ g mod p^{m_π} and u(Az, z) ≤ 1}.

This last corollary tells us that we need to control the character χ_σ and solve a counting problem estimating ♯M_z(r, g). To estimate the character we need to define certain level sets; the estimates of the following lemma then also hold for hZ̄, where det(h) is not a square modulo p^{m_π}.

The representations of GL₂ over finite rings such as Z/p^mZ and their characters are well studied, but explicit estimates for the characters as needed here seem to be hard to find. We choose to use the character tables for SL₂(Z/p^mZ) computed by Kutzko in his PhD thesis. This makes it necessary to pass from SL₂ to GL₂ using Mackey theory. Note that the character values in question were calculated in [3]. However, they remain hard to extract, and we hope our approach is more transparent.

Proof. Note that if m_π = 1, then σ is a representation of GL₂(Z/pZ). If we are in Case 1 or 2, then the representation is constructed by parabolic induction and the character is easily computed. Otherwise we must be in Case 3.1, in which case σ is cuspidal. In this case the character values are well known; see for example [10]. (Alternatively one can use Mackey theory to reduce to the case of characters for SL₂(Z/pZ) and use the corresponding character table given in [21, p. 128].)
We will now assume m_π > 1. Since Case 2 cannot occur, we treat Cases 1 and 3.1, leaving Case 3.2 for later. Our approach is based on reduction to the case of characters for SL₂(Z/p^mZ) using Mackey theory. Recall that we are assuming p to be odd. Let ω_σ be the central character of σ and let σ̄ be an irreducible component of σ|_{SL₂(Z/p^{m_π}Z)}, so that σ decomposes as in (6), where σ' is another irreducible representation of GL₂(Z/p^{m_π}Z). Now we observe that the dimensions of irreducible representations of SL₂(Z/p^{m_π}Z) are given by p^{m_π}(1 + p^{-1}), p^{m_π}(1 − p^{-1}) and (1/2)p^{m_π}(1 − p^{-2}). Thus, recalling that σ has dimension p^{m_π}(1 ± p^{-1}) (in Cases 1 and 3.1), we find that we must be in the situation described in (6). Moreover, this means that the restriction of σ to Z·SL₂(Z/p^{m_π}Z) is irreducible and equivalent to ω_σ ⊗ σ̄. In particular the character χ_σ is given by χ_σ(z·s) = ω_σ(z)χ_σ̄(s) for z ∈ Z and s ∈ SL₂(Z/p^{m_π}Z). We conclude by referring to [15, Table III], where the values of the characters of dimension p^{m_π}(1 ± p^{-1}) are listed.

We now turn towards Case 3.2. Note that this case is exceptional in the sense that the restriction of σ to SL₂(Z/p^{m_π}Z) is reducible. More precisely, σ|_{SL₂(Z/p^{m_π}Z)} ≅ σ̄ ⊕ σ̄^h. As a consequence the character χ_σ can only be described by a combination of two characters of SL₂(Z/p^{m_π}Z); indeed, χ_σ(zs) = ω_σ(z)[χ_σ̄(s) + χ_{σ̄^h}(s)]. The corresponding character values are listed in [15, Table IV], and the claimed bound is derived directly by ignoring any possible cancellation between the two characters χ_σ̄ and χ_{σ̄^h}. (Even though such cancellation can be observed in the m_π = 1 situation, this phenomenon does not seem to generalise.) If g ∈ hZ·SL₂(Z/p^mZ), then the argument proceeds similarly and we omit the details.

Before continuing we will discuss our choice of S. But first recall that d ≍ p^{m_π}. This is a slight spoiler, but for experts in amplification it should be no surprise that this is the optimal size of the amplifier in this setting. By the prime number theorem (assuming d is sufficiently large, which is no problem) we have ♯S ∼ Λ/log(Λ), but for us a cruder bound suffices.

Before we are ready to prove our key estimate we need to establish some counting results. The case λ = 0 is easily handled using existing results, for example taking N = δ = 1 in [22, Proposition 6.1]. We follow the standard procedure and split the count according to the type of the matrices. Counting the contribution of generic matrices is a standard lattice-point counting argument, which we slightly modify. Note that our life is much easier, since we can take z in the classical fundamental domain for SL₂(Z). We closely follow the argument in [22]. There are ≪ 1 + √K/(p^λ y) choices for c. Similarly, using the bound |a + d| ≪ √K, we control the remaining entries. We have counted the number of possibilities for the admissible quadruples (c, b', a + d, a − d). Since each of those quadruples uniquely determines a matrix A, we have established the desired count.

The case when we consider only matrices with square determinant needs a minor modification. Indeed, instead of counting a + d trivially as earlier, we observe that (a + d)² − 4r = (a − d)² + 4bc; if the matrix is parabolic, the right-hand side would be 0.
Thus we now consider only generic matrices. For those we can fix the left-hand side first, so that we determine (a + d, √r) essentially as solutions to a generalised Pell equation. There are at most ≪ K^ε possibilities. The other bounds are derived in an elementary way, using only the fact that S contains only primes. (In contrast to [11, Lemma 2.4] we do not need a lattice counting argument, because we have an additional congruence condition on b that we can use.) We will only show the final estimate, since the others are derived similarly. There are ≪ Λ² possible choices for r = l₁²l₂² ≍ Λ⁴. Having fixed the determinant of this form, we find that there are only ≪ 1 choices for (a, d) with ad = l₁²l₂². Finally we observe that we can choose b in ≪ 1 + Λ²y/p^λ ways, since p^λ | b and |b| ≪ Λ²y. Putting these estimates together completes the proof.

Proof. This follows along the lines of [4, Lemma 14].

We can now prove the main estimate of this section. We first consider the contribution of λ = 0. In this case the counting problem is independent of p and relatively easy; indeed, the required bound is, for example, [22, Proposition 6.1] with N = δ = 1 and z ∈ F, so that y ≫ 1. Next we assume λ > 0. We summarise the results from Lemmas 4.5, 4.8 and 4.6 in Table 1 below. (Note that for the contribution of r = 1 we have used Remark 4.7.) If y is small enough, then we are done by the previous lemma. For larger y the Whittaker expansion (see Lemma 3.8) gives an even better result.

Lemma 4.9. Assume p > 3. Suppose π belongs to Case 1, 2 or 3.1; then we obtain our bound for Φ(x + iy). If π belongs to Case 3.2, then we have the weaker bound.

We start with Cases 1, 2 or 3.1. Our starting point is Corollary 4.3. Breaking the g-sum up into pieces on which we can estimate the character using Lemma 4.4
9,210.6
2021-11-02T00:00:00.000
[ "Mathematics" ]
Safety Huddle methodology development in patient safety software: an experience report.
OBJECTIVES to report the development and implementation of a digital tool developed by a group of nurses and information technology professionals working in healthcare quality management. METHODS an experience report regarding the development of the Safety Huddle digital model, using the agile Scrum methodology. RESULTS the first stage was the development of the model proposed by the team of nurses and IT professionals, based on the demand of quality and patient safety leaders in Brazil, and the second phase was the software implementation. FINAL CONSIDERATIONS the development and implementation of the Safety Huddle contributed to expedite the detection and distribution of actions, in addition to promoting integration among teams, accountability, and empowerment of professionals to foresee and identify issues related to patient safety and face them through action plans.

INTRODUCTION
Over the last years, an increasing evolution on the theme of patient safety, both in theory and in practice, has been observed among healthcare professionals, researchers, senior management leaders, and healthcare service users. Patient safety (PS) means reducing the risk of unnecessary damage associated with health care to a minimum acceptable level (1)(2)(3). According to the report "To Err Is Human," published by the Institute of Medicine (IOM) of the United States, approximately 44,000 to 98,000 deaths per year in the country were associated with medical and hospital care errors. The occurrence of serious errors in health care led to the need to develop policies and implement protocols to support the care provided in healthcare institutions, with the aim of reducing the occurrence of incidents causing harm, or adverse events (AEs) (1)(2)(3)(4). The World Health Organization (WHO) created the World Alliance for Patient Safety in 2004, and subsequently, in 2009, a work team developed the International Classification for Patient Safety (ICPS), which is still used nowadays. This classification presents a set of concepts organized into a structure that emphasizes risk identification, prevention, detection, and mitigation (1)(2)(3). According to the ICPS, AEs are incidents that occurred during healthcare provision and resulted in harm to patients. This may be physical, social, or psychological harm, including diseases, injuries, suffering, disabilities, or death (1)(2)(3)(4). In 2013, the Brazilian Ministry of Health established the Patient Safety National Program (PNSP, as per its acronym in Portuguese), with the purpose of implementing patient safety actions by means of six basic protocols based on the following priority areas: appropriate patient identification; effective communication among healthcare professionals; safety regarding prescription, use, and administration of medications; safe surgery; hand hygiene; and minimization of the risk and harm caused by falls and pressure ulcers (2)(3)(4). Therefore, in order to reduce the risk of unnecessary harm to an acceptable minimum and provide safe, high-quality care, the implementation of safety protocols is of utmost importance to contribute to a safer care process. In addition, an effective communication channel must be established, allowing teams to deliver and receive clear and accurate information within all levels of a healthcare organization (2)(3)(4)(5).
According to the Joint Commission on Accreditation of Healthcare Organizations, contributing factors related to communication failure were identified as some of the main causes of AEs; from 1995 to 2004 this was evidenced in more than 60% of the AEs. Between 1993 and 1998, the Food and Drug Administration (FDA) evaluated reports of medication-related errors, which increased and caused catastrophic damage, and which were found in 16% of the AEs (2)(3)(4). The Safety Huddle methodology, also called "safety briefing," was proposed by the Institute for Healthcare Improvement (IHI) and emerged from this context of communication failure and the need for early detection of AEs. According to the authors, this method increases safety awareness at the operational level or front line, and assists organizations in the development of a safety culture (5)(6). Corroborating the methodology of the IHI, the PNSP recommends early identification, discussion among work teams, and implementation of improvement plans based on systematized actions of the risk management process, which is a pillar of clinical governance (5)(6)(7). However, some actions are required to operationalize the method and ensure it is successful, such as data collection to monitor care provision, identification of issues found during patient treatment, and detection of risky circumstances, unsafe conditions, and near misses. In practice, healthcare institutions encourage the detection of these findings through the voluntary notification of incidents (7)(8). Another important aspect is to identify the perception of work teams on factors that affect their daily work and to propose feedback, so they can understand that changes may add value to the work process, consequently resulting in improvements. In this respect, a group of independent hospitals expressed the need to identify, as early as possible, information related to the care provided in their institutions and unsafe conditions detected in care processes. Thus emerged the initiative of developing the digital Safety Huddle, built by experts working as external consultants (that is, without an employment bond to the institutions) at a company that produced the incident management software.

OBJECTIVES
To report the experience of developing and implementing a digital tool created by a group of nurses and IT professionals working in healthcare quality management.

METHODS
This was an experience report on the development and implementation of the Safety Huddle digital tool for use in all Brazilian hospitals. It is part of a master's project submitted to the School of Medicine of the Fluminense Federal University, under Certificate of Presentation for Ethical Consideration (CAEE, as per its acronym in Portuguese) protocol no. 17558819.9.0000.5243 and Research Ethics Committee (CEP, as per its acronym in Portuguese) protocol no. 3.567.788. Initially, the development of the digital model was requested by a group of 40 hospitals distributed across different regions of Brazil that already used incident management software. The group indicated the need to improve prompt communication and the proactive handling of incidents by the work teams of healthcare institutions. Therefore, the managers asked a company with expertise in the area to propose a tool able to provide early detection, communication, discussion, and intervention for incidents and potential incidents in the institutions.
The following phases were carried out for software development: a review of the scientific production available in the literature, with the purpose of mapping knowledge on the theme, identifying potentialities and weaknesses, and integrating studies into the development of the tool; definition of the model using the agile Scrum method; and implementation of the software in healthcare institutions. Because this was an experience report, prior authorization from the company's board responsible for the software was requested to launch the initiative. In addition, in accordance with Resolution 466/12 of the National Research Ethics Commission (CONEP, as per its acronym in Portuguese), no data or information enabling the identification of the hospitals or of the participants involved in the development of the software was released.

RESULTS
In order to facilitate the presentation of results, the development process of the digital tool and the software implementation are described in stages:

Stage 1 - Development of the digital model
Considering the model proposed by the Institute for Healthcare Improvement for the Safety Huddle, the clients' need for a digital methodology was first assessed. The kickoff, that is, the initial meeting of the project, involved the participation of quality and patient safety leaders of the hospitals that already used incident management software; it is worth mentioning that they were all nurses with expertise in the area. For this first stage, two quality tools were used. The first, brainstorming, is a technique that encourages group creativity and has the purpose of achieving an objective for a specific process or task. After the ideas were presented, operationalization was discussed by drawing up a flowchart in the Bizagi 3.3/2018 software for validation with the IT team. The software's development team was made up of two nurses, two IT professionals with training in development and programming, and one product manager, a nurse specialized in quality and patient safety. Scrum, the methodology used (9)(10), expedites and optimizes the management and planning of software projects. The method's name emerged from the comparison between developers and rugby players, that is, the quick huddle that occurs before the ball is put back into play. According to some authors, using this method brings the following benefits: increase in client satisfaction; improvement in communication within the development team; motivation of the product and service development team; improvement in the quality of the products and services produced; and reduction in development costs (10). The operationalization of this technique for software development established a set of rules and management practices adopted for a successful project. The flow drawn up by the leaders of the healthcare institutions was presented to the development team, and the requirements and the estimation of hours for software development were discussed, as recommended by Scrum. This was carried out through the product or service backlog, that is, the list of actions and functionalities to be developed, considered an important practice for the organization and management of the requirements collected, whose responsibility is shared with the development team (9)(10). The project was carried out through meetings, and the backlog was updated daily after a quick meeting among team members, in order to set the tasks to be carried out during the day and identify the results achieved on the previous day.
A checklist with three questions was prepared to keep this meeting objective: What was carried out yesterday? What will be carried out today? Was any obstacle identified when undertaking activities? (9)(10) The software was developed within six weeks, together with a communication plan for clients, educational materials on the Safety Huddle method, and a guiding manual on its use.

Stage 2 - Software implementation in healthcare institutions
A meeting with the hospitals involved in the project was held to present the operationalization flow of the digital Safety Huddle tool. The flow consisted of sending a daily alert by email with all notifications of incidents in the institution over the past 24 hours. Classification according to the ICPS was carried out only after the notifications were reported, when problems and possible risks were identified and mitigated as early as possible. This reading was carried out at 7 a.m. by the quality and patient safety team of the institution. The software made it possible to register the emails of all leaders through the creation of specific groups according to work teams; for example, a team for critically ill patients, made up of a physician coordinator of high-complexity units, a nurse coordinator of critical units, a clinical pharmacist, and other members according to the needs of the institution. The quality and patient safety team was responsible for sending notifications to the work teams by 8 a.m. Upon receipt, team members were expected to discuss the notifications by 10 a.m. in the Safety Huddle unit meeting, addressing contributing factors and risks without prevention barriers, and developing short-, medium-, and long-term improvement plans. The choice of notifications to be discussed in the Safety Huddle unit was based on the following important points: the consequence for patients based on the ICPS/WHO taxonomy, the experience of the professionals involved, and the historical series of the notifications discussed in meetings with quality and patient safety leaders of the hospitals that already used incident management software, as presented in Chart 1. After choosing the notifications, the work teams were responsible for engaging to mitigate risks and deal with incidents, in order to reduce the degree of harm and contribute to a better outcome for patients, employees, and the organization. Finally, an in loco discussion with the quality and patient safety team and the work team was proposed at 2 p.m. At this time, the quality and patient safety team participated in and validated the actions discussed and implemented by the group. This kind of meeting, also called a stand-up meeting, enables greater agility and does not affect the routine of the unit and sector where the incidents occurred.
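As a rough illustration of the daily cycle just described, the sketch below renders the alert-routing protocol in code. It is purely illustrative: the class, field, and deadline names are our own modeling of the report's description, not part of the actual software.

    from dataclasses import dataclass
    from datetime import time

    # Milestones of the daily Safety Huddle cycle described above.
    DEADLINES = {
        "quality_team_review": time(7, 0),   # read the last 24 h of notifications
        "distribute_to_teams": time(8, 0),   # email the work-team groups
        "unit_huddle": time(10, 0),          # discuss risks, draft action plans
        "in_loco_validation": time(14, 0),   # stand-up meeting with quality team
    }

    @dataclass
    class IncidentNotification:
        description: str
        icps_degree_of_harm: str   # ICPS/WHO taxonomy class, e.g. "near miss"
        unit: str
        team_group: str            # e.g. the critical-care leaders mailing group

    def route(notification: IncidentNotification) -> str:
        """Return the mailing group that must discuss this notification
        before the unit-huddle deadline."""
        return f"{notification.team_group} (huddle by {DEADLINES['unit_huddle']})"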
FINAL CONSIDERATIONS
Effective communication, which is set as an international patient safety goal, and interdisciplinary teamwork are key factors for the safety and quality of care provided to individuals in healthcare services. It is worth mentioning that communication failure among healthcare professionals has been pointed out as one of the main factors contributing to the occurrence of incidents/adverse events and, consequently, to unfavorable outcomes for patients. In this respect, identification, analysis, and treatment of risks and incidents as early as possible are of utmost importance for better results in health care.

The development and implementation of the digital Safety Huddle contributed to expedite the detection and distribution of actions, in addition to promoting integration among work teams. It also fosters accountability and the empowerment of front-line healthcare professionals to foresee and identify issues related to patient safety and to face them through collectively developed action plans. The experience throughout this process showed that the involvement of the senior leaders of institutions is of utmost importance for the structuring and implementation of actions aimed at ensuring patient safety and promoting the voluntary notification of incidents. However, considering the critical-reflexive development of the method, the tool is not limited to the format proposed herein. The original tool lists several other pieces of data that may be collected, thus providing the possibility of continuous encouragement to improve quality and safety, as well as of the development of further studies related to the tool as an intervention object for care improvement.
3,103.8
2020-01-01T00:00:00.000
[ "Medicine", "Computer Science" ]
Compact silicon microring resonators with ultra-low propagation loss in the C band
The propagation loss in compact silicon microring resonators is optimized with varied ring widths as well as bending radii. At the telecom band of 1.53-1.57 μm, we demonstrate propagation losses as low as 3-4 dB/cm in compact silicon microring resonators with a small bending radius of 5 μm, corresponding to a high intrinsic quality factor of 200,000-300,000. The loss is reduced to 2-3 dB/cm for a larger bending radius of 10 μm, and the intrinsic quality factor increases up to an ultrahigh value of 420,000. Slot-waveguide microring resonators with around 80% optical power confinement in the slot are also demonstrated, with propagation losses as low as 1.3±0.2 dB/mm in the 1.55 μm band. These loss numbers are believed to be among the lowest ever achieved in silicon microring resonators of similar sizes. ©2007 Optical Society of America
OCIS codes: 250.5300 (photonic integrated circuits); 130.3120 (integrated optical devices); 230.5750 (resonators); 220.4000 (microstructure fabrication)

References and Links
1. K. K. Lee, D. R. Lim, and L. C. Kimerling, “Fabrication of ultralow-loss Si/SiO2 waveguides by roughness reduction,” Opt. Lett. 26, 1888-1890 (2001).
2. F. Xia, L. Sekaric, and Y. A. Vlasov, “Ultra-compact optical buffers on a silicon chip,” Nature Photon. 1, 65-71 (2007).
3. Y. Vlasov and S. McNab, “Losses in single-mode silicon-on-insulator strip waveguides and bends,” Opt. Express 12, 1622-1631 (2004).
4. P. Dumon, W. Bogaerts, V. Wiaux, J. Wouters, S. Beckx, J. V. Campenhout, D. Taillaert, B. Luyssaert, P. Bienstman, D. V. Thourhout, and R. Baets, “Low loss SOI photonic wires and ring resonators fabricated with deep UV lithography,” IEEE Photon. Technol. Lett. 16, 1328-1330 (2004).
5. T. Tsuchizawa, K. Yamada, H. Fukuda, T. Watanabe, J. Takahashi, M. Takahashi, T. Shoji, E. Tamechika, S. Itabashi, and H. Morita, “Microphotonics devices based on silicon microfabrication technology,” IEEE J. Sel. Topics Quantum Electron. 11, 232-239 (2005).
6. P. Dumon, G. Roelkens, W. Bogaerts, D. Van Thourhout, J. Wouters, S. Beckx, P. Jaenen, and R. Baets, “Basic photonic wire components in silicon-on-insulator,” Group IV Photonics, Belgium, p. 189-191 (2005).
7. J. Niehusmann, A. Vörckel, P. H. Bolivar, T. Wahlbrink, W. Henschel, and H. Kurz, “Ultrahigh-quality-factor silicon-on-insulator microring resonator,” Opt. Lett. 29, 2861-2863 (2004).
8. M. A. Popovic, T. Barwicz, F. Gan, M. S. Dahlem, C. W. Holzwarth, P. T. Rakich, H. I. Smith, E. P. Ippen, and F. X. Kärtner, “Transparent wavelength switching of resonant filters,” in Conference on Lasers and Electro-Optics/Quantum Electronics and Laser Science Conference and Photonic Applications Systems Technologies, OSA Technical Digest Series (CD) (Optical Society of America, 2007), paper CPDA2. http://www.opticsinfobase.org/abstract.cfm?URI=CLEO-2007-CPDA2
9. T. Baehr-Jones, M. Hochberg, C. Walker, and A. Scherer, “High-Q optical resonators in silicon-on-insulator based slot waveguides,” Appl. Phys. Lett. 86, 081101 (2005).
10. Q. Xu, V. R. Almeida, R. R. Panepucci, and M. Lipson, “Experimental demonstration of guiding and confining light in nanometer-size low-refractive-index material,” Opt. Lett. 29, 1626-1628 (2004).
11. T. Baehr-Jones, M. Hochberg, G. Wang, R. Lawson, Y. Liao, P. A. Sullivan, L. Dalton, A. K.-Y. Jen, and A. Scherer, “Optical modulation and detection in slotted silicon waveguides,” Opt. Express 13, 5216-5226 (2005).
12. C. A. Barrios and M.
Lipson, “Electrically driven silicon resonant light emitting device based on slot-waveguide,” Opt. Express 13, 10092-10101 (2005). 13. S. Xiao, M. H. Khan, H. Shen, and M. Qi, “Modeling and measurement of losses in silicon-on-insulator resonators and bends,” Opt. Express 15, 10553-10561 (2007). http://www.opticsinfobase.org/abstract.cfm?URI=oe-15-17-10553 14. C. W. Holzwarth, T. Barwicz, and H. I. Smith, “Optimization of HSQ films for photonic applications,” in 51st International Conference on Electron, Ion, and Photon Beam Technology and Nanofabrication (2007). Introduction The high index contrast of silicon-on-insulator (SOI) waveguides allows small bending radii with low propagation losses, leading to compact resonators and high-density integration of micro-photonic devices. However, propagation losses due to waveguide sidewall roughness and small bending radii may be prohibitively large for highly integrated SOI photonic devices. Extremely low-loss SOI strips were reported by reducing waveguide roughness with post-fabrication trimming techniques [1]. In this paper, without post-fabrication trimming, we demonstrate ultra-low propagation losses of 3-4 dB/cm and 2-3 dB/cm across the entire C band in compact silicon microring resonators with bending radii of 5 μm and 10 μm, respectively. The corresponding round-trip losses are around 0.01-0.02 dB. Our reported losses in microring bends are comparable to the latest reports on propagation losses in silicon strips, e.g., 1.7±0.1 dB/cm (with post-fabrication trimming) [2], 3.6±0.1 dB/cm [3], 2.4±1.6 dB/cm [4], and 2.8 dB/cm [5]. This indicates that the bending loss is negligible compared to the linear propagation loss due to sidewall roughness. As a result, such low-loss microring bends may be treated as straight strips. Our lowest reported losses in microrings are slightly lower than other values for similar bending radii, e.g., 0.02-0.03 dB/round-trip for a bending radius of 6.5 μm [2] and 0.004 dB per 90° bend for a bending radius of 5 μm [6]. Compared to the work on low-loss silicon microring resonators with a large bending radius of 20 μm in [7], we show comparable ultrahigh intrinsic quality factors of 200,000-300,000 in microring resonators with a four times smaller radius, and a higher intrinsic quality factor of 300,000-400,000 for a two times smaller bending radius. Thus our result enables a more compact footprint for devices based on high-Q silicon microring resonators. Comparable results were briefly reported in [8], but without experimental details. There has been great interest in exploring light confinement in slot-waveguides [9,10], which have also been used for active silicon photonic devices [11,12]. The void structure provides many opportunities for novel photonic applications. Here, we report a 1.3±0.2 dB/mm propagation loss in a microring resonator based on slot-waveguides with around 80% of the optical power confined in the slot. This loss number is comparable to the previous best-reported values in [9], but in a five times smaller ring resonator, and we also demonstrate a slot-waveguide silicon microring add-drop filter for the first time, as previous slot-waveguide resonators were coupled to only a single waveguide.
Recently, we reported a new method to analyze the propagation loss in microring resonators [13]. Figure 1 shows the schematic of a symmetrically coupled microring resonator. κ² is defined as the fraction of power coupled between the bus waveguide and the microring resonator. All losses other than the bus-ring coupling, including the bending loss and the radiation loss due to sidewall roughness, are lumped into a parameter κ_p², which is the fraction of power lost per round trip in the microring resonator. We define the minimum power transmission in the through-port as γ, the drop-port -3 dB bandwidth as δλ_d, and the response period of the resonator as the FSR (free spectral range). The waveguide power coupling coefficient is calculated as κ² = π×δλ_d×[1−√γ]/FSR, and the propagation power loss coefficient is determined as κ_p² = 2π×δλ_d×√γ/FSR [13]. To be compared with the losses in straight waveguides, which are often quoted in dB/cm, the propagation loss in a microring resonator can be expressed as −10×log₁₀(1−κ_p²)/(2πR) (dB/cm), where 2πR is the perimeter of the microring resonator. The total quality factor is defined as Q_t = λ₀/δλ_d, and the intrinsic quality factor is Q_i = 2πλ₀/(FSR×κ_p²). We would like to comment briefly here on the advantages of our method; for details, please refer to reference [13]. Compared to the well-known cut-back or Fabry-Pérot methods, our method is in principle independent of fiber-to-waveguide coupling or cleaved waveguide facets. In particular, our method is very useful for determining very low propagation losses in waveguides and/or bends from the response of a single resonator in the add-drop configuration. It does not require the fabrication of many waveguides of various lengths and/or bends for accurate measurement. Compared to the well-known critical-coupling method, ours does not require the tedious fabrication of many devices in order to obtain critically coupled resonators in the all-pass configuration, which demands well-matched waveguide coupling and resonator loss, i.e., κ² = κ_p². Furthermore, for symmetrically coupled add-drop filters based on microring resonators, our method gives an in-situ loss analysis, avoiding the device non-uniformities that result from fabrication.
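To make these relations concrete, the following is a minimal Python sketch (ours, not code from the original paper) of the loss-extraction recipe of reference [13]; the function and variable names are our own, and the Fig. 3 measurement is used as a plausibility check.

import math

def ring_loss_analysis(gamma, dlam_d_nm, fsr_nm, lam0_nm, radius_um):
    """Extract coupling, round-trip loss, and Q factors of a symmetrically
    coupled add-drop microring from its measured spectrum, per [13].
    gamma     : minimum through-port power transmission (linear, not dB)
    dlam_d_nm : drop-port -3 dB bandwidth (nm)
    fsr_nm    : free spectral range (nm)
    lam0_nm   : resonance wavelength (nm)
    radius_um : ring bending radius (um)
    """
    kappa2 = math.pi * dlam_d_nm * (1 - math.sqrt(gamma)) / fsr_nm  # bus-ring power coupling
    kappa_p2 = 2 * math.pi * dlam_d_nm * math.sqrt(gamma) / fsr_nm  # round-trip power loss
    perimeter_cm = 2 * math.pi * radius_um * 1e-4                   # 2*pi*R in cm
    loss_db_cm = -10 * math.log10(1 - kappa_p2) / perimeter_cm      # propagation loss (dB/cm)
    q_total = lam0_nm / dlam_d_nm                                   # Q_t = lambda0 / dlam_d
    q_intrinsic = 2 * math.pi * lam0_nm / (fsr_nm * kappa_p2)       # Q_i = 2*pi*lambda0 / (FSR * kappa_p^2)
    return kappa2, kappa_p2, loss_db_cm, q_total, q_intrinsic

# Plausibility check with the Fig. 3 device (R = 5 um, W_ring ~ 500 nm):
# gamma = 0.004, dlam_d = 0.11 nm, FSR = 16.0 nm, lambda0 ~ 1524.6 nm.
print(ring_loss_analysis(0.004, 0.11, 16.0, 1524.6, 5.0))
# -> kappa_p^2 ~ 0.0027, loss ~ 3.8 dB/cm (within the quoted 3-4 dB/cm),
#    Q_t ~ 14,000 and Q_i ~ 2.2e5, in line with the values in the text.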
Device fabrication Our devices were fabricated on a silicon-on-insulator (SOI) wafer with a top silicon layer thickness of 250 nm and a buried oxide thickness of 3 μm. The device patterns were exposed in a 150 nm-thick negative resist (hydrogen silsesquioxane, or HSQ) with a Vistec VB6 UHR-EWF electron-beam lithography (EBL) system at 100 kV. The main beam deflection field size was 0.5 mm × 0.5 mm, and the beam deflection step was 2 nm. To make the waveguide line edges as smooth as possible, we used a large number (~2,800) of vertices per polygon to approximate the rings in the layout. This minimizes pattern digitization error and reduces waveguide line-edge roughness. The electron beam has a spot diameter of around 5 nm, which helps to round out the pattern digitization error due to the discrete beam deflection step (2 nm) during exposure. The development of HSQ was done in 25% TMAH for 1 minute to improve the contrast. Inductively-coupled-plasma (ICP) reactive-ion etching (RIE) was then applied to etch through the 250 nm silicon layer. The chamber pressure was 2 mTorr, and the gases were Cl2 and Ar with flow rates of 15 sccm and 5 sccm, respectively. The HSQ mask was kept intact as a top cladding layer during device characterization, as HSQ has a refractive index of ~1.4 and a very low absorption loss at the 1.55 μm band [14]. According to our measurements in this paper, the HSQ does not appear to affect the optical performance of high-index-contrast silicon waveguides. It is known that the propagation loss is sensitive to the width of silicon waveguides, so we fabricated five sets of microring resonators with the same radius of 5 μm but different ring waveguide widths of 400, 450, 500, 550, and 600 nm. The microring waveguides are approximately single-mode (TE) at the ~1.55 μm telecom band for widths up to 600 nm, and other modes have much higher propagation losses in the strongly bent microring waveguides. Figure 2 shows scanning-electron micrographs of one fabricated microring resonator with waveguide width W_ring ~ 500 nm and waveguide cross-sections at two cleaved facets. W_bus is fixed at 500 nm for all fabricated devices in this paper. The gap (g) between the bus waveguide and the ring is ~300 nm. The highly magnified (×100K) image of the ring waveguide shows a very smooth line edge. The line-edge roughness is estimated to be ≤5 nm, which is mainly limited by the combined effect of the digitization error and the finite beam spot size in EBL. Additionally, the waveguide width may have very slight variations due to beam deflection errors, which are up to 10 nm over the entire field of 0.5 mm × 0.5 mm according to machine calibrations. As the microring's footprint (e.g., 10 μm × 10 μm) is very small compared to the whole writing field, the effect of beam deflection errors is expected to be small. The sidewall smoothness and the line-edge smoothness are confirmed by the waveguide cross-section images in Fig. 2. A slight over-etch into the buried oxide can be observed. One very important issue is the accuracy of extracting such high intrinsic quality factors (>200,000) and such low propagation losses (<4 dB/cm). The accuracy of κ_p² is very sensitive to small errors in measuring high through-port extinctions of 20 dB or more.
In Fig. 3(b), the sharp resonance notch in the through-port has a 3 dB bandwidth of only several picometers, which is close to our tunable laser wavelength resolution (1 pm). For a strip waveguide with a cross-section of 500 nm × 250 nm, in addition to the lowest TE mode, we also observed the lowest TM mode. This TM mode has a higher propagation loss, but it does not resonate at the TE resonance wavelength and thus remains in the waveguide. Therefore it may reduce the measured through-port extinction of the lowest TE mode, leading to a larger measured γ. According to Q_i = λ₀/(δλ_d×√γ), a larger measured γ lowers the extracted value, so the actual intrinsic quality factor could be larger. In order to verify the achieved low propagation losses and high intrinsic quality factors, weakly coupled microring resonators were also fabricated and tested. Figure 5(a) shows scanning-electron micrographs of one fabricated microring resonator with weak waveguide coupling. The ring width W_ring is ~600 nm, and the bus waveguide width W_bus is ~500 nm. The coupling gap is increased to ~450 nm. Figure 5(b) is a zoom-in view of the responses at wavelengths ~1.55 μm. For the resonance at 1548.6 nm, we have γ = 0.053±0.005 (~13 dB extinction) and δλ_d = 0.025±0.001 nm, corresponding to a total quality factor Q_t ~ 62,000. Analysis of propagation loss The extracted waveguide coupling coefficient κ² is 0.0038±0.0004, and the extracted power loss coefficient κ_p² is 0.0022±0.0002. The propagation loss is 3.0±0.3 dB/cm, and the corresponding intrinsic quality factor Q_i is 270,000±27,000. This loss number verifies that we have indeed achieved low propagation losses of 3-4 dB/cm and high intrinsic quality factors of 200,000-300,000 at telecom wavelengths in compact microring resonators with a radius of 5 μm. We believe these loss numbers are among the lowest achieved without any post-fabrication trimming in silicon microring resonators or waveguides. To understand the effect of bending on the propagation loss, we also fabricated and tested microring resonators with two other radii, 10 μm and 2.5 μm. Figures 6(a) and 6(b) show the measured through-port and drop-port responses of one fabricated resonator with a 10 μm bending radius (W_ring = 600 nm). For the resonance at ~1.53 μm, FSR = 7.7±0.05 nm, γ = 0.021±0.002 (~17 dB extinction), and δλ_d = 0.022±0.001 nm (Q_t ~ 70,000). Consequently, κ_p² = 0.0022±0.0002, and the propagation loss is 1.8±0.2 dB/cm or 0.011±0.001 dB/round-trip (Q_i = 422,000±40,000). For the resonance at ~1.56 μm, the propagation loss is 2.8±0.3 dB/cm or 0.017±0.002 dB/round-trip (Q_i = 320,000±30,000). Compared to the microring resonator with R = 5 μm, the resonator with R = 10 μm shows clearly lower propagation losses across the C band due to the smaller bending curvature. Figures 7(a) and 7(b) show the extracted propagation losses and intrinsic quality factors, respectively, as functions of the ring width (400, 500, and 600 nm) and the wavelength over the C band. Compared to the microring with a 5 μm bending radius, for the 10 μm bending radius the propagation loss is significantly lower for W_ring = 400 nm, and it is also less wavelength dependent.
In Fig. 7, for W_ring = 400 nm, we plot the propagation loss and intrinsic quality factor for two microrings fabricated on the same chip but at different locations; the differences between them are attributed to fabrication-induced variations. For ring widths of W_ring = 500 or 600 nm, the propagation loss is very low (~2-4 dB/cm) over the C band. These observations indicate that the bending loss is clearly smaller for the 10 μm bending radius than for the 5 μm bending radius. On the other hand, for a 2.5 μm bending radius, the propagation loss increases dramatically, by an order of magnitude or more for small ring widths; this high loss is mainly attributed to the bending loss in small microrings with R = 2.5 μm (only around four times the guided wavelength in silicon). The lower bound for the propagation loss and the upper bound for the intrinsic quality factor can be understood mathematically here. For a very small κ_p², the propagation loss −10×log₁₀(1−κ_p²)/(2πR) ≈ 4.34×κ_p²/(2πR), which is roughly constant if the loss is dominated by the linear propagation loss (so that κ_p² scales with the perimeter), and the intrinsic quality factor also stays approximately the same according to Q_i = 2πλ₀/(FSR×κ_p²) = (4π²n_g/λ₀)×(R/κ_p²). Slot-waveguide microring resonator Figure 8(a) shows scanning-electron micrographs of one fabricated slot-waveguide microring resonator and the simulated slot-mode (major E-field) amplitude profile. The radius of the microring is 10 μm. The light is coupled into the slot-waveguide microring resonator with a regular silicon waveguide. The slot width is ~90 nm, and each slot arm is ~250 nm wide. The mode is simulated with RSoft BPM, and the power confinement factor in the slot area is around 80±10%. Figure 8(b) shows the experimental add-drop response. The FSR is 10.1±0.1 nm at ~1.55 μm, and the total quality factor Q_t is ~14,100. The extracted propagation loss is 1.3±0.2 dB/mm (Q_i = 52,000±3,000). In addition, we also fabricated and tested another slot-waveguide microring resonator with a radius of 5 μm; its extracted propagation loss (12±1 dB/mm at 1.55 μm) is an order of magnitude higher than that of the slot-waveguide microring with a radius of 10 μm, and nearly two orders of magnitude higher than that of the regular-waveguide microring with the same radius of 5 μm. This large propagation loss indicates that the sidewall-roughness scattering loss is very large in slot-waveguides with a small bending radius such as 5 μm, since a major portion (around 80%) of the optical power resides inside the slot, which is only ~90 nm wide.
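As a quick consistency check (ours, not the paper's), the relation Q_i = (4π²n_g/λ₀)(R/κ_p²) can be combined with loss ≈ 4.34κ_p²/(2πR) to convert directly between propagation loss and intrinsic quality factor; the radius cancels, and n_g can be estimated from the measured FSR via FSR = λ₀²/(n_g×2πR). A minimal sketch under these assumptions:

import math

def group_index(lam0_um, fsr_um, radius_um):
    """Group index from the measured free spectral range: FSR = lambda0^2 / (n_g * 2*pi*R)."""
    return lam0_um**2 / (fsr_um * 2 * math.pi * radius_um)

def q_intrinsic_from_loss(loss_db_per_cm, lam0_um, n_g):
    """Intrinsic Q from propagation loss; combining Q_i = (4*pi^2*n_g/lambda0)*(R/kappa_p^2)
    with loss ~ 4.34*kappa_p^2/(2*pi*R) makes the radius cancel:
    Q_i = 2*pi*n_g*4.34 / (lambda0 * loss)."""
    lam0_cm = lam0_um * 1e-4
    return 2 * math.pi * n_g * 4.34 / (lam0_cm * loss_db_per_cm)

# Slot-waveguide ring (R = 10 um): measured FSR = 10.1 nm at ~1.55 um.
n_g = group_index(1.55, 0.0101, 10.0)          # ~3.8
print(q_intrinsic_from_loss(13.0, 1.55, n_g))  # 1.3 dB/mm = 13 dB/cm -> Q_i ~ 5.1e4,
                                               # close to the quoted 52,000 +/- 3,000.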
Conclusion In summary, without post-fabrication smoothing, we have demonstrated ultra-low propagation losses in compact silicon-on-insulator microring resonators, using optimized lithography and etch processes. The propagation loss was optimized by varying both the ring width and the bending radius. For a waveguide core cross-section of ~600 nm × 250 nm, the loss was found to be consistently 3-4 dB/cm and 2-3 dB/cm over the entire C band for bending radii of 5 μm and 10 μm, respectively. For waveguide core cross-sections below ~500 nm × 250 nm, the propagation losses in 5 μm-radius rings increase appreciably at longer wavelengths. The lowest propagation loss we achieved was 1.8±0.2 dB/cm at 1.53 μm for a 10 μm bending radius, corresponding to an intrinsic quality factor of 422,000±40,000. To the best of our knowledge, this loss of 1.8±0.2 dB/cm is the lowest ever published for a rectangular submicron silicon waveguide without post-fabrication trimming, and the corresponding intrinsic quality factor of 422,000±40,000 is the highest reported for silicon microrings of similar bending radii. Slot-waveguide microring resonators were also fabricated, and a relatively low propagation loss of 1.3±0.2 dB/mm (an intrinsic quality factor of 52,000±3,000) was achieved at 1.55 μm in a slot-waveguide with a bending radius of 10 μm and around 80% of the optical power confined in the slot. Fig. 2. Scanning electron micrographs of a fabricated microring resonator and waveguide cross-sections at two cleaved facets. Figure 3 shows the measured responses (power transmission spectra) of a representative microring resonator as illustrated in Fig. 2. In Fig. 3(a), we show the drop-port response over the C band, and a very high filtering contrast ≥30 dB is demonstrated. The average FSR is 16.0±0.1 nm. In Fig. 3(b) we use much finer wavelength steps to scan a particular resonance in order to accurately measure the through-port extinction and the drop bandwidth. The red line in Fig. 3(b) is the measured through-port response, showing a high extinction of 24±0.5 dB (γ = 0.004±0.0005). The blue line represents the measured drop-port response, with a -3 dB bandwidth of δλ_d = 0.11±0.01 nm and an ultra-low drop loss (≤1 dB). These values lead to a total quality factor Q_t of 14,000±1,100 at ~1524.6 nm. The extracted power loss coefficient κ_p² is 0.0027±0.0004 (Q_i = 220,000±30,000), and the corresponding propagation loss is ~3.7 dB/cm (computed from κ_p² and the perimeter 2πR). Figures 4(a) and 4(b) illustrate the extracted propagation losses and intrinsic quality factors, respectively, as functions of the ring width and the wavelength over the C band. For ring widths W_ring ≤ ~500 nm, the propagation loss increases significantly as the wavelength increases across the C band. This is likely because the bending dominates the loss, and the bending loss increases significantly in bends with smaller waveguide widths (W_ring ≤ ~500 nm) due to the lower optical confinement at longer wavelengths. For ring widths W_ring ≥ ~550 nm, the propagation losses are very low (< ~5 dB/cm) and do not change significantly over the C band. The lowest extracted propagation loss we observed is 3.5±0.3 dB/cm (intrinsic quality factor Q_i = 240,000±24,000) for W_ring = 600 nm at a wavelength of ~1.55 μm. Fig. 5. (a) Scanning-electron micrographs of one fabricated weakly coupled microring resonator. (b) Zoom-in view of through-port and drop-port responses scanned with a 1 pm wavelength step.
Fig. 6. Measured through-port and drop-port responses of a microring resonator with R = 10 μm; (b) is a zoom-in view of (a). Fig. 7. Extracted propagation losses (a) and intrinsic quality factors (b) of microring resonators with different ring widths (W_ring = 400, 500, and 600 nm) but the same core height of 250 nm. The bending radius is 10 μm.
4,776.8
2007-10-29T00:00:00.000
[ "Physics" ]
The Development of a State-Aware Equipment Maintenance Application Using Sensor Data Ranking Techniques Billions of pieces of electric equipment are connected to Internet of Things (IoT)-based sensor networks, where they continuously generate a large volume of asset status information. So, there is a need for state-aware information retrieval technology that can automatically recognize the status of each electric asset and provide the user with timely information suitable for the asset management of electric equipment. In this paper, we investigate state-aware information modeling that specializes in the asset management of electric equipment. With this state-aware information model, we invent a new asset state-aware ranking technique for effective information retrieval for electric power and energy systems. We also derive an information retrieval scenario for IoT in power and energy systems and develop a mobile application prototype. A comparative performance evaluation proves that the proposed technique outperforms the existing information retrieval technique. Introduction Ubiquitous sensing enabled by IoT technologies cuts across many areas of modern-day living. Advances in wireless sensor networks enhance the effectiveness of IoT applications and enrich human life. Among many useful IoT technologies, Paola et al. proposed a modified Stable Election Protocol (SEP), named Prolong-SEP (P-SEP), to prolong the stable period of Fog-supported sensor networks by maintaining balanced energy consumption [1]. The development of such wireless sensor network technology generates a large amount of sensor data and enables the development of useful IoT applications through intelligent information technology. With the development of ICBM (Internet of Things, Cloud, Big data, Mobile/machine intelligence) technology, which led to the 4th industrial age, Information Retrieval (IR) technology has recently focused on providing users with the information they want on time [2]. With the IoT (Internet of Things) in power and energy systems, where a large amount of information is generated at high speed, state-aware computing is considered a key technology for intelligent information retrieval. Therefore, for the IoT in power and energy systems, unlike the existing computing environment, it is essential to develop state-aware information services that automatically recognize the status of an object and provide appropriate information according to that status. In the power industry, the Internet of Things (IoT) is at the forefront of this transformation, imparting capabilities such as real-time monitoring, situational awareness and intelligence, control, and cybersecurity to transform the existing electric power energy systems into intelligent, cyber-enabled electric power energy systems. Digitizing the electric power ecosystem using IoT improves asset visibility, optimizes the management of distributed generation, eliminates energy wastage, and creates savings. IoT has a significant impact on electric power energy systems and offers several opportunities for growth and development [3]. Furthermore, in the power industry, the number of electric assets is increasing rapidly, and a tremendous amount of asset information is being generated intensively with the proliferation of electric power IoT [4].
For effective asset management of electric equipment in the IoT of electric power energy systems, it is very important to find only the meaningful information within large amounts of asset information and provide it to decision-makers. In recent years, the importance of asset management that considers asset health has been growing beyond the monitoring and simple diagnosis of assets, so an asset information retrieval application can be effectively used as a basic tool for evaluating asset health [5]. The health index used to make decisions about the maintenance or replacement of equipment is defined based on the status information of the equipment; moreover, the accuracy and reliability of the health index depend on which status information of an asset is used [6]. However, in the IoT of electric power energy systems, tremendous amounts of asset information are produced at high speed, so it is very difficult to find suitable asset status information and use it for effective asset management purposes, such as defining the health index. To overcome these difficulties and support decision-making for effective asset management, asset information processing technology that considers the state of each asset is essential. If the status information of an asset can be provided immediately when it is needed, the time and cost of processing large amounts of IoT data for asset management decisions can be significantly reduced. As a result, an asset state-awareness technique is of great help for effective asset management, such as defining a more accurate health index for electric equipment. However, due to the enormous amounts of sensor data and the steady increase in equipment assets, studies on effective techniques for providing equipment status information for asset management are still insufficient. In addition, despite the remarkable development of ICBM technology, there are many difficulties in utilizing sensor data for asset management [7]. In this paper, we propose a technique for asset state-aware information retrieval in the IoT of electric power energy systems that has never been attempted before. The purpose of the proposed technique is to support decision-making for asset management more efficiently. To enable the retrieval of asset information according to the current situation, we define an asset state-aware information model. With this state-aware information model, we invent a new asset state-aware ranking technique for effective equipment asset information services in the IoT of electric power energy systems. The research contributions of this paper are as follows. First, the proposed information retrieval technique enables the development of decision-support applications for more effective asset management because it can provide asset information that meets the potential needs of decision-makers. Second, we developed an application prototype applying the proposed state-aware information technique and demonstrated it with a realistic use scenario; the prototype development and demonstration showed the practical potential of the proposed technique. Lastly, a comparative evaluation proved the superiority of the proposed technique and showed that it helps manage equipment assets more effectively in the IoT environment. This paper is organized as follows. Section 2 discusses context-aware computing and related studies.
We also introduce previous studies on information retrieval in IoT environments and discuss their limitations. Section 3 discusses the necessity of state-aware information retrieval in the IoT of electric power energy systems and proposes a state-aware information retrieval technique for equipment based on the sensor data of each electric asset. In Section 4, we implement a prototype of the asset state-aware information retrieval mobile application specialized for equipment asset management, showing the practical potential of the proposed technique. In Section 5, we compare the existing technique with the proposed technique to show the superiority of the latter. Finally, Section 6 presents the conclusions and future research. Context-Aware Computing Context-aware computing technology, which has been actively researched since the 2000s when the Internet of Things began to emerge, is an information technology that expresses real-world features by combining real-world sensing technology, networking technology, and multimedia technology [8][9][10][11]. Context-aware computing technologies have been applied to the ubiquitous and mobile computing paradigms, playing a key role in the success of these technologies [12,13]. Likewise, the convergence of context-aware computing and sensor technologies will enable the development of various systems: not only is a large amount of data generated over time by the many sensors and terminal devices distributed across the power system, but the types of data are as diverse as the sensors themselves [14]. Therefore, there is a need for an information retrieval technique specialized for various IoT applications. Alam et al. implemented a context-aware automated cognitive health assessment system, combining the sensing powers of wearable physiological and physical sensors in conjunction with ambient sensors; based on this system, they developed an automatic cognitive health assessment application in a natural older-adult living environment [15]. Eleni et al. focused on the issue of facilitating the management, processing, and exchange of the numerous and diverse data points generated in multiple precision-farming environments by introducing a framework with a cloud-based context-aware middleware solution as part of a responsive, adaptive, and service-oriented IoT-integrated system [16]. In [17], an architecture for building and running context-aware smart classrooms was proposed, consisting of three parts: a prototype of a context-aware smart classroom, a model for technology integration, and supporting measures for the operation of smart classrooms. Schilt and Theimer, who most successfully defined the concept of context awareness, defined context as the surrounding environment or the situation in which an entity exists [18]. In other words, a context may be understood as information characterized by the state of an entity existing in the real world, where an entity means a human, a thing, or an interaction between them. If information about this interaction can characterize the situation of an object, then that information can be called a context [19]. Therefore, context-aware computing may be defined as a computing system that uses the context of an entity in the process of providing appropriate information or services related to a user's work. Strang et al. classify general context information as follows [20]:
• The context of a user.
• The context of the physical environment.
• The context of a computing system.
• The history of interaction between entities.
• Unclassified situations.
When the classification defined in [20] is applied to the IoT in electric power energy systems, the following context information can be defined:
• Identification information of each electric asset (identifier, installation date, manufacturer, serial number, etc.).
• The sensed state data of each electric asset (temperature, slope, voltage, current, degradation signal, etc.).
• The spatial information (location, direction, etc.) of each electric asset.
The development of asset information services in the IoT of electric power energy systems through the convergence of context-aware computing technology and sensor technology will make decision-making for the asset management of electric assets more effective. Information Retrieval for IoT Information retrieval encompasses information processing techniques for the presentation, storage, and organization of information, and for searching and accessing specific information, so as to effectively provide the information desired by a user in a short time [21]. Such information retrieval technology supports the development of decision-making support services for various types of asset management by finding information that meets the user's needs within the large amounts of information generated by sensors or terminal devices in the IoT environment [22]. However, the intense increase in data due to the expansion of IoT infrastructure in many areas of the world has exposed many limitations in finding meaningful information in large amounts of data and making appropriate decisions for asset management. Therefore, the necessity of information retrieval that provides easy access to the required information has further increased, and active research on information retrieval technology in the IoT environment is being conducted. Representative studies include [23,24]. Reference [23] proposes an IoT information model specialized for smart buildings, and [24] proposes an IoT standard information retrieval model and indexing technique. The information model proposed in [23] did not consider the characteristics of the unstructured data generated in the IoT environment at all, and did not address the performance limitations of processing a large amount of sensor data for information retrieval. In turn, [24] proposed an indexing technique specialized for IoT data to improve the efficiency of information retrieval in the IoT environment, but it too cannot consider the characteristics of specific domains, such as the IoT of electric power energy systems, where large amounts of unstructured data are generated. For the IoT of electric power energy systems, the sensor network environment must therefore process and store unstructured data with different characteristics in a form suited to information retrieval, so that users can easily find the desired information about each thing. State-Aware Equipment Maintenance Application Using Sensor Data This chapter proposes a new asset state-aware information retrieval technique for the IoT of electric power energy systems. The proposed technique is composed of context-aware computing, sensor techniques, and the information retrieval method. Figure 1 shows the conceptual process of our asset state-aware information retrieval in the IoT of electric power energy systems.
As depicted in Figure 1, the asset information tagged with sensor data representing the current state of an equipment asset is stored in a state-aware conceptual-layer database; the database stores meaningful data linking sensor data and equipment information. Looking at the process of providing the context-aware information service shown in Figure 1: first, when a user's request for information occurs, the user's situation data are input to the search engine. Then, among the numerous sensor data attached to the equipment, the sensor data that match the current situation of the user are passed as input to the information retrieval engine. The state-aware information retrieval engine finally provides information that matches both the user's state and the current state of the equipment, based on the state-aware information retrieval method proposed in this paper. By considering the asset state, the proposed information retrieval technique enables the development of decision-support applications for more effective asset management, because it can provide asset information that meets the potential needs of decision-makers. The proposed information retrieval technique consists of asset state information modeling that considers the characteristics of the IoT of electric power energy systems, an automatic asset sensor data tagging technique, and an asset information ranking technique. The proposed sensor data tagging technique automatically tags sensor data describing the current state of each asset to the asset's information, in addition to providing the information or content of the asset. In other words, by applying the sensor data tagging technique, real-world information and digital information can be effectively connected to generate meaningful information that simultaneously represents the current state of the asset and the information related to that state. To generate the information tagged with an asset state, we define an asset state-aware information model. Equipment's State-Aware Information Model For the equipment asset's state-aware information modeling, we organized the equipment asset, the information related to the asset, and the asset sensor data tags into a folksonomy with a hyperlink structure, as represented in Figure 2. Folksonomy is a web-based information technology that allows users to upload their resources and to label them with arbitrary words, the so-called tags [25]. Currently, almost all web-based applications provide folksonomy-based information services; examples of successful applications are Facebook (www.facebook.com), Instagram (www.instagram.com), and YouTube (www.youtube.com). The proposed folksonomy-based information model creates a contextual association between the asset and the information through sensor data annotations. As a result, the asset state-aware information model proposed in this paper makes it easy to generate semantic information that simultaneously represents the current state of the equipment assets and the information related to the assets' state. Information related to a specific asset can also easily be classified by the considered asset state, and this classification of information suitable for an asset state can effectively support decision-makers who manage equipment assets in the IoT of electric power energy systems.
The asset state-aware information model in Figure 2 is defined as follows (Definition 1): the asset state-aware information model is represented as a tuple F := (A, C, I, R), where
• A, C, and I are finite sets: A is the set of equipment assets, C is the set of sensor data, and I is the set of information about the equipment assets.
• R is a ternary relation between A, C, and I.
A folksonomy of an equipment asset generated based on the proposed information model is connected to the other folksonomies, and this set of folksonomies has the characteristics of an undirected hypergraph. Through these characteristics, we can define the asset state-aware tagging technique and the equipment asset state-aware information ranking technique proposed in this paper.
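As an illustration, the tuple F := (A, C, I, R) can be captured directly as a tripartite tagging structure. The following is our own minimal sketch, not code from the paper; the class and field names are hypothetical.

from dataclasses import dataclass, field

@dataclass(frozen=True)
class Asset:
    asset_id: str            # identification info: identifier, manufacturer, ...
    kind: str                # e.g., "transformer", "switch"

@dataclass(frozen=True)
class SensorTag:
    name: str                # e.g., "temperature"
    value: float             # sensed state of the asset

@dataclass
class Folksonomy:
    """F := (A, C, I, R): assets, sensor-data tags, information items,
    and a ternary relation R between them."""
    relations: set[tuple[str, str, str]] = field(default_factory=set)

    def tag(self, asset: Asset, tag: SensorTag, info_id: str) -> None:
        # One element of R: (asset, sensor tag, information item).
        self.relations.add((asset.asset_id, tag.name, info_id))

    def info_for_state(self, asset: Asset, tag_name: str) -> set[str]:
        # All information items linked to this asset under the given state tag.
        return {i for (a, c, i) in self.relations
                if a == asset.asset_id and c == tag_name}

f = Folksonomy()
tx = Asset("TX-001", "transformer")
f.tag(tx, SensorTag("temperature", 95.0), "maintenance-report-17")
print(f.info_for_state(tx, "temperature"))  # {'maintenance-report-17'}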
Equipment's State-Aware Tagging Algorithm The asset state-aware information retrieval technique proposed in this paper includes an automatic asset sensor data tagging technique that considers the asset state, based on the information model defined in the previous section. Because power distribution equipment assets are often installed and operated outdoors, these assets are easily broken, or power failures occur, due to external environmental factors such as bad weather conditions. Therefore, if the asset state and the asset's maintenance history information are semantically connected, a variety of information services for asset management of the equipment in the IoT of electric power energy systems can be developed efficiently. Considering these characteristics of assets in the IoT of electric power energy systems, the asset state-aware tagging technique decides whether or not to tag the sensor data to the asset information according to the importance of the situation, i.e., its impact on the life of the asset. For example, if a particular equipment asset A constantly generates abnormal data when it is in situation C, or if asset A has a lot of asset information related to situation C, it is assumed that situation C has a significant impact on the life of asset A. As a result, the proposed information retrieval technique uses the asset state-aware tagging technique to retrieve the asset information corresponding to both the state of the equipment asset and that of the user or decision-maker. Algorithm 1: asset state-aware tagging.
1. FOR EACH (information set that relates to equipment asset A)
2. BEGIN
3. FOR EACH (information D_j tagged with sensor data C_j)
4. CIW_ij ← (participation frequency of the sensor data C_j in the equipment information D_j) divided by (participation frequency of all sensor data tags)
5. IF (CIW_ij > Threshold) THEN tag the sensor data C_j to information D_j
6. END
Algorithm 1 performs asset state-aware tagging according to the situational importance of the asset state. First, Algorithm 1 calculates the situational importance CIW_ij, which quantifies the influence of the sensor data C_j on the equipment asset A; the higher the value of CIW_ij, the more the sensor data affects the life of the asset. Next, Algorithm 1 extracts the asset information D_j whose CIW_ij is above a certain threshold. Finally, the sensor data C_j, which encodes the situational information, and the value of CIW_ij are tagged to the asset information D_j.
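A minimal sketch of Algorithm 1 follows; this is our own reading of it, and the threshold value and the per-item tag-frequency counters are assumptions, as the paper does not fix them.

from collections import Counter

def state_aware_tagging(info_tag_counts: dict[str, Counter],
                        threshold: float = 0.3) -> dict[str, dict[str, float]]:
    """info_tag_counts maps each information item D_j of an asset to a Counter
    of how often each sensor-data tag C_j participates in it. Returns, per
    information item, the tags whose situational importance CIW_ij exceeds
    the threshold, together with their CIW values."""
    tagged = {}
    for info_id, counts in info_tag_counts.items():
        total = sum(counts.values())          # participation frequency of all tags
        if total == 0:
            continue
        ciw = {tag: n / total for tag, n in counts.items()}  # CIW_ij
        kept = {tag: w for tag, w in ciw.items() if w > threshold}
        if kept:
            tagged[info_id] = kept            # tag C_j (and CIW_ij) to D_j
    return tagged

reports = {"maintenance-report-17": Counter(temperature=8, humidity=2),
           "inspection-note-03":    Counter(vibration=1, temperature=1)}
print(state_aware_tagging(reports))
# {'maintenance-report-17': {'temperature': 0.8},
#  'inspection-note-03': {'vibration': 0.5, 'temperature': 0.5}}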
Equipment's State-Based Information Ranking In the ranking step of the proposed information retrieval technique, a recommendation weight is given to the asset information, in consideration of both the user's state and the asset's state, to form the asset information finally provided to the user. The recommendation weight expresses the degree of matching between the current state of the equipment and the information: the higher the weight of an information item, the better it conforms to the current state of the equipment. The state-aware information ranking method can therefore effectively provide asset information that matches both the user's state and the current state of the equipment asset. To determine the recommendation weight of the asset information, two factors are defined in Definition 2. The first, idf_c, determines the importance of the sensor data C_a over the entire set of asset information; for idf_c, N is the number of all asset information items and n_a is the number of information items with which the sensor data C_a is tagged. The second, f_ac^neg, quantifies the negative impact of the current state C_current on the basis of the predefined negative states of the equipment asset A. In Equation (3), w_dc_a is the weight indicating the importance of the sensor data C_a for the asset information D_a; within it, CIW_dc_a, predefined in Algorithm 1, expresses the importance of the sensor data C_a tagged to the asset information D_a, and w_dc_neg_a is the weight for the validity of the asset information D_a in the negative state C_a, i.e., the importance of D_a with regard to the negative impact that state C_a has on the life of equipment asset A. Based on Equations (2) and (3), the state suitability CRC(C_a, D_a) of the asset contents D_a for the sensor data C_a is calculated through Equation (4). Next, Equation (5) calculates CR_da, the reliability of the asset information D_a for the asset A. In Equation (5), wf_da is the number of words in the asset information D_a, and the numerator is the number of times the sensor data C_a is tagged in D_a; that is, the reliability of D_a with respect to the sensor data C_a is calculated according to how much of the content of D_a relates to C_a. Finally, the recommendation weight Rank_CD_a of the asset information is calculated by summing the suitability CRC(C_a, D_a) and the reliability CR_da, as shown in Equation (6). As a result, the rank value of each information item quantifies the authority and trustworthiness of its contents, so that asset information with a high rank value is more in line with the user's information request.
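Equations (2)-(6) appear as figures in the original and their exact forms are not reproduced above, so the following is only a hedged sketch of the ranking step under explicit assumptions: idf_c is taken as the standard log(N/n_a), the suitability CRC is assumed to be the product of the IDF term, the negative-state factor, and the tag weight, and the final rank is the stated sum of suitability and reliability.

import math

def rank_information(N, n_a, f_neg, ciw, w_neg, n_tags_in_doc, n_words_in_doc):
    """Hedged sketch of the state-based ranking; the combinations below are
    our assumptions, not the paper's exact equations.
    N              : number of all asset information items
    n_a            : items tagged with sensor data C_a
    f_neg          : negative impact of the current state (0..1)
    ciw            : CIW of C_a for information D_a (from Algorithm 1)
    w_neg          : validity weight of D_a under the negative state
    n_tags_in_doc  : times C_a is tagged in D_a
    n_words_in_doc : word count of D_a
    """
    idf_c = math.log(N / n_a)               # Eq. (2), assumed standard IDF
    w_dc = ciw * w_neg                      # Eq. (3), assumed product form
    crc = idf_c * f_neg * w_dc              # Eq. (4): state suitability CRC(C_a, D_a)
    cr_da = n_tags_in_doc / n_words_in_doc  # Eq. (5): reliability of D_a for C_a
    return crc + cr_da                      # Eq. (6): recommendation weight Rank

# A transformer report heavily about over-temperature, under a hot-weather state:
print(rank_information(N=1000, n_a=50, f_neg=0.9, ciw=0.8, w_neg=0.7,
                       n_tags_in_doc=12, n_words_in_doc=400))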
The Development of a State-Aware Equipment Maintenance Application In this chapter, we implement a prototype of a state-aware equipment maintenance mobile application for the IoT in electric power energy systems by applying our proposed technique. The application provides useful information that matches the current situation of the equipment around the user's current location, and helps users manage equipment assets more effectively by giving them timely information on equipment exposed to dangerous situations. For the implementation of the application prototype, we consider the following equipment assets: the IoT in electric power energy systems enables the construction of innovative asset management operation models, such as failure prediction, by integrating smart sensor technology into the power grid where equipment assets like transformers, switches, or wires are operated. Considering the characteristics of the IoT of electric power energy systems, we categorized the various states that have a negative effect on an equipment asset, as defined in Table 1, in order to provide timely information suitable for the states that have a relatively high impact on the life of the asset. Table 2 shows an example of the asset states that can be obtained through smart sensors in the IoT of electric power energy systems, and we considered these states for the development of the prototype application. Table 2. An example of acquired sensor data for IoT in electric power energy systems. The states considered in the prototype can be divided into the equipment asset state and the user's state, as shown in Table 2. The developed prototype utilizes the large amounts of sensor data generated from sensors attached to equipment assets installed on a pole, such as transformers and switches, as the equipment asset's state. In addition, the user's profile data, such as gender, age, and work role, as well as the user's current location, are used as the user's state in the prototype; these user states are acquired from the IP address and GPS of the user's smartphone. Providing large amounts of sensor data, which is one of the limitations of existing IoT information services, makes it difficult for the user to access the desired information. To solve this, the implemented prototype restricts the amount of information provided in consideration of the spatial state of the user, so that the desired information can be provided more effectively. Figure 3 is part of the spatial conceptual hierarchy model of assets in the IoT of electric power energy systems, based on which the amount of provided information depends on the spatial state of each user. Limiting the amount of provided information according to the spatial state means providing the information at semantically hierarchical levels that consider the state of the user and the installed location of the equipment assets. In other words, unlike general information retrieval services based on user-input keywords, which return large amounts of unnecessary information, users can obtain the needed information more effectively because the retrieved information is shown at a limited level that considers the current spatial state, as illustrated in the sketch below.
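A minimal sketch of this kind of hierarchy-limited filtering follows; it is our own illustration, and the level names are hypothetical, loosely following the S/S → D/L → pole → asset hierarchy of Figure 3.

# Hierarchical location paths: substation -> distribution line -> pole -> asset.
ASSET_LOCATIONS = {
    "TX-001": ("SS-North", "DL-7", "Pole-112"),
    "SW-042": ("SS-North", "DL-7", "Pole-118"),
    "TX-930": ("SS-South", "DL-2", "Pole-009"),
}

def assets_visible_at(user_path: tuple[str, ...]) -> list[str]:
    """Return only the assets whose location path starts with the user's
    current spatial state, so deeper user positions see fewer, more
    relevant assets (the 'limited level' of information provision)."""
    return [asset for asset, path in ASSET_LOCATIONS.items()
            if path[:len(user_path)] == user_path]

print(assets_visible_at(("SS-North",)))                     # whole substation: ['TX-001', 'SW-042']
print(assets_visible_at(("SS-North", "DL-7", "Pole-112")))  # at one pole: ['TX-001']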
Implementation Environment and Testing Scenario The implemented prototype consists of a server module, a client module, and a state generator; Table 3 shows the implementation environment of each module. The state generator generates the location, weather, and asset state information, like the information obtained from the sensors in a real IoT of electric power energy systems environment, for testing the developed prototype. The operation of the state-aware equipment maintenance application implemented in this paper is shown in Figure 4. The asset state and the user's state generated by the state generator are transferred to the server module. The server module considers the received sensor data to meet the user's information need through the asset state-aware information retrieval technique proposed in this paper, and provides the retrieved asset information through the user's smartphone. The service test of our prototype is based on the following scenario. Mr. Lee owns a GPS-equipped smartphone with the state-aware equipment maintenance application and has the work role of "equipment facilitator". The season state is August, and the weather state is hot and humid. In addition, the state corresponding to the location of each equipment asset was generated based on the spatial information hierarchy model shown in Figure 3. Figure 5 is an abstraction map of the hierarchical inclusion between the locations of the S/S (substation), D/L (distribution line), poles, and equipment assets; the moving route of Mr. Lee is also shown in Figure 5. The prototype provides Mr. Lee with asset information that satisfies both his current state and the states of the equipment assets installed along his moving route. In other words, based on the defined scenario, the prototype using the asset state-aware information retrieval technique proposed in this paper provides the user with useful information enabling more effective asset management according to the asset's state.
Implementation Results From the results in Table 4, it can be seen that as the frequency of sensor data tagging increases, the value of the state similarity becomes larger. In other words, asset information tagged with many instances of specific sensor data can be said to have high reliability. Through this, the automatic sensor data tagging method proposed in this paper has an effective advantage in classifying highly reliable asset information. Figures 6 and 7 show the operation results of the mobile application prototype implemented by applying the state-aware information retrieval method proposed in this paper. In general, an information retrieval method returns large amounts of information, including a keyword directly input by a user, as a search result. As a result, the existing information retrieval method makes it difficult for users to find the desired information and has limitations in providing reliable information. However, the state-aware information retrieval method proposed in this paper can search for the asset information that matches the user's state and the asset state without the user's direct keyword input. As shown in Figures 6 and 7, the information on the equipment in the most negative state, among the many pieces of equipment installed close to the user's current location, is given first. Therefore, the developed application prototype can increase the effectiveness of facility asset management by providing the information the user needs first, and can increase the user's satisfaction with the provided information. Through the development of the application prototype, we see that the proposed asset state-aware information retrieval technique can be applied to provide optimal equipment asset information that meets the potential needs of users.
As a result, the developed mobile application based on the state-aware information retrieval technique proposed in this paper helps users manage equipment assets more effectively by providing them with timely information on equipment exposed to dangerous situations. Evaluation In this section, we perform a comparative performance evaluation between the existing retrieval technique, which cannot consider the sensor data, and the proposed technique. Through the comparative evaluation, we show that the performance of the asset state-aware information retrieval technique proposed in this paper is superior to that of the existing technique. Experimental Design In constructing the experimental data, 100 documents having a certain number of keyword inclusions and tagged with sensor data were randomly generated. Fifty of the documents were classified as "Related" and the remaining 50 as "Not Related". To calculate the rank values of the documents with consideration of the relevance of the information, the "Related" documents were further labeled: 10 as "Perfect", 10 as "Excellent", 10 as "Good", and the remaining 20 as "Fair". Furthermore, to evaluate the performance of the existing search technique, a high rank value was assigned to documents with many inclusions of a specific keyword. The precision measure was used to evaluate the efficacy of the information retrieval systems. Precision is the relationship between the number of retrieved relevant documents R with respect to a query statement Q and the number of documents D that have been retrieved, i.e., R/D [26]. We applied precision as the method to verify the superiority of the proposed technique. In the proposed technique, documents with a high value of state similarity have high rank values. Based on the experimental data constructed according to the experimental design, the retrieval accuracy of the existing method and the proposed method was calculated through Equation (7), which computes the search accuracy in consideration of both the rank and the relevance, to the user's information request, of the retrieved documents constituting the Top(N). Precision Performance Result To compare and evaluate the search performance of the existing and proposed methods, Equation (7) was used to calculate the retrieval precision of the two methods. Figure 8 shows the performance results for the retrieval precision of the existing and proposed methods. As shown in Figure 8, the asset state-aware information retrieval technique outperforms the existing method. The existing method is based on the traditional information retrieval approach; that is, the search relevance of a document is evaluated based on the number of query terms included in the document [27]. In the existing technique, since the relevance and rank of a document are determined only by the number of keywords included in the document, documents that have a large number of keyword inclusions but do not satisfy the information request may be retrieved; as a result, as shown in Figure 8, the retrieval precision distribution of the existing technique is not stable. In addition, the existing method provides information matching simple keywords without considering the user's state and the equipment's state at all, so the search accuracy is lowered and the user's satisfaction with the provided information is inevitably reduced.
In contrast, because the state similarity in the proposed technique has a decisive influence not only on the rank value of a document but also on its relevance to the user's information request, the proposed technique shows a relatively stable distribution of high precision performance. Conclusions With the IoT in electric power energy systems, tremendous amounts of sensor data are produced at high speed, so it is very difficult to find suitable asset status information and use it for effective asset management purposes. To overcome these difficulties, we proposed a technique of asset state-aware information retrieval in the IoT of electric power energy systems that, to our knowledge, has not been attempted before. To enable the retrieval of asset information according to the current states, we defined an equipment asset state-aware information model. With the information model, we devised a new equipment asset state-aware ranking technique for the development of effective asset information services in the IoT of electric power energy systems. Then, to show the feasibility of our proposed technique, we implemented a prototype of the state-aware information retrieval mobile application for equipment in the IoT of electric power energy systems. The demonstration of the scenario-based prototype showed that the proposed technique is useful for developing information services for effective equipment asset management in the IoT of electric power energy systems. Finally, the comparative performance evaluation of the existing information retrieval method and the proposed technique showed that the proposed technique performs better. In conclusion, the proposed information retrieval technique enables the development of decision-support applications for more effective asset management because it can provide asset information that meets the potential needs of decision-makers.
9,749.6
2020-05-27T00:00:00.000
[ "Computer Science" ]
The loss of transcriptional inhibition by the photoreceptor-cell specific nuclear receptor (NR2E3) is not a necessary cause of enhanced S-cone syndrome. Purpose To investigate the functional consequences for photoreceptor-cell specific nuclear receptor (NR2E3) transcriptional activity of enhanced S-cone syndrome (ESCS) mutations localized in the ligand binding domain (LBD). Methods Point mutations were introduced into the LBD of full-length and Gal4 chimeric NR2E3 receptors, and transcriptional activity was investigated using a transient co-transfection assay on the corresponding luciferase reporters. Expression and DNA binding properties of transfected mutant and wild-type receptors were tested by Western blotting and gel shift assay. Results Our analysis shows that two ESCS missense mutations, R385P and M407K, abolished NR2E3 repressive activity in the context of both full-length and Gal4 chimeric receptors, while the W234S and R311Q mutants retained their repressive activity in both assays. All mutant receptors maintained their stability and DNA binding ability. Conclusions These results show that some NR2E3 mutations localized in the LBD induce ESCS without affecting the inhibitory activity recorded in vitro. This demonstrates the absence of correlation between transcriptional inhibition and the ESCS phenotype, and suggests that NR2E3 might have transcriptional activation properties not yet identified. Studies of mice carrying a mutated Nr2e3 gene (the rd7 mouse) have also revealed a twofold increase in S-cone number, retinal dystrophy at early stages, and slow retinal degeneration [10][11][12]. Expression of NR2E3 in the mouse retina is restricted to rod nuclei, starts after the completion of cone cell birth, and peaks after completion of rod cell differentiation [13,14]. The current hypothesis is that NR2E3 represses S-cone fate as well as participates in rod photoreceptor commitment [5,[13][14][15][16][17]. The intrinsic genetic program appears to be the major determinant of cell-fate commitment in the retina [18]. The competence model of cell-fate determination proposes that a homogeneous pool of multipotent progenitors passes through states of competence in which it can produce a given set of cell types [19]. Transcription factors are among the best characterized intrinsic factors, and NR2E3 may have a role similar to that of its paralog NR2E1 in driving pluripotent cells to a particular fate [20]. NR2E3, as a nuclear receptor, possesses a central DNA binding domain (DBD) and a C-terminal ligand binding domain (LBD) [21]; it was originally described as a transcriptional repressor and binds DNA as a homodimer [14,22]. Physical and functional interactions of NR2E3 with several transcription factors involved in photoreceptor differentiation have been established [14,23]. It has recently been shown that NR2E3 directly interacts with the nuclear receptor NR1D1 and the homeoprotein Crx [14,23]. These interactions lead to enhanced expression of rod-specific genes and reduced expression of cone-specific genes in vitro. NR2E3 also interacts with Nrl, a photoreceptor-specific transcription factor, and modulates its transcriptional activity [15]. The absence of a functional Nrl gene in mouse gives a severe phenotype in which rods are completely lost and replaced by S-cones.
Interestingly, the expression of NR2E3 has been shown to be dependent upon Nrl, suggesting that the increase in S-cones in the Nrl-/- mouse results in part from the absence of expression of NR2E3 [16]. Analysis of gene expression modification in the rd7 mouse retina has been performed using different approaches [14,15,17]. Microarray analyses revealed an up-regulation of numerous cone-specific genes in the rd7 mouse retina, highlighting the repressive function of NR2E3 [14,17], while chromatin immunoprecipitation assays combined with reverse transcriptase polymerase chain reaction (RT-PCR) analysis demonstrated that NR2E3 represses cone-specific genes but activates the expression of rod-specific genes [15]. In addition, the retinal transcriptome of transgenic mice overexpressing NR2E3 confirms the role of NR2E3 as a suppressor of the expression of cone-specific genes [24]. Corbo and Cepko also reported a delay of rhodopsin expression in the rd7 mouse [17]. In the rat, during development, there is a roughly week-long period between the birth of rods and the onset of rhodopsin expression [25]. During this period, NR2E3 would suppress S-cone fate by reducing S-cone gene expression, as well as promote rod fate by activating rod-specific promoters [14,15,23,24]. In the present paper, we analyzed the transcriptional properties of ESCS mutants of the NR2E3 LBD (residues 113 to 410) fused to a heterologous DBD (Gal4 DBD), to circumvent problems of DNA binding specificity [26], as well as of the full-length protein, to better understand the activity of NR2E3 mutants in a more physiological context [14]. We confirmed that the inhibitory properties of NR2E3 involve helix H12 of the LBD, as observed for other nuclear receptors [14,27]. We report an absence of correlation between the transcriptional inhibitory properties of the NR2E3 LBD and ESCS, implying the existence of some transcriptional activation properties that might be controlled by a yet-to-be-identified ligand [28]. Transient transfection: COS-1 cells were transiently transfected by the calcium phosphate precipitate method [29]: cells were plated at a density of 3.5 × 10^5 cells/ml in 24-well tissue culture plates (500 µl/well) and incubated for 2 h at 37 °C in a humidified 5% CO2 incubator before transfection. Cells were transfected with 500 ng of Gal4- or NR2E3-responsive luciferase reporter construct [14], 10 ng of pRL-TK internal reporter construct (Promega, Charbonnieres, France), and variable amounts of the different expression constructs. One day after transfection, cells were washed with serum-free medium and changed to fresh medium. Two days after transfection, lysates were collected and luciferase activity was measured using the Dual Luciferase Reporter Assay System (Promega). HeLa cells were transfected using Lipofectamine 2000 reagent (Gibco-BRL). Cells were plated in six-well tissue culture plates (2 ml/well) and left until they reached 80% confluence. Before transfection, cells were washed and changed to OPTI-MEM medium (Gibco-BRL). Cells were transfected with 750 ng of Gal4-responsive luciferase reporter construct, 10 ng of pRL-TK internal reporter construct, variable amounts of the different expression constructs, and 2.5 µl of Lipofectamine 2000 according to the manufacturer's instructions. Two days after transfection, lysates were collected and luciferase activity was measured using the Dual Luciferase Reporter Assay System. All transfection assays were performed in triplicate. Each assay group was repeated at least twice.
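All of these readouts are reported as relative luciferase activity, i.e., firefly counts normalized to the pRL-TK Renilla internal control and then expressed relative to a control transfection. A minimal sketch of that normalization arithmetic follows; it is a generic illustration with invented numbers, not the authors' analysis code.

```python
# Generic sketch of dual-luciferase normalization in transient
# co-transfection assays: firefly counts divided by the Renilla (pRL-TK)
# internal control, then expressed relative to the empty-vector control.
# All numbers below are invented for illustration.
from statistics import mean

def relative_activity(firefly, renilla):
    """Per-well transfection-efficiency-corrected luciferase signal."""
    return [f / r for f, r in zip(firefly, renilla)]

# Triplicate wells: empty Gal4 DBD vector vs. Gal4-NR2E3 LBD (hypothetical counts)
control = relative_activity([52000, 49000, 55000], [9800, 9500, 10300])
nr2e3 = relative_activity([16000, 14500, 17200], [10100, 9700, 9900])

print(f"relative luciferase activity (control-normalized): "
      f"{mean(nr2e3) / mean(control):.2f}")
print(f"fold repression: {mean(control) / mean(nr2e3):.1f}x")
```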
Site-directed mutagenesis: Point mutations were introduced into the NR2E3 LBD from the pCMV-Gal4-NR2E3 LBD and pCMX-HA-NR2E3 constructs obtained from Dr. Mime Kobayashi [6]. Mutations were introduced by oligonucleotide-directed mutagenesis using the thermostable Deep Vent DNA polymerase (New England Biolabs Inc., Beverly, MA). Amplified DNA was digested with DpnI (New England Biolabs Inc.) and used to transform XL-10 Gold ultra-competent E. coli cells (Stratagene Europe, Hogehilweg, Netherlands). Mutations were confirmed by sequencing. Nuclear protein extraction: Nuclear extracts were prepared from transiently transfected COS-1 cells. Transfected cells were rinsed with 1X PBS, harvested by centrifugation for 5 min at 800× g at 4 °C, and washed with 5 volumes of hypotonic buffer (10 mM HEPES, pH 7.5, 1X Complete protease inhibitor cocktail [Boehringer Mannheim, Mannheim, Germany], 10 mM KCl, 1.5 mM MgCl2, 0.5 mM DTT). The cells were suspended in 3 volumes of hypotonic buffer and incubated for 10 min on ice. Cytoplasmic membranes were disrupted with a type B pestle. Nuclei were harvested by centrifugation for 15 min at 1,200× g and suspended in 0.5 volume of low-salt buffer (20 mM HEPES, pH 7.5, 1X Complete protease inhibitor cocktail, 2 mM KCl, 1.5 mM MgCl2, 0.2 mM EDTA, 0.5 mM DTT, 25% glycerol) before the disruption of nuclear membranes by drop-wise addition of 0.5 volume of high-salt buffer (20 mM HEPES, pH 7.5, 1X Complete protease inhibitor cocktail, 1.2 M KCl, 1.5 mM MgCl2, 0.2 mM EDTA, 0.5 mM DTT, 25% glycerol). Nuclear lysates were incubated for 30 min on ice under agitation and cleared by centrifugation for 30 min at 16,000× g at 4 °C. Supernatants were aliquoted and stored at -80 °C until assayed. Analysis of transcriptional repression by the NR2E3 LBD: In order to study the transcriptional properties of the LBD of NR2E3, we expressed it as a chimeric protein fused to the DBD of the yeast transcription activator Gal4. This construct was transiently transfected into COS-1 cells and tested on Gal4-responsive reporter plasmids using the luciferase assay (Figure 1A). First, to verify that the inhibitory properties of Gal4-NR2E3 LBD reported by others [14,22] were not due to specific elements in the promoter, we performed experiments using Gal4 binding sites upstream of two different minimal promoters: the β-globin and the SV40 proximal promoters. In Figure 1A, the transcriptional inhibition mediated by Gal4-NR2E3 LBD was dose-dependent from 10 to 100 ng for both reporter constructs. This inhibition is due to the NR2E3 LBD, since the heterologous DBD alone did not significantly inhibit the expression of luciferase at 10 and 30 ng, and was inhibitory only at the highest amount of expression vector used (100 ng) with the SV40 promoter. The activity was similar to that obtained with another nuclear receptor, RARα, tested under the same conditions in the absence of its ligand (Figure 1B). This inhibition was reversed in the presence of the RAR ligand, all-trans retinoic acid (Figure 1C). Mutational analysis of the NR2E3 LBD: To examine the functional consequences of NR2E3 mutations described in ESCS, we introduced six mutations found in ESCS (E121K, W234S, R309G, R311Q, R385P, and M407K) and two artificially designed mutations (R385L and ΔH12) into the Gal4 chimeric receptor by site-directed mutagenesis. Most of the mutations examined are localized in the LBD, between positions 163 and 410, of the human NR2E3 protein [7].
These Gal4-NR2E3 LBD mutant constructs were transfected into COS-1 cells and tested for their ability to repress transcription from a Gal4 responsive element fused to the β-globin minimal promoter (Figure 2A). Four of the six ESCS mutations (E121K, W234S, R309G, and R311Q) had only slightly reduced inhibitory activity compared with the wild-type. For these mutants, there was no correlation between the NR2E3 LBD transcriptional inhibition activity and the ESCS phenotype. As described previously, and confirmed here in another cell line, the H12 deletion mutant (N397Stop) results in the total absence of transcriptional repression [14]. This is in agreement with other studies in which deletion of helix H12 enhances repression and co-repressor binding, although several nuclear receptors lacking helix H12 act as transcriptional repressors [30][31][32]. The ESCS mutation M407K corresponds to a position in helix H12 of nuclear receptors known to modulate the affinity of the LBD for co-regulators. Again, and as seen by others, the M407K NR2E3 mutant protein is not able to mediate transcriptional repression [14]. The ESCS R385P mutation also results in loss of inhibition. This mutant is not localized within helix H12, and the loss of its transcriptional inhibitory properties must result from a distinct mechanism. In order to test the possibility of a conformational constraint created by the proline residue, we designed the artificial R385L mutant, in which the arginine residue is replaced by a leucine residue. The R385L mutant has an inhibitory activity slightly weaker than that of the wild-type NR2E3 and similar to that of the four ESCS mutants above. Mutational analysis was also performed in HeLa cells using the Gal4 chimeric receptors and Gal4-responsive reporters (Figure 2B). The six examined ESCS mutations behaved similarly in HeLa and COS-1 cells: four of them (E121K, W234S, R309G, and R311Q) retained a slightly reduced inhibitory activity, while the R385P and M407K mutants were not able to mediate transcriptional inhibition in HeLa cells. Only the artificial R385L mutant behaved differently in the two cell lines: it remained active in COS-1 cells but lost its repressive activity in HeLa cells. The ΔH12 mutant displayed similar activity in both cell lines. In order to check that the lack of transcriptional inhibition was not the result of a difference in protein stability in COS-1 cells, we performed Western blotting analysis (Figure 2C). The mutant fusion proteins were confirmed to be expressed at similar levels. The R385P mutant, which lacks the inhibitory activity, was even expressed at a slightly higher level than the wild-type construct. The loss of activity of the R385P mutant could also theoretically result from misfolding of the protein and a resulting inability of this mutant to bind the Gal4 responsive element. To test this hypothesis, nuclear extracts from transfected COS-1 cells were prepared and used in gel mobility assays with oligonucleotides corresponding to the Gal4 binding site (Figure 2D). The Gal4 protein, used as a positive control, gave a shift in agreement with its molecular weight (lanes 4 and 5).
Wild-type Gal4-NR2E3 LBD displayed two bands shifted in mobility (lanes 6 and 7) that were also observed when the R385P mutant protein extract was used (lanes 8 and 9). This provides evidence that the R385P mutation has no effect on the conformation of the heterologous DNA binding domain. Mutational analysis of full-length NR2E3: The functional consequences of NR2E3 mutations were also examined in the full-length protein. Four mutations found in ESCS (W234S, R311Q, R385P, and M407K) were introduced into HA-tagged full-length NR2E3 by site-directed mutagenesis. These mutated NR2E3 constructs were transfected into COS-1 cells and tested for their ability to repress transcription from an NR2E3 responsive element fused to the thymidine kinase minimal promoter (Figure 3A) [7,14]. Two of the four mutations (W234S and R311Q) had slightly reduced inhibitory activity compared with the wild-type, while the R385P and M407K mutants lost most of their repressive activity. To check the stability of the NR2E3 mutant proteins, we analyzed transfected COS-1 cells by Western blotting. All the full-length receptors, mutant or wild-type, were expressed at similar levels and had the expected electrophoretic mobility (Figure 3B and Figure 4). As no natural DNA response element has been identified for NR2E3, we used Kni x2, a dimeric response element that NR2E3 is able to bind [6,14]. Dimerization of several nuclear receptors has been shown to be dependent upon the LBD, indicating that LBD-localized mutations could affect DNA binding ability. Nuclear extracts from transfected COS-1 cells were analyzed by gel-shift mobility assay with oligonucleotides corresponding to the Kni x2 response element [6] in order to check the DNA binding of the different NR2E3 proteins, mutant and wild-type (Figure 3C). All the mutated full-length proteins displayed a band (lanes 6, 6', 8, and 8') that was also observed with the wild-type full-length protein (lanes 4 and 4'). This provides evidence that NR2E3 dimerization ability was not affected by these ESCS mutations. In order to test the possibility that NR2E3 behaves differently on inactivated and activated promoters, we tested the four mutants for their ability to repress transcription driven by Gal4 activation (Figure 5). The activation by Gal4 results from a cryptic Gal4 binding element beside the NR2E3 responsive element in the reporter construct used [14]. Activation (twofold) was observed in the presence of the Gal4 protein. This activation was repressed by wild-type NR2E3. The ESCS mutants have similar inhibitory properties toward this Gal4-mediated transcriptional activity. DISCUSSION The transcriptional inhibitory property of NR2E3 has also been reported in other cell types, such as the human embryonic kidney (HEK) cell line HEK 293, the kidney cell line CV-1 and, more importantly, the retinal pigmented epithelium (RPE) cell line RPE-J [14,22]. This inhibition was also observed when NR2E3 was tested as a full-length protein on a selected DNA binding element [14]. We have observed that the inhibition mediated by NR2E3 resembles that of unliganded RARα. This suggests the following: (1) that the inhibitory function of NR2E3 results from interactions of the LBD with co-repressors; (2) that a conformational change may be induced by the binding of a ligand not present in these cells; and (3) that the resulting exchange of co-repressors for co-activators could result in transcriptional activation [27]. Candidate ligands, such as all-trans and 9-cis retinoic acid and 11-cis retinaldehyde, have previously been excluded [22].
Nevertheless, 13-cis retinoic acid was recently reported as an NR2E3 agonist using a transcriptional activation assay [28]. We have demonstrated here that some of the NR2E3 mutants that cause ESCS are not defective in transcriptional inhibitory activity. Four mutant proteins (E121K, W234S, R309G, and R311Q) retain transcriptional repression when tested as Gal4 fusions. Two of these mutants (W234S and R311Q), tested as full-length proteins on the identified NR2E3 responsive element, are also fully capable of repressive function, whereas the other two full-length mutants tested (M407K and R385P) are not. This absence of correlation was also observed for Gal4-activated transcription (Figure 5). Figure 5. Repression of a Gal4-activated promoter by NR2E3 wild-type and mutant full-length proteins. COS-1 cells were transfected with various combinations of Gal4 (50 ng) and NR2E3 wild-type or mutant (100 ng) expression plasmids. Transcriptional activity of an NR2E3 responsive reporter gene was measured. Normalized values are expressed as relative luciferase activity. A molecular model of the NR2E3 LBD was established from the RAR LBD crystal structure (Figure 6) [33]. Figure 6. Molecular model of the NR2E3 LBD. Homology modeling of the NR2E3 LBD based on the RAR LBD crystal structure, highlighting several residues mutated in enhanced S-cone syndrome. The residue R385, shown in green, was predicted to localize in the ligand hydrophobic pocket. Mutation of W234, shown in red, was predicted to modify the ligand pocket conformation. The importance of the position of M407K in α-helix H12, the helix of nuclear receptors that interacts with co-regulators [30], is supported by the loss of repressive activity of the artificial mutant with a deletion of that helix (ΔH12). The position of the R385P mutation within a predicted hydrophobic pocket in a structural model of the LBD of NR2E3 might suggest a requirement of that residue for the interaction with a putative activating ligand [28]. It is unclear why the artificial mutant R385L retains transcriptional repression in COS-1 but not in HeLa cells. The work presented here demonstrates that there is no correlation between the transcriptional inhibition mediated, in vitro, by the ligand binding domain of NR2E3 and the phenotype of ESCS. The differences observed between the ESCS mutants are not the result of differential interactions with protein partners such as the nuclear receptor NR1D1 or the homeoprotein Crx, which are reported to involve the DBD of NR2E3 [15,23]. There is an ongoing debate about the mechanisms leading to the excess of S-cones in ESCS. The models currently discussed involve the inhibition of S-cone specific genes by NR2E3 with [15,23,24] or without activation of rod-specific genes [14]. The absence of rod function in ESCS [3,4] argues for the involvement of NR2E3 in regulating rod-specific genes, while the absence of perturbation of rod-specific expression in the rd7 retina [14] indicates that, in the absence of NR2E3, rod-specific genes are expressed at a normal level. While our results do not address the mechanisms behind the lack of correlation between NR2E3 mutations and ESCS, they point to the possible existence of transcriptional activation properties of NR2E3 regulated by a yet-to-be-identified ligand.
All ESCS mutants might be defective in transcriptional activation, with some of them additionally showing reduced transcriptional inhibition. The recent identification of NR2E3 agonists supports this hypothesis.
4,495.4
2007-04-06T00:00:00.000
[ "Biology" ]
Numerical study on aerodynamic performance during the process of entering the formation flight. Two or more planes can fly in close formation, similar to migratory birds, making use of the wing-tip vortex of the leader plane to increase lift and reduce drag, thereby effectively improving the flying range. By conducting wind tunnel tests and numerical simulations, the aerodynamic performance of formation flight at different relative positions can be obtained, and the optimal formation position can thereby be determined. However, significant nonlinear and unsteady aerodynamic characteristics, induced by the interference between the follower plane and the wing-tip vortices of the leader plane, will affect the flight safety of the whole formation. At present, there are no effective prediction methods. Numerical simulations adopting adaptive grid refinement and a dynamic overset grid were conducted for the dynamic entering process of a formation of two Ty-154 planes. The aerodynamic characteristics and vortex interference were analysed considering the effects of approaching direction and speed. The results indicate that there is a deterioration of stability at the position where the maximum lift gain is reached; compared with the entering speed, the entering direction has a more significant impact on the dynamic aerodynamic characteristics. Introduction In 2001, Nature published research on the long-distance migration of Tang geese using formation flight. The study found that formation flight can save 11~14% of energy, and that Tang geese flying in formation reach farther than those flying alone [1]. Inspired by this, studies have been made on the aerodynamic characteristics of aircraft formation flight, and the concept of Surfing Aircraft Vortices for Energy (SAVE) has been proposed. It refers to the close formation flight of two or more planes, similar to migratory birds, during which the rear plane "rides" on the vortex of the front plane, achieving a significant improvement in the lift-to-drag ratio. In a formation flight, the airflow around the front plane receives strong disturbances and forms a complicated wake field downstream of components such as the wing and fuselage, which is the main reason the aerodynamic characteristics of the rear plane are affected. Through theoretical analysis [2,3], numerical calculations [4,5], wind tunnel tests [5,6], and flight tests [7,8], the aerodynamic characteristics of the rear aircraft at different relative positions in the formation flight can be obtained, thereby deriving the optimal formation position for maintaining maximum aerodynamic gain. But another problem must also be considered: how can the optimal position be reached or left safely? When the rear plane is approaching, leaving, or passing through the wing-tip vortex of the front plane, the nearby flow changes significantly and rapidly, leading to nonlinear and unsteady aerodynamic characteristics, which can easily excite unexpected motion and reduce the reliability and effectiveness of flight control. In specific situations, maintaining stable flight manually is quite difficult, as fast and accurate operations are needed within an extremely short period of time, which affects flight safety and mission accomplishment. Therefore, it is necessary to investigate the dynamic effects during the entering process of the rear plane, which will support the verification of formation flight control strategy design and contribute to aviation safety.
The dynamic characteristics of an aircraft during a certain flight task can be analyzed through aerodynamic modeling and simulation [9,10], based on static aerodynamic coefficients obtained from engineering estimation or Computational Fluid Dynamics (CFD). However, unsteady, complicated vortex interaction is the dominant mechanism behind the aerodynamic interferences in a formation flight. The whole entering process should therefore be simulated using a time-accurate numerical method in order to account for the unsteady flow phenomena. The use of CFD for the dynamic entering process of formation flight requires innovation in dynamic grid methods. Traditional schemes include grid deformation, grid reconstruction, and overlapping grids. Among these, the grid deformation method is only suitable for small-scale motion, while the dynamic entering usually covers a distance many times larger than the wingspan of a plane; the grid reconstruction method is inefficient, as the grid topology must be regenerated at every time step; and the overlapping grid method requires cell-scale consistency at the boundary between the two grid sets to ensure successful interpolation, so the background grid would have to be refined throughout the entire path of the entering process, leading to a severe increase in the total number of grid cells (up to 100 million), which may be impossible to solve even on large clusters. Therefore, a dynamic, adaptive overlapping grid method was developed to simulate the large-scale motion in a formation flight using limited server resources. Numerical simulations were conducted on the dynamic entering process of a formation of two transporters simplified from the Ty-154. The effects of entering direction and relative speed on the aerodynamic characteristics were then analyzed. Numerical method An in-house CFD code was adopted: a three-dimensional, cell-centered Reynolds-averaged Navier-Stokes (RANS) solver based on unstructured grids. Commonly used turbulence models such as the S-A and SST models have been implemented. A multigrid scheme was used to accelerate convergence, and the parallel-running mode of the solver was available to improve computational efficiency. The governing equations, discretization schemes, turbulence models, and additional strategies for grid refinement and movement are described in the following subsections. Control equations and discretization schemes For the numerical simulation of unsteady flow, the RANS equations in the Cartesian coordinate system were considered. The equations take the following form: $\frac{\partial Q}{\partial t} + \frac{\partial (E - E_v)}{\partial x} + \frac{\partial (F - F_v)}{\partial y} + \frac{\partial (G - G_v)}{\partial z} = 0$ (1), where $Q$ is the vector of conserved flow variables, $(E, F, G)$ are the inviscid convective fluxes, and $(E_v, F_v, G_v)$ are the viscous fluxes. The spatial discretization adopted the Finite Volume Method (FVM) based on unstructured grids. Roe's flux-difference splitting scheme was used for the discretization of the inviscid convective term. The third-order upwind-biased Monotonic Upstream-centered Scheme for Conservation Laws (MUSCL) was used for the interpolation of flow variables at the interface. Venkatakrishnan limiters were used to suppress numerical oscillations for discontinuous problems. The viscous term of the equations was discretized by the second-order central difference scheme.
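As an illustration of the limiting step mentioned above, the following Python sketch applies the standard Venkatakrishnan limiter to a reconstructed face value. It is a generic textbook-style implementation, not code from the in-house solver, and the smoothness parameter eps2 is an assumed value (in practice it is usually tied to a reference length scale, eps^2 ~ (K h)^3).

```python
# Generic sketch of the Venkatakrishnan limiter used in higher-order
# unstructured FVM reconstruction. Not the in-house solver's code.

def venkatakrishnan(u_i: float, u_min: float, u_max: float,
                    delta2: float, eps2: float = 1e-12) -> float:
    """Return the limiter value (0..1) for one face of cell i.

    u_min/u_max: extrema of the variable over cell i and its neighbors.
    delta2: unlimited reconstructed increment u_face - u_i.
    """
    if delta2 > 0.0:
        delta1 = u_max - u_i
    elif delta2 < 0.0:
        delta1 = u_min - u_i
    else:
        return 1.0  # no increment, nothing to limit
    num = (delta1**2 + eps2) * delta2 + 2.0 * delta2**2 * delta1
    den = delta2 * (delta1**2 + 2.0 * delta2**2 + delta1 * delta2 + eps2)
    return num / den

# Cell value 1.0, neighbor extrema [0.8, 1.1], gradient wants to add +0.4:
phi = venkatakrishnan(u_i=1.0, u_min=0.8, u_max=1.1, delta2=0.4)
u_face = 1.0 + phi * 0.4  # limited face value stays near the local extrema
print(round(phi, 3), round(u_face, 3))  # -> 0.243 1.097 (below u_max = 1.1)
```

The limiter smoothly scales back the reconstructed increment so the face value does not overshoot the local extrema, which suppresses oscillations near discontinuities without fully reverting to first-order accuracy.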
The boundaries of the flow field can be divided into actual boundaries (fixed walls) and artificial boundaries (free stream, symmetry, and overset interpolation). The surfaces of the planes were set to a no-slip, adiabatic wall boundary condition; the free stream boundary was calculated using local one-dimensional Riemann invariants; and the overset interpolation boundary is generated dynamically by a series of operations including hole cutting, contributor searching, and orphan removal. Menter's Shear Stress Transport (SST) model was adopted to approximately simulate the turbulence. The model, first proposed by Menter in 1994 [11,12], combines the advantages of the k-ε and k-ω models: it acts like the k-ω model near the wall, while gradually shifting to the k-ε model away from the wall, and shows high accuracy in general turbulent flow calculations. The control equations of the SST turbulence model are as follows: $\frac{\partial(\rho k)}{\partial t} + \frac{\partial(\rho u_j k)}{\partial x_j} = P_k - \beta^{*} \rho k \omega + \frac{\partial}{\partial x_j}\left[(\mu + \sigma_k \mu_t)\frac{\partial k}{\partial x_j}\right]$ and $\frac{\partial(\rho \omega)}{\partial t} + \frac{\partial(\rho u_j \omega)}{\partial x_j} = \frac{\gamma \rho}{\mu_t} P_k - \beta \rho \omega^2 + \frac{\partial}{\partial x_j}\left[(\mu + \sigma_\omega \mu_t)\frac{\partial \omega}{\partial x_j}\right] + 2(1 - F_1)\frac{\rho \sigma_{\omega 2}}{\omega}\frac{\partial k}{\partial x_j}\frac{\partial \omega}{\partial x_j}$, where ρ is the density, $P_k$ is the production term, k is the turbulent kinetic energy, ω is the specific dissipation rate, μ is the laminar viscosity coefficient, and μt is the turbulent viscosity coefficient. β*, β, γ, σk, σω, and a1 are closure constants (a1 enters the definition of μt), and F1 and F2 are blending functions that adjust the constants in different flow regions. Vorticity-based adaptive mesh refinement method The aerodynamic characteristics during the dynamic entering of the formation flight may undergo significant changes due to the dynamic interference between the flow around the rear plane and the wing-tip vortex of the front plane, especially while the rear plane is passing through the vortex. In order to capture the detailed changing trends, the "small-scale" wake vortices should be simulated with high fidelity. It is necessary to adopt a sufficiently fine grid wherever the wake flow of the front plane exists in order to improve the simulation accuracy. However, the fact that the distance between the front and rear planes is usually quite large (up to 3~10 times the wingspan) in a formation makes it impossible to conduct the grid refinement globally. The grid refinement level that current clusters can accept is insufficient for such high-fidelity simulations. Therefore, an adaptive mesh refinement method based on vorticity was adopted, which significantly reduces the number of grid cells and improves computational efficiency while ensuring adequate simulation accuracy for the complicated wake vortex flow.
To achieve adaptive refinement, it is important to select appropriate flow field variables and to establish the relationship between those variables and the grid cell scale. The bisection method was then used for mesh refinement based on the resulting scale distribution. After alternating iterations of solving the flow control equations and refining, the final adaptive grid was obtained for the calculation. The selection of flow field variables depends on the physical problem to be simulated: for example, when simulating massively separated flow around a plane, the turbulent kinetic energy can be selected, while for shock capture in supersonic flow, the Mach number gradient is a proper characteristic variable. For the simulation of formation flight, the simulation accuracy of the wake vortices propagating downstream of the front plane is dominant, so the vorticity along the x-direction, ωx, was selected as the variable for adaptive mesh refinement. The grid cell scale L was then defined as a decreasing function of the local vorticity, bounded between Lmax and Lmin, where Lmax and Lmin are the maximum and minimum cell scales and k is a tuning factor; a larger k produces a finer grid and thus more grid cells. Adaptive assembly for overlapping grid Besides the high-fidelity simulation of the "small-scale" wake vortices, it is also necessary to simulate the "large-scale" motion, which covers a distance several times the wingspan, far beyond the characteristic length of the planes. The automatic overlapping grid method was adopted in the simulation. Generally, when variables are interpolated between different computational regions at the boundaries, it is important to ensure that the cell scales on both sides of the boundary are equal; otherwise, the inconsistency leads to a loss of interpolation accuracy or to contributor-searching failure. However, the grid cells around the rear plane are much smaller than those of the background grid. One option is to manually refine all cells of the background grid covered by the entering path of the rear plane, which would sharply increase the total number of grid cells and slow the computations. Therefore, an adaptive assembly method for the overlapping grid was adopted, which automatically refines the adjacent background grid cells based on the distribution of the overlapping boundary nodes to ensure successful interpolation at each time step. The kernel of the method is how to refine the background grid robustly and efficiently. Two criteria can be chosen to decide whether refinement should be carried out: one is to attempt to build the interpolation templates between the overlapping grids, refining the adjacent background grid cells when the attempt fails; the other is to compare adjacent cells of both zones, finishing the refinement only when their lengths are equal. Conducting the overlapping grid interpolation involves hole cutting, contributor searching, and orphan removal, which is time-consuming and unnecessary when the cells on one side are much larger than those on the other side. So the adjacent background grid cells were first bisected, without building the interpolation, until the cell lengths on both sides of the grid zones were equal. The interpolation was then attempted, and the cells were refined again upon failure. This process was repeated until the grid assembly succeeded and usable grid sets were formed. During the entire unsteady simulation, the above operation was conducted at each time step, as the relative position of the two planes may change.
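The refinement loop described above can be sketched in a few lines. The following Python fragment is a schematic illustration only: the paper's actual mapping from ωx to the target scale L is not reproduced in the text, so the exponential decay used here (bounded by Lmax and Lmin and sharpened by the tuning factor k) is an assumed placeholder, and all numbers are invented.

```python
# Schematic sketch of vorticity-driven adaptive refinement by bisection.
# The vorticity -> target-scale mapping is an assumed placeholder; the
# paper defines its own L(omega_x; Lmax, Lmin, k), not reproduced here.
import math

L_MAX, L_MIN, K = 0.64, 0.01, 1.5  # max/min cell scales [m], tuning factor

def target_scale(omega_x: float, omega_ref: float = 1.0) -> float:
    """Assumed monotone map: stronger vorticity -> smaller target cell."""
    w = min(abs(omega_x) / omega_ref, 1.0)
    return max(L_MIN, L_MAX * math.exp(-K * 5.0 * w))

def refine(cells: list[tuple[float, float]]) -> list[tuple[float, float]]:
    """cells: (size, local omega_x). Bisect until size <= target scale."""
    out = []
    for size, omega in cells:
        while size > target_scale(omega):
            size /= 2.0  # bisection: each pass halves the cell scale
        out.append((size, omega))
    return out

wake = [(0.64, 0.02), (0.64, 0.3), (0.64, 0.9)]  # quiet cell -> vortex core
print(refine(wake))  # cells in the vortex region come out much finer
```

In the real solver this check is interleaved with the flow solution and with the overlapping-boundary consistency test, so refinement tracks both the propagating wake and the moving rear-plane grid.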
Model introduction A formation composed of two planes simplified from the Ty-154 transporter was studied. The plane includes a fuselage, wings, a vertical tail, and a high-mounted horizontal tail. Both planes are scaled by 1:22. Table 1 provides the geometric parameters. The appearance of the plane is shown in Figure 1. Flow field region and computational grid The simulation was carried out using a semi-zone grid, with a symmetry boundary applied at the longitudinal symmetry plane of the front plane. The flow field was divided into two regions: the background zone and the rear plane zone, shown in Figure 3. The freestream vector was along the x-axis in the grid coordinate system and parallel to the symmetry boundary. Thus the case in fact simulated a Λ-shaped formation with one leader and two followers. The grid close to the surface of the plane was refined along the normal direction to resolve the boundary layer, and the first grid spacing is about 0.0003 mm to ensure that y+ = 1. Figure 4(a) shows the surface grid distribution of the rear plane, with local refinement at surfaces of large curvature, down to a minimum value of about 0.4 mm. Figure 4(b) shows the grid distribution at the wall surfaces and longitudinal sections of the rear plane wing. It can be seen that the background grid has been adaptively refined at the overlapping boundary, ensuring consistent grid cell sizes. The overlapping assembly result is shown in Figure 4(c). The minimum spatial grid spacing away from the solid wall is set to Lmin = 10 mm to achieve a balance between simulation accuracy and efficiency. Adaptive grid independence A grid independence study was conducted on the formation flight process with relative positions (x, y, z)/b = (3.0, -0.1, 0.0) to determine the appropriate value of the adaptive tuning factor k. Selecting k = 1.1, 1.3, 1.5, and 1.7 respectively, Figure 5 shows the trend of the total number of grid cells during the iteration. The initial grid has 6.07 million cells, and the converged number of grid cells reached 7.04 million, 9.94 million, 12.63 million, and 14.51 million, respectively, for the different values of k. Table 2 shows the impact of k on the aerodynamic characteristics of the rear plane. Compared with the results obtained on the finest grid (k = 1.7), the relative deviation of the longitudinal aerodynamics obtained with k = 1.5 is only about 0.2%, and the deviation of the lateral aerodynamics is about 1.2%~3.5%. For k = 1.3, the deviations are larger, with a longitudinal aerodynamics error of about 1.5% and a lateral aerodynamics error of 10%. Therefore, it can be considered that for k = 1.3 the grid refinement is insufficient and the discretization error is non-negligible, while for k ≥ 1.5 the calculations show general convergence. To improve computational efficiency, k = 1.5 was selected for adaptive grid refinement control in the following simulations. Numerical validation A comparison with experimental results was conducted to validate the numerical method and the grid. The experiment, which used a formation of two identical aircraft models, was conducted in a transonic wind tunnel with a 2.4 m × 2.4 m test section. The wingspans of the leading and following models are 0.735 m and 0.49 m, respectively, and the angles of attack were 2° and 2.4°. The Mach number for the experiment was 0.76, and the Reynolds number based on the wingspan of the rear plane is 8.0×10^6. The relative streamwise location is x/b = 1.5. More experimental details can be found in the study by Tao Y [5].
Figure 6 shows the comparison between CFD and experiment for the lift coefficient CL, pitching moment coefficient Cm, and rolling moment coefficient Cl. The computational and experimental results show similar distributions along the spanwise and vertical directions. The magnitudes of the aerodynamic interferences obtained from both methods are consistent with each other, with some small errors remaining because of the complexity of the vortex flow and the interference of the wind tunnel walls and struts. Since the validation case differs from the problems studied in the present paper only in the numerical parameters of the test models and the incoming flow, the comparative study is believed to be appropriate for validating the numerical accuracy, and the computational method and grid are considered valid. Calculation results and analysis The following numerical simulations were conducted at a freestream Mach number of 0.74. The Reynolds number based on the wingspan of the aircraft is 1.8×10^8; the approximate flight altitude is thus 12 km. The freestream turbulence intensity is 0.1%, and the turbulent-to-laminar viscosity ratio is set to 50. The angles of attack are 2° for both planes. Different relative speeds were analyzed, covering low, medium, and high speed. Longitudinal entering The dynamic process of the rear plane entering the wake vortex influence zone of the front plane longitudinally is analysed in the present section. Table 3 shows the initial and final formation location parameters, as well as the relative entering speed of the rear plane. The different levels of relative speed were derived from the moving distance and the time taken to complete it, which is 1 s for high speed, 2 s for medium speed, and 3 s for low speed. Considering that the aircraft are scaled by 1:22, the corresponding speeds for real flight are 57.86 m/s, 86.68 m/s, and 173.58 m/s for the low, medium, and high cases, respectively, which covers a considerably wide range (see the arithmetic sketch below). Before the unsteady simulation, an initial flow field had to be calculated. The start time of the moving process of the rear plane was therefore set to t = 0.5 s, by which time the flow pattern and aerodynamic coefficients had converged. Figure 7 shows the grid distribution on a longitudinal slice around the wingtips at different times during the simulation. Before the simulation, the background grid is in its initial state, and only cells adjacent to the rear plane have been refined. At t = 0.5 s, the background grid has been adaptively refined along the wake vortex propagation area of the front plane. The adaptive refinement area changes with the movement of the rear plane and the influence area of the vortex wake flow. From the changes in grid distribution, it can be seen that the adaptive refinement method developed in this paper can effectively simulate the dynamic entering problem.
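Returning to the entering speeds quoted above, a minimal back-of-envelope sketch of the model-to-full-scale conversion follows, assuming the full-scale speed is simply the model-scale traverse distance over the entering time multiplied by the 22:1 scale factor. The traverse distance comes from Table 3, which is not reproduced in the text, so it is inferred below from the quoted speeds; this is an illustration, not the authors' computation.

```python
# Back-of-envelope check of the quoted full-scale entering speeds.
# Assumption: v_full = scale * d_model / t, with the model distance
# inferred from the quoted 173.58 m/s at t = 1 s (Table 3 itself is
# not reproduced in the text).
SCALE = 22.0
d_model = 173.58 * 1.0 / SCALE   # inferred model-scale traverse distance [m]

for label, t in [("high", 1.0), ("medium", 2.0), ("low", 3.0)]:
    v_full = SCALE * d_model / t
    print(f"{label:6s} entering ({t:.0f} s): {v_full:7.2f} m/s full scale")
# -> 173.58, 86.79, 57.86 m/s, matching the quoted values (86.68 differs
#    slightly, presumably from rounding of the tabulated distance).
```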
Figure 8 shows the changes in the aerodynamic coefficients of the rear plane with moving distance as it enters the formation longitudinally. The process by which the wing-tip vortex affects the rear plane can be observed through the change in lift coefficient. As the rear plane gradually moves forward and upward, the lift coefficient gradually increases and reaches its maximum value at L = 5.12 m ~ 5.32 m, where the lift coefficient increases by about 0.058 compared with the initial state. This can be considered the zone where the wing-tip vortex has the greatest effect on the rear plane, owing to the maximum lift, and is named the Theoretical Maximum Profit Zone (TMPZ). It is used as a reference for the analysis of the other aerodynamic coefficients and is marked with orange coloured bands. The drag coefficient shows an increasing, then decreasing, then increasing trend, with the TMPZ located in the decreasing range. The pitching moment coefficient gradually decreases to a trough near the TMPZ and then rapidly increases, reaching its maximum value after crossing the TMPZ, before gradually decreasing back to its initial value. The vortex wake flow also has a significant impact on the lateral characteristics of the rear plane. When approaching the TMPZ, the lateral force coefficient slowly increases to a maximum value and then rapidly decreases to a minimum value before returning to zero. At the same time, the trend of the yawing moment coefficient is opposite to that of the lateral force coefficient, indicating that the lateral force mainly acts on the tail. The rolling moment coefficient increases rapidly to its maximum value before the TMPZ is approached, rapidly decreases to a small negative value at the beginning of the TMPZ, and then slowly increases. Generally, despite the maximum lift gain in the TMPZ, there are significant changes in the lateral force and aerodynamic moments, posing a huge challenge to the stability control of the plane. At the same time, the aerodynamic coefficients of the rear plane show the same trends at different entering speeds, implying that the aerodynamic hysteresis effect is not the dominant factor within the range studied. Figure 9 shows the surface pressure distribution and wake interference at different time points. During the entering process, the wing-tip vortex wake of the rear plane shows a downward pattern due to the relative velocity. When the wing-tip vortex of the front plane is far from the rear plane, there is no significant effect on the pressure distribution of the rear plane. When the frontal wing-tip vortex sweeps over the wing of the rear plane, a downwash flow is generated on the left side of the vortex core and an upwash flow on the right side, resulting in a decrease in the local angle of attack on the outer side of the left wing and an increase in angle of attack on the inner side. Therefore, the low-pressure area on the leeward side is enlarged on the inner side and reduced on the outer side. As most of the rear plane is immersed in the upwash flow induced by the frontal wing-tip vortex, the lift generated by the left wing increases at this time. When the rear plane is located below the wing-tip vortex, the induced wash flow has a rightward velocity component, resulting in a positive lateral force coefficient acting mainly on the vertical tail, which then induces a negative yawing moment. However, when the rear plane is located above the wing-tip vortex, as the induced wash flow has a leftward velocity component, a negative lateral force and a positive yawing moment are
produced. The induced rolling moment is likewise generated by the contribution of unbalanced aerodynamic forces. When the rear plane approaches the frontal wing-tip vortex, the lift on its left wing gradually increases and the vertical tail generates a lateral force to the right, causing a rapid increase in the rolling moment; as the rear plane gradually leaves the wing-tip vortex, the lift on the left wing gradually decreases, while the lateral force on the vertical tail gradually turns negative, resulting in a rapid decrease in the rolling moment. Lateral entering Entering the formation from different directions implies completely different flow mechanisms and aerodynamic characteristics. The present section investigates the dynamic lateral entering process of the rear plane. Table 4 shows the initial and final formation parameters, as well as the corresponding relative speeds. Similar to the longitudinal process, the corresponding speeds for real flight in the lateral entering process are 53.02 m/s, 79.42 m/s, and 158.84 m/s. Figure 10 gives the changes in the aerodynamic coefficients during the entering process. From the trend of the lift coefficient, the maximum value is reached in the range L = 6.15 m ~ 6.60 m, which can be defined as the TMPZ of the lateral entering. It can be seen that the change in the drag coefficient during the lateral entering is greater than during the longitudinal entering, and the pitching moment shows a decreasing trend at first and then rapidly increases. The lateral force gradually decreases during the movement, and correspondingly the yawing moment gradually increases. Before entering the TMPZ, the rolling moment first increases rapidly, similar to the trend of the lift coefficient, and then rapidly decreases to a negative value. In both the longitudinal and lateral entering processes, although the maximum lift gain can easily be achieved in the TMPZ, the unfavourable interference from lateral forces and aerodynamic moments is also significant. Figure 11 shows the flow details during the lateral entering process of the rear plane. From the local perspective of the rear plane, the frontal wing-tip vortex gradually approaches the left wing tip and sweeps over the wing surface until it reaches the wing root. During this process, it is evident that the pressure on the leeward side of the left wing at first decreases globally, influenced by the induced wash flow, and then shows an unbalanced distribution as the vortex sweeps over the wing. As the area affected by the vortex gradually moves to the right side, the lift on the left wing first increases and then decreases, while the lift on the right wing increases at a slower rate, resulting in an increasing-then-decreasing trend in the lift coefficient and rolling moment of the plane. The leftward wash flow induced by the vortex near the vertical tail of the rear plane also causes a gradually decreasing lateral force and a correspondingly increasing yawing moment. Comparing the dynamic processes for the different entering directions, it is concluded that the magnitudes of the vortex-induced aerodynamic forces and moments on the rear plane are basically equivalent. However, during the longitudinal entering process the directional stability of the rear plane changes severely, while during the lateral entering process the longitudinal and lateral stability change significantly.
Conclusion A numerical study on the aerodynamic performance of the rear plane entering a Ty-154 formation has been conducted. The vortex interference and its effect on the aerodynamic coefficients were analysed. The results showed that: • (a) During the entering process, there exists a Theoretical Maximum Profit Zone (TMPZ), in which a maximum lift coefficient gain of about 0.05~0.06 can be obtained; • (b) In the TMPZ, the lateral force and aerodynamic moments of the rear plane change significantly, posing a huge challenge to stability control; based on this fact, the TMPZ should not be recklessly pursued, and for safety in formation flight a strict safety border should be set, with buffer domains equally needed because of complex meteorological conditions such as sudden gusts; • (c) Entering the formation from the longitudinal direction causes greater changes in directional stability, while entering from the lateral direction causes greater changes in longitudinal and lateral stability, so the entering path should be designed based on the control capability of the following aircraft; • (d) Within the present study envelope, the aerodynamic hysteresis effect caused by relative speed has no significant impact on the aerodynamic characteristics. Figure 1. Geometric outline diagram of the Ty-154 (unit: m). The formation location parameters are defined by the relative positions of the wing-tips on the adjacent sides of the two planes, and are nondimensionalized by the wingspan of the plane. When the rear plane is located downstream of, above, and to the right of the front one, the location parameters x/b, y/b, and z/b are all positive. Figure 2 shows a schematic diagram of the relative position of the formation. Figure 2. Schematic diagram of formation location parameters. Figure 5. Total number of grid cells during the adaptive refining process, for different k. Figure 6. Aerodynamic coefficient distributions on the slice at x/b = 1.5, obtained by CFD (left) and wind tunnel experiment (right). Figure 7. Grid distribution during the entering process, at different time steps. Figure 8. Aerodynamic coefficients of the rear plane during longitudinal entering. Figure 10. Aerodynamic coefficients of the rear plane during lateral entering. Table 2. Calculation results for different grid scales. Table 3. Setup of the longitudinal entering process. Table 4. Setup of the lateral entering process.
6,056.4
2024-02-01T00:00:00.000
[ "Engineering", "Physics" ]
miRNA-dysregulation associated with tenderness variation induced by acute stress in Angus cattle miRNAs are a class of small, single-stranded, non-coding RNAs that perform post-transcriptional repression of target genes by binding to 3' untranslated regions. Research has found that miRNAs are involved in the regulation of many metabolic processes. Here we found that the beef quality of Angus cattle changed sharply after acute stress. By performing miRNA microarray analysis, 13 miRNAs were found to be significantly differentially expressed in the stressed group compared with the control group. Using a bioinformatics method, 135 protein-coding genes were predicted as targets of the significantly differentially expressed miRNAs. Gene Ontology (GO) term analysis and Ingenuity Pathway Analysis (IPA) revealed that these target genes are involved in several important pathways that may affect meat quality and beef tenderness. Introduction MicroRNAs are one of the largest gene families and account for ~1% of the genome [1]. They are 21-25 nucleotide small, non-coding RNAs that post-transcriptionally repress the expression of protein-coding genes by binding to the 3' untranslated regions (UTR) of target mRNAs [1][2][3][4][5]. Accumulated evidence indicates that miRNAs are important in the regulation of many biological processes, such as developmental timing, cell metabolism, cell differentiation, cell death, cell proliferation, haematopoiesis, and patterning of the nervous system [1,4,6]. Recent studies have uncovered muscle-specific miRNAs that regulate diverse aspects of muscle function, including myoblast proliferation, differentiation, contractility, and stress responsiveness [7][8][9][10]. Disruption of miRNA biogenesis causes diverse developmental defects, including abnormal embryogenesis and depletion of stem cells [4]. It has been reported that microRNA-133a regulates cardiomyocyte proliferation and suppresses smooth muscle gene expression in the heart [8]. miR-1 and miR-133 have distinct roles in modulating skeletal muscle proliferation and differentiation in cultured myoblasts in vitro and in Xenopus laevis embryos in vivo [9]. miR-335 and miR-126 have been identified as metastasis suppressors in human breast cancer because their expression is lost in the majority of primary breast tumors [11]. Additionally, miRNAs have been found to be involved in viral infections, cancer, cardiovascular disease, and neurological and muscular disorders [6,[12][13][14][15][16][17][18][19][20]. With the progression of research, a large number of miRNAs have been found to play roles in the regulation of metabolic processes. Although there are 18,226 entries in miRBase, representing hairpin precursor miRNAs expressing 21,643 mature miRNA products in 168 species, only a handful of miRNAs have been studied in depth, and a range of functions extending beyond developmental regulation has been revealed [4]. In particular, 665 bovine miRNAs are listed in the database, and some of them have been studied in bovine cells in vitro, but few have been studied in vivo [21,22]. Beef tenderness is a complex characteristic influenced by many aspects, such as production, processing factors, and cooking. Many efforts have focused on factors influencing meat quality, including breed, sex, feed, handling, environment, finishing weight, and age at slaughter [23][24][25][26][27][28]. So far, no research has examined whether variation in beef tenderness is regulated by miRNAs.
To test our hypothesis that acute stress may influence beef quality via miRNAs, a miRNA microarray was used to detect differentially expressed miRNAs between stressed and non-stressed groups of cattle. The results demonstrated that acute stress altered both beef quality and miRNA expression, which will help identify mechanisms underlying the control of beef tenderness. Results Differentially expressed miRNAs in LD muscle with differential stress status Warner-Bratzler shear force (WBSF) measurements were made to evaluate the variation of beef tenderness caused by acute stress. The average WBSF values for the stressed and control groups were 19.74 kg and 5.04 kg, respectively; the stressed group was significantly tougher than the control (non-stressed) group (Student's t-test, P < 0.0001). To determine the miRNA expression patterns in Angus cattle with different stress status, miRNA microarray analysis was conducted on LD muscle. The arrays were designed based on miRBase Version 11.0 and contained 126 bovine miRNAs. For each miRNA, there were 4 to 8 replicate probes on each slide. After hybridization, washing and scanning, data were collected and the Limma package was applied. A total of 13 miRNAs were identified as differentially expressed using the criteria of a P value less than 0.05 and an FDR (false discovery rate) less than 0.4 (Table 1). Of these, one miRNA was down-regulated while 12 miRNAs were up-regulated in the stressed group compared with the control group. To reveal the overall expression profiles of these differentially expressed miRNAs in the two groups, clustering analysis was performed as previously described. The visualization showed that the expression patterns of these miRNAs clearly separated the 6 individuals into stressed and control groups (Figure 1). To obtain a highly statistically confident result, a stringent significance threshold (P < 0.05, fold change > 1.5) was applied, and bta-miR-497 was chosen as the most significantly differentially expressed miRNA for further analysis. qPCR analysis of differentially expressed miRNA To validate the microarray results, a bta-miR-497 mimic was synthesized, and expression was measured with miScript Primer Assays. Quantitative RT-PCR was performed to measure the expression level of bta-miR-497 in the stressed and control groups. The expression of bta-miR-497 was significantly increased in the stressed group compared to the control group (P < 0.05) (Figure 2), consistent with the miRNA microarray results (fold change = 1.62 in the microarray); that is, the expression of bta-miR-497 increased after acute stress. Prediction of targets of differentially expressed miRNA and function annotation To understand the potential functions of the most significantly differentially expressed miRNA under these divergent stress statuses, 135 genes were predicted as potential targets of miR-497 in bovine using a bioinformatics method. To further explore the function of these predicted target genes, Gene Ontology analysis was performed. In the biological process category, the predicted target genes were enriched in cellular catabolic process and cellular process. In the cellular component category, the enriched GO terms were related to the cytoplasmic part, membrane-bounded organelle, intracellular membrane-bounded organelle, organelle, intracellular organelle, cytoplasm and intracellular organelle part.
The molecular function category of GO terms showed that succinyltransferase activity, purine nucleotide binding, ribonucleotide binding, purine ribonucleotide binding, purine ribonucleoside triphosphate binding, GTP binding, guanyl nucleotide binding, guanyl ribonucleotide binding, S-acyltransferase activity and GTPase activity were enriched. Summaries of the enriched GO term categories for the predicted target genes are shown in Table 2. To further visualize the pathways and networks with which these target genes are associated, IPA of the target genes was conducted. The analysis showed that cell cycle, cell morphology, cellular function and maintenance, molecular transport and cellular movement were ranked at the top of "Molecular and Cellular Functions", while inhibition of angiogenesis by TSP1, D-glutamine and D-glutamate metabolism, G2/M DNA damage checkpoint regulation, galactose metabolism and nucleotide sugars metabolism were among the top canonical pathways. The most significant networks functioned in drug metabolism, endocrine system development and function, lipid metabolism, amino acid metabolism, molecular transport, small molecule biochemistry, gene expression, cellular movement, cell cycle, cardiovascular system development and function, organismal development, cancer and gastrointestinal disease. Summaries of the enriched networks and their functions are shown in Table 3. Discussion Abnormal or disease conditions can induce dysregulation of mRNA and protein levels. It has been reported that the muscle-specific miRNAs miR-206 and miR-499 are upregulated and miR-1, miR-133a, and miR-133b are downregulated in extraocular muscles compared to limb muscle, leading to the conclusion that a miRNA network contributes to the extraocular muscle phenotype by regulating the post-transcriptional expression of genes involved in structure, signaling, metabolism, angiogenesis, myogenesis, and regeneration [7]. In addition, miR-145 is necessary for myocardin-induced reprogramming of adult fibroblasts into smooth muscle cells and can induce differentiation of multipotent neural crest stem cells into vascular smooth muscle [10]. Meanwhile, miR-145 and miR-143 cooperatively target a network of transcription factors to promote differentiation and repress proliferation of smooth muscle cells [10]. Both also act as integral components of the regulatory network whereby serum response factor controls cytoskeletal remodeling and phenotypic switching of smooth muscle cells during vascular disease [29]. In our study, several miRNAs were found to be dysregulated according to stress status, some of which have been studied previously. For example, miR-497 has been found to promote ischemic neuronal death by negatively regulating antiapoptotic proteins [30]. Another study found that miR-497 and miR-302b co-regulate ethanol-induced neuronal cell death through the BCL2 protein and cyclin D2 [31]. However, its function in muscle development has not yet been reported. These findings further suggest that miRNAs may play roles in the transcriptional circuits controlling gene expression in skeletal muscle. Notably, the surgical implantation of rumen cannulas imitated a non-fatal form of hardware disease, which occurs when an animal ingests a sharp piece of metal that pierces the rumen or reticulum wall. As expected, the phenotype in this study indicated that the animals undergoing this stress had significantly higher WBSF.
In this research, we identified differentially expressed miRNAs associated with divergent stress status in LD muscle samples from stressed and control groups. The annotation of the predicted target genes further showed that miRNAs may be involved in important pathways regulating target genes, such as lipid metabolism, amino acid metabolism, gene expression and molecular transport. In the future, the predicted miRNA targets need to be validated in vitro, and the expression levels of the corresponding target genes and proteins should be measured, which will help to elucidate how miRNAs regulate gene transcription and protein expression in the variation of beef quality and tenderness. Sample collection and experiment design Seven purebred Angus steers were obtained from Wye Angus farm (Queenstown, MD). After weaning, the steers were acclimated to a pelleted forage diet fed only to meet maintenance needs. At 10 months of age, 4 steers underwent a surgical procedure that involved anesthetization and placement of a rumen catheter; the surgery constituted an acute stress compared with normal growth conditions. Three steers that received no surgery were designated as the control group. At the age of 1 year, the steers were harvested. After harvest, 10 mg of longissimus dorsi (LD) muscle from the 12th to 13th rib of the right side of the carcass was placed in RNAlater solution (Qiagen, Valencia, CA) and stored at −80°C for further analysis. Steaks of the LD from the 12th to 13th rib of the left side of the carcass were obtained, vacuum packed, stored at 4°C for a total of 14 days post harvest, and then frozen at −20°C. Once all steaks were obtained and aged, they were thawed at 4°C, cooked to an internal temperature of 70°C, cooled, cored and then analyzed for WBSF as previously described [32]. After the WBSF data were analyzed by Student's t-test, three extremely tough individuals were designated as the stressed group and three cattle without stress were designated as the control group. Based on these tough and control groups, a total of 6 miRNA microarrays were hybridized and analyzed. All procedures were approved by the University of Maryland Institutional Animal Care and Use Committee (Protocol # R-07-05). RNA extraction and miRNA microarray hybridization Total RNA from the 6 samples was extracted using the miRNeasy Mini Kit (Qiagen) following the manufacturer's instructions. The RNA was quantified with a NanoDrop ND-1000 spectrophotometer (Thermo Scientific, Wilmington, DE) and RNA integrity was determined with a 2100 Bioanalyzer (Agilent Technologies, Santa Clara, CA). Equal aliquots of total RNA from each sample were pooled as a common reference RNA. One μg of total RNA from each sample or from the common reference was labeled with the Hy3™ or Hy5™ fluorescent label, respectively, using the miRCURY™ LNA Array power labeling kit (Exiqon, Denmark) following the instructions. The Hy3™-labeled samples and the Hy5™-labeled reference RNA sample were mixed pair-wise and hybridized to the miRCURY™ LNA array (Version 9.2; Exiqon, Denmark), which contained capture probes targeting all of the miRNAs for all species registered in miRBase (Version 11.0) at the Sanger Institute. One hundred and twenty-six of these probes are bovine-related miRNAs in that miRBase version. Hybridization was performed according to the miRCURY™ LNA array manual on a Tecan HS4800 hybridization station (Tecan, Austria).
After hybridization, the microarray slides were scanned and stored in an ozone-free environment to prevent bleaching of the fluorescent dyes. The miRCURY™ LNA array slides were scanned with the Agilent G2565BA Microarray Scanner System (Agilent) and image analysis was performed with ImaGene 8.0 software (BioDiscovery, Inc., USA). miRNA microarray data analysis Microarray data were analyzed in R using the Linear Models for Microarray Data (Limma) package. For each miRNA, quantified signals within arrays were averaged. Normalization within and between arrays was performed using the global LOWESS (LOcally WEighted Scatterplot Smoothing) regression algorithm. Contrasts were made to compare the stressed and control groups. Differentially expressed miRNAs were selected for further analysis using the statistical criteria of a P value less than 0.05 and an FDR (false discovery rate) less than 0.4. qRT-PCR analysis of miRNA expression Total RNA including miRNA was extracted from the same 6 samples using the miRNeasy Mini Kit (QIAGEN) and RNeasy Mini Kit (QIAGEN) according to the standard protocol. RNA was reverse transcribed and quantified with the miScript Reverse Transcription Kit (QIAGEN), miScript SYBR Green PCR Kit (QIAGEN), and miScript Primer Assays (QIAGEN). In the reverse transcription control, PCR water (Invitrogen) was used in place of the miRNA sample. Briefly, 1 μg of purified RNA was used for reverse transcription and then diluted to 5 volumes. Two μl of the diluted RT product were used for real-time PCR quantification. Two types of controls were applied in the real-time PCR, a reverse transcription control and a blank using PCR water, to ensure that no amplicon was observed in the controls. U6 was used as the normalization control. Data were analyzed using the 2^−ΔΔCT method and Student's t-tests were used to compare miRNA expression levels (SAS version 9.2). Here we validated only the most significant miRNA, bta-miR-497, whose sequence is CAGCAGCACACUGUGGUUUGUA. The mimic of bta-miR-497 was synthesized by Qiagen. Prediction of miRNA targets The target genes of the miRNA were predicted with TargetScanHuman (http://www.targetscan.org/vert_50/). In the "Select a species" menu, cow was chosen, and the names of the significantly differentially expressed miRNAs were entered and submitted. From the output, only the genes with conserved sites were retained as predicted target genes of this miRNA. Data mining and network analysis of significantly differentially expressed miRNAs and predicted target genes Hierarchical clustering of the significantly differentially expressed miRNAs was performed using Cluster 3.0 [33]. The expression data were filtered, adjusted and normalized. Average linkage clustering was performed and visualized using Treeview. The initial information on Gene Ontology [15] functions and the functional relevance of the predicted target genes was obtained from the Gene Ontology Enrichment Analysis Software Toolkit (GOEAST) [34]. The GO analysis included biological process, molecular function and cellular component. Ingenuity Pathway Analysis (IPA, Ingenuity Systems, Redwood City, CA) was used to generate networks and assess statistically relevant biofunctions and canonical pathways in which the predicted target genes are involved. These genes were mapped to corresponding genes in the Ingenuity knowledge database. The biofunctional analysis identified the molecular and cellular functions and physiological system development and function.
Canonical Pathway Analysis identified the most significant pathways in the dataset.
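As a worked illustration of the relative quantification described in the methods, the sketch below computes a 2^−ΔΔCT fold change for bta-miR-497 against the U6 normalizer; the Ct values are invented for illustration only and are not the study's data.

```python
# Sketch of the 2^-ΔΔCT calculation used for the qPCR validation, with U6 as
# the normalizer. All Ct values below are hypothetical.
import numpy as np

ct_mir497_stressed = np.array([22.1, 21.8, 22.4])
ct_u6_stressed     = np.array([18.0, 17.9, 18.2])
ct_mir497_control  = np.array([23.0, 23.2, 22.9])
ct_u6_control      = np.array([18.1, 18.0, 18.1])

dct_stressed = ct_mir497_stressed - ct_u6_stressed   # ΔCt per animal
dct_control  = ct_mir497_control  - ct_u6_control
ddct = dct_stressed.mean() - dct_control.mean()      # ΔΔCt
fold_change = 2.0 ** (-ddct)                         # relative expression
print(f"fold change (stressed vs control) = {fold_change:.2f}")
```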
3,518.2
2012-06-01T00:00:00.000
[ "Agricultural and Food Sciences", "Biology" ]
Optimal Planning of Electrical Appliances of Residential Units in a Smart Home Network Using Cloud Services : One of the important aspects of realizing smart cities is developing smart homes/buildings and, from the energy perspective, designing and implementing an efficient smart home area energy management system (HAEMS) is vital. To be effective, the HAEMS should include various electrical appliances as well as local distributed/renewable energy resources and energy storage systems, with the whole system operating as a microgrid. However, collecting and processing the data associated with these appliances/resources is challenging in terms of the required sensor/communication infrastructure and the computational burden. Thanks to internet-of-things and cloud computing technologies, the physical requirements for handling the data have been provided; however, they demand suitable optimization/management schemes. In this article, a HAEMS is developed using cloud services to increase the accuracy and speed of the data processing. A management protocol is proposed that provides an optimal schedule for the day-ahead operation of the electrical equipment of smart residential homes under welfare indicators. The proposed system comprises three layers: (1) sensors associated with the home appliances and generation/storage units, (2) local fog nodes, and (3) a cloud where the information is processed bilaterally with the HAEMS and the hourly optimal operation of the appliances and generation/storage units is planned. A neural network and a genetic algorithm (GA) are used as part of the HAEMS program: the neural network predicts the amount of workload corresponding to users' requests, while improving the load factor and the economic efficiency is the objective function optimized by the GA. Numerical studies are performed in the MATLAB platform and the results are compared with a conventional method. Introduction Electricity/energy management systems involve a series of related programs used by the operator of the electric grid and its customers to improve the efficiency and performance of power/energy systems [1,2]. In this way, both the electricity supplier and the consumer benefit more [3]. Energy management helps obviate the need to construct new, costly power stations on the production side, and reduces the energy price and related penalties for consumers on the consumption side. A significant portion of the energy produced by distributed and renewable energy resources is consumed locally, which improves the efficiency of electric grids. However, the control/management of the local inverter-interfaced energy resources and consumers requires suitable optimization schemes, which fall into two broad categories: • Artificial intelligence and heuristic methods may reach a locally sub-optimal point due to their local search for problem-solving or their use of expert experience [24]. Fuzzy control methods [25,26], the genetic algorithm (GA) [27], and particle swarm optimization (PSO) [28,29] are examples of this category. The performance of these methods depends on user experience and is not robust to system changes and uncertainty. • Classical methods, on the other hand, are more complex but offer optimal and reliable solutions. For example, the integer linear programming method [30] has been used to optimize the energy production and consumption of distributed generation sources to reduce overall costs.
Further, smart apartments equipped with wind- and solar-type generation units, storage batteries, and electric cars can be connected to the network; however, the important factor of user welfare and comfort has not been considered [31]. A general model has also been used for building energy management that can optimize the trade-off between user convenience and minimizing energy costs. In this paper, the increasing use of grid-connected hybrid vehicles and their positive effects, such as avoiding fossil fuel consumption and using the energy stored in the vehicle to meet home consumption loads, are considered. It is noteworthy that charging the batteries of a significant number of vehicles is a big risk for the smart grid [32]: simultaneous charging may cause a sudden overload of the distribution grid, especially if it coincides with the peak consumption time, and this concurrence can cause congestion of the distribution grid [32]. Thus, with proper planning, the destructive effects of electric vehicles can be reduced considerably [33]. Optimal operation of home loads with electric vehicles and energy storage devices has been performed in response to prices and time-of-use tariffs; energy storage and electric vehicles can interact and exchange energy between the smart home and the distribution network. However, that study was conducted without considering distributed generation sources [34]. The primary purpose of this research is to provide an intelligent service for controlling the working schedule of home appliances in cloud computing to minimize the cost of electricity. Although this seems obvious and valuable even without advanced technologies such as the cloud platform, the internet-of-things, and wireless sensor networks, implementing such a service is efficient only with these modern technologies. The main reason is that scheduling the activity of electrical devices is not practical without their automatic operation, since it would otherwise require constant human intervention. Today, however, with the introduction of smart washing machines, smart dishwashers, and automatic vacuum cleaners, many tasks can be performed automatically with no human intervention. Second, with the internet-of-things, remote access to home appliances is possible, and their control is provided by central applications; a service in the cloud can implement this central control and management [35]. Monitoring the environment and specific tasks, such as control and monitoring of children, sick people at home, aged care, and home temperature control, requires environmental sensors deployed as wireless sensor networks. A dynamic resource allocation mechanism in the supercomputer has been implemented in [36]. In this work, an intelligent mechanism for dynamic allocation and management in the cloud is proposed to manage/allocate cloud services for the energy management system, in which the daily demand for allocating virtual machines to each customer is estimated from valid data.
Thus, implementation of the proposed service, given that it is realized wirelessly using sensor networks and internet-of-things platforms as the essential technologies, depends on the specialized allocation of resources in the supercomputer and on scheduling algorithms. Further, the following factors are considered in the program for optimal operation of the electrical equipment of smart residential homes under welfare indicators: • Actual load profiles are used, whereas most articles use the average consumption of appliances; the cloud service provides the computational/storage capacity to deal with the large data volume; • Local renewable energy resources, such as solar-wind hybrid systems, with their generation profiles, are considered in the management program as part of the smart home network; • Battery energy storage systems are included in the program and their optimal operation, including optimal charging and discharging at different tariffs, is determined; • Economics and load factor improvement are considered as the objective functions of the problem. The System Description and Materials The system under study, consisting of a smart home with electric appliances, is shown in Figure 1a. The proposed method, based on a three-layer HAEMS, as shown in Figure 1b, includes (1) the access layer, in which the sensors and actuators are located; the terminals are responsible for collecting data from the sensors of the intelligent building system and appliances, the collected data are sent to the next layer (fog layer) via Wi-Fi, and any equipment that is part of the building can be managed through the smart terminal (socket) in the same part; (2) the fog layer, in which the servers for computing and data storage are located; this layer manages its own batch of fog-level data and avoids malfunctioning, any input data can be stored in the data centre instantly, and, using the received data, a data package is created to quickly issue the necessary decisions and commands to the target equipment; (3) the cloud layer of data centres, controlled and monitored by the HAEMS. To optimize the HAEMS process of the building system, the data packet is sent up from the lower layers, which provides more data for decision-making. The nodes of the third layer have a greater data-processing capability than the second layer and therefore require more data and a connection to the cloud in our proposed model, so a node in the third layer can be treated as a unit independent of a single intelligent building. The third layer is the cloud, where the data received from the fog layers are analyzed by the HAEMS and scheduled by the GA and the embedded neural networks. After planning, smart commands are sent down to the first layer to optimize the status of the monitored points.
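A minimal sketch of the three-layer data path described above (sensor/actuator layer, fog layer, cloud layer) is given below; the paper does not specify a message format, so all class and field names here are illustrative assumptions.

```python
# Schematic sensor -> fog -> cloud flow; names and fields are hypothetical.
from dataclasses import dataclass

@dataclass
class SensorReading:
    appliance: str
    slot: int          # 15-min slot index, 0..95
    power_kw: float

class FogNode:
    """Buffers readings locally and forwards batches to the cloud layer."""
    def __init__(self):
        self.buffer: list[SensorReading] = []
    def ingest(self, reading: SensorReading) -> None:
        self.buffer.append(reading)
    def flush(self) -> list[SensorReading]:
        batch, self.buffer = self.buffer, []
        return batch

class CloudHAEMS:
    """Aggregates fog batches; the GA/neural-network scheduling runs here."""
    def __init__(self):
        self.history: list[SensorReading] = []
    def receive(self, batch: list[SensorReading]) -> None:
        self.history.extend(batch)

fog, cloud = FogNode(), CloudHAEMS()
fog.ingest(SensorReading("washing_machine", 40, 0.5))
cloud.receive(fog.flush())
```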
Mean Squared Normalized Error (MSNE) The normalized mean squared error is commonly used to evaluate predictions of continuous values. Squaring the error penalizes larger errors more heavily, so the difference between the simulated value and the actual value, raised to the power of two, reflects the magnitude of the error. Hence, performance is evaluated by the magnitude of the error rather than its direction. Furthermore, by normalizing the mean squared error, the evaluation is generalized, and the performance of the algorithm is assessed based on the accuracy of the proposed method rather than on the particular data used.
Calculating the normalized mean squared error yields the MSNE, where T_i is the actual output value and Y_i is the simulated value for the i-th member. Linear Regression Between the Predicted Value and the Actual Value The complementary measure to the accuracy metric MSNE is the linear regression of the simulated value against the actual output value of the approximating algorithm. In fact, in addition to accuracy, another measure, reliability, is needed when approximating a continuous value. Reliability is calculated by a criterion called the regression of correlation coefficients. The degree of reliability is a number in the range (−1, 1); it is an indicator of the degree of linear correlation between the actual value and the estimated value of a parameter. If R = 0, there is no linear relationship between the two values, but if R = 1 or R = −1, there is a perfect positive or negative linear relationship. The optimum value for R is one, which indicates excellent reliability of the model. The reliability criterion is calculated through the linear regression of correlation coefficients, where R is the linear regression measuring the performance of the algorithm, T_i is the actual output value, Y_i is the simulated value for the i-th member, and T and Y denote the corresponding mean values. The MSNE and R criteria jointly demonstrate the performance of supervised learning algorithms, and each alone may be misleading. Hence, if either is used independently to describe the performance of the machine learning tool (here, the predictive neural network), the description is incomplete and not sufficient to ensure the proper working of the method. Thus, both criteria are used together as complements, forming the performance indicator of the workload-prediction neural network. According to [24], the performance of dynamic resource allocation can be examined in terms of the request rejection rate and the number of wasted resources. The Cloud Rejection Rate The number of rejected requests is the primary measure of the efficiency of the dynamic resource allocation mechanism in the cloud. This criterion is calculated as the number of rejected requests relative to the total number of requests submitted to the cloud, Rj_t = R_t / U_t, where R_t represents the number of rejected requests and U_t represents all requests received at time t. The Number of Wasted Resources in the Cloud The number of wasted resources is the ratio of the remaining empty capacity of the servers to the total capacity of the cloud servers, W_R = Σ_i (Cap_i − Ld_i) / Σ_i Cap_i, where N is the total number of servers, Ld_i is the current load on physical server i, and Cap_i is the total capacity of physical server i. The neural network for workload prediction first needs to be trained, and then its performance is measured by running it on a test data set. The data set, built as described in the previous section of this paper, contains a percentage of noise to make the simulated data more realistic. Averaging over several runs results in a better estimate of neural network performance: the complete execution of the neural network on the Moore database is evaluated repeatedly so that the randomness of the noise is averaged out and the overall performance of the neural network is represented accurately [33].
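The following sketch collects the four evaluation criteria above in one place. The exact MSNE normalization and the regression formula were lost in extraction, so standard forms are assumed here: MSNE as the mean squared error normalized by the variance of the actual values, and R as the Pearson correlation coefficient, both of which match the verbal descriptions; Rj_t and W_R follow the ratios stated in the text.

```python
# Hedged implementations of the four metrics; MSNE normalization is assumed.
import numpy as np

def msne(t: np.ndarray, y: np.ndarray) -> float:
    """Mean squared error of simulated y vs actual t, normalized by var(t) (assumed form)."""
    return float(np.mean((t - y) ** 2) / np.var(t))

def regression_r(t: np.ndarray, y: np.ndarray) -> float:
    """Linear correlation between actual and simulated values, in [-1, 1]."""
    return float(np.corrcoef(t, y)[0, 1])

def rejection_rate(rejected: int, total: int) -> float:
    """Rj_t: rejected requests over all requests received at time t."""
    return rejected / total

def wasted_resources(load: np.ndarray, cap: np.ndarray) -> float:
    """W_R: remaining empty server capacity over total cloud capacity."""
    return float((cap - load).sum() / cap.sum())
```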
Thus, in this study, the neural network is run 20 times for different amounts of noise, and its efficiency in terms of best and average performance is shown in Table 1. I: Objective function All intelligent electrical appliances are controlled and programmed by the central control of the smart home network. The objective function combines SP, the cost of operating the smart home, and LF, the load factor. SP is defined as the cost of purchasing energy from the upstream grid, C_EP, minus the profit from the sale of energy to the upstream grid, C_BS, and the profit from participation in the valley-filling program, C_DM, i.e., SP = C_EP − C_BS − C_DM. Increasing the load factor can reduce the peak consumption or increase the average consumption by filling the valleys of the total load profile. Charging and Discharging the Battery Charging and discharging are optimal when charging happens during off-peak hours and discharging during expensive/peak hours; for the battery, the power at each time and the charge level are defined such that P_ch with a positive (negative) sign indicates the charging (discharging) mode of the battery. II: Problem constraints Load clipping constraint: upper and lower bounds on load clipping must be observed at all times. Here, ΔP^clip_{t,n} is the amount of load n that is curtailed at moment t, and U^clip_n is a binary variable that determines whether load n participates in the load-clipping strategy, equal to one if it participates and zero otherwise. Complete load transfer constraints: these express the complete transfer of a load from one time to another in order to avoid the activity of electrical appliances at peak load. In this strategy, it is assumed that the shape of the load does not change; it is only shifted in time. Load participation in the load transfer strategy is shown in Equations (10) and (11), where the parameter ΔP^trans_{t,n} is the difference in load n before and after the transfer at moment t, and U^trans_n indicates the participation of load n in the transfer strategy, with a value of zero or one: it is one when the load can be turned off and moved to another hour, and zero when the load is not transferable (for example, central air conditioning). The parameter y_{n,Δt} indicates whether load n has been shifted by Δt or not, and also takes the value zero or one. ΔP^trans_{t,n} = U^trans_n · P_{t,n} − Σ_Δt y_{n,Δt} · P_{t+Δt,n} (10) Σ_Δt y_{n,Δt} = U^trans_n (11) Charging and discharging constraint: this constraint for the ESS, according to the minimum and maximum charge rates, is expressed through relationships in which P_ch,max is the maximum battery charge rate in kW, P_disch,max is the maximum battery discharge rate in kW, EV_BC is the battery capacity in kWh, and EV_SOC,min is the minimum allowed battery charge in kWh.
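To make the load-clipping and complete-load-transfer constraints concrete, the sketch below moves a transferable load intact by a time offset, mirroring Eqs. (10)-(11): shifting preserves the load shape and total energy, while clipping curtails a bounded fraction. Slot counts and profiles are illustrative.

```python
# Illustrative load shift (Eqs. (10)-(11)) and load clipping on a 96-slot day.
import numpy as np

SLOTS = 96  # 15-min slots per day

def transfer_load(profile: np.ndarray, shift: int) -> np.ndarray:
    """Move a load profile by `shift` slots without changing its shape."""
    return np.roll(profile, shift)

def clip_load(profile: np.ndarray, max_fraction: float) -> np.ndarray:
    """Curtail a clippable load by at most `max_fraction` in every slot."""
    return profile * (1.0 - max_fraction)

washing_machine = np.zeros(SLOTS)
washing_machine[20:26] = 0.5                    # original operating window, kW
shifted = transfer_load(washing_machine, 40)    # complete transfer to off-peak slots
assert shifted.sum() == washing_machine.sum()   # shape/energy preserved, per Eq. (10)
clipped = clip_load(washing_machine, 0.30)      # 30% max reduction, as for the washer
```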
Problem Solving Algorithm In formulating the electrical task scheduling with the genetic algorithm, each sub-solution (the moment an electrical task starts) is defined as an individual within a set of sub-solutions called a chromosome. The main idea behind the genetic algorithm is that these chromosomes should converge, over several generations of change, on the sub-solutions that provide the best overall optimum. In fact, after a number of iterations, the algorithm's output should be the moments at which the electrical tasks are performed so that the cost of power consumption is minimized. Each run of the algorithm produces a new generation of sub-solutions. In each generation, chromosomes are evaluated and allowed to survive and reproduce in proportion to their fitness; new generations are produced with crossover and mutation operators, and the top parents are selected based on a fitness function. In the genetic algorithm, a group of points in the search space is selected randomly, a sequence of sub-solutions is assigned to each point, and the genetic operators are applied to them. The resulting sequences are then decoded to find new points in the search space. Finally, based on the objective function value of each, the probability of its participation in the next step is determined; here, the objective function is the empty capacity relative to the total capacity. The proposed protocol is implemented through the following steps (Figure 2): 1. The data of each device are collected based on its characteristics, i.e., the type of load and its basic operating hours; 2. All the equipment is classified, and the desired level of operation for each appliance is entered from the customer's or residents' point of view; 3. All the 24-h data of the renewable hybrid system are retrieved, and the amount of stored power is collected; 4. The amount of power requested from the network is determined; 5. The optimization problem is solved using the genetic algorithm, and optimal energy management and optimal timing for the operation of the smart home equipment are obtained; 6. The HAEMS protocol is performed and, in the next step, according to the parameters trained in the artificial neural network, the values of MSNE, Rj_t, R, and W_R are checked to be in the acceptable range; if the values are out of range, the determined power of the main grid and the amount of power requested are increased by 5%, and this process continues until the evaluation parameters of the proposed protocol converge and are minimized; 7. The data of each device are processed in the cloud and exchanged bilaterally with the HAEMS protocol; commands are then sent from the second layer to the first layer, the physical level of the equipment in the smart home, where the appliances are controlled and operated optimally; 8. The 24-h time limit is checked, and the program is terminated. A compact sketch of this GA-based scheduling loop is given below.
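In this simplified version, each gene is the start slot of one schedulable task, the fitness combines the purchase-cost part of SP with the load factor, and reproduction uses elitist selection with mutation only (crossover omitted for brevity); tariffs, task data, and GA settings are all illustrative assumptions.

```python
# Compact GA sketch for scheduling two transferable tasks over 96 slots.
import numpy as np

rng = np.random.default_rng(1)
SLOTS = 96
tasks = [("dishwasher", 6, 1.0), ("washer_dryer", 8, 0.5)]  # (name, duration, kW)
slot = np.arange(SLOTS)
tariff = np.where((slot >= 68) & (slot < 84), 0.30, 0.10)   # peak vs off-peak price per slot

def fitness(chrom: np.ndarray) -> float:
    load = np.zeros(SLOTS)
    for (_, dur, kw), start in zip(tasks, chrom):
        load[start:start + dur] += kw
    cost = float((load * tariff).sum())           # purchase-cost part of SP
    lf = load.mean() / max(load.max(), 1e-9)      # load factor, higher is better
    return -cost + lf                             # maximized by the GA

pop = np.array([[rng.integers(0, SLOTS - t[1]) for t in tasks] for _ in range(30)])
for _ in range(100):
    scores = np.array([fitness(c) for c in pop])
    parents = pop[np.argsort(scores)[-10:]]                  # elitist selection
    children = parents[rng.integers(0, 10, size=20), :].copy()
    genes = rng.integers(0, len(tasks), size=20)
    for i, g in enumerate(genes):                            # mutation: re-draw one start slot
        children[i, g] = rng.integers(0, SLOTS - tasks[g][1])
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(c) for c in pop])]
print("best start slots:", dict(zip([t[0] for t in tasks], best)))
```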
Classification of Household Electrical Appliances Household appliances are divided into responsive and non-responsive loads according to their capabilities in the load response program. Responsive loads, such as washing machines and water heaters, can transfer their consumption from one time to another in response to the received tariff. Devices such as televisions and personal computers, which are usually used according to the customer's wishes and without regard to tariffs, are called non-responsive devices. Although the time and amount of consumption of these devices cannot be controlled, several time intervals can be suggested to their owners as operating times; here, it is assumed that the subscriber turns on the device at one of the recommended times. Responsive appliances are of two types: (1) appliances for which only the on/off status is determined by the program, such as washing machines. These devices draw their rated consumption in each interval when they are on, and the subscriber selects the allowable operating time. For some devices the operating intervals must be consecutive and for others they can be non-consecutive: a washing machine, for example, needs a contiguous working period to wash clothes properly, whereas a clothes dryer can do its job in non-consecutive intervals. (2) The other category comprises devices whose consumption level in each allowable interval is determined by the program. These devices have an acceptable range of energy consumption in each interval, and the customer can select the desired consumption level of the device in each interval. To ensure the customer's comfort, the total deviation from this desired level can be limited to a certain amount. The electric cooling/heating system belongs to this category. Energy Storage Systems A modern family in a smart microgrid (SMG) is expected to be equipped with some storage/production devices, for example energy storage systems such as batteries or plug-in hybrid electric vehicles (PHEVs). To keep returns high, the battery charge/discharge power and state of charge (SOC) should be limited to a specific range as follows: Soc_min ≤ Soc(h) ≤ Soc_max (16) where P_ch and P_dch are the maximum charge and discharge powers of the battery, Soc_min and Soc_max are the lower and upper limits of the battery SOC, and η_ch and η_dch are the battery charge and discharge efficiencies. u_Batt is a binary variable that shows the battery status at hour h ("1" = charge, "0" = discharge). Under the above limitations, the SOC update function follows, where E_Batt is the battery capacity in kWh.
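Since the SOC update equation itself did not survive extraction, the sketch below assumes the standard efficiency-weighted form consistent with the variables defined above (η_ch, η_dch, u_Batt, E_Batt); the numeric values are taken from Table 2.

```python
# SOC bookkeeping sketch; the efficiency-weighted update form is assumed.
def soc_update(soc, p_kw, u_charge, e_batt_kwh, eta_ch=0.9, eta_dch=0.9, dt_h=0.25):
    """Advance battery state of charge by one 15-min slot.

    u_charge = 1 -> charging at p_kw; u_charge = 0 -> discharging at p_kw.
    """
    if u_charge:
        soc += eta_ch * p_kw * dt_h / e_batt_kwh
    else:
        soc -= p_kw * dt_h / (eta_dch * e_batt_kwh)
    return min(max(soc, 0.0), 1.0)  # clamp into [Soc_min, Soc_max] = [0, 1] here

soc = 0.5
soc = soc_update(soc, p_kw=0.5, u_charge=1, e_batt_kwh=2.0)  # rates from Table 2
print(f"SOC after one charging slot: {soc:.3f}")
```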
Although a PHEV is essentially the same as a battery, a few additional limitations apply: a signal (such as a cut-off signal) indicates that the PHEV battery can only be charged/discharged when the vehicle is at home, and the hourly minimum PHEV battery charge Soc_min must also be satisfied. Scheduling tasks and the residential load model: residential loads are generally divided into two categories: (1) schedulable loads (removable and interruptible tasks); (2) fixed loads. While loads such as refrigerators and stoves are considered fixed, space heating and cooling, vacuum cleaners, washing machines, and clothes dryers are examples of schedulable tasks; they consume most of a household's electricity and behave differently in response to changes in electricity prices over time [24]. Modeling the Production Capacity of the Wind-Solar Hybrid System The power generation regimes of the wind turbine and the solar system of the hybrid system are given separately in Figure 3. In this section, the first interval corresponds to 00:00 to 00:15 in the morning. Functional Range One day is divided into 96 15-min intervals. The first interval starts at 6:00 a.m., and the last interval ends at 5:00 a.m. Loads and Their Profiles Dishwasher: has three primary performance cycles and is considered a transferable load (Figure 4). Washing machine: works through washing, rinsing, and then drying; this load is also transferable (Figure 5). Refrigerator (6.15 cu ft) with freezer: the refrigerator is a non-transferable, clippable appliance and operates 24 h a day (Figure 6). Central air conditioning: the use of this device depends on the weather and the ambient temperature and is non-transferable (Figures 7 and 8). Hybrid Micro-Grid The proposed objective function and the constraints considered for the HAEMS, including the wind/solar micro-grid and the energy storage, have been solved by the genetic algorithm for the discussed input loads. The price of electricity at different tariffs is shown in Figure 9.
The parameters and numerical values used for solving the problem are shown in Table 2: installed wind turbine capacity, 1 kW; installed photovoltaic system capacity, 1 kW; minimum battery charge, 0.2 kWh; charging rate per 15 min, 0.5 kW; charger efficiency, 0.9 per unit; battery capacity, 2 kWh; cost of discharging or selling energy to the grid, 1.03 × daily price; profit from participation in consumption reduction, 0.04 × daily price. All loads in the HAEMS and the peak, medium, and low load times in the network base state and initial assumption are shown in Figure 10. Optimization, charging and discharging of storage, and load shift and load clipping operations on the different loads have been performed according to the information given in Table 3. According to Table 3, the central air conditioning is on in all 96 15-min time slots, i.e., the whole day and night, and cannot be shifted; still, by adjusting the temperature, its consumption can be reduced, or increased by applying a lower set temperature, so load reduction is possible. The refrigerator is on all day, and neither shifting nor cutting part of its load is possible. The dishwasher allows both load shift and load clipping according to its settings: this equipment can shift loads within slots 67 to 96, and load clipping of up to 2.0 is considered. For the washing machine and dryer, load transfer within slots 17 to 96 is considered, and the maximum load reduction is taken as 30% according to the settings of this machine. In the following, the results obtained under the proposed simulation conditions are presented and discussed. Central Air Conditioning (AC) After optimization, the AC power consumption profile was obtained as follows (Figure 11).
Figure 11 compares the AC consumption profile before and after application of the proposed method. Dishwasher: the profiles before and after optimization are compared; the result is shown in Figure 12. Washing machine: the power consumption profile of the washing machine and dryer before and after the optimization is shown in Figure 13. Electric Oven The total load, i.e., the sum of the HAEMS loads, is shown in Figures 14 and 15; the comparison profiles in Figures 16 and 17 demonstrate the efficiency of the method, with a flatter load profile and a reduced peak load. ESS The ESS is in charging mode during off-peak hours and is scheduled in discharge mode during peak hours. The maximum and minimum charge limits are 2 and 0.2 kWh. Figure 18 shows the storage charge level; it is clear that, at the end of the day and at low load rates, the charging-mode strategy is planned by the HAEMS. Power generation and consumption in the HAEMS smart home network are not equal; Figure 19 shows this difference. Figure 20 shows the amount of power received by the smart home network from the main grid.
As is clear, the proposed method, with its optimal day-ahead scheduling, offers high speed and accuracy and minimizes the amount of power required from the main grid, resulting in a 45% reduction in the power purchased from the main grid. Figure 21 shows the total cost per day with and without the proposed HAEMS-based optimization method; the cost of electricity is significantly reduced, and customers save about $2.86 a day.
Figure 21. Total utility cost of the smart home before/after using the proposed HAEMS. Conclusions In this paper, the HAEMS protocol was presented using cloud computing. The data of home appliances were analyzed using cloud computing and exchanged bilaterally through the HAEMS protocol. An optimal day-ahead schedule was produced for the operation of the electrical equipment of smart residential houses under welfare indicators. The efficiency of the neural network was evaluated by averaging several complete runs of the neural network on the Moore dataset, and the welfare indicators MSNE, Rj_t, R, and W_R were evaluated. In addition to satisfying the welfare indicators, the proposed protocol, with high accuracy, speed, and proper convergence, minimized the amount of power requested from the main grid, resulting in a 45% reduction in the power purchased from the grid. Comparison of the total cost per day with and without the proposed HAEMS-based optimization showed that electricity costs were significantly reduced; with this method, customers save about $2.86 a day. The proposed method was implemented with the GA.
9,961
2021-09-16T00:00:00.000
[ "Engineering", "Environmental Science", "Computer Science" ]
Effects of Computer-Aided Manufacturing Technology on Precision of Clinical Metal-Free Restorations Purpose. The purpose of this study was to investigate the marginal fit of metal-free crowns made by three different computer-aided design/computer-aided manufacturing (CAD/CAM) systems. Materials and Methods. The maxillary left first premolar of a dentiform was prepared for all-ceramic crown restoration. Thirty all-ceramic premolar crowns were made, ten each manufactured by the Lava system, Cercon, and Cerec. Ten metal ceramic gold (MCG) crowns served as control. The marginal gap of each sample was measured under a stereoscopic microscope at 75x magnification after cementation. One-way ANOVA and Duncan's post hoc test were used for data analysis at the significance level of 0.05. Results. The mean (standard deviation) marginal gaps were 70.5 (34.4) μm for the MCG crowns, 87.2 (22.8) μm for Lava, 58.5 (17.6) μm for Cercon, and 72.3 (30.8) μm for Cerec. There were no significant differences in the marginal fit among the groups except that the Cercon crowns had significantly smaller marginal gaps than the Lava crowns (P < 0.001). Conclusions. Within the limitations of this study, all the metal-free restorations made by the digital CAD/CAM systems had clinically acceptable marginal accuracy. Introduction With increasing demand for aesthetics, many studies on zirconia, the most representative material for metal-free restoration in the field of restorative dentistry, have recently been performed owing to its acceptable aesthetics and high strength, comparable with that of a metal ceramic crown [1][2][3][4][5][6][7][8]. Yttria-stabilized tetragonal zirconia polycrystal is provided in block form to secure maximum strength [6,7]. A new, precise mechanical subtractive process has been introduced in place of the previous additive method of waxing, investing, and casting to fabricate a prosthodontic shape from the block. The computer-aided design/computer-aided manufacturing (CAD/CAM) system has been further developed in dentistry over the last 20 years to handle very precise data acquisition, complex restoration design, complete task processing, and high-end cutting systems [9]. One of the most important elements in evaluating a fixed prosthodontic device is marginal accuracy. Every step of the prosthodontic restoration process, from abutment preparation to cementation, affects the marginal fit of the restoration [10]. Unlike the traditional analogue methods, the CAD/CAM approach depends on the precision of the system itself, including the accurate digital conversion of acquired information and the calibration of the digitized data according to the materials used in CAM. Therefore, for the clinical application of CAD/CAM to prosthodontic restoration, it is important to understand both the differences between the CAD/CAM systems and the accuracy of the resulting crowns. This study aimed to investigate the marginal fit of zirconia crowns made by widely used CAD/CAM systems: Lava (3M ESPE, Seefeld, Germany), Cercon (DeguDent, Hanau, Germany), and Cerec (Sirona Dental Systems GmbH, Bensheim, Germany). This study also compared the marginal fit of the zirconia crowns with that of a metal ceramic gold (MCG) crown, one of the restoration forms with the longest history of clinical use. Materials and Methods The maxillary left first premolar (#24) of the dentiform (Columbia Dentoform Corp., New York) was prepared to form an abutment tooth.
Two millimeters of the occlusal surface and 1.0-1.4 mm of the lateral side were reduced. The completed convergence angles of the abutment were about 8-10° both mesiodistally and buccolingually. A 1 mm heavy chamfer margin was assigned around the entire cervical aspect (Figure 1). After the abutment preparation, the resin tooth was invested in plaster, and an impression was taken using the additional silicone impression products of putty and light body (Exafine, GC Co., Tokyo, Japan). Forty original resin models (Exakto-Form, Bredent, Senden, Germany) were manufactured from the silicone impression. These resin models were subsequently used for the measurements of the marginal openings after the final restorations were cemented to them. The models were divided into 4 groups of 10 models each. The Lava, Cercon, and Cerec systems were used to fabricate final restorations. Ten single MCG premolar crowns, made by the conventional casting method, served as control. The all-ceramic crowns were fabricated according to the manufacturers' recommendations for the systems evaluated in this study. The cement gap was set to 60 μm in all cases. The working dies for the MCG, Lava, and Cercon crowns were produced from high-strength dental stone (GC Fujirock EP, GC Europe N.V., Leuven, Belgium) after taking impressions of the original resin models with the additional silicone impression materials of putty and light body (Exafine). The virtual working dies for the Cerec crowns were produced by a direct scanning method. For the production of the MCG crowns, a wax pattern was produced by the conventional method on the high-strength dental stone model. The die spacer (Pico-Fit Die Spacer Varnish (silver), Renfert USA, IL, USA) was coated 3 times on the high-strength dental stone. Given that one coat of die spacer creates a layer thickness of 14-20 μm according to the manufacturer's technical data, this practice allowed for a cement space of approximately 42-60 μm. The gold (Bio Herador SG, Heraeus, Germany) coping was produced by the usual investing and casting procedures and then veneered with porcelain. For the Lava crowns, the high-strength dental stone dies were scanned with a scanner (Lava Scan Scanner) and zirconia copings were designed in a CAD system (Lava CAD), which set the cement space to 60 μm. The copings were produced by milling zirconia blocks (Lava zirconia blocks) with a CAM system (Lava Form Milling Unit), with the coping thickness set at 0.5 mm. The final crowns were completed by veneering porcelain (Lava Ceram) on the copings after sintering. In the manufacturing of the Cercon crowns, the working dies were likewise scanned using a scanner (Cercon EYE) and zirconia frameworks were designed using CAD software (Cercon ART). Zirconia blocks (Cercon zirconia blocks) were then milled using a CAM system (Cercon BRAIN) to make frameworks 0.5 mm thick. The milled zirconia frameworks were sintered and then veneered with a heat-pressed material (IPS e.max Ceram, Ivoclar Vivadent AG, Benderer Str. 2, Liechtenstein) and technique to manufacture the final crowns.
For the fabrication of the Cerec crowns, the original resin models were directly scanned (CEREC Bluecam) to make the software working dies; the final crowns were designed using CAD software (CEREC 3D; Ivoclar Vivadent AG, Schaan, Liechtenstein), milled by a CAM system (CEREC inLab MC XL milling machine), and sintered to make the final restorations with no veneering procedure. The procedures, instruments, and materials used to make the specimens are summarized in Figure 2. The MCG, Lava, Cercon, and Cerec crowns were each cemented to their own resin models using a resin cement (RelyX Unicem Clicker, 3M ESPE, Germany). During the cement setting time, a 50 N load was applied with finger pressure by a person trained to calibrate the 50 N load with a laboratory scale. Excess cement was cleaned away with cotton pellets. The marginal fit of each sample was measured using a stereoscopic microscope (Nikon DS-Fi 1, Nikon, Japan) at 75x magnification. The marginal gap was defined in this study as the distance, on the microscope, from a point on the tooth margin to the intersection of the restoration margin with the line perpendicular to the tangent of the tooth margin at that point. For each crown, the gap was measured at one point each on the labial, lingual, mesial, and distal surfaces, and the marginal gap of the crown was calculated as the mean of the four measured gaps. The mean and standard deviation (SD) were calculated for the measured marginal gaps of each group. One-way ANOVA and a post hoc test, Duncan's test, were used to find any statistically significant differences among the groups at the significance level of 0.05. Results The mean (SD) marginal gaps of the MCG, Lava, Cercon, and Cerec crowns were 70.5 (34.4), 87.2 (22.8), 58.5 (17.6), and 72.3 (30.8) μm, respectively, as summarized in Table 1. One-way ANOVA and Duncan's post hoc test showed that there were no significant differences in the marginal fit among the groups except that the Cercon crowns had significantly smaller marginal gaps than the Lava crowns (P < 0.001). Discussion There are many and various criteria for the clinically acceptable marginal fit of a prosthodontic restoration [11][12][13][14][15]. ADA specification number 8 defined the range as 25-40 μm, and Ostlund stated that the value should not exceed 50 μm [11]. Unfortunately, those values appear to be very difficult to obtain clinically. Christensen reported that a maximum marginal distance of 119 μm was allowed by dentists for the proximal surface of gold inlays through observations using eyes, probes, and radiographic images, and stated an approximate 39 μm maximum marginal distance for the occlusal surface [12]. McLean and von Fraunhofer, in a study observing 1,000 dental restorations over more than 5 years, stated that a marginal gap of about 100 μm does not cause any clinical problems, concluding that the clinically allowable maximum marginal discrepancy was 120 μm [13]. Another previous study considered a marginal gap of up to 100 μm clinically acceptable, while still another extended the clinically acceptable marginal gap to 200 μm [14,15]. There is still controversy over the clinically acceptable standard of marginal fit. However, most authors appear to agree that the marginal discrepancy should be less than 200 μm [16][17][18][19][20][21][22][23]. The measurement values acquired in the present study were in the clinically acceptable range for all the test groups.
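As an aside, the group comparison reported above is easy to reproduce in outline. The sketch below is a hypothetical illustration: it simulates four groups from the reported means and SDs (it is not the study's raw data) and runs a one-way ANOVA; since Duncan's post hoc test is not available in SciPy, Tukey's HSD is used as a stand-in for the pairwise comparisons.

```python
# Hypothetical re-analysis sketch: simulated data drawn from the reported
# means/SDs (n = 10 per group), NOT the study's raw measurements.
# Duncan's test is not in SciPy, so Tukey's HSD stands in for the post hoc step.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
groups = {
    "MCG":    rng.normal(70.5, 34.4, 10),
    "Lava":   rng.normal(87.2, 22.8, 10),
    "Cercon": rng.normal(58.5, 17.6, 10),
    "Cerec":  rng.normal(72.3, 30.8, 10),
}

f_stat, p_value = stats.f_oneway(*groups.values())
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

# Pairwise post hoc comparisons at alpha = 0.05
res = stats.tukey_hsd(*groups.values())
names = list(groups)
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        print(f"{names[i]} vs {names[j]}: p = {res.pvalue[i, j]:.4f}")
```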
Most of the currently used CAD/CAM systems were found to show appropriate clinical marginal fit, exhibiting mean marginal discrepancy values of less than 200 μm. Bindl and Mörmann found no significant difference in the marginal fit of crowns when comparing the CAD/CAM all-ceramic crowns of Cerec inLab, DCS, Decim, and Procera, the slip-cast In-Ceram zirconia crown, and the heat-pressed Empress 2 crown, reporting marginal openings in the range of about 20-70 μm [24]. The marginal fit of 4-unit fixed dental prostheses made by four CAD/CAM systems (Cercon, Cerec inLab, Digident, Everest) was evaluated to be 57.9-206.3 μm [25]. Another previous study investigating the marginal accuracy of 3-unit fixed dental prostheses reported mean marginal gaps of 77-92 μm for the Cerec inLab, Digident, and Lava systems [26]. The previous results were similar to those of this study, although there were some numerical differences according to the experimental conditions, including the restored teeth (anterior, posterior), the restoration types (single, multiple), and the fabrication procedures. The Cercon premolar crowns exhibited significantly superior marginal fit to the Lava crowns in this study. However, these statistics could not be interpreted as showing the superiority of one system's precision over the other, because there were no significant differences either between the Cercon and control (MCG) groups or between the Lava and control groups. Differences in the veneering techniques, rather than in the CAD/CAM systems themselves, could explain some of the results observed in this study. Previous studies have shown differences in accuracy between restorations with and without porcelain build-up procedures, and significant effects of the veneering method on restoration precision [27,28]. This investigation, however, did not treat the CAD/CAM system and the veneering technique as two independent variables, which was one of its limitations. Further studies are required to evaluate and compare the effects of these two factors, the systems and the veneering methods, on the marginal accuracy of prosthodontic restorations. In addition, this study indicated that the accuracy of a dental restoration fabricated by digital technology may be clinically acceptable when compared with that of the conventional analogue method. However, various approaches were found among the CAD/CAM systems: pure digital techniques and digital-analogue combinations, as shown in Figure 2. Further studies are needed to compare each step of the digital procedures with its analogue counterpart. Conclusions Computer-aided digital technologies can manufacture metal-free restorations that are clinically acceptable in precision. Considering the results of this study, the marginal gaps of the digitally fabricated metal-free crowns were similar to those of the conventional metal ceramic gold crowns, and all the values measured in this study were within the generally agreed clinically acceptable standard of marginal fit.
2,847.6
2015-10-18T00:00:00.000
[ "Medicine", "Materials Science" ]
Laser Induced C60 Cage Opening Studied by Semiclassical Dynamics Simulation Laser induced opening of the C60 cage is studied by a semiclassical electron-radiation-ion dynamics technique. The simulation results indicate that the C60 cage is abruptly opened immediately after laser excitation. The opening of the C60 cage induces a quick increase in kinetic energy and a sharp decrease in electronic energy, suggesting that the breaking of the C60 cage efficiently heats up the cluster and enhances the thermal fragmentation of C60 fullerene. Introduction Fullerene (C60) has icosahedral symmetry. It has a closed cage structure consisting of 32 faces, of which 20 are hexagons and 12 are pentagons. Each carbon atom in C60 is bonded to three others through sp^2 hybridization. With this unique structure, C60 exhibits an extremely fast response upon laser excitation [1][2][3] and has therefore become a model system for studying the electronic and nuclear dynamics induced by ultrafast laser pulses [4,5]. Photoinduced fragmentation of C60 has attracted a great deal of interest [1][2][3][4][5][6]. Using mass spectrometry, the fragmentation patterns of C60 have been well studied experimentally [2][3][4][5][6][7]. However, the mechanism behind photoinduced fragmentation is not well understood. It has been suggested that fragmentation at different laser pulse durations follows different mechanisms [7][8][9]. For nanosecond laser pulses, experimentally observed fragmentation patterns can be explained by statistical processes, since nanosecond excitation allows the fullerene to achieve complete equilibration of electronic and thermal energy through coupling between vibrational and electronic degrees of freedom [7]. For femtosecond laser pulse excitation, the excitation time scale is smaller than or comparable to the electron-phonon coupling time (~250 fs) [7] and the response of the C60 is more complicated [8][9][10]. Experimental evidence shows that the relaxation following femtosecond laser excitation goes through different channels, including thermal and nonthermal fragmentation, which produce a superposition of ionized and neutral fragments [3,10,11]. It is difficult to differentiate these relaxation channels experimentally. For nanosecond laser excitations, the observed fragmentation pattern in the mass spectrum shows a series of small fragments C_n (n << 60) and a bimodal distribution of heavy fragments C_(60−2n) generated by the sequential loss of C2 units [2]. For femtosecond laser pulses, a large distribution of multiply charged heavy fragments is observed and the fragmentation shows significantly different behavior [3,4]. In this communication, we report a semiclassical electron-radiation-ion dynamics (SERID) simulation study of the fragmentation of an isolated C60 irradiated by a 40 fs (full-width at half maximum, FWHM) laser pulse. The study focuses on excitations below the continuum levels and on the relaxation channels that lead to the formation of neutral fragments. Although ionization is an important de-excitation channel, especially at high laser intensity, Jeschke and co-workers [12] concluded from phase-space arguments that processes not involving ionization of the C60 should contribute significantly to the relaxation channels if the laser intensity is not extremely high.
Methodology In the SERID method, the state of the valence electrons is obtained from the time-dependent Schrödinger equation, while the radiation field and the motion of the nuclei are treated classically. A detailed description of this method has been published elsewhere [13][14][15], and only a very brief explanation is presented here. The total energy of a molecule is described by $E = \sum_j \langle\psi_j|H|\psi_j\rangle + U_{\mathrm{rep}}$, where the first term is the electronic energy, the sum running over the occupied Kohn-Sham orbitals $\psi_j$, which are represented in an optimized LCAO basis set. The second term, the effective repulsive potential, is approximated as a sum of two-body potentials, $U_{\mathrm{rep}} = \sum_{l<l'} u(|\mathbf{X}_l - \mathbf{X}_{l'}|)$. The Hamiltonian matrix elements, overlap matrix elements, and effective nuclear-nuclear repulsion are obtained from the density-functional-based tight-binding method [16]. This approach has been tested extensively for reaction energies, geometries, and rotational and proton-transfer barriers for a large set of small organic molecules [17], and yields very good results for homonuclear systems, such as silicon and carbon, and for hydrocarbon systems [18]. The one-electron states are calculated at each time step by solving the time-dependent Schrödinger equation in a nonorthogonal basis, $i\hbar\,\partial\psi_j/\partial t = \mathbf{S}^{-1}\mathbf{H}\,\psi_j$, where $\mathbf{S}$ is the overlap matrix for the atomic orbitals. The laser pulse is characterized by the vector potential $\mathbf{A}$, which is coupled to the Hamiltonian through the time-dependent Peierls substitution $H_{ab}(\mathbf{X}-\mathbf{X}') \rightarrow H_{ab}(\mathbf{X}-\mathbf{X}')\,\exp\!\big[\tfrac{iq}{\hbar c}\,\mathbf{A}\cdot(\mathbf{X}-\mathbf{X}')\big]$, where $H_{ab}(\mathbf{X}-\mathbf{X}')$ is the Hamiltonian matrix element for basis functions a and b on atoms at $\mathbf{X}$ and $\mathbf{X}'$, respectively, and q = −e is the charge of the electron. The nuclear motion is governed by the Ehrenfest equation of motion, $M_l \ddot{X}_{l\alpha} = -\partial E/\partial X_{l\alpha}$, where $X_{l\alpha}$ is the expectation value of the time-dependent Heisenberg operator for the α coordinate of the nucleus labeled l (with α = x, y, z). This equation is derived by neglecting terms of second and higher order in the quantum fluctuations about $X_{l\alpha}$ in the exact Ehrenfest theorem. A unitary algorithm obtained from the equation for the time-evolution operator [15] is used to solve the time-dependent Schrödinger equation, and the Ehrenfest equation is numerically integrated with the velocity Verlet algorithm (which preserves phase space). A time step of 50 attoseconds was selected for this study; it was found to conserve energy to better than 1 part in 10^6 over a 1 ps simulation. The strengths of the present approach are that it retains all 3N nuclear degrees of freedom and that it includes both the excitation due to a laser pulse and the subsequent de-excitation at an avoided crossing near a conical intersection. The weakness is that it amounts to averaging over all the terms in the Born-Oppenheimer expansion [20][21][22][23][24] rather than following the time evolution of a single term. However, when the process is dominated by many-electron excitations, as in the interaction of C60 with intense laser pulses, many electronically excited states are involved and the wave packet actually moves along a weighted-average path over all of the electronic potential energy surfaces involved. In this case, the present approach yields very good results [25]. Results and Discussion The initial geometry of the C60 was equilibrated for 2000 fs at 298 K using the present technique prior to the application of the laser pulse. The calculated lengths of the double and single bonds are 1.397 and 1.449 Å, respectively, in close agreement with the experimental values [26].
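The velocity Verlet propagation mentioned above is compact enough to show explicitly. The sketch below is a generic Python illustration, assuming an arbitrary force function and the paper's 50 as time step; the harmonic force is a placeholder, not the SERID tight-binding forces.

```python
# Generic velocity Verlet propagation step (illustration only, not the SERID code).
# forces(x) must return the force array for positions x; here a harmonic toy force.
import numpy as np

DT = 50e-18        # 50 attoseconds, in seconds
MASS = 1.9944e-26  # mass of a carbon atom, kg

def forces(x):
    k = 1.0e2      # toy spring constant, N/m (placeholder for tight-binding forces)
    return -k * x

def velocity_verlet_step(x, v, f):
    """One phase-space-preserving step: positions, then forces, then velocities."""
    x_new = x + v * DT + 0.5 * (f / MASS) * DT**2
    f_new = forces(x_new)
    v_new = v + 0.5 * (f + f_new) / MASS * DT
    return x_new, v_new, f_new

# Propagate a single coordinate for 1 ps (20,000 steps of 50 as)
x = np.array([1.0e-11])   # 0.1 Å initial displacement
v = np.zeros(1)
f = forces(x)
for _ in range(20_000):
    x, v, f = velocity_verlet_step(x, v, f)
print(f"final displacement: {x[0]:.3e} m")
```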
The calculated HOMO-LUMO gap is 1.81 eV, in good agreement with the experimental value of 1.9 eV [27]. The ordering and degeneracy of the molecular orbital energy levels within 10 eV of the HOMO level are also in good agreement with experimental measurements [27]. A Gaussian-shaped laser pulse of 40 fs (FWHM) with a photon energy of 2.0 eV was chosen for this study. The simulation was run for an additional 1000 fs without the laser to generate the initial geometries for the dynamics simulation. From this trajectory, five geometries taken at equal time intervals were selected as starting geometries. Each trajectory was propagated for 4000 fs from the application of the laser pulse. The laser pulse intensity for this study was 2.55 × 10^12 W/cm^2. The five trajectories yielded very similar results; a representative result is presented and discussed in this paper. Bond breaking is considered to have occurred if the distance between two neighboring carbon atoms becomes greater than 1.9 Å and no recombination of these two carbons occurs thereafter. Fragmentation is deemed to have occurred if the distance between any two carbon atoms of two different fragments exceeds 1.9 Å and no subsequent bond formation between any two carbons occurs. Four snapshots taken from the simulation at various times are shown in Figure 1. Starting from the equilibrium geometry in the electronic ground state at 0 fs, the C60 is electronically excited by the laser pulse. At about 200 fs (120 fs after laser irradiation), a large number of C-C bonds have broken and the C60 cage has "opened up". At about 800 fs, a C2 dimer is observed breaking off from the C60 cage. Thereafter, until the end of the 2000 fs run, no further bond cleavage is observed. The number of C-C bonds broken at different times is plotted in Figure 2. It is seen that extensive bond breaking occurs from 100 fs to 150 fs, immediately after laser pulse irradiation, and most bond-breaking events occur before 1000 fs, including the release of a C2 dimer at about 800 fs. No other fragmentation is observed. The variations with time of the electronic, potential, and kinetic energies are presented in Figure 3a. Figure 3b shows the electronic and potential energy variations on an expanded scale, compared with the kinetic energy variation. Immediately after laser irradiation, the electronic energy rises from about −2930 eV to −2600 eV due to the excitation of electrons from occupied to unoccupied molecular orbitals, while the potential energy drops from 105 eV to 43 eV as a result of the expansion of the cage. Meanwhile, the kinetic energy increases by about 3 eV because of the excitation of vibrational motion. It is seen from Figure 3b that from 100 fs to 150 fs there is a sharp decrease in electronic energy and a quick increase in potential and kinetic energy. The decrease in electronic energy must result from the extensive C-C bond breaking found in this same period of time. Electronic energy is converted to kinetic and potential energy through C-C bond breaking. After 200 fs, the kinetic energy decreases gradually until 800 fs; this decrease is accompanied by an increase in potential energy. The extensive bond breaking observed soon after laser pulse irradiation occurs within about 100 fs. This ultrafast process provides a decay channel for the excited C60, through which electronic energy is partially converted to kinetic energy.
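The bond-breaking bookkeeping behind Figure 2 is simple to express in code. The following is a minimal sketch under the paper's 1.9 Å criterion, assuming trajectory frames are available as arrays of Cartesian coordinates; the connectivity and coordinates here are placeholders, and the no-recombination condition is approximated by checking the final frame.

```python
# Minimal sketch of the 1.9 Å bond-breaking criterion (placeholder data).
# A bond (i, j) is counted broken if the i-j distance exceeds 1.9 Å in a frame
# and the pair never rebonds afterwards (approximated via the final frame).
import numpy as np

CUTOFF = 1.9  # Å

def distances(frame, bonds):
    """Distances for each bonded pair (i, j) in one (N, 3) coordinate frame."""
    i, j = bonds[:, 0], bonds[:, 1]
    return np.linalg.norm(frame[i] - frame[j], axis=1)

def broken_bond_count(trajectory, bonds):
    """Number of initially bonded pairs broken in each frame, no recombination."""
    final_broken = distances(trajectory[-1], bonds) > CUTOFF
    counts = []
    for frame in trajectory:
        broken_now = distances(frame, bonds) > CUTOFF
        counts.append(int(np.sum(broken_now & final_broken)))
    return counts

# Toy example: 3 atoms, 2 bonds; the second bond stretches past the cutoff.
bonds = np.array([[0, 1], [1, 2]])
traj = np.array([
    [[0, 0, 0], [1.45, 0, 0], [2.9, 0, 0]],   # both bonds intact
    [[0, 0, 0], [1.45, 0, 0], [3.6, 0, 0]],   # bond (1, 2) broken
])
print(broken_bond_count(traj, bonds))  # -> [0, 1]
```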
The reduction of electronic energy is due to the decrease in the energies of the occupied molecular orbitals, the changes in the populations of different molecular orbitals, or both, as a consequence of the breaking of chemical bonds. The increase in kinetic energy is due to the release of the energy stored in the broken chemical bonds. Consequently, the C60 becomes extremely hot, as evidenced by the observation that the kinetic energy rises from 3 eV to 13 eV between 80 fs and 150 fs. The damaged, hot C60 cage may then undergo thermal or nonthermal fragmentation. To explain the production of the hot C60 cage, Laarmann and co-workers proposed that a strong shaped laser pulse triggers a multielectron excitation via the t_1g doorway state, followed by efficient coupling to the symmetric breathing mode of the nuclear backbone of C60 [28]. The simulation results presented above suggest an alternative heating mechanism: an ultrashort laser pulse induces multielectron excitation, and the excited C60 fullerene is rapidly heated as the cage suddenly opens up due to the transfer of part of the electronic energy into kinetic energy. Conclusions In summary, we performed a semiclassical electron-radiation-ion dynamics simulation study of the response of C60 to ultrashort laser pulses. The simulation shows that C60 undergoes an abrupt cage opening following laser excitation. Similar behavior is also observed in the cap opening of carbon nanotubes irradiated by a femtosecond laser pulse [29,30]. The opening of the C60 cage leads to the conversion of electronic energy into kinetic and potential energy; consequently, the C60 cluster is effectively heated. These simulation results reveal a new mechanism for laser heating of the C60 fullerene.
2,716
2011-01-13T00:00:00.000
[ "Physics" ]
Enhanced Power Utilization for Grid Resource Providers Introduction This paper is an extension of work presented initially at the ITT2019 conference [1]. Grid computing is an advanced technology that pools linked machines, which may be owned by numerous organizations in diverse locations, to construct a distributed system. The grid system can be used to work on complicated scientific and business problems. It is created to assist in the sharing of distributed and diverse resources and to simplify the solving of a considerable volume of computing issues [2]. The essential purpose of grid computing is to improve reliability, cut the cost of computing, and enhance flexibility by changing computers from objects that we purchase and operate privately into objects managed by a third organization [3]. However, grid computing faces various challenges, such as finding the right resources and decreasing the number of discarded jobs [4], [5]. Figure 1 displays an example of a grid computing environment [6], where the grid may contain different types of resources that need a resource management system to coordinate them. The grid structural design shown in Figure 2 contains the following components [7]. The first is a grid portal, also called a grid interface, used by authorized grid clients to access the grid providers (resources). The portal should hide the complexity of the grid from clients by employing an uncomplicated interface; in this manner, it enables the sorting of grid job requisites. The second element is the grid broker, which is described as the core of grid computing. It performs critical jobs in creating an effective grid system by connecting the user demands to the grid providers to fulfill the clients' tasks. The broker is also responsible for reducing transmission delays and for improving the assignment of jobs across resources by avoiding dependence on any single one. The grid broker's primary role is to locate and reserve the most appropriate resources for the users' tasks. The broker receives requests from the clients, sends the job description files to the resources matching the clients' requirements, monitors the running tasks, and hands the outputs over to the corresponding users. To process the grid users' requests, the broker asks the Replica Catalogue and the Information Service to locate the available resources and data in the grid. The third element is the Information Service. It is the directory service of the grid, where the grid keeps information about all grid resources (nodes) and all the jobs running on those nodes. The grid resources have to register with the Information Services and keep their information and available resources up to date. The grid broker retrieves all the information about the available and free resources from the Information Services. The information stored in the Information Services can be dynamic or static. The static information specifies the operating system and hardware requirements, whereas the dynamic information is associated with the jobs currently running on the grid resources, the software specifications, the available time on resources, disk space, and policies. The Replica Catalogue (RC) is the fourth element. The RC is the directory that provides information that helps in locating the data saved in the grid.
To simplify the use of the grid's data, the grid broker asks the RC for information about the locations of data in the grid and about the access mechanism needed to utilize this data. To locate the best resources for the grid clients' requests, the grid broker gets the job specifications from the grid portal and looks for suitable resources that can fit these specifications. For this purpose, the broker queries the Information Service and the Replica Catalogue for information about resources and the data stored in the grid, and later chooses the resource or resources able to fit the job specifications. Processing and modeling jobs have grown more complicated as system scales have increased. Accordingly, it is difficult to carry out tasks in such an extensive system using centralized methods. A Multi-Agent System (MAS) presents an excellent way of distributing control. It can also be used to explore and change the local interactions among entities [9]. Furthermore, artificial intelligence mechanisms can be combined with it. Intelligent agents are designed to operate autonomously in a changing environment. An intelligent agent also has the following main features [10]: • Autonomy: the ability to act dynamically, to a certain degree, on behalf of clients and programs, and to adjust how the agent accomplishes its jobs. • Cooperation and Communication: the ability to cooperate and communicate with other agents to exchange information, receive orders, and respond. • Learning: the ability to steadily improve performance while interacting with the external world. • Reactivity: the ability to react to external calls and to adjust the agent's behavior in response to these calls. • Pro-activity: the ability to make choices as a consequence of internal evaluations. In this introduction, we have given some background on the grid and on MAS as a suitable solution for open systems, such as the grid, that change frequently. This paper is organized as follows. Section 2 introduces work related to this research. Section 3 gives the system model. The experiment and results are presented in Section 4. Lastly, the conclusion of this research is given in Section 5. Related Works In recent times, combining MAS with grid computing has attracted the interest of many researchers. Most topics of concern in the area of grid computing are how to achieve better use of grid data storage, service distribution, infrastructure, cost, and energy efficiency. The concern in the agent area, by contrast, is with the intelligent side of agents and the procedures that can be exploited to solve complicated problems. The authors in [11] suggested a multi-agent system that can support the growth of the IDAPS micro-grid, where the proposed system includes a DER agent, a database agent, a control agent, and a user agent. The agents transmit their requirements using a TCP/IP protocol embedded in the IEEE FIPA standard to assure the system's capability. In addition, the authors in [12] suggested a model for agent-based grid computing (AGEGC). Based on their model, AGEGC was created using the MAGE (Multi-Agent Environment) platform. This research included various tiers, such as an information tier, knowledge tier, service-oriented tier, and operating system tier; AGEGC focuses on the service-oriented tier provided by the current processing system. Garimella [19] employs a new reservation server that works in concert with the Dynamic Soft Real-Time system [20].
The purpose of the Garimella system is to reserve CPU resources beforehand. Using this system, the user requests certain quality-of-service conditions, for example the percentage of CPU needed, the start time, and the duration. Once the reservation request is made, the reserved resources are ready to be reachable by the user. One more proposed model is the Resource Broker (RB) presented in [21], merged with the new reservation server shown in [19]. Both systems have consistently quick response times and numerous negotiation possibilities for the users. Nevertheless, the volume of re-negotiation introduces an extra-large cost to the system in the event of an unexpected lack of resources. The authors in [22] presented several algorithms for maintaining an improved reservation approach for supercomputer scheduling systems. Those algorithms improve traditional scheduling techniques by combining the reservation-demand mechanism with the mechanism for scheduling regular tasks. This way of making reservations lets clients ask the scheduling systems for more than one resource in parallel at the same time. However, the mechanisms used in [22] reserve the whole time slot, which means the resources are not shared in a distributed way by multiple users over the same period. System Model Our suggested architecture is adopted from the grid architecture in [23], as shown in Figure 3. It employs a Client/Server architecture, since this is the most popular architecture for heterogeneous systems [24]. Our proposed architecture is multi-tier, considering the need for a separate database to store the locations of the grid resources. Our grid architecture includes the following entities: the grid portal, the grid broker, and the grid resource providers. A grid portal is an interface through which authorized clients send their jobs to the grid. A portal should hide the complexity of the grid from clients by employing an uncomplicated interface; in this manner, it enables the sorting of grid job requisites. The grid broker can connect with agents that operate, control, and support both the grid clients and the grid resource providers. The broker is an agent handler that organizes and monitors the running jobs and keeps the grid clients updated about them. The grid broker in this paper is transformed from a database warehouse, which saves the available resources and conditions and matches the clients' jobs with these resources, into an agent manager with the ability to control event creation and to create and eliminate agents in the system. The grid broker in our proposed architecture is a MAS that can interact with the grid system on behalf of grid users to find information about the grid resource providers that can fit the users' requirements and to choose the best one among them. Furthermore, our system can provision and monitor the execution of the jobs and supply the grid users with live feedback concerning the job processes. The grid broker is split into three primary entities. The first is the monitor & scheduler. This entity is in charge of receiving the grid clients' tasks and deciding on the best available resource providers. The scheduler requests all data about the available resources from the Information Service entity and the data information saved in the Replica Catalogue (RC) entity. The monitor has the authority to monitor and provision the jobs currently running on the grid resource providers.
The grid broker also has the ability to split clients' tasks between multiple resource providers to cut cost and speed up operations. After the Monitor & Scheduler transmits the jobs to the available resource(s) for processing, it receives frequent feedback from the agents at the resource providers on the status of the processing jobs and the percentage of each job completed. This helps the grid broker track the progress of running jobs and, in turn, helps it determine alternative resource provider(s) in case of failure at any point. The local agent at the resource provider is similarly in charge of updating the grid broker when the job is finished. A resource provider can include several physical and virtual machines. The hypervisor in every machine is in charge of organizing and generating virtual machines on top of each physical machine. To react to the fast growth of clients' requests for data processing capability and storage, grid resource providers create a massive number of data centers throughout the world. Grid data centers typically include a large number of interconnected resources, both physical and virtual machines, which consume a considerable amount of electricity in operation. The proposed system relies on switching inactive virtual and physical machines to lower power states (sleep/wakeup or switched off) while still preserving customers' performance requirements. The local agents at the resource providers are responsible for this operation. Each virtual machine (VM) has a local agent that reports the VM utilization, meaning the CPU utilization, to the data center's primary agent. If the utilization of a specific VM stays at zero for 300 seconds, the data center's primary agent automatically shuts down that VM to reduce the power consumption of the physical machine containing it. Implementation and Testing The JADE platform was chosen to implement our proposed topology. JADE stands for the Java Agent Development framework, developed to help build and enhance agent software in compliance with the FIPA specifications for intelligent MAS [25]. We introduced a simulation that deals with 200 physical hosts. The specifications of the physical and virtual machines are shown in Table 1 and Table 2, respectively. To generate realistic power-consumption data, we took advantage of the actual power-consumption data offered by the SPECpower benchmark [26], [27], as shown in Table 3. The power consumption of a physical machine can be expressed precisely as a linear function of CPU utilization. It can be noticed from Table 3 that even at low utilization a machine consumes a large amount of power; it is therefore necessary to shut down such machines, or reduce the number of VMs on them when not in use, to reduce the power consumption of the physical machine. Figure 4 shows part of our simulation. The following stages explain the main steps of our simulation: • Grid users state their tasks by choosing the required hardware and software specifications to run their jobs through the grid portal. • The grid broker receives the grid users' jobs and looks for the best provider(s) for those jobs. This step is done with the help of the Information Services and the Replica Catalogue.
• Later, the grid broker sends the jobs to the suitable resource provider(s) and connects to the primary agent attached to that provider. • The grid broker can divide the job between multiple resource providers to cut down the cost and speed up the operations. • As the job is running at the resource provider, the provider's agent (the primary agent) sends periodic feedback about the percentage of completion of the running job to the grid broker. This stage helps the grid broker supervise the completion of the running job, which in turn helps find an alternative resource provider in case of failure. • At the same time, the primary agent at the resource provider periodically checks the CPU utilization of the VMs on its physical machines. • If the utilization stays at zero for 300 seconds, the primary agent automatically shuts down that VM. • Finally, the grid broker sends the job outputs to the grid users. From the simulation results in Figure 5, it can be seen that by using our proposed system the power consumption is slightly reduced for the two types of physical hosts (HP ProLiant ML110 G4 servers and HP ProLiant ML110 G5 servers), especially after the first 300 s. This reduction occurs because the utilization of the physical machines falls as the primary agents in both kinds of physical machine automatically shut down any VM whose utilization stays at zero for 300 seconds. As a result, the total power consumption of the machines is reduced. Conclusion The purpose of the grid is to produce broad-scale, heterogeneous systems intended to solve industrial or scientific problems. The integration of Multi-Agent Systems (MAS) with the grid environment has a significant impact on grid performance. The grid offers a wide range of resources to its users, and some of these resources might not be used or utilized for some time before any new jobs come to the grid for processing. As a result, the grid servers consume a considerable amount of electricity in their operation, and the growing usage of grid computing has led to an increase in the electrical energy used by the massive number of servers in its data centers. In this paper, we have proposed an automated system composed of modern agents that can be used to reduce the power wasted by inactive servers inside the data centers: if the utilization of a VM stays at zero for 300 seconds, the primary agent in the physical machine automatically shuts down that VM. The automated system has been tested and evaluated, and the results show that the newly proposed method can reduce the amount of power wasted by inactive grid resources.
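To make the shutdown policy and the linear power model described in this paper concrete, here is a minimal sketch. The idle/peak wattages are illustrative SPECpower-style figures (not necessarily the paper's exact Table 3 values) and the utilization trace is synthetic; this is the policy logic in Python, not the authors' JADE implementation.

```python
# Minimal sketch of the 300-second zero-utilization shutdown policy with a
# linear CPU-utilization power model. Wattages are illustrative SPECpower-style
# figures (roughly HP ProLiant ML110 G5-class), assumed for this example.
P_IDLE, P_PEAK = 93.7, 135.0   # watts at 0% and 100% utilization (assumed)
ZERO_WINDOW = 300              # seconds of zero utilization before shutdown

def power(utilization):
    """Linear power model: idle power plus a utilization-proportional term."""
    return P_IDLE + (P_PEAK - P_IDLE) * utilization

class PrimaryAgent:
    """Tracks one VM's utilization and switches it off after 300 s at zero."""
    def __init__(self):
        self.zero_seconds = 0
        self.vm_on = True

    def report(self, utilization, dt=1):
        if not self.vm_on:
            return 0.0
        self.zero_seconds = self.zero_seconds + dt if utilization == 0 else 0
        if self.zero_seconds >= ZERO_WINDOW:
            self.vm_on = False           # shut the VM down to save power
            return 0.0
        return power(utilization)

# Synthetic 10-minute trace: busy for 200 s, then idle.
agent = PrimaryAgent()
trace = [0.6] * 200 + [0.0] * 400
energy_wh = sum(agent.report(u) for u in trace) / 3600.0
print(f"energy over 10 min with shutdown policy: {energy_wh:.1f} Wh")
```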
3,722.6
2020-11-20T00:00:00.000
[ "Engineering", "Environmental Science", "Computer Science" ]
One-Loop $\beta$ Function of the Double Sigma Model with Constant Background The double sigma model with the strong constraints is equivalent to a classical theory of the normal sigma model with one on-shell self-duality relation. The one-form gauge field comes from the boundary term, just as in the normal sigma model. The gauge symmetries under the strong constraints are the diffeomorphism and the one-form gauge transformation in the double sigma model; these gauge symmetries are also the same as in the Dirac-Born-Infeld (DBI) theory. The main task of this work is to compute the one-loop $\beta$ function to obtain the low energy effective theory of the double sigma model. We implement the self-duality relation in the action to perform the one-loop calculation, and in the end we obtain the DBI theory. We also rewrite this theory in terms of the generalized metric and scalar dilaton, and define the generalized scalar curvature and tensor from the equations of motion. Introduction One of the most important problems in physics is how to unify all fundamental theories. One powerful idea is duality, which is the main idea of the M-theory. We can use T-duality and S-duality to unify all ten-dimensional superstring theories; combining T-duality and S-duality gives the so-called U-duality, which we expect to be a symmetry of the M-theory. However, the M-theory is still mysterious at the current stage. The main problem is that we do not completely understand our tool, duality. S-duality is an equivalence between strong and weak coupling. A familiar example is the invariance of Maxwell's equations under the exchange of electric and magnetic fields. Because this duality should be non-perturbative, it is difficult to study explicitly in perturbation theory. The other duality, T-duality, is an equivalence between radius and inverse radius on a compact torus. This duality is equivalent to exchanging momentum and winding modes in closed string theory, or the Dirichlet and Neumann boundary conditions in open string theory. However, one serious problem remains that we cannot yet solve: the T-fold problem, found in closed string theory. It is mainly due to the fact that T-duality is not a well-defined transition function, in the way a gauge transformation or diffeomorphism is, in the presence of non-zero flux. For the low energy massless closed string field theory with H-flux [1,2], we can perform T-duality along one direction to turn the H-flux into the f-flux; at this step, everything is still well-defined. If we perform a second T-duality, we obtain the Q-flux, and the problem appears: the fields can no longer be described as single-valued. The third T-duality raises a more serious problem still: we do not know how to perform it, because we lose the isometry. Nevertheless, we expect that the R-flux should be reachable by T-duality. The T-fold problem thus contains two issues: first, how to define single-valued fields in a new geometry, and second, how to extend the definition of T-duality to obtain the R-flux. Solving the T-fold problem should give us a new perspective on our M-theoretical framework and lead us to a new understanding of supergravity or superstring theory. The dynamics of the superstring or the M-theory is hard to obtain directly from first principles. One way forward is to study the low energy effective theory; at the level of field theory, we can understand the symmetry principles and the dynamics.
String theory can be described by a two-dimensional sigma model. From the low energy effective theory, we can understand what kind of theory is realized on the target space, which leads us to the corresponding gauge symmetry on the target space. Low energy physics also inspires the study of non-local field theories beyond the standard model and ordinary particle physics. Non-local theories help us develop techniques to study the dynamics of field theory; this development not only gives us new ways to compute partition functions and amplitudes, but also illuminates conceptual aspects of the M-theory. However, the M-theory is still mysterious, and it is difficult at present to write a consistent Lagrangian describing it. Nevertheless, the low energy effective theory of the M2-M5 system has already been constructed. One consistent single-M5 system is the Nambu-Poisson M5 (NP M5) [3,4]. The construction is analogous to stacking Dp-branes in a B-field background to obtain the D(p + 2)-brane theory: the Nambu-Poisson M5-brane theory can be obtained by stacking multiple M2-branes in a C-field background. It gives a single M5-brane in a large constant C-field background (only three spatial components are non-zero), and the coupling of this single M5-brane is the inverse of the C-field background. The role of the Nambu-Poisson M5-brane theory is similar to that of the non-commutative D-brane theory. The symmetry of the Nambu-Poisson M5-brane theory is the volume-preserving diffeomorphism (VPD), governed by the Nambu-Poisson bracket, which satisfies the fundamental identity. A consistency check of this single M5-brane theory is to perform a direct dimensional reduction and recover the non-commutative D4-brane in a constant NS-NS B-field background; this check has already been carried out. The problem is that only the Poisson bracket was obtained, not its deformed version, which implies that the Nambu-Poisson M5-brane is only a truncated M5-brane theory. Even for a truncated M5-brane theory, it is still interesting to study new theories through the dualities. By performing a double dimensional reduction on the Nambu-Poisson M5-brane, we can obtain the non-commutative D4-brane in a large constant R-R C-field background. This can be generalized to the Dp-brane based on the gauge symmetry, the covariant field strength, the rotational symmetry of the scalar fields, and the duality rules. This Dp-brane is built on a non-commutative space in a large constant R-R (p − 1)-form background. The NS-NS Dp-brane, the R-R Dp-brane, and the Nambu-Poisson M5-brane are well-defined low energy effective theories in the decoupling limit. In particular, the NS-NS D3-brane and R-R D3-brane theories are also consistent with the electric-magnetic duality, which shows that the Nambu-Poisson M5-brane theory is consistent with both T-duality and S-duality [5][6][7]. These studies are also interesting for the relations they reveal between backgrounds and brackets: the symmetry of the (p − 1)-form background theory is described exactly by the (p − 1)-form bracket. The most important open direction for this single M5-brane is the deformation. A hint can be found from the direction of the S-duality, because the same problem arises in the D-brane theory. The way is to use all orders of the non-commutative NS-NS D3-brane to find the deformed non-commutative R-R D3-brane by the electric-magnetic duality. This study gives a new product with which to write this R-R D3-brane theory.
It should indicate how to write the full-order low energy effective theory consistent with the dualities. These Nambu-Poisson M5-brane studies not only address problems of the M-theory, but also other difficult problems of field theory, e.g., the electric-magnetic duality of non-abelian gauge theory in four dimensions. Since the U(1) non-commutative gauge theory is similar to a non-abelian gauge theory, the electric-magnetic duality of the non-commutative gauge theory may inspire a solution of the electric-magnetic duality of the non-abelian gauge theory. This can be understood from string duality, and it is a good example of how string duality concerns not only unification, but also new structures in field theory. The Nambu-Poisson M5-brane theory is built on a non-commutative space by stacking the multiple M2-brane theory. From the perspective of non-commutative geometry, it should also be possible to build the M5-brane from the equivalence of commutative and non-commutative gauge theories, or from the Seiberg-Witten map. For the DBI theory (a string ending on a p-brane), we need to change from the closed to the open string parameters; this redefinition can be found from the Poisson sigma model. For a higher-form field, we can consider the Nambu sigma model [8,9], which is classically equivalent to a p-brane theory. From the Nambu sigma model, we can change variables to consider a non-commutative theory. Starting from the equivalence of non-commutative and commutative gauge theories, we can find the form of field theories without many degrees of freedom. This theory is called the generalized DBI theory, which describes a q-brane ending on a p-brane; it reduces to the DBI theory when q = 1. In the case of a two-brane ending on a five-brane, we find the same form for the M5-brane action up to second order perturbatively (in the derivative expansion) [8,9]. On dimensional reduction of this special case, the two-brane ending on the five-brane becomes a one-brane ending on a four-brane [10]. Even though this calculation is not fully general, this consistency of the DBI form of the M5-brane still gives strong support. This approach offers a new generalized metric, which provides a new structure of non-commutative theory, and it gives another way to obtain the same theory as the generalized DBI [11,12]. It not only shows that the equivalence of non-commutative and commutative gauge theories for an arbitrary form field can be described by a new generalized metric, but also that this equivalence strongly restricts the form of the action. On the other hand, the supergravity interpretation of the generalized DBI theory should have a supersymmetric extension; the related supersymmetric extension has already been constructed in [13]. The Nambu structure of the p-brane theory can be seen manifestly in the formulation that removes the square root [14], giving an understanding consistent with the Nambu-Poisson M5-brane. It may indicate that the p-brane theory has some relation to the M-theory. The Nambu-Poisson M5 and the generalized DBI theories are still defined on local geometry, so they do not really strike at the main problem of the T-fold. Although their constructions have already provided insight into the M-theory, they may not solve the T-fold problem or the global geometry. We now have a way to find a new geometry with which to understand string theory. The new geometry is called "stringy geometry" [15,16], and the approach is to double the coordinates.
It embeds the T-duality rule in the O(D, D) group. This type of theory is called double field theory [17][18][19][20][21][22][23][24][25][26][27][28][29][30][31][32]. We now have a well-defined theory describing the massless closed string field theory with the strong constraints. Although the strong constraints are equivalent to removing half of the additional coordinates, the theory would not be new without the T-duality structure. Double field theory provides a way to extend T-duality and to address the T-fold problem [33,34]. The T-fold problem is that, within local geometry, one cannot reach a theory with R-flux by using T-duality; to define a theory with R-flux, we need to go beyond the original supergravity and T-duality. Double field theory is built on the doubled space, with the gauge transformations governed by the Courant bracket. On this doubled space, we can perform T-duality three times to find the R-flux in the massless closed string field theory, which implies that we can use the doubled space to learn how to perform T-duality in the non-isometric case. Double field theory extends supergravity from local to global geometry, and the so-called non-geometric fluxes (Q- and R-flux) can be understood in a geometric way. The understanding of the non-geometric fluxes has an important influence on brane theory: the sources of the exotic brane theory are non-geometric fluxes. The exotic brane can be exhibited by performing T-duality twice on the Neveu-Schwarz five-brane (NS5-brane); this exotic brane is called the 5_2^2-brane. The background of the exotic brane is no longer single-valued, which implies that a global description is needed. The worldvolume theory for the 5_2^2-brane is nonetheless constructed from the NS5-brane theory by performing T-duality twice [35]. The exotic brane theory should play an important role in extending our understanding of the M5-brane, because the NS5-brane can be uplifted to the M5-brane theory. Although we have new concepts with the strong constraints in double field theory, we still want to relax the constraints: the strong constraints imply that solutions are annihilated by the constraints, and relaxing them would admit more solutions that are not annihilated. This is a very important study, but the closure of the generalized Lie derivative algebra makes it a very hard task; approaches can be seen in [36,37]. For further extensions of the original theory, we need to consider α′ corrections; as a first step, the theory has already been formulated in the language of double field theory [38]. Some good reviews of double field theory are [39][40][41]. Double field theory is a formulation of the ten-dimensional supergravity. To extend from T-duality to U-duality, or from ten dimensions to eleven, we need to consider exceptional field theory [42][43][44]. The low energy limit of the M-theory is the eleven-dimensional supergravity, whose symmetry should be an exceptional Lie group. For a consideration analogous to double field theory, we need to embed the exceptional Lie group into a bigger space. From the theoretical point of view of the M-theory, the manifest symmetry is important for gaining insight into the properties of the M-theory, even though we only know the low energy level. The first difficulty of this task is the E_8(8) case, which does not have a closed algebra.
However, this problem has already been solved by sacrificing some Lorentz gauge freedom. Exceptional field theory lets us exhibit the U-fold problem in the same manner as double field theory exhibits the T-fold, and it also needs a constraint, as double field theory does. The current efforts at relaxing the constraint have not made much progress. With the strong constraint of exceptional field theory, we can obtain the exceptional generalized geometry, which provides intuition for realizing the eleven-dimensional supergravity and inspires exceptional field theory from a different direction [45]. Double field theory extends string theory from local to global geometry. For a self-consistent double field theory, we need to extend our understanding from closed string theory to open string theory; otherwise, we cannot write the full string theory in terms of double field theory, and double field theory may not be a fully consistent understanding of string theory. The first proposal of double field theory for the open string doubles the coordinates with two types of boundary conditions and also introduces projectors to satisfy the suitable boundary conditions [46]. The projectors were not fully understood until [47], which extends the idea of the projectors to exhibit a consistent boundary condition. However, that discussion only considers backgrounds without the one-form gauge field. In the first study of the one-form gauge field, the normal boundary term was added after introducing the self-duality relation at the off-shell level, and the DBI action could not be obtained consistently from the one-loop β function [48]. From generalized geometry [49][50][51], the Courant bracket has been used to understand the properties of the D-brane [52,53]. In particular, [53] constructs the gauge transformation in the language of generalized geometry, which inspired [54] to find the gauge transformation of the open string theory in the language of double field theory. From the gauge transformation (governed by the F-bracket), a double sigma model was also proposed. The main difference is that this double sigma model does not use projectors to satisfy the boundary conditions; nevertheless, it is classically equivalent to the normal sigma model without modifying the self-duality relation. Since this double sigma model only adds the normal boundary term, and the self-duality relation is not modified by the one-form gauge field, the one-loop β function can be computed in this double sigma model. Quantum fluctuations in string theory inspire the higher-derivative gravity models [55], so a calculable sigma model is undoubtedly necessary. The R-flux can be found from the Courant bracket, without an action, as in [56]. A suitable action for the D-brane theory was proposed in [54]; it should give an R-flux consistent with [56]. We implement the self-duality relation at the off-shell level with the strong constraints, then use the action to compute the one-loop β function and obtain a consistent DBI theory. We rewrite this low energy effective theory in terms of the generalized metric and the scalar dilaton, and use the equations of motion to define the generalized Ricci scalar curvature and tensor. The plan of this paper is as follows. We first review the double sigma model in Sec. 2. We then calculate the one-loop β function and obtain the low energy effective action in Sec. 3. We discuss the generalized metric formulation, and exhibit the generalized Ricci scalar curvature and tensor, in Sec. 4. Finally, we conclude and summarize in Sec. 5.
Review of the Double Sigma Model

We review the double sigma model in this section. We first present the notation and set-up, then write the gauge transformation of the double sigma model, and at the end of the section we show the classical equivalence between the double and the normal sigma model.

Notation and Set-Up

Our theory is defined on the doubled space. The normal coordinates are associated with the Neumann boundary condition and the other (transverse) coordinates are associated with the Dirichlet boundary condition. The field content is the metric field ($g_{mn}$), the antisymmetric background field ($B_{mn}$), the scalar dilaton ($d$) and the one-form gauge field ($A_m$). In this theory we need two constraints (the strong constraints) to guarantee gauge invariance. The index $m = 0, 1, \dots, D-1$ (we denote non-doubled target indices by $m$ through $z$). If we only consider the first constraint, it is conventionally called the weak constraint. Imposing only the weak constraint leaves non-gauge-invariant terms in the variation $\delta$ of the action; for consistent gauge invariance we must impose the strong constraints to annihilate those parts. The weak constraint can be rewritten in terms of the doubled derivative $\partial_A$, with $\partial^A = \eta^{AB}\partial_B$. The index $A = 0, 1, \dots, 2D-1$ is a doubled target index (we represent doubled indices by $A$ through $K$). We use $\eta$ to raise and lower the indices of $O(D,D)$ tensors, and the doubled coordinate $X^A$ is defined as the combination of the normal and dual coordinates.

C- and D-Bracket

We introduce the generalized Lie derivative, the C-bracket and the D-bracket [31,32]. The gauge transformations of the field $E$ and of the scalar dilaton are written in terms of the generalized metric $\mathcal{H}$, whose inverse is obtained by raising the indices with $\eta$ (2.14). The generalized Lie derivative can be defined from the gauge transformation of $\mathcal{H}_{AB}$, and it satisfies the Leibniz rule. A special property of the generalized Lie derivative is that it annihilates the constant metric $\eta$, whereas the ordinary Lie derivative does not. The gauge algebra is closed under the strong constraints, with the closure governed by the C-bracket; the D-bracket is the corresponding bracket for generalized vectors. At the end of this discussion we assume that all parameters are independent of $\tilde{x}$ in the C-bracket; it then reduces to the Courant bracket [31], defined on pairs where $A, B$ are vectors and $\alpha, \beta$ are one-forms. From the D-bracket we likewise obtain the Dorfman bracket [49], which is not antisymmetric. The C-bracket does not satisfy the Jacobi identity, but it is antisymmetric. In other words, neither the C-bracket nor the D-bracket is a Lie bracket.

F-Bracket

We discuss the F-bracket [54] in this section. The F-bracket is defined from the closed algebra of the gauge transformation of the gauge field (2.27); we use $\eta$ to raise and lower the index of $Z$. We then perform the B-transformation on the C-bracket and the F-bracket with the strong constraints. The B-transformation is a symmetry of the sigma model. Carrying out the calculation for the Courant bracket, we obtain an automorphism if $dB = 0$. This shows that the symmetry of a theory governed by the Courant bracket can accommodate a non-zero H-flux ($dH = 0$) and can possibly be extended to the $O(D,D)$ description. For the closed string theory, we use the $O(D,D)$ structure to rewrite the theory; for the D-brane theory without the one-form gauge field, the same story should hold.
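Several display equations in this subsection were lost in extraction. For concreteness, a minimal reconstruction of the standard conventions the text refers to, in the forms used throughout the double field theory literature [31,32], is the following (a sketch of the standard definitions, not the paper's original equations):
\[
\eta_{AB} = \begin{pmatrix} 0 & \delta^m{}_n \\ \delta_m{}^n & 0 \end{pmatrix},
\qquad
X^A = \begin{pmatrix} \tilde{x}_m \\ x^m \end{pmatrix},
\qquad
\partial_A = \begin{pmatrix} \tilde{\partial}^m \\ \partial_m \end{pmatrix},
\qquad
\partial^A \equiv \eta^{AB}\partial_B ,
\]
\[
\text{weak constraint: } \partial_A \partial^A \Phi = 0,
\qquad
\text{strong constraint: } \partial_A \Phi \, \partial^A \Psi = 0 ,
\]
\[
\mathcal{H}_{AB} = \begin{pmatrix} g^{mn} & -g^{mk} B_{kn} \\ B_{mk} g^{kn} & g_{mn} - B_{mk} g^{kl} B_{ln} \end{pmatrix},
\qquad
\mathcal{H}^{AB} = \eta^{AC}\eta^{BD}\mathcal{H}_{CD} ,
\]
\[
[\xi_1, \xi_2]^A_C = \xi_1^B \partial_B \xi_2^A - \tfrac{1}{2}\, \xi_{1B}\, \partial^A \xi_2^B - (1 \leftrightarrow 2),
\qquad
[\xi_1, \xi_2]^A_D = [\xi_1, \xi_2]^A_C + \tfrac{1}{2}\, \partial^A\!\left( \xi_{1B}\, \xi_2^B \right).
\]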
Before calculating the F-bracket, we fix the notation for the F-bracket with the strong constraints. Carrying out the computation, we find that $dB = 0$ does not yield an automorphism, in contrast to the Courant bracket. This property of the F-bracket shows that the $O(D,D)$ structure is not suitable for describing the DBI term.

Classical Equivalence

We prove the classical equivalence between the double and the normal sigma model in this section. We start from the bulk action, with a worldsheet metric of signature $(-,+)$ on the bulk. From the equation of motion of $X^A$, we use the strong constraints ($\tilde{\partial}^m = 0$) to show the equivalence, and then use the self-duality relation to remove half of the degrees of freedom. The gauge transformation of $X^A$ is governed by the generalized Lie derivative, as for the generalized metric. We assume that the gauge parameters do not depend on the worldsheet coordinates; then the self-duality relation (2.37) is covariant under the gauge transformation with $\tilde{\partial}^m = 0$, so (2.37) does not need to be modified for covariance. Substituting (2.37) into the other equation of motion and combining (2.40) and (2.41), we find the same equations of motion as in the normal sigma model. A boundary term is needed to guarantee the gauge invariance, and this boundary term breaks the $O(D,D)$ structure, consistent with the understanding obtained from the F-bracket.

One-Loop β Function

We implement the self-duality relation (2.37) at the off-shell level. Then we obtain the DBI theory from the one-loop β function.

Self-Duality Relation at the Off-Shell Level

Classical equivalence holds with the on-shell self-duality relation, but quantum fluctuations of the double sigma model require the self-duality relation beyond the on-shell level. For constant background fields we can show this. We first set $B = 0$; we can always restore a non-zero constant $B$-field by redefining the one-form gauge field, so no generality is lost. The bulk equations of motion can be rewritten up to an arbitrary function $f$ of $\sigma^0$. Redefining $X^m \to X^m + h^m(\sigma^0)$ with $-g\,\partial_0 h = f$, we obtain
$$\partial_1 \tilde{X} - g\,\partial_0 X = 0$$
together with the bulk equation of motion; the first equation is the self-duality relation. We then discuss the equation of motion on the boundary. The Neumann boundary condition keeps an invariant form under this redefinition. The above discussion shows that we can impose the self-duality relation at the off-shell level and still describe the same equations of motion as the normal sigma model in the case of a constant background. The difficulty of quantization for a non-constant background is the same as for the chiral boson theory [57][58][59][60]. Nevertheless, we will show in the next section that we can obtain the DBI theory from the one-loop β function.

One-Loop β Function

At the end of this section we set $B = 0$ and $g = I$ ($I \equiv$ identity matrix) to simplify the calculation without loss of generality. We first follow the standard calculation of the one-loop β function as in [61]. From the variation ($X \to X + \xi$) of the boundary term, we obtain the insertions that define the Green's functions on the bulk and on the boundary; the counterterm then determines the β function. We next solve the Green's functions on the bulk, first changing coordinates so that we only need to solve the equation on the bulk.
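The bulk action and self-duality relation referred to above were lost in extraction. A schematic Tseytlin-type form, as such doubled sigma model actions commonly appear in the literature, is sketched here; the precise coefficients, couplings and boundary term of the model of [54] are not reproduced and should be treated as assumptions:
\[
S_{\mathrm{bulk}} \sim \frac{1}{2}\int d^2\sigma\,\Big(\partial_1 X^A\,\mathcal{H}_{AB}\,\partial_1 X^B - \partial_1 X^A\,\eta_{AB}\,\partial_0 X^B\Big),
\qquad
\partial_0 X^A = \eta^{AB}\,\mathcal{H}_{BC}\,\partial_1 X^C .
\]
With $B = 0$ the self-duality relation reduces componentwise to $\partial_1 \tilde{X}_m = g_{mn}\,\partial_0 X^n$ and $\partial_0 \tilde{X}_m = g_{mn}\,\partial_1 X^n$, the first of which is the relation quoted above.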
The solutions of the Green's function on the bulk follow, and from them the equation of the Green's function on the boundary is obtained; solving it gives the boundary Green's function and hence the β function. From the β function we extract the divergent coefficient, and a useful identity can be established. Using the Bianchi identity to rewrite it (3.25), we obtain an expression equivalent to $\beta_m = 0$, which we consistently obtain. We then show how to obtain the effective action for a general constant metric $g$: because $g$ is a symmetric matrix, we can diagonalize it, rescale the diagonal matrix and redefine the one-form gauge field; we equivalently obtain the Born-Infeld determinant $\sqrt{-\det(g+F)}$ (3.29). This calculation shows that this double sigma model is a consistent model under quantum fluctuations, a non-trivial consistency check beyond the classical equivalence.

The Generalized Metric Formulation

We construct the low energy effective action from the symmetry point of view. The action is written in terms of the generalized metric and the scalar dilaton. We use the equations of motion to define the generalized scalar curvature and tensor.

The Low Energy Effective Action

We construct this low energy effective action in two parts. The first part is based on diffeomorphism and one-form gauge invariance; the candidate is the DBI action. The second part is based on the $O(D,D)$ structure, the $\mathbb{Z}_2$ symmetry, gauge symmetry with the strong constraints, and terms with two derivatives. We first discuss the $\mathbb{Z}_2$ symmetry: under $\tilde{\partial}^m \to -\tilde{\partial}^m$ the action can be rewritten, and the transformation of $\mathcal{H}_{AB}$ under $B_{mn} \to -B_{mn}$ can be written down accordingly. The action is uniquely determined by the above criteria. With the goal of rewriting the total theory without using the field strength or the one-form gauge field, we redefine the generalized metric; this field redefinition does not modify any result of the closed string part. The action of the DBI part carries an arbitrary constant $\alpha$. If we use $\tilde{\partial}^m = 0$, we obtain the familiar form, where $R$ is the Ricci scalar and $H = dB$ is the three-form field strength. This theory is determined from the symmetry point of view up to a relative coefficient, and this coefficient can be determined from the one-loop β function. The action is also consistent with [62]. If we set $D = 10$, it is the low energy effective theory of the D9-brane on a curved background. The non-trivial flux can then be realized on the D-brane theory: after performing the T-duality on this theory, we should find the non-geometric flux in lower dimensions.

Generalized Scalar Curvature and Tensor

We exhibit the equations of motion in this section and define the generalized scalar curvature and tensor from them. We first define the equation of motion of $d$ to be the generalized scalar curvature; it satisfies the suitable symmetry. At the end of this section we vary $\mathcal{H}^{CD}$, which provides the generalized Ricci tensor. To calculate the variation, we need to introduce an auxiliary field in the action, $e^{-2d}\,\lambda_{mn}\,(\mathcal{H}\eta\mathcal{H} - \eta)^{mn}$ (4.11). From the variation of $\mathcal{H}^{CD}$ we obtain the constrained variation, where $\lambda$ is the Lagrange multiplier, $S \equiv \mathcal{H}\eta$ and $S^2 = 1$ (4.14). Other variations of the generalized metric give no non-zero contribution. The equation of motion of $\mathcal{H}^{CD}$ is then equivalent to the vanishing of a projected tensor, from which we define the generalized Ricci tensor. We have thus provided the generalized scalar curvature and the generalized Ricci tensor from the equations of motion.

Conclusion

We computed the one-loop β function for the double sigma model with the strong constraints in a constant background.
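For reference, the closed string sector of the generalized metric formulation has a standard form in the literature [32]; it is reproduced below as a reconstruction (the DBI part of the paper's full action is not shown here):
\[
S = \int dx\, d\tilde{x}\; e^{-2d} \Big(
\tfrac{1}{8}\, \mathcal{H}^{AB} \partial_A \mathcal{H}^{CD} \partial_B \mathcal{H}_{CD}
- \tfrac{1}{2}\, \mathcal{H}^{AB} \partial_B \mathcal{H}^{CD} \partial_D \mathcal{H}_{AC}
- 2\, \partial_A d\, \partial_B \mathcal{H}^{AB}
+ 4\, \mathcal{H}^{AB} \partial_A d\, \partial_B d \Big),
\]
which, upon setting $\tilde{\partial}^m = 0$, reduces to the familiar
\[
S = \int dx \sqrt{-g}\; e^{-2\phi} \Big( R + 4\, (\partial\phi)^2 - \tfrac{1}{12}\, H_{mnk} H^{mnk} \Big).
\]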
We obtain the consistent low energy effective theory, the DBI theory. This shows that the construction of this double sigma model is calculable for quantum corrections. A double sigma model that is as calculable as the normal sigma model is important for accessing new physics. Although this calculation is only for the case of a constant background, it is still an important step for double field theory: so far there had been no consistency check involving quantum fluctuations for a double sigma model of the open string. We also rewrote the low energy effective theory in terms of the generalized metric and the scalar dilaton. This construction moreover lets us define the generalized scalar curvature and tensor, which demonstrates the usefulness of the generalized metric formulation [32].

This double sigma model provides a general way to introduce the boundary term. It is not restricted to the open string sigma model and should extend to other sigma models; indeed, it should apply to all kinds of theories rather than only some special ones. We believe this work opens a new door for reformulating such theories in a more powerful framework for the T-fold problem.

The formulation of the open string relies on the boundary conditions, and choosing boundary conditions is equivalent to choosing the boundary terms. It would be interesting to embed different boundary conditions in the projectors, and to determine the boundary conditions from a choice of projectors. Another interesting issue is to quantize this open string theory canonically to find the non-commutative relation. We leave these two problems, boundary conditions and quantization, to future work.

The most serious problem of double field theory is relaxing the strong constraints to obtain more physical solutions. The difficulty is that the generalized Lie derivative does not close without the strong constraints; the way forward is to develop new algebraic structures or to introduce more fields. However, the open string does not give us more understanding of this problem than the closed string does, so we still need to return to the closed string to understand it.

The D-brane theory can be lifted to the M5-brane theory, so the construction of the D-brane theory should shed light on properties of the M5-brane theory. In this low energy D-brane theory, we can find the non-geometric flux from the T-duality; at the low energy level, we should likewise find the non-geometric flux on the M5-brane, which would be interesting to study. From the symmetry point of view, we can deduce the low energy effective action up to a relative coefficient; it would be interesting to obtain this coefficient from a symmetry between the closed and open string theories without calculating the one-loop β function. The answer may be hidden in the α′ corrections of the closed string theory. Probing such principles of brane theory is an interesting direction; it should help us find the action of the M5-brane theory from these principles. When we work in the non-geometric frame to study the non-geometric flux in the massless closed string theory, we can see that the non-geometric frame in the open string theory is equivalent to using the open string parameters. This manifest formulation should offer a clearer picture.
We do not know how to deal with the non-geometric problems on a commutative space, but the extension of the non-geometric flux can be defined on a non-commutative space or through the open string parameters. The non-commutative space may be a more natural arena for describing string theory than the commutative one.
7,406
2014-12-05T00:00:00.000
[ "Physics" ]
An Augmented Reality Geo-Registration Method for Ground Target Localization from a Low-Cost UAV Platform

This paper presents an augmented reality-based method for geo-registering videos from low-cost multi-rotor Unmanned Aerial Vehicles (UAVs). The goal of the proposed method is to conduct accurate geo-registration and target localization on a UAV video stream. The geo-registration of a video stream requires accurate attitude data. However, the Inertial Measurement Unit (IMU) sensors on most low-cost UAVs are not accurate enough to be used directly for geo-registering the video. The magnetic compasses on UAVs are more vulnerable to interference from the working environment than the accelerometers; thus the camera yaw error is the main source of the registration error. In this research, to enhance the low accuracy attitude data from the onboard IMU, an extended Kalman Filter (EKF) model is used to merge Real Time Kinematic Global Positioning System (RTK GPS) data with the IMU data. In the merging process, the high accuracy RTK GPS data is used to improve the accuracy and stability of the 3-axis body attitude data. A target localization method based on the geo-registration model is proposed to determine the coordinates of the ground targets in the video; it uses the modified EKF combining RTK GPS and IMU data to improve the accuracy of both the geo-registration and the localization of the ground targets. The localization results are compared to reference point coordinates from a satellite image. The comparison indicates that the proposed method can provide practical geo-registration and target localization results.

Introduction

UAV-based target monitoring plays an integral part in multiple areas such as traffic management [1][2][3], forest-fire control [4,5], border and port patrolling [6], wild animal tracking [7] and emergency management [8]. Conventional monitoring methods include fixed station monitoring, satellite monitoring and human-crewed aircraft monitoring [1]. Fixed stations are easy to set up and able to acquire various types of data, but their field of view is limited. Satellites can monitor vast areas, but their revisit time is too long for time-sensitive monitoring missions due to orbit limitations: a single satellite typically needs tens of hours to revisit a particular target area, so tracing a moving target is nearly impossible. Human-crewed aircraft, by contrast, are responsive in monitoring tasks, and their onboard sensors can satisfy the spatial and temporal resolution requirements. The main drawback of human-crewed aircraft is their high cost; moreover, considering the safety of the pilots, they cannot operate in hazardous environments. As an emerging platform, UAVs have significant advantages in monitoring tasks [9,10]: higher mobility than fixed stations, which allows a broader field of view, lower cost and risk than human-crewed aircraft, and higher temporal and spatial resolution. The fast-growing market of low-cost electronic multi-rotor UAVs has dramatically reduced the difficulty of using UAV platforms, so using such low-cost platforms to obtain information about a target area is becoming popular among researchers. A major problem that needs to be solved in related research areas is how to assist the ground operators in obtaining information and establishing situation awareness of the working zone while a UAV is airborne and monitoring ground objects.
Augmented Reality (AR) is a promising technology for solving this problem. AR can overlay known information, such as vector maps, raster maps and other attributes of ground objects, at the corresponding locations on the screen by registering virtual 3D objects to the video recording the physical world. The geographic coordinates and the 3-axis attitude of the onboard camera are required to augment the known information on the video from UAVs. Accurate measurement of the camera attitude requires a navigation-level IMU, whose cost and weight make it infeasible on low-cost electronic multi-rotor UAVs. In the meantime, high accuracy RTK GPS receivers are becoming popular as their weight and cost decrease. Utilizing the high accuracy RTK GPS data to enhance the original attitude data is therefore a reasonable way of acquiring accurate attitude data for augmented reality geo-registration. In view of the drawbacks of existing research, this work focuses on using high precision RTK GPS data to reduce the attitude error of the low-cost onboard IMU for better AR geo-registration and ground target localization.

Related Research

As a target-monitoring platform, UAV systems have apparent advantages along with considerable challenges. When performing remote monitoring tasks, ground operators can hardly obtain sufficient information for identifying and monitoring targets [11,12]. Calhoun [13] used augmented reality to overlay models of the landing zone, runway and buildings on the real-time video stream for pilots, emphasizing critical spatial information and assisting the monitoring of the target area. However, that research focused on analyzing the available types of spatial information and possible ways of presenting it; the precision of registration was not well discussed. Crowley [14] pointed out that it is difficult for rescue personnel to identify important or obstructed buildings: they need to compare the video stream from the UAV with paper or electronic maps, which wastes plenty of time and introduces potential errors. To solve this problem, they overlaid the images from the UAV on Google Maps and marked Points of Interest (POIs); limited by the processing capability of the mobile platforms, the system could only handle static images. Drury [15] used a within-subject design to test the effectiveness of UAV remote monitoring systems with augmented reality technology. During the experiment, the participants watched the original UAV video stream and an augmented video stream, with and without terrain information, and then completed a rescue task. The results indicated that when the video was augmented with terrain information, the participants were able to identify more targets and position rescue targets more accurately. Augmented reality is therefore proven to have the ability to solve the problem of a lack of information during UAV target monitoring and to promote the situation awareness of ground operators [16,17]. Augmented reality overlays known information of the ground objects on the video at the corresponding positions by conducting geo-registration. Geo-registration (or geo-referencing) means overlaying virtual scenes on an actual video stream using the pose data of the camera; it is the key step in outdoor augmented reality.
Eugster [18][19][20] described the method of geo-registering video streams from UAV platforms and proposed a two-phase geo-referencing model comprising direct geo-referencing and integrated geo-referencing. Direct geo-referencing means directly using the GPS/IMU data provided by the UAV as the exterior orientation elements. Integrated geo-referencing uses additional image observations of known control points to estimate the exterior orientation elements of the camera. The experimental results indicated that at a UAV flying height of 50 m, the direct geo-referencing error was around 3 m, while integrated geo-referencing could achieve 0.6 m. Although integrated geo-referencing achieved sub-meter accuracy, complicated operations such as edge detection, feature point extraction and the Hough transform limit its application on small UAV platforms. Meanwhile, direct geo-registration is widely used for its low computational load and good real-time capability. Ruano [21] proposed an augmented reality tool for the situational awareness of ground operators; however, that research did not validate the registration accuracy. Stilla [22] used direct geo-referencing to produce thermal textures of buildings: at a flying height of 400 m, the accuracy was 4 m, sufficient for LOD2 (Level of Detail) city modeling. However, the test platform of that research was a human-crewed helicopter, and its high accuracy IMU is not suitable for low-cost UAVs. Eugster [18] used an extended Kalman Filter to merge GPS, IMU, barometer and other data to obtain a better estimate of the UAV pose and a better geo-registration result; nevertheless, the IMU used in that research was also too cumbersome and expensive for electronic multi-rotor UAVs.

Augmented reality gives UAV monitoring systems the ability to identify ground targets; target localization technology, in turn, enables extended applications of target monitoring. Target localization means estimating the actual geographical location of targets in real time using the various sensors onboard UAVs. In principle, there are two types of methods for target localization: the first is based on image matching and the second uses the GPS/IMU data provided by UAVs. The image matching based method corrects the geometric and radiometric distortions of the images and, if necessary, filters them; the processed image is then matched with an existing standard map to locate the targets [23]. The high computational cost and the need for pre-obtained orthoimages of the target area limit the application of this method in target monitoring, although it is remarkably reliable. The GPS/IMU data-based method determines the image space coordinates of the target from the image point coordinates and calculates the actual geographic coordinates of the targets from the relations between the image space coordinate system, the UAV coordinate system and the geodetic coordinate system. This type of method can be further divided into triangulation based approaches [24,25], laser ranging based approaches and DEM (Digital Elevation Model) based approaches [26]. Barber [25] presented a triangulation based method for ground target localization from a fixed-wing miniature air vehicle. Although the localization error could be reduced to less than 5 m, the method requires a circular trajectory around the target, which makes it impractical in most circumstances.
Ponda [27] described the GPS/IMU data-based method in detail and pointed out that the low-cost, low-accuracy GPS/IMU is the main source of error. Thus, although this type of method is simple to implement and has a low computational cost, it is not accurate enough. To solve this problem, Ponda et al. increased the target localization accuracy by optimizing the UAV trajectory and adjusting it in real time; however, this also increased the workload of the ground operators. Our previous work [28] introduced user interaction to correct the registration error during the augmentation process; the method provided satisfying results but imposed a heavy user burden, which made it infeasible for UAV video streams. In summary, existing research requires either a computationally expensive feature matching process or a navigation-level IMU to acquire high accuracy attitude data for the geo-registration process, neither of which is feasible on low-cost multi-rotor UAVs. In order to overcome the drawbacks of existing augmented reality geo-registration methods and obtain a reliable, high accuracy registration result for videos from low-cost UAVs, a new method is proposed in this paper. The proposed method provides a complete solution for augmenting video streams from low-cost UAV platforms and for ground target localization. Its core idea is to use high accuracy RTK GPS data to improve the attitude data, which makes the registration and target localization processes more accurate and stable while avoiding a computationally expensive image-matching process.

The Coordinate Conversion in Augmented Reality Geo-Registration

3D geo-registration is the process of projecting the objects in a virtual scene onto the corresponding environment of the real world. Mostly, it implies a conversion between the world coordinate system and the screen coordinate system. Four coordinate systems are involved in the process of geo-registering the UAV video stream:

• The world coordinate system W: the geographical coordinate system. The GPS data and the ground vector map are in this coordinate system. In this research, W denotes the WGS-84 coordinate system.
• The camera coordinate system C: the origin of this coordinate system is the optical center of the lens. This coordinate system is used to relocate the objects of the world coordinate system from the perspective of observation.
• The projection surface coordinate system P: a two-dimensional coordinate system used to define the projection of object points.
• The screen coordinate system S: a two-dimensional pixel coordinate system whose origin is the upper-left corner of the screen.
The O-XYZ coordinate system is the world coordinate system; C is the camera coordinate system; P is the projection surface coordinate system; and S is the screen coordinate system. As shown in Figure 2, coordinates in the real world coordinate system W are converted successively to C, P, and S.

The locations of ground objects in the world coordinate system are known. During the flight, the coordinates of the UAV and the pose of the camera can be obtained in real time. To simplify the model, the latitude and longitude coordinates are projected by the Gauss projection [29], where B is the latitude; l is the difference between the longitude L and the central meridian L0, l = L − L0; X is the meridian arc length from the equator; N is the curvature radius of the prime vertical circle; t = tan B; η = e′ cos B, where e′ is the second eccentricity of the standard spheroid; and ρ = 180/π × 60 × 60. The translation matrix from W to C then follows, with h the flying height of the UAV. If we use γ, α and β to represent the roll, pitch, and yaw of the camera, the rotation matrices of the three axes can be written down, and their product gives the conversion matrix from W to C.

The conversion from C to P is a transformation from a three-dimensional space to a two-dimensional space, also known as perspective projection. The viewing frustum is determined by the FOV (Field Of View), the aspect ratio of the projection plane, the near plane and the far plane; only objects inside the viewing frustum are observable.
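The rotation matrices themselves were lost in extraction. The sketch below shows the standard axis rotations and one plausible way to compose them for the W to C conversion; the composition order R_x R_y R_z and the function names are illustrative assumptions, not the paper's exact equations (the Gauss projection step is assumed to have been applied already):

import numpy as np

def rot_x(gamma):  # roll about the x-axis
    c, s = np.cos(gamma), np.sin(gamma)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(alpha):  # pitch about the y-axis
    c, s = np.cos(alpha), np.sin(alpha)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def rot_z(beta):   # yaw about the z-axis
    c, s = np.cos(beta), np.sin(beta)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def world_to_camera(p_world, cam_xyz, gamma, alpha, beta):
    # Translate a Gauss-projected world point to the camera origin
    # (the projected UAV position, with z the flying height h), then
    # rotate by roll, pitch and yaw into the camera frame.
    R = rot_x(gamma) @ rot_y(alpha) @ rot_z(beta)
    return R @ (np.asarray(p_world, dtype=float) - np.asarray(cam_xyz, dtype=float))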
The projection matrix from C to P is determined by the camera parameters, where fov is the vertical field of view of the camera, near is the distance between the camera optical center and the near plane, far is the distance between the camera optical center and the far plane, and aspect is the aspect ratio of the projection surface. The projection surface P and the screen S lie on the same plane but use different origins and units: in the projection surface coordinate system P, the origin is the intersection of the principal axis with the projection surface, and the unit is a physical unit; in the screen coordinate system, the origin is the upper-left point, and the unit is the pixel. In the screen coordinate system, when the center of the projection surface is (u0, v0) and the physical size of one pixel is (du, dv), a point (u′, v′) on the projection surface and its screen coordinates (u, v) satisfy a linear relation, which gives the conversion matrix from P to S. After all the conversions between the four coordinate systems, a point in the real world coordinate system can be converted to the screen coordinate system: if the coordinates of the point in W are (X, Y, Z), its screen coordinates follow by applying the chain of conversion matrices.
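The projection matrix and the P to S mapping were also lost in extraction. A minimal sketch using the standard OpenGL-style perspective matrix is given below; the paper's exact matrix layout is not shown in the text, so this standard form is an assumption:

import numpy as np

def projection_matrix(fov_deg, aspect, near, far):
    # Standard perspective matrix for the C -> P conversion, built from
    # the vertical field of view, aspect ratio, near and far planes.
    f = 1.0 / np.tan(np.radians(fov_deg) / 2.0)
    return np.array([
        [f / aspect, 0.0,  0.0,                          0.0],
        [0.0,        f,    0.0,                          0.0],
        [0.0,        0.0,  (far + near) / (near - far),
                           2.0 * far * near / (near - far)],
        [0.0,        0.0, -1.0,                          0.0],
    ])

def plane_to_screen(u_p, v_p, u0, v0, du, dv):
    # P -> S: rescale the physical units to pixels and shift the origin
    # from the projection-surface center (u0, v0) to the upper-left corner.
    return u_p / du + u0, v_p / dv + v0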
The Acquisition of High Precision GPS/IMU Data

Poor GPS/IMU data is the primary error source of the camera exterior orientation elements, and the error of the camera exterior orientation elements in turn affects the accuracy of the geo-registration and target localization. Our research tries to increase the accuracy through both hardware and software means: the hardware means is using a high accuracy RTK GPS and IMU, and the software means includes error source analysis and filtering of the position and attitude data.

High Accuracy RTK GPS

A set of high accuracy RTK GPS receivers is on board the UAV. The ground station of the RTK GPS determines its own precise location through a period of continuous observation. During the flight, the correction is calculated by the RTK ground station and transmitted to the UAV through the data link. The mobile station on board the UAV receives this correction while simultaneously receiving the GPS signal from the satellites, which brings the localization error below 0.1 m [30]. The ground station receives the RTK GPS data at a rate of 10 Hz, which is lower than typical IMU and video rates. In the filtering process the algorithm combines the RTK GPS data and the IMU data; in the interval between two RTK GPS frames, the RTK GPS data is extrapolated to keep up with the high frequency IMU and video data.

Filter Process of the Data

Since the UAV carries high precision RTK GPS receivers, it is feasible to use the RTK GPS data to enhance the attitude data. The Kalman Filter has proved to be a reliable way to merge data from multiple sensors and improve data stability: the status of the system is predicted using the previous observation, and the prediction is compared with the current measurement. The classic Kalman filter can only describe linear systems; the extended Kalman Filter (EKF) linearizes a nonlinear system around the working point by a series expansion, which allows the Kalman filter to be applied to a nonlinear system. In this research, we use the EKF from González [31,32] to combine the data from the RTK GPS and the IMU of the UAV to improve the accuracy of the geo-registration and of the ground target localization results.

González [31,32] presented an approach to loosely couple a low-cost GPS receiver and a strapdown inertial navigation system. The closed-loop position correction subtracts the estimated errors from the predicted position, where L̂_b is the latitude, λ̂_b the longitude, ĥ_b the height, and δL̂_b, δλ̂_b, δĥ_b the corresponding latitude, longitude and height errors; (−) and (+) denote the values before and after correction. In this research, the RTK GPS provides high accuracy GPS coordinates. The original algorithm from [31,32] is designed for both a low-cost INS and a low-cost GPS receiver, meaning that both the attitude and the position of the UAV have to be corrected by the filter. Since the RTK GPS has extremely high precision, the GPS data is not changed by the algorithm: the original GPS data is used to predict the current status of the UAV and to correct the status prediction in every iteration. Thus, in the EKF fusion process only the attitude data of the UAV is modified, and the position correction equations become
L̂_b(+) = L_g, λ̂_b(+) = λ_g, ĥ_b(+) = h_g,
where L_g, λ_g and h_g are the latitude, longitude and height measured by the RTK GPS. In other words, the position measurements are used directly as the actual status of the system and as the output of the filter process. The whole process is illustrated in Figure 3.

Because the camera is mounted on the UAV with a brushless gimbal, the attitude of the camera cannot be determined directly from the attitude of the UAV body. The roll and pitch angles of the camera are measured mainly from the gravity acceleration, which is steady under normal conditions, while the yaw angle is measured mainly by the magnetic compass, which is easily disturbed by the environment. Normally the gimbal onboard the UAV keeps the same heading as the UAV body, so the yaw angle of the UAV can be regarded as the yaw of the camera. Thus, after enhancing the roll, pitch and yaw angles of the UAV with the RTK GPS data, the yaw angle of the UAV is used to replace the original yaw data of the camera, improving the accuracy of the camera attitude data and hence of the geo-registration.
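A minimal sketch of one correction step of this fusion scheme is given below, assuming a reduced state of position and attitude only (the real filter in [31,32] also carries velocity and sensor-bias states; all names are illustrative):

import numpy as np

def ekf_update(x, P, z_att, H_att, R_att, z_pos_rtk):
    # x: state [lat, lon, h, roll, pitch, yaw]; P: state covariance.
    # Standard EKF measurement update, applied to the attitude observation.
    S = H_att @ P @ H_att.T + R_att          # innovation covariance
    K = P @ H_att.T @ np.linalg.inv(S)       # Kalman gain
    x = x + K @ (z_att - H_att @ x)          # corrected state
    P = (np.eye(len(x)) - K @ H_att) @ P     # corrected covariance

    # RTK positions are treated as ground truth and passed through
    # unchanged, mirroring the modified correction equations above.
    x[:3] = z_pos_rtk                        # (L_g, lambda_g, h_g)
    return x, P

After each update, the filtered UAV yaw would replace the camera's compass-derived yaw, as described in the paragraph above.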
The Target Localization Algorithm

The geo-registration process uses pre-calibrated interior orientation parameters and exterior orientation elements obtained in real time to adjust the projection matrix and the observation matrix, so that the markers in the virtual scene and the objects in real space overlap at the same position; it can be divided into direct geo-referencing and integrated geo-referencing. Limited by its heavy computational load, integrated geo-referencing is not suitable for real-time applications; thus, we use direct geo-referencing. The camera is mounted on the UAV with a certain downward sloping angle. The high precision GPS/IMU data is obtained from the dual-antenna RTK GPS system and the onboard IMU and is transferred to the ground station to calculate the exterior orientation elements of the camera, which are needed for the geo-registration of the video stream. As mentioned in Section 2, according to the supplementary data they require, target localization methods can be divided into triangulation-based approaches [24], laser-ranging-based approaches and DEM-based approaches [26]. Since laser ranging would significantly add to the weight of the UAV, we use the triangulation-based and the DEM-based approaches. Figure 4 shows the target localization process of these two approaches.
If the exterior and interior orientation elements are known, the projection relationship between the real scene and the video stream can be calculated. According to the collinearity principle, when a point is selected in the video, the ray connecting the optical center and the video point intersects the land surface at the real location of the target. This process needs additional information, such as the altitude of the target or DEM data of the target zone, to avoid the one-to-many problem. However, DEM data of the target zone is not always available, and the altitude of the target cannot be obtained directly either. In this research, given the lack of DEM data and the flat terrain within the target zone, a flat surface is used in place of the DEM. After selecting POIs, known objects are checked first: if the target point lies on a known object model, the ray connecting the optical center and the video point is intersected with that model; otherwise, the ray is intersected with the flat surface.

The Workflow of the Proposed Method

The complete workflow of the proposed method is shown in Figure 5. The RTK GPS data, the video data, and the attitude and other telemetry data of the UAV and the camera are captured by the UAV and transmitted to the ground in real time while the UAV is airborne. The proposed method is composed of the following steps:

Step 1. The RTK GPS data and the UAV attitude data are merged by the EKF algorithm described in Section 3.2.2; the output is the filtered UAV position sequence P_body and attitude sequence A_body. Since the camera keeps the same course direction as the UAV body, the yaw of the UAV body, yaw_body, replaces the yaw of the camera in the camera attitude sequence, yielding the camera attitude sequence A_camera−new.

Step 2. P_body = (lat_body, lon_body, alt_body)^T, A_camera−new and other necessary parameters are used to compute the conversion matrix between the world coordinate system W and the screen coordinate system S: the UAV position P_body is converted by Equation (1), and the conversion matrix M_W−S is then computed by Equation (2).

Step 3. The vector map from the geo-information database is projected from the geographical coordinate system to the screen coordinate system by Equation (2). The projected vector map is overlaid on the video from the UAV, completing the augmented reality-based geo-registration of the video.

Step 4. The target is located on the screen. A point on the screen (u_s, v_s) is reverse-converted to the world coordinate system, which yields a ray connecting the screen point and the optical center of the camera lens; the coordinates of the target are computed by intersecting this ray with the DEM of the target zone.

Figure 5. The full structure of the proposed method.
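A minimal sketch of Step 4 is shown below, assuming a pinhole intrinsic matrix K and a camera-to-world rotation R_wc; the paper works with the equivalent conversion matrices of Equation (2), so these names are illustrative:

import numpy as np

def locate_target(u_s, v_s, cam_pos, R_wc, K, ground_z=0.0):
    # Back-project the screen point to a ray direction in camera coordinates.
    d_cam = np.linalg.inv(K) @ np.array([u_s, v_s, 1.0])
    # Rotate the ray into the world frame.
    d_world = R_wc @ d_cam
    # Intersect p = cam_pos + t * d_world with the flat "DEM" plane
    # z = ground_z (requires a downward-pointing ray, d_world[2] != 0).
    t = (ground_z - cam_pos[2]) / d_world[2]
    return cam_pos + t * d_world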
Dataset

A DJI M600 UAV platform (DJI, Shenzhen, China) with an RTK kit was used to test our method in Beitun City (Xinjiang, China). The camera is a DJI Zenmuse X3 gimbal camera (DJI, Shenzhen, China). The gimbal is set to keep the same heading as the UAV and has a 30° downward slope. A DJI D-RTK kit [31] is onboard and provides the RTK GPS data.
Although the D-RTK kit has two antennas, we only used the positioning result, since most low-cost UAVs may be unable to carry a double-antenna RTK GPS system to determine the orientation. The RTK GPS data and other telemetry data are obtained through the Mobile SDK provided by DJI [33]. During the test, the flying height was set to 250 m above ground level. The video stream and the telemetry data were transferred to the ground in real time and used for geo-registration and target localization; at the same time, these data were stored on local storage devices for further accuracy assessment. The vector map was obtained from satellite images of the test zone. The satellite image dataset was taken by the SPOT satellite on 14 August 2014 and contains one full-color image with a ground resolution of 0.5 m and one RGB image with a ground resolution of 2 m. As shown in Figure 6, the two images were merged using ENVI to generate a 0.5 m RGB image. The vector map illustrated in Figure 7 was created by vectorizing the 0.5 m RGB image and includes roads, buildings, and waters. Because the test zone has flat terrain, we built a flat DSM (Digital Surface Model) for the experiment; this DSM is used together with the vector map in the target localization process.

The Results of the EKF Algorithm Process

The attitude data of the UAV is enhanced with the RTK GPS data using the EKF method described in Section 3.2.2. The EKF algorithm requires the error parameters of the IMU sensors and the GPS receiver before processing the data. The standard deviations of the GPS positions are 0.01 m + 1 ppm horizontally and 0.02 m + 1 ppm vertically according to [31]. An Allan variance analysis was performed on the IMU data from the UAV to determine the error parameters of the IMU; the results are shown in Table 1. Because the RTK GPS is more accurate than the onboard IMU and therefore carries a higher weight in the EKF process, the goal of using the RTK GPS to enhance the accuracy of the attitude data of the drone can be achieved. The enhancement results are illustrated in Figure 8. As shown in Figure 8, the black line represents the original roll, pitch and yaw data, which is unstable.
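The paper does not show how the Table 1 parameters were extracted. A standard overlapping Allan deviation estimator, of the kind such an analysis typically uses, is sketched below; the function name and interface are illustrative, not the paper's own code:

import numpy as np

def allan_deviation(y, fs, taus):
    # y: raw gyro/accelerometer samples; fs: sample rate in Hz;
    # taus: cluster times in seconds. Returns the overlapping Allan
    # deviation at each tau, from which IMU error parameters such as
    # those in Table 1 are typically read off.
    theta = np.cumsum(y) / fs          # integrate the signal once
    n = len(theta)
    adev = []
    for tau in taus:
        m = int(tau * fs)              # samples per cluster
        if m < 1 or n < 2 * m + 1:
            adev.append(np.nan)
            continue
        d = theta[2 * m:] - 2.0 * theta[m:-m] + theta[:-2 * m]
        avar = np.sum(d ** 2) / (2.0 * tau ** 2 * (n - 2 * m))
        adev.append(np.sqrt(avar))
    return np.array(adev)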
The red line represents the roll, pitch and yaw data processed by the EKF, which behaves more steadily than the original data. As Table 1 indicates, the original IMU data has significant errors, which means that using it directly would bring considerable error into the geo-registration process. The results and assessments that follow show that the geo-registration and the target localization perform better after the EKF filtering, indicating that the EKF successfully removed the errors from the data.

The Result of Augmented Reality Based Geo-Registration

Figure 9 shows the geo-registration results. The objects in the image and the vector map show no noticeable misalignment, which suggests that the proposed method correctly calculates the conversion between the geographic coordinate system and the screen coordinate system. Because the test zone has quite flat terrain, the flat DSM model fits the real terrain well. The orange road line of the vector map follows the road in the video stream, and the yellow color lumps representing buildings are likewise overlaid well on the buildings in the video stream. The frame rate of the registered video is stable at around 24 frames per second (fps).

Error Analysis

Because the localization error of the UAV is less than 0.1 m, it can be ignored compared to the angular error. As shown in Figure 10, θ stands for the horizontal FOV of the camera, ω for the pitch angle of the camera, and h for the height of the UAV. The camera has a mounting angle with respect to the horizontal direction. Figure 10b presents the target localization error δx caused by the roll and yaw errors, and Figure 10c presents the target localization error δy caused by the pitch angle error. Let k = tan(θ/2); from the spatial relations we obtain Equation (3). Differentiating Equation (3) with respect to ω gives the error relations: with δγ denoting the error of roll and yaw and δω the error of pitch, δx and δy can be calculated. In our case ω = 30° and θ = 60°, so k² = 1/3 and the expression for δx simplifies to Equation (8). Using δκ, δω, and δφ as the errors of roll, pitch, and yaw, the total error caused by the attitude errors follows (Equation (10)). According to the parameters of the UAV and gimbal, δφ = 1° and δκ = δω = 1.7°; the resulting total error shows that, in actual flight, the localization error caused by the attitude data error can reach 0.1h. The measured values obey a normal distribution in the region between 0.1h above and 0.1h below the actual value.
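As an independent numeric cross-check of this bound (not the paper's own derivation), one can re-intersect the camera boresight with the ground plane under perturbed angles; at h = 250 m the resulting offsets are of the same order as the quoted 0.1h:

import numpy as np

h, omega = 250.0, np.radians(30.0)   # flying height [m], camera pitch

def ground_hit(pitch, yaw=0.0):
    # Ground intersection of the boresight ray cast from (0, 0, h).
    d = np.array([np.cos(pitch) * np.cos(yaw),
                  np.cos(pitch) * np.sin(yaw),
                  -np.sin(pitch)])
    t = h / -d[2]                    # solve z = 0 along p = (0,0,h) + t*d
    return np.array([0.0, 0.0, h]) + t * d

p0 = ground_hit(omega)
err_pitch = np.linalg.norm(ground_hit(omega + np.radians(1.7)) - p0)
err_yaw = np.linalg.norm(ground_hit(omega, np.radians(1.0)) - p0)
print(err_pitch, err_yaw)  # roughly 28 m and 8 m: the same order as 0.1*h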
Filtering the data removes attitude values with significant deviations from the actual values, which narrows the distribution region to ±0.099h.

Accuracy Assessments

Ten points with noticeable features were picked from the reference satellite image for the accuracy assessment of the target localization method. As shown in Figure 11, the points were distributed as evenly as possible across the test zone. The locations of the points in the reference satellite image were used as ground truth. During the test, the same ten points were selected manually from the video and located, as shown in Figure 12. The localization results and the ground truth were projected onto a flat surface using the Gauss-Krüger projection. Each point was measured three times, and the average error was calculated for the evaluation. The coordinates of the ground truth points and the localization results are listed in Table 2; their graphical comparison is shown in Figure 13.

Figure 11. The points selected for the accuracy assessment.

The comparison in Table 2 shows that the localization results have significant error relative to the ground truth. The errors are not stable and vary enormously; since the original attitude data has considerable noise, this pattern of error is reasonable. According to Equation (10), the maximum localization error may reach 24.75 m, and the errors obtained with the original attitude data lie within this calculated range. Figure 13 indicates that the error shows no apparent directional pattern either, which also fits the significant noise of the attitude data.
The result shown in Table 2 indicates that the localization error decreases significantly after applying the EKF. The RMS (root mean square) error decreases from 12.58 m to 5.21 m. Both the accuracy and the distribution of the localization results are improved. The p-value of a two-tailed t-test between the two sets of localization errors is 0.0042, which indicates that the difference is statistically significant. The significant accuracy improvement after using the EKF-enhanced yaw data for the camera attitude proves that the yaw value relying on the magnetic compass is a major source of the geo-registration error. However, there still exist errors that were not reduced by the EKF process; these errors may come from the following sources:

• The mounting position error. The mounting position error consists of two parts: the offset between the GPS antenna and the center of the drone, and the offset between the center of the drone and the camera optical center. The flight controller of the M600 drone has a built-in compensation mechanism for the offset between the GPS antenna and the center of the drone. Due to the presence of the gimbal, the camera has a position and angle offset from the center of the drone, and the measurement of the camera position may not be accurate. Furthermore, to isolate the high-frequency vibration from the motors, there are several vibration absorbers between the gimbal and the drone body. This soft connection between the camera and the drone causes the mounting angle between the UAV body and the camera to vary during the flight, which subsequently makes the yaw angle of the drone differ from the yaw angle of the camera.

• The video lag. During the test, the telemetry data and the video data are transferred through different data links. The video link has a delay in the transfer process, which may cause the telemetry data to align to the wrong video frame and introduce errors into the geo-registration and target localization process. The onboard video link has a latency of 50 ms [34], and possibly more depending on environmental conditions. The base latency of 50 ms was compensated in the experiments, and a test zone with low radio interference was selected in order to minimize the effects of video lag. During the flight, the attitude and position data may not arrive at the same time as the video frame; this difference is compensated by applying a linear extrapolation between contiguous attitude and position data frames (see the sketch after this list).

• The screen selecting error. Affected by the screen resolution and human operation, the position selected on the screen may not be the exact point of the target; these errors may amount to several meters in the real world.

• The rough terrain model. During the flight test, the terrain of the test zone is represented by a flat surface. As the actual terrain varies and the orientation of the UAV changes, the value and direction of the error may change irregularly. When DEM data of the target zone exists, using the actual DEM of the target zone can increase the localization accuracy.
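A minimal sketch of the timestamp compensation described in the video-lag item might look as follows, assuming telemetry arrives as parallel lists of timestamps and attitude vectors; the 50 ms constant is the base latency from the text, while the function and variable names are illustrative. Yaw wrap-around at 360° is ignored for brevity.

```python
from bisect import bisect_left

VIDEO_LATENCY = 0.050  # base video-link latency compensated in the tests (s)

def attitude_at(frame_ts, times, values):
    """Attitude for a video frame timestamp, by linear inter-/extrapolation
    between contiguous telemetry samples; times is ascending and parallel
    to values (lists of [roll, pitch, yaw])."""
    t = frame_ts - VIDEO_LATENCY          # undo the video-link delay
    i = bisect_left(times, t)
    i = max(1, min(i, len(times) - 1))    # clamp so the edges extrapolate
    t0, t1 = times[i - 1], times[i]
    w = (t - t0) / (t1 - t0)              # w > 1 extrapolates forward
    return [(1 - w) * a + w * b for a, b in zip(values[i - 1], values[i])]
```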
The Use of DJI M600 Platform

This article intends to demonstrate a method for achieving satisfactory augmented reality geo-registration and target localization results on low-cost UAV platforms. However, the price of the DJI M600 platform is slightly higher than that of most electronic multi-rotor drones. The reasons for choosing the DJI M600 platform are as follows:

• Although the DJI M600 is slightly more expensive among multi-rotor UAV platforms, the difficulty and cost of using the M600 are still much lower than those of oil-powered helicopter UAVs, which are capable of carrying navigation-grade IMU sensors. A single person can operate an M600 in extreme situations, which is nearly impossible for oil-powered helicopter UAVs.

• The D-RTK kit onboard the M600 is an off-the-shelf product provided by DJI, which is highly integrated with the flight controller of the M600. This experiment platform saves plenty of time and work compared to building a test environment from scratch.

Although the M600 is not very cheap compared to cheap quadcopters, the attitude sensors of the M600 and the onboard camera gimbal have no essential difference from those of cheap drones. Thus, the conclusions of this research are also applicable to cheap quadcopters.
The proposed method in this article used none of the extra capabilities that only an expensive platform like the M600 offers. There is no theoretical barrier to applying the proposed method to a truly low-cost UAV platform.

Conclusions

This paper presents an augmented reality geo-registration method for geo-registering video streams from low-cost UAVs and localizing ground targets. In the proposed method, a conversion model between the world coordinate system and the screen coordinate system was used to complete the augmented reality-based geo-registration of the video. The RTK GPS data was used to enhance the body attitude data via the EKF algorithm, and the camera yaw data was replaced by the enhanced body yaw data in the geo-registration process to improve the accuracy of geo-registration. A target localization method based on the geo-registration model was proposed to complete the target localization process on the video. The performance of the proposed method was demonstrated by a case study in Beitun City, Xinjiang Province, China. The results showed that the proposed method performed well in the test environment. In the augmented video, the geometries and the marks were placed in the correct places relative to the corresponding objects in the video, and the attitude data of the drone was enhanced efficiently by the EKF algorithm. The target localization results were improved with the enhanced attitude data of the drone and the camera.

The limitation of this study is that in the real world the terrain is not a simple flat surface. Using a flat surface ignores slight changes in the ground, which introduces errors into the localization results. The DEM of the working zone in the real world needs to be used to achieve better localization accuracy. The EKF fusion between the RTK GPS data and the UAV attitude data improves the accuracy of the attitude data significantly. However, there still exists a systematic error between the actual yaw value and the fusion result, which will have to be calibrated strictly. This error may vary with working zones and latitudes, which is not easy to solve in practical applications. The vector data and the ground truth points are not entirely accurate, so the localization results are relative values. Since the system does not have a double-antenna RTK GPS system, the augmentation of the attitude data using the RTK GPS data requires the UAV to be moving to ensure a satisfactory result. When the UAV is hovering, the fusion performance may decrease. In such circumstances, manual interaction can be introduced to improve the geo-registration accuracy.
12,738.6
2018-11-01T00:00:00.000
[ "Engineering", "Computer Science", "Environmental Science" ]
Radio Channel Characterization in Dense Forest Environments for IoT-5G : The attenuation due to vegetation can drastically limit the performance of Wireless Sensor Networks (WSN) and Internet of Things (IoT) communication systems, even more so for the high data rates envisaged for the upcoming 5G mobile wireless communications. In this context, radio planning tasks become necessary in order to assess the validity of future WSN and IoT systems operating in vegetation environments. For that purpose, path loss models for scenarios with vegetation play a key role, since they provide RF power estimations that allow an optimized design and performance of the wireless network. Although different propagation models for vegetation obstacles can be found in the literature, a model combining path loss and multipath propagation is rarely considered. In this contribution, we present the characterization of the radio channel for IoT and 5G systems working at 2.4 GHz, focusing on radio links blocked by oak and pine trees modelled from specimens found in a real recreation area located within a dense forest environment. This specific forest, composed of thick in-leaf trees, is called Orgi Forest and is situated in Navarre, Spain. In order to fit and validate a radio channel model for this type of scenario, both measurements and simulations by means of an in-house developed 3D Ray Launching algorithm have been performed, offering as outcomes the path loss and multipath information of the scenario under study. Geometrical and dielectric models of the trees were created and introduced into the simulation software. The path loss was then estimated as a function of the radio link range for two species of trees at 2.4 GHz. We concluded that the scattering produced by the tree can be divided into two zones with different dominant propagation mechanisms: a free-space zone far from the tree and a diffraction zone around the edge of the tree. 2D planes of delay spread values are also presented, which similarly reflect the proposed two-zone model.

Introduction

Inhomogeneous vegetation environments have the special feature of acting as scatterers of electromagnetic waves. The signal scattering translates into an excess attenuation which can limit the performance of the Internet of Things (IoT) envisaged at the high data rates and low latency expected for the upcoming 5G mobile wireless communications. In this context, radio planning tasks become necessary in order to assess the validity of future IoT systems operating in vegetation environments. For that purpose, path loss models for scenarios with vegetation play a key role, since they provide RF power estimations that allow an optimized design and performance of the wireless network [1]. Moreover, a path loss model may contribute to evaluating the maximum effective distance between adjacent terminals, and hence to estimating the number of sensors needed to cover a certain area. Finally, the signal strength loss is related to the quality of service (QoS), causing unreliable communication between nodes that will increase both the number of data packet retransmissions and the power consumption of the nodes, ultimately causing radio link failure. Therefore, there is a need for reliable through-vegetation radio channel modelling for vegetation environments, which will assess the propagation behaviour in terms of both path loss and multipath propagation.
The characterization of the radio channel blocked by vegetation elements has been widely studied in the literature, where different models have been proposed to estimate the power attenuation or excess level loss introduced by signal blockage due to vegetation obstacles, mainly trees [2][3][4]. However, a model combining path loss and multipath is rarely considered. In this contribution, we present a simple model to characterize the attenuation due to isolated thin trees in an air-to-air communication channel occurring between a static transmitter and a mobile user which moves linearly toward the tree. The developed channel model also considers parameters due to the multipath presence, obtaining the value of the delay spread parameter. The radio characterization was performed by means of simulations based on 3D Ray Launching software, where the specific material parameters of the vegetation elements, such as dielectric constant and conductivity, are considered.

The paper is organized as follows: In Section 2, the simulation tools are discussed: the 3D Ray Launching algorithm, as well as the geometrical and dielectric model of the trees. In Section 3, the simulation scenario is described and the obtained simulation results are shown. Finally, a comparison between these simulation results and the radio channel measurements shown in Reference [4] is made.

Radio Channel Characterization of Vegetation Environments

In this section, the main elements used for performing the radio channel characterization are described: the simulation algorithm and the created geometrical and dielectric models of the trees.

Simulation Software

The radio propagation models most commonly used to characterize this type of complex environment are theoretical or empirical models. Theoretical methodologies offer rapid results but lack precision. On the other hand, empirical techniques are more accurate but, in exchange, require extensive measurement campaigns in the considered environment. Methods based on geometrical optics, such as ray tracing or ray launching, achieve a good trade-off between simulation accuracy and computational cost. In this work, an in-house developed 3D Ray Launching (3D-RL) algorithm has been used to characterize inhomogeneous vegetation environments. The 3D-RL approach is divided into three steps:
1. The first step consists in the design and creation of a realistic scenario, considering all the obstacles and scatterers within it.
2. The second step is the simulation procedure, where a set of rays is launched from the transmitter with a specific angular and spatial resolution. All of the parameters of the paths that the rays follow are stored during simulation, until the rays reach the maximum number of reflections or the maximum specified delay. Electromagnetic phenomena such as reflection, refraction and diffraction are considered.
3. Finally, the third step processes the data and provides accurate electromagnetic radio wave propagation results.
A detailed description of the algorithm can be found in Reference [5], and its validation for complex environments in which vegetation has a great impact can be found in Reference [6]. It is worth noting that, in addition to the reflection and refraction electromagnetic phenomena, the presented 3D-RL algorithm has the possibility of including the diffraction phenomenon in the simulations. For the presented simulations, diffraction has been activated.
Model of Tree

The Orgi Forest is a 77-hectare forest included in the European Natura 2000 network. Natura 2000 is the largest network of protected areas in the world, offering protection to the most valuable and threatened species in Europe. In the case of Orgi, the oak (Quercus robur) is the main protected species. For the presented analysis, the radio propagation through both pines and oaks has been assessed. For that purpose, novel computational tree models have been developed for use in the 3D-RL algorithm. The oak tree consists of a solid trunk and a homogeneous mass of leaves, while the pine is modelled with homogeneous branches, with air in between. Both the widths and the heights of the trees are parameterized. Figure 1 shows the created tree models for oaks and pines, and Table 1 shows the parameters of the materials included in the simulations.

Simulation Results

The scenario simulates the transmissions between a static transmitter and a mobile terminal in the presence of a thick tree producing a signal blockage. This study considered two tree species: oak and pine. The scenario created for the simulations with the 3D-RL can be seen in Figure 2. The transmitting frequency used was 2.4 GHz. The transmitter, represented by a red circle (TX), has been placed at 2 m height. The entire bounding box (i.e., the walls of the scenario) has been defined as air with the aim of avoiding undesired multipath components, except the ground, which has been defined with the material properties shown in Table 1. A summary of the simulation parameters is presented in Table 2. Although the 3D-RL tool provides results for the whole volume of the scenario, the propagation losses corresponding to the yellow dashed line of Figure 2b are analysed in this work. Specifically, linear paths at three heights have been considered: 1 m, 2 m and 3 m.

Path Loss

The effect of the mobile receiver displacement on the path loss, moving from far to near the tree along the direct ray path direction, was analysed from the simulated data for both species of trees at 2.4 GHz. We concluded that the scattering produced by the tree can be divided into two zones: a diffraction-dominant zone in the vicinity of the tree, followed by a free-space zone. The two different propagation situations can be identified in Figure 3. Far from the tree, the signal variation with distance fits the free-space power decay, given that the line-of-sight (LOS) condition is dominant. However, as the receiver moves linearly toward the tree (actually the mass of leaves), the channel response varies from one typical of a free-space scenario to one typical of a multipath or scattering situation, and the received signal starts a linear variation of opposite trend to the free-space one. This is the diffraction-dominant zone. Diffraction causes the large attenuation introduced by the tree blockage to start recovering, continuing then as a free-space component.
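A hedged sketch of this two-zone behaviour: free-space path loss governs the received power far from the tree, while an additional excess loss grows once the receiver enters the diffraction-dominant zone. The break distance and excess slope below are illustrative fit parameters, not values reported in the paper.

```python
import numpy as np

C, F = 3e8, 2.4e9  # speed of light, carrier frequency used in the study

def fspl_db(d):
    """Free-space path loss in dB over distance d (m) at 2.4 GHz."""
    return 20.0 * np.log10(4.0 * np.pi * d * F / C)

def two_zone_loss_db(d_rx_tree, d_tx_tree, d_break, excess_slope):
    """Free-space loss over the whole link plus a linearly growing excess
    loss once the receiver enters the diffraction-dominant zone
    (d_rx_tree < d_break). d_break (m) and excess_slope (dB/m) are
    illustrative fit parameters, not values reported in the paper."""
    link = d_rx_tree + d_tx_tree
    excess = np.where(d_rx_tree < d_break,
                      excess_slope * (d_break - d_rx_tree), 0.0)
    return fspl_db(link) + excess
```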
These results are similar to those found in Reference [4], which describes a scenario of a low-elevation, air-to-ground radio link blocked by an isolated tree, with a transmitter placed above both the tree and a ground mobile receiver, for different species of trees at X-band (8-12 GHz) and Ku-band (12-18 GHz) frequencies. However, apart from a free-space region, two different scattering zones were identified in Reference [4]: a diffuse scattering-dominant region close to the tree trunk, within which the signal level only admitted a model according to a statistical distribution function; and a colliding region wherein the diffraction on the tree crown prevails, modelled considering knife-edge diffraction loss with a correction of the tree height. Beyond this second scattering zone, the signal recovers the power decay corresponding to the free-space model. In our model, we have not clearly identified the zone corresponding to the diffuse scattering near the tree. This may be due to the absence of air gaps in the model adopted for the mass of leaves, which would turn the propagation medium into a multi-dispersive material. In Reference [4], the model was experimentally derived from measurements performed inside an anechoic chamber. For actual trees, the mass of leaves is not homogeneous, and the random fading effect of the leaves on the propagated signal contributes to producing the diffuse scattering zone. However, in Figure 3a, for oak, a noticeable effect is observed once the tree blockage ends: the signal attenuation decays sharply. This extreme attenuation may respond to the plausible existence of a diffuse scattering zone. Experimental measurements need to be carried out to corroborate this fact. For the pine tree, the results are similar but less marked. This may be due to the fact that the pine tree model shows gaps of air that do not attenuate the signal as drastically as the homogeneous oak model does.

Delay Spread

As previously mentioned, models combining path loss and multipath propagation are rarely considered, but the presented 3D Ray Launching algorithm can also provide multipath propagation information, as can be seen in Figure 4, where simulation results of the delay spread are shown for different heights of the pine and oak tree models. As can be seen, at 1 m height the trunk is the influential part, but at 3 m height the pine tree model presents too many air gaps (i.e., behaves like a low-density medium), while the oak tree model behaves like a homogeneous medium. These facts will be studied in future works. In any case, the obtained results are in accordance with [7,8].
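The delay spread maps of Figure 4 summarize the multipath with the standard RMS delay spread statistic, i.e., the power-weighted second central moment of the path delays that a ray launcher outputs. A minimal computation, with a hypothetical three-tap profile as input, is:

```python
import numpy as np

def rms_delay_spread(delays, powers):
    """RMS delay spread of a power delay profile: square root of the
    power-weighted second central moment of the multipath delays."""
    t = np.asarray(delays, dtype=float)
    p = np.asarray(powers, dtype=float)
    mean_delay = np.sum(p * t) / np.sum(p)
    return float(np.sqrt(np.sum(p * (t - mean_delay) ** 2) / np.sum(p)))

# Hypothetical 3-tap profile (delays in ns, linear powers), for illustration:
print(rms_delay_spread([0.0, 50.0, 120.0], [1.0, 0.4, 0.1]))
```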
2,758
2018-11-14T00:00:00.000
[ "Computer Science" ]
A Self-Scrutinized Backoff Mechanism for IEEE 802.11ax in 5G Unlicensed Networks : The IEEE 802.11ax high-efficiency wireless local area network (HEW) is promising as a foundation for evolving the fifth-generation (5G) radio access network on unlicensed bands (5G-U). 5G-U is a continued effort toward rich ubiquitous communication infrastructures, promising faster and more reliable services for the end user. HEW is likely to provide four times higher network efficiency even in highly dense network deployments. However, the current wireless local area network (WLAN) itself faces a huge challenge of efficient radio access due to its contention-based nature. WLAN uses a carrier sense multiple access with collision avoidance (CSMA/CA) procedure in medium access control (MAC) protocols, which is based on a binary exponential backoff (BEB) mechanism. Blind increase and decrease of the contention window in BEB limits the performance of WLAN to a limited number of contenders, thus affecting end-user quality of experience. In this paper, we identify future use cases of HEW proposed for 5G-U networks. We use a self-scrutinized channel observation-based scaled backoff (COSB) mechanism to handle the high-density contention challenges. Furthermore, a recursive discrete-time Markov chain model (R-DTMC) is formulated to analyze the performance efficiency of the proposed solution. The analytical and simulation results show that the proposed mechanism can improve user experience in 5G-U networks.

Introduction

Fifth-generation (5G) wireless systems are becoming a priority for telecom operators, as 5G comes with the promise of unseen services and a broad range of new use cases and business models ranging from smart transport systems to smart agriculture and factories. 5G is expected to push the digitization of the economy further due to its ability to handle large volumes of data with low latency in real time. The evolution of the 5G era has promised an age of boundless connectivity and intelligent automation. 5G wireless networks will support 1000-fold gains in capacity, connections for at least 100 billion devices, and 10 Gb/s individual user experiences capable of extremely low latency [1,2]. To support massive capacity and connectivity, the IEEE 802.11ax high-efficiency wireless local area network (HEW) is promising as the foundation to evolve the 5G radio access systems to the unlicensed band (namely, 5G-U) [3]. However, wireless local area networks (WLAN) will face huge challenges to access this unlicensed band, especially for highly dense user deployments. The WLAN medium access control (MAC) protocol mainly focuses on maximizing communication radio utilization using fair MAC layer resource allocation (MAC-RA) [4,5], employing a carrier sense multiple access with collision avoidance (CSMA/CA) scheme of the distributed coordination function (DCF) for the Wi-Fi user equipments' (WEs) competition to access the medium. To achieve maximum communication radio utilization through fair MAC-RA in WLANs with the ever-increasing density of contending WEs, the CSMA/CA scheme of the current DCF is of great importance as a part of 5G-U [5].
The binary exponential backoff (BEB) scheme is the typical and traditional CSMA/CA mechanism, which was introduced in the IEEE 802.11 DCF [6]. A randomly generated backoff value is used for the contention procedure. At the first transmission attempt, the WE generates a uniform random backoff value B from the contention window interval [0, W_cur], where W_cur is initially set to the minimum value W_min. After each unsuccessful transmission, W_cur is doubled until it reaches the maximum value W_max = 2^m W_min, for a maximum number m of backoff stages (b), that is, b ∈ (0, m). Once a WE successfully transmits its data frame, W_cur is reset to the minimum value W_min. For a network with a heavy load, resetting b to zero and the contention window (W) to its W_min value after a successful transmission will result in more collisions and poor network performance, due to an increased probability of selecting a similar backoff value B. Similarly, for fewer contending WEs, the blind exponential increase of W_cur for collision avoidance causes an unnecessarily long delay due to the wider range for selecting B. Besides, this blind increase/decrease of the backoff window is even more inefficient in the highly dense networks proposed for IEEE 802.11ax enabling 5G-U, because the probability of contention collision increases with the increasing number of WEs. Thus, the current MAC-RA protocol does not allow WLANs to achieve high efficiency in highly dense environments and become a part of future 5G-U, whereas the upcoming HEW will suffer from such unresolved issues, as it will be required to achieve four times higher network efficiency even in highly dense network deployments [3]. Hence, to withstand this challenge, WLAN needs a more efficient and self-scrutinized backoff mechanism to promise enhanced user quality of experience (QoE).
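The BEB rules above translate directly into a short sketch; W_MIN and M are example settings, not values mandated by the standard.

```python
import random

W_MIN, M = 32, 5  # example settings, not values mandated by the standard

class BEB:
    """Binary exponential backoff exactly as described above."""
    def __init__(self):
        self.stage = 0  # b in (0, m)

    def draw_backoff(self):
        w_cur = (2 ** self.stage) * W_MIN      # W doubles per failure
        return random.randint(0, w_cur)        # B uniform in [0, W_cur]

    def on_collision(self):
        self.stage = min(self.stage + 1, M)    # up to W_max = 2^m * W_min

    def on_success(self):
        self.stage = 0                         # the blind reset criticized here
```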
This paper describes some of the use cases of HEW deployments for 5G-U networks. To solve the critical medium collision problem incurred by a large number of densely deployed contending WEs, a practical channel observation-based scaled backoff (COSB) mechanism [4] is presented in this paper. The COSB guarantees enhanced QoE in terms of high throughput and low delay in a high-density environment by reducing the number of collisions during the channel access mechanism by using a self-scrutinized backoff mechanism.

The remainder of the paper is organized as follows: Section 2 describes some of the use cases of HEW deployments in 5G-U. In Section 3, the proposed COSB is described in detail. An analytical model is formulated in Section 4 to affirm the performance enhancement using COSB in highly dense environments. Section 5 describes the performance evaluation of COSB compared with the state-of-the-art BEB along with two similar backoff scaling protocols. Finally, in Section 6, a comprehensive conclusion and future works are presented.

IEEE 802.11ax Use Cases in Fifth-Generation Radio Access Network on Unlicensed Bands (5G-U)

Based on current research trends [5] and developments within IEEE 802.11ax HEW [3], several future use cases for 802.11ax can be identified. Figure 1 shows some of the use cases proposed for HEW in 5G-U [7]. These include a high throughput HEW in the form of a gigabit ethernet connection replacement, improved network capacity with multi-user multiple-input and multiple-output (MU-MIMO) transmissions, using HEW as a backhaul for local area networks (LAN), and supporting highly dense scenarios (such as an office building, stadium, train, etc.).

Gigabit Ethernet Connection Replacement

The most upfront use case of the latest gigabit Wi-Fi amendment IEEE 802.11ax HEW is the opportunity to replace ancient gigabit ethernet connections, such as a connection to a server (e.g., server A in Figure 1a), with HEW radio links. With this use case, it is likely to serve either more WEs (servers) with the same throughput as before or the same number of WEs with improved throughput. The former benefit is particularly significant from the perspective of densely deployed networks, and the latter from the perspective of backhaul radio links supporting dense networks.

Improved Network Capacity Using Multi-User Multiple-Input and Multiple-Output (MU-MIMO)

HEW-enabled base stations (BSs) typically have more radio antennas than WEs. Therefore, a BS can utilize downlink MU-MIMO (the 802.11ax working group is also proposing uplink MIMO [3]), which allows a single BS to transmit parallel beam-formed transmission streams to different WEs, such as WE 1, WE 2, WE 3 and WE 4 in Figure 1b, on the same frequency. Beam-forming has previously been used in single-user (SU) transmissions to achieve higher data rates, and can now be used to increase overall WLAN network capacity. As a result, a WE equipped with a smaller number of antennas (that is, only one) does not affect network performance by occupying the whole radio channel with its lower transmission rate. Moreover, since MU-MIMO transmissions are realized in parallel and possibly with different transmission rates, the distance from the BS is less important than in legacy SU networks, in which WEs on the edge of the coverage cell (using low rates) could severely affect the performance of others by deferring their transmissions.

High-Efficiency Wireless (HEW) as a Backhaul for Local Area Network (LAN)

Since the IEEE 802.11ax HEW amendment is a successor to IEEE 802.11ac very high throughput (VHT), Wi-Fi radio links can effectively replace wired backhaul connectivity, especially for those deployment infrastructures where wired connection deployment is a challenging task [7]. HEW as a backhaul connection, as shown in Figure 1c, is surely the most straightforward application of this concept. However, an emerging idea is to use directed beam-forming (point-to-point) backhaul links using IEEE 802.11ax for small cells to reduce the backhaul cost. While operating in an unlicensed radio spectrum is subject to interference from other unlicensed networks or WEs, interference can be avoided by using highly directional antennas. With the emergence of new MIMO directional antennas, point-to-point links could profit from 802.11ax beam-forming transmissions.
Support for Highly Dense Scenarios

In most WLAN configurations, each BS serves only a limited number of WEs. Traditionally, to analyze the best-case scenario, only a single WE per BS is assumed. The worst-case scenario arises when multiple Wi-Fi networks are deployed in the same area as densely deployed networks. Examples of such scenarios are an office floor/building, train stations, or stadiums (as shown in Figure 1d), in which each end WE connects to its own BS but the maximum system throughput is limited due to interference coming from neighboring devices (BSs and WEs). If more WEs were served by each BS or more Wi-Fi networks were present in the same area, the results would be worse due to higher overheads and increased interference. HEW expects to handle the interference due to highly dense network deployments using more intelligent and optimized MAC-RA schemes, such as dynamic sensitivity thresholds for BS and WE traffic differentiation, and adaptive transmit power [5].

Problem Statement

All of the use cases described above indicate that the performance of a Wi-Fi system can be severely degraded by an increase in the number of contenders, as collision in the network is directly proportional to the density of the network. This problem statement is supported by the simulation results shown in Figure 2. Figure 2 plots the number of WEs (n) contending for channel access versus the average channel collision probability (p_obs) in a saturated (always willing to transmit) network environment with W_min = 32 and W_min = 64. The other simulation parameters are described in Table 1. The figure shows that increased network density has a direct relationship with the average channel collision probability; the denser the network, the higher the channel collision probability. In such a troublesome situation, a more adaptive and self-scrutinized MAC-RA is required by HEW networks to maintain performance so that they can serve 5G-U.
Channel Observation-Based Scaled Backoff (COSB)

In the proposed COSB protocol, after the communication medium has been idle for a distributed inter-frame space (DIFS), all the WEs competing for the channel proceed to the backoff procedure by selecting a random backoff value B, as shown in Figure 3. The time immediately following an idle DIFS is slotted into observation time slots (α). The duration of α is either a constant slot time σ during an idle period or a variable busy (successful or collided transmission) period. While the channel is sensed to be idle during σ, B decrements by one. A data frame is transmitted after B reaches zero. In addition, if the medium is sensed to be busy, the WE freezes B and continues sensing the channel. If the channel is again sensed to be idle for DIFS, B is resumed. Each individual WE can proficiently measure the channel observation-based conditional collision probability p_obs, which is defined as the probability that a data frame transmitted by a tagged WE fails. We discretize the time in B_obs observation time slots, where the value of B_obs is the total number of α observation slots between two consecutive backoff stages, as shown in Figure 3. A tagged WE updates p_obs from the B_obs slots of backoff stage b_i at the i-th transmission as p_obs = (Σ_k S_k)/B_obs, where for an observation time slot k, S_k = 0 if α is empty (idle) or the tagged WE transmits successfully, while S_k = 1 if α is busy or the tagged WE experiences a collision, as shown in Figure 3. In the figure, WE 1 randomly selects its backoff value B = 9 for its b_i backoff stage. Since WE 1 observes nine idle slot times, two busy periods, and one collision (B_obs = 9 + 2 + 1 = 12), p_obs is updated as (2 + 1)/B_obs = 3/12 = 0.25 in the next backoff stage b_(i+1).

According to the channel observation-based conditional collision probability p_obs, the adaptively scaled contention window value is W_(b_(i+1)) at backoff stage b_(i+1) of transmission time i + 1, where b_(i+1) ∈ (0, m) for the maximum number m of backoff stages, and i is the discretized time for the data frame transmissions of a tagged WE. More specifically, when a transmitted data frame has collided, the current contention window W_(b_i) of backoff stage b_i at the i-th transmission time slot is scaled up according to the observed p_obs at the i-th transmission, and when a data frame is transmitted successfully, the current contention window W_(b_i) is scaled down according to the observed p_obs at the i-th transmission. Unlike BEB (where the backoff stage is incremented for each retransmission and reset to zero for a new transmission, as shown in Figure 4a), the backoff stage b_i in COSB at the i-th transmission is incremented or decremented. Figure 4b shows that the backoff stage does not reset after a successful transmission. Since the current backoff stage represents the number of collisions or successful transmissions of a tagged WE, it helps to scale the size of W efficiently. The incremented or decremented backoff stage b_i results in scaling up or scaling down of the current contention window, respectively, where ω is a constant design parameter, expressed in terms of W_min, that controls the optimal size of the contention window.
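The update rules can be summarized in a short sketch. The p_obs bookkeeping follows the 3/12 = 0.25 example above exactly; the window expression W_b = 2^b · W_min · ω^p_obs mirrors the form used in the analytical section below, but the concrete value of the design constant ω is an assumption here.

```python
import random

W_MIN, M = 32, 5
OMEGA = 32  # design constant; its exact value is an assumption here

class COSB:
    """Sketch of channel observation-based scaled backoff."""
    def __init__(self):
        self.stage = 0
        self.slots = []   # S_k per observation slot: 0 idle, 1 busy/collided

    def observe(self, busy_or_collided):
        self.slots.append(1 if busy_or_collided else 0)

    def p_obs(self):
        # e.g. 9 idle + 2 busy + 1 collision -> 3 / 12 = 0.25, as in the text
        return sum(self.slots) / len(self.slots) if self.slots else 0.0

    def window(self):
        # W_b = 2^b * W_min * omega^p_obs, the form used in the analysis
        return int((2 ** self.stage) * W_MIN * OMEGA ** self.p_obs())

    def next_backoff(self, collided):
        # Stage moves up on a collision and down on a success -- no reset.
        self.stage = min(self.stage + 1, M) if collided else max(self.stage - 1, 0)
        b = random.randint(0, self.window() - 1)
        self.slots = []   # open a fresh observation window of B_obs slots
        return b
```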
Analytical Model

We formulate the analytical evaluation of the proposed COSB mechanism in terms of saturation throughput and average delay, on the assumption of ideal channel conditions, i.e., no hidden terminal and capture effects. In the analysis, we assume a fixed number of WEs, each of which is always willing to transmit a data frame, i.e., the network is assumed to be a saturated traffic environment. Initially, we study the behavior of a tagged WE with a discrete-time Markov chain model (DTMC) [8,9], and we obtain the stationary transmission probability γ for the tagged WE. Since the proposed COSB does not reset the backoff stage to its initial value (that is, to zero) after a successful transmission, the transmission attempt for every new data frame remains recursive within the backoff stage state dimension. To accurately analyze the performance of COSB, we formulate a recursive discrete-time Markov chain model (R-DTMC). Later, knowing the exact events that can occur on the communication channel within a randomly selected slot time, we formulate the normalized throughput and average delay of the proposed COSB mechanism.

Recursive Discrete-Time Markov Chain (R-DTMC) Model

Consider n WEs competing for the channel in a WLAN. In the saturated condition, each WE immediately has a data frame available for transmission after each successful transmission. Thus, due to the consecutive data frame transmissions, each data frame needs to wait a random backoff time before transmitting.

Let b be the backoff stage counter for a tagged WE and m be the maximum number of backoff stages b can experience for a data frame, that is, b ∈ (0, m), such that W_b = 2^b × W_min × ω^(p_obs) for the b-th backoff stage and W_max = 2^m × W_min × ω^(p_obs) for the m-th backoff stage contention window, where W_b is the contention window size at the b-th backoff stage and p_obs is the observed channel collision probability. Let us adopt the notation W_(b+1) = 2^(b+1) × W_min × ω^(p_obs) for the adaptively scaled-up contention window for the b + 1 backoff stage, when transmission fails at the b-th backoff stage. Similarly, let W_(b−1) = 2^(b−1) × W_min × ω^(p_obs) be the adaptively scaled-down contention window for the b − 1 backoff stage, when transmission succeeds at the b-th backoff stage.

Assume Ω(t) is the function for the stochastic process representing the backoff counter u for a tagged WE, where u ∈ (0, W_cur − 1). Since time is discretized on an integer time scale, t and t + 1 correspond to the beginnings of two consecutive transmission time slots, and the backoff time counter of each WE decreases at the beginning of each slot time. Figure 3 illustrates that the backoff time decreases when the communication channel is sensed as idle (σ), and it stops when the channel is sensed as busy, which may be due to a successful or unsuccessful transmission of any other WE. Therefore, the time interval between two consecutive slot-time beginnings may be much longer than, and different from, the idle slot time size, i.e., σ. Let π(t) be the stochastic process representing the backoff stages (0, 1, 2, . . . , m) of the tagged WE at time t. The key articulation in our R-DTMC model is that, at each data frame transmission attempt, regardless of the number of retransmission attempts, each data frame collides with a practically observed and independent collision probability p_obs. With these assumptions, COSB can be modeled as the two-dimensional process {π(t), Ω(t)} with the R-DTMC depicted in Figure 5. In this R-DTMC, the transition probabilities are described as follows.
The tagged WE remains at the first backoff stage after a successful transmission on the first backoff stage with the corresponding probability. The backoff counter decreases when the channel is sensed as idle with the corresponding probability. The tagged WE scales up the current contention window and moves to the next stage b if a data frame transmission failed on backoff stage b − 1, with the corresponding probability. The tagged WE scales down the current contention window and decreases its backoff stage for the next transmission attempt to b − 1 after a successful transmission on backoff stage b, with the corresponding probability. The tagged WE remains at the m-th backoff stage after an unsuccessful transmission with the corresponding probability.

In particular, regarding the above transition probabilities, as considered in Equation (6), when a data frame transmission collides at backoff stage b − 1, the backoff stage increases to b, and the new backoff value is uniformly chosen from the adaptively scaled-up contention window W_b. On the other hand, Equation (7) describes how, when a data frame transmission is successful at backoff stage b, the backoff stage decreases to b − 1, and the new backoff value is uniformly chosen from the adaptively scaled-down contention window W_(b−1). In case the backoff stage reaches the value m (that is, the maximum backoff value), it is not increased in the subsequent data frame transmission attempt.

Let us assume that d_(b,u) = lim_(t→∞) P{π(t) = b, Ω(t) = u}, b ∈ (0, m), u ∈ (0, W_b − 1), is the stationary distribution of the R-DTMC. From Figure 5, each state transition probability can be written out; introducing the shorthand β, d_(m,0) for the backoff stage m follows as well. Owing to the Markov-process chain regularities, for each u ∈ (1, W_b − 1), the stationary distribution for {π(t), Ω(t)} can be written down, and the recursive characteristic of the state transition probabilities can be combined. From Equations (9)-(11) and (13), Equation (12) can be rewritten, and from Equations (9)-(11) and (14), all the values d_(b,u) are expressed as a function of d_(0,0) and the channel observation-based practical conditional collision probability p_obs. d_(0,0) is finally determined by normalizing the R-DTMC states. With W* = W_min × ω^(p_obs) and a few mathematical steps, the normalization relation yields d_(0,0), as in Equation (17).

Since a transmission occurs only when the backoff counter of the WE reaches zero, regardless of the backoff stage, the transmission probability can be expressed as γ = Σ_(b=0)^m d_(b,0). Furthermore, after performing a few mathematical steps on Equation (18) using the value of d_(0,0) from Equation (17), we obtain γ as a function of p_obs. In general, however, γ depends on the practical collision probability p_obs, which is always unknown until the channel is observed for busy slots. A transmitted data frame encounters a collision if at least one of the n − 1 remaining WEs transmits; since each of the transmissions in the system sees this collision in the same state, a steady state can easily be yielded as [9] p_obs = 1 − (1 − γ)^(n−1). These two quantities (γ and p_obs) form a monotonic non-linear system which can be numerically solved for each other.
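Numerically, γ and p_obs are obtained by fixed-point iteration between the two relations. Since the closed form of γ(p_obs) for COSB is not reproduced above, the sketch below substitutes Bianchi's classic DCF expression purely to illustrate the solution structure; only p_from_gamma is taken directly from Equation (20).

```python
def p_from_gamma(gamma, n):
    """Equation (20): a tagged frame collides iff at least one of the
    remaining n - 1 WEs transmits in the same slot."""
    return 1.0 - (1.0 - gamma) ** (n - 1)

def gamma_from_p(p, w_min=32, m=5):
    """Stand-in for the COSB gamma(p_obs) relation (not reproduced above):
    Bianchi's classic DCF expression, used only to show the structure."""
    num = 2.0 * (1.0 - 2.0 * p)
    den = (1.0 - 2.0 * p) * (w_min + 1) + p * w_min * (1.0 - (2.0 * p) ** m)
    return num / den

def solve_fixed_point(n, iters=500, damp=0.2):
    """Damped fixed-point iteration on the coupled non-linear system."""
    gamma = 0.05
    for _ in range(iters):
        gamma = (1 - damp) * gamma + damp * gamma_from_p(p_from_gamma(gamma, n))
    return gamma, p_from_gamma(gamma, n)

print(solve_fixed_point(n=20))
```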
Normalized Throughput

Let θ be the normalized throughput of the network, defined as the fraction of the communication channel used for successful transmission of the data payload. To compute θ, let γ_tr be the probability that there is at least one transmission in the considered slot time. Since there are n WEs in the system contending for the medium and each transmits with probability γ, the transmission probability is γ_tr = 1 − (1 − γ)^n. The probability γ_s that a transmission is successful is given by the probability that exactly one WE transmits in the considered slot time, conditioned on at least one transmission: γ_s = nγ(1 − γ)^(n−1)/γ_tr.

Assume E[P] is the average data frame payload size (assuming that all data frames have the same fixed size); then the slot time spent transmitting average payload data successfully is γ_tr γ_s E[P], since γ_tr γ_s is the probability of a successful data frame transmission in a given slot time. The average length of a given slot time is the sum of three cases: no transmission in a slot time, that is, (1 − γ_tr)σ; a successfully transmitted data frame, that is, γ_tr γ_s T_s; and a collision, that is, γ_tr(1 − γ_s)T_c. Finally, θ can be expressed as the ratio θ = γ_tr γ_s E[P] / [(1 − γ_tr)σ + γ_tr γ_s T_s + γ_tr(1 − γ_s)T_c], where T_s and T_c are the average times the communication channel is busy due to a successful transmission and a collision, respectively. For analytical evaluation, the values of E[P], T_s, T_c, and the idle slot time σ must be expressed in the same time unit. Let P_hdr = PHY_hdr + MAC_hdr be the time to transmit a data frame header, δ be the channel propagation delay, and ACK be the time to receive an acknowledgement; T_s and T_c are then obtained from these quantities. The corresponding values of T_s and T_c depend upon the 802.11 standard. The PHY and MAC layer parameters used to compute T_s and T_c are shown in Table 1.
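These relations translate directly into a short computation. The timing constants below are placeholders rather than the Table 1 values, and E[P] is expressed as a transmission time so that all terms share the same unit, as the text requires.

```python
# Placeholder timing constants in seconds (Table 1 holds the real ones).
SIGMA, E_P, T_S, T_C = 9e-6, 1.5e-3, 1.7e-3, 1.6e-3

def normalized_throughput(gamma, n):
    """Equations (21)-(24): fraction of channel time carrying payload."""
    gamma_tr = 1.0 - (1.0 - gamma) ** n                         # >= 1 WE transmits
    gamma_s = n * gamma * (1.0 - gamma) ** (n - 1) / gamma_tr   # exactly one does
    slot_len = ((1.0 - gamma_tr) * SIGMA
                + gamma_tr * gamma_s * T_S
                + gamma_tr * (1.0 - gamma_s) * T_C)
    return gamma_tr * gamma_s * E_P / slot_len
```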
Average Delay

We derive the average delay E[D] of the COSB mechanism for a successfully transmitted data frame. The saturation average delay is defined as the average time between the moment a data frame is at the head of its MAC queue, ready for transmission, and its successful reception at the destination. According to [10], E[D] is obtained from E[Slot], the total length of a slot time as given in Equation (24), and from E[B], the average number of backoff slot times for a successful data frame transmission, which after some algebraic calculations reduces to a closed form.

Performance Evaluation

Analytical results of the proposed COSB formulated from the R-DTMC are validated against those obtained from the simulation of the proposed COSB in an event-driven simulator, network simulator-3 (NS-3) version 3.24 [11]. To evaluate the performance of COSB, we compare simulation results with BEB and two related contention window-scaling algorithms: the enhanced collision avoidance (ECA) mechanism [12] and the exponential increase-exponential decrease (EIED) backoff algorithm [13]. These comparison protocols (ECA and EIED) were selected for their characteristic of not resetting the value of W to its minimum value W_min. ECA uses a deterministic backoff value B = W_min/2 instead of resetting W to W_min after a successful transmission. In the EIED mechanism, the W value is exponentially increased after each unsuccessful transmission and halved after each successful transmission. The limitation of these protocols is that the performance of ECA is limited to a number of contenders below the deterministic cycle length W_min/2, while EIED increases/decreases the contention window value without knowing the channel collision probability. The analytical model is validated with a network of n WEs ranging from 5 to 50 (a typical office floor deployment of the IEEE 802.11ax standard [3]), where each WE is within the coverage area of the others (no hidden terminals). The WEs are set to be in a saturation state (always willing to transmit). The specific MAC and PHY layer parameters are listed in Table 1.

Normalized Throughput and Average Delay

Figure 6a describes the normalized throughput for various numbers of WEs in an indoor HEW system. A monotonic decrease can be observed in the performance of BEB and EIED. The reason for the performance degradation of BEB is the resetting of its contention window to the minimum value after a successful transmission, which causes more collisions as the number of WEs increases. In spite of increased throughput, the performance of EIED also degrades with the increase of WEs due to a blind decrease of the contention window. In the figure, ECA performs better until n < 15, where the number of contenders is less than the deterministic cycle length W_min/2, due to the collision-free deterministic environment. In the beginning, COSB also has a curved performance (increase and decrease) in normalized throughput. The curved performance of COSB shows that, after observing the channel, it adjusts the contention window adaptively, resulting in increased normalized throughput. The performance degradation of COSB is slower than that of the compared protocols when the number of WEs increases. COSB provides the best throughput (Figure 6a) and average delay (Figure 6b) in a high-density network compared to BEB, ECA and EIED. ECA has the best performance at low density, but its throughput decreases drastically at high density. COSB has acceptable performance at low density. This performance enhancement of COSB comes from the adaptive channel observation-based scaling of W. Figure 6 shows that the analytical model is accurate, because the analytical results (COSB-ana) match the simulation results (COSB-sim) in both normalized throughput (Figure 6a) and average delay (Figure 6b).
Maximum Approximate Saturation Throughput

Bianchi [9] determined the maximum achievable throughput of a WLAN by formulating an approximate solution for the optimal transmission probability γ_opt. As described in Section 4.2, T_c is the average time during which the communication channel is busy due to a collision, and the value of T_c depends upon the IEEE 802.11 standard under consideration. Equation (31) has a fundamental theoretical importance in approximating the maximum saturation throughput of a DCF network, which mainly depends on the density of the network (that is, the size of n). Thus, γ_opt is the transmission probability that each WE should adopt in order to achieve the maximum throughput performance. Table 2 shows the maximum approximate throughput achieved theoretically, compared with that achieved by the BEB and COSB algorithms in DCF for different network sizes. The table shows that the maximum approximate saturation throughput is very smooth; even a very small difference in the estimate of γ_opt leads to similar throughput values. The interesting result is that, with increasing network density, the throughput achieved by the proposed COSB is closer to the maximum approximate saturation throughput than that of BEB. Moreover, the maximum approximate saturation throughput is practically independent of the number of contending WEs in the WLAN.

Average Channel Utilization Per Data Frame Transmission

In order to have a successful data frame transmission, a tagged WE spends an average of 1/(γ_tr γ_s) slot times on the communication channel. Of those average slot times spent, (1 − γ_tr) is the fraction in which the channel is observed as idle, and each idle slot time lasts σ. Figure 7a plots the average number of idle slot times per data frame transmission for two different W_min values with varying network density. The results show that for W_min = 32 and W_min = 64, the idle slot time per data frame transmission is very low for both COSB and BEB. This idle channel utilization is more significant when the initial contention window is larger, as shown for W_min = 64 in Figure 7a. More specifically, for the BEB mechanism, the idle slot time per data frame transmission decreases drastically in a dense network environment, even for a large initial contention window, that is, W_min = 64. COSB does not reset its initial contention window to W_min after a successful transmission, and it recursively remains near the adaptive contention window sizes determined by the channel observation-based p_obs.

Another important measure to discuss is the average number of transmissions that a tagged WE must perform for a successful transmission of a data frame, which is given by 1/(1 − p_obs). Figure 7b shows that the number of transmission attempts per data frame transmission increases considerably as the size of the initial contention window decreases, that is, W_min = 32, especially for BEB when the network size is n = 50. The figure shows that for W_min = 32, BEB WEs suffer an average of 2.1 retransmissions (collisions) and COSB WEs suffer an average of 1.5 retransmissions (collisions) when the network density is n = 50. The increased idle slot times (Figure 7a) and reduced number of retransmissions (Figure 7b) in dense networks are the reason for the efficient saturation throughput achieved by COSB.
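The retransmission figures quoted above can be inverted to recover the per-attempt collision probabilities implied by Figure 7b:

```python
def avg_attempts(p_obs):
    """Average transmissions per successfully delivered frame, 1/(1 - p_obs)."""
    return 1.0 / (1.0 - p_obs)

# Inverting the Figure 7b readings for n = 50, W_min = 32:
for label, attempts in (("BEB", 2.1), ("COSB", 1.5)):
    print(f"{label}: implied p_obs = {1.0 - 1.0 / attempts:.2f}")
# -> roughly 0.52 for BEB and 0.33 for COSB
```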
Conclusions

In future technologies, the IEEE 802.11ax HEW hopes to become part of 5G-U networks, as it promises four times higher network efficiency even in highly dense network deployments. However, the current WLAN itself faces a huge challenge of efficient channel access due to its distributed contention-based nature. Currently, CSMA/CA is based on a BEB mechanism, which blindly increases and decreases the contention window for collided and successful transmissions, respectively. In this paper, we highlight some of the use-case scenarios for HEW deployments in 5G-U networks. Furthermore, to handle the performance degradation challenge caused by the increasing density of WLANs in those use cases, a channel observation-based scaled backoff (COSB) mechanism based on practical channel collision probability is proposed. COSB overcomes the limitation of BEB to achieve high efficiency and robustness in highly dense networks. A practical channel collision probability observed by the contending Wi-Fi equipment (WE) is presented to adaptively scale up and scale down the size of W during the backoff mechanism for collided and successfully transmitted data frames, respectively. The proposed COSB enhances the performance of CSMA/CA in dense networks. Furthermore, a recursive discrete-time Markov chain model (R-DTMC) is presented to analyze the performance of COSB. The validated model helps to determine the maximum achievable throughput in highly dense networks. The analytical model also helps to investigate the interesting importance of the practical channel observation-based collision probability. The analytical and simulation results show that the proposed protocol, compared to the state-of-the-art BEB protocol, offers a performance boost in terms of throughput and delay when the number of contending WEs increases. Furthermore, this protocol is designed with very few modifications to the existing BEB mechanism, which makes the COSB protocol a good candidate for the upcoming high-efficiency WLANs (HEW) of the IEEE 802.11ax standard.

Future research considerations include the implementation of the COSB mechanism for long term evolution (LTE)-based 5G-U networks, known as licensed assisted access (LAA). The adaptiveness and practical collision probability measurement in COSB motivate us to integrate the algorithm in LAA.

Figure 1. Possible high-efficiency wireless local area network (HEW) use cases in fifth-generation radio access network on unlicensed bands (5G-U) deployments; (a) gigabit ethernet connection replacement; (b) improved network capacity using multi-user multiple-input and multiple-output (MU-MIMO); (c) HEW as a backhaul for local area network (LAN); (d) support for highly dense scenarios.

Figure 3. Channel observation mechanism of channel observation-based scaled backoff (COSB) during the backoff procedure.

Figure 7. (a) Average number of idle slot times per successful data frame transmission; (b) average number of transmissions per data frame.

Table 1. Medium access control (MAC) layer parameters used in simulation and analysis.

Table 2. Comparison of maximum approximate saturation throughput achieved from γ_opt with the throughputs of COSB and BEB.
7,561.4
2018-04-16T00:00:00.000
[ "Computer Science", "Engineering" ]
The role of detours in individual human navigation patterns of complex networks Despite its importance for public transportation, communication within organizations or the general understanding of organized knowledge, our understanding of how human individuals navigate complex networked systems is still limited owing to the lack of datasets recording a sufficient amount of navigation paths of individual humans. Here, we analyse 10587 paths recorded from 259 human subjects when navigating between nodes of a complex word-morph network. We find a clear presence of systematic detours organized around individual hierarchical scaffolds guiding navigation. Our dataset is the first enabling the visualization and analysis of scaffold hierarchies whose presence and role in supporting human navigation is assumed in existing navigational models. By using an information-theoretic argumentation, we argue that taking short detours following the hierarchical scaffolds is a clear sign of human subjects simplifying the interpretation of the complex networked system by an order of magnitude. We also discuss the role of these scaffolds in the phases of learning to navigate a network from scratch.

A hierarchy is a way of interpreting an interconnection network by defining a central node (or a set of nodes) and referring to all other nodes with positions relative to, i.e., "above" or "below", the central node. These hierarchies are then used as helper structures when forming the paths in the network. In this study, we will refer to these hierarchical helper structures simply as "scaffolds". As a result, real paths will be somewhat longer than the shortest alternatives, but the detours will be characteristic of the individual taking them, as no two individuals may abstract the same hierarchy of the network. Although there are existing models assuming latent hierarchical scaffolds aiding navigation [6,10,17-20], this is the first study processing sufficient individual human navigation data to visualize and analyse these individually created hierarchies. We discuss that navigational scaffold hierarchies may boost the learning process to navigate the word-morph network and reduce the memory requirement of navigation by an order of magnitude. Moreover, identifying the individual scaffold hierarchies as the enablers of memory-efficient navigation in the word-morph network is of particular importance, since this may promote the uncovering of navigational schemes in other complex networked systems, considering not only humans. Similar detours have been identified in measurements capturing collective behaviour in networks from diverse areas of life. Gao et al. showed that the paths of packets going through the internet are also detoured to a non-negligible extent [21], and they showed that the hierarchical policies of internet packet routing may be responsible for a major proportion of the inflation. Detours have been identified in road networks by Zhu et al. [22] and in cattle pen systems by Grandin [23], while similar phenomena were also reported in airports [10,24] and brain networks [10,25].

Results

For our study, we use data from an experiment with a word-morph game application for smartphones [26] (see Methods for details). The application collected 19828 paths from 259 human subjects navigating the word-morph network, and the corresponding dataset was published in Scientific Data [11].
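For readers who want to experiment, the word-morph network itself is straightforward to rebuild from a dictionary of three-letter words; the full network in the paper has 1008 nodes and 8320 edges, while the toy word list below is only for illustration.

```python
import networkx as nx

def word_morph_graph(words):
    """Connect two equal-length words iff they differ in exactly one letter."""
    g = nx.Graph()
    g.add_nodes_from(words)
    words = list(words)
    for i, a in enumerate(words):
        for b in words[i + 1:]:
            if sum(x != y for x, y in zip(a, b)) == 1:
                g.add_edge(a, b)
    return g

# Toy word list; the paper's network has 1008 nodes and 8320 edges.
g = word_morph_graph(["yob", "job", "jab", "way", "say", "sat"])
print(g.number_of_nodes(), g.number_of_edges())  # 6 4
```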
After cleaning the data of paths not referring to steady-state navigation, by removing tasks that were either unfinished, contained loops or took an extraordinarily long time (>300 seconds) to complete, our working dataset was reduced to 10587 paths (for more details about data filtering, see Methods). The word-morph network is a complex network that is impossible for a human subject to keep fully in mind with its 1008 nodes and 8320 edges. The values of the average degree (i.e., the average number of edges emanating from the nodes), the diameter (the longest shortest path in the network) and the clustering coefficient 13 of the network are 16.39, 9 and 0.44, respectively. To attain a high-level impression of the performance of human navigation, we have plotted the average time needed to solve the n-th task in a row in Fig. 1b.
Figure 1. An example and high-level statistics of our navigation experiment. Panel (a) shows a sample section of the network of three-letter English words, in which two words are connected if they differ only in a single letter. When human subjects solve a navigation task, they come up with a path from a randomly given starting word to a destination word by changing only a single letter at each step such that they always obtain a valid intermediate English word. The red and green paths show a shortest and a slightly detoured human solution from "yob" to "way". Panel (b) presents the average time it takes for human subjects to solve the n-th task in a row, while panel (c) shows the stretch of the human paths, i.e., the ratio of the length of the paths found by human subjects to the length of the shortest possible path in the word-morph network. While the average time to solve a task clearly decreases with the number of tasks solved, the stretch of the solutions stabilizes between 1.2 and 1.1. This suggests that human subjects develop a specific strategy in the first few rounds, but after a few tens of solved tasks, their strategy is not improved any further in terms of length. Therefore, they have a simplified interpretation of the network, and they find their paths through this, only slightly faster as time elapses.
We can see that after a few initial rounds, human subjects find a solution in approximately 30 seconds on average, and from there on, they slowly improve to approximately 20 seconds after solving 100 tasks. Notably, it is an intrinsically astonishing finding that after a few rounds, people can find paths in this complex maze very efficiently. Strikingly, the improvement in time does not imply that the paths found are also shorter. In Fig. 1c, the stretch of human solutions is shown compared to the shortest paths. The stretch of a path P is computed as the ratio of the length of P to the length of the shortest path between identical starting and destination words. In the example of Fig. 1a, the stretch of the human path (green) is 5/4 = 1.25 compared to the shortest possible path (red). Figure 1c shows that although human subjects improve in terms of the time needed to solve a task, the stretch of the paths they find stabilizes slightly below 1.2. Thus, the length of the human paths seems not to converge to the length of the shortest path (i.e., to stretch 1), and they always include some detours. A plausible explanation for this is that human subjects develop some kind of sub-optimal strategy through the course of the game and use this strategy to solve upcoming tasks.
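To make the stretch computation concrete, here is a minimal Python sketch; the toy graph and word paths below are hypothetical and are not taken from the actual word-morph network:

```python
import networkx as nx

def stretch(G, human_path):
    """Stretch of a path: its hop count divided by the shortest-path
    length between the same start and end words."""
    hops = len(human_path) - 1
    shortest = nx.shortest_path_length(G, human_path[0], human_path[-1])
    return hops / shortest

# Hypothetical toy graph: a 4-hop route and a 5-hop human detour
G = nx.Graph()
nx.add_path(G, ["yob", "rob", "rub", "ruy", "way"])       # 4-hop route
human_path = ["yob", "job", "jab", "jay", "say", "way"]   # 5-hop detour
nx.add_path(G, human_path)
print(stretch(G, human_path))  # 5/4 = 1.25, as in the Fig. 1a example
```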
The improvement in time only means that the application of the same strategy becomes increasingly more effective. Nevertheless, how can we characterize the strategy in use? Panels a and b in Fig. 2 illustrate how differently an algorithm implementing shortest paths and a single human subject use the word-morph network to solve the navigational tasks. The plots show only edges traversed more than two times in the course of solving 1000 tasks. In the case of the shortest path algorithm, the usage of edges is homogeneous. The algorithm has no clear concept or deeper interpretation of the word-morph network and thus picks the paths mechanically without any sign of favouring specific regions of the network. The selected human subject behaves quite differently. The subject seems to have a clear concept of the network. The subject structures the network in a subjective manner by identifying various regions and places a larger emphasis on nodes and edges connecting these regions. A clear sign of this structuring is that from the human solution, a hierarchical scaffold structure is formed (see Fig. 2b for an example). To capture this behaviour, we focused on subjects highly engaged with the game, thus producing enough data to deeply examine the navigation strategy they use. We investigated subjects having more than 200 completed navigation tasks (9 subjects qualified for this). For these subjects, we processed all the solutions of the navigational tasks and assigned weights to the edges of the word-morph network reflecting how many times they were used in the solutions. We dropped the rarely used edges, for which the usage could be the result of randomly choosing the source and destination words. From the remaining graph, we took the largest component as the scaffold. In 90% of the cases, the scaffolds of the human subjects were at least two times larger in size compared to the random case, but in the majority of the cases, the human scaffolds were found to be an order of magnitude larger (see Panel a in Fig. 3). Panel b of Fig. 3 shows that the average degree of the scaffolds is approximately 2 in the case of all subjects. This means that the scaffolds are tree-like connected sub-networks of the original word-morph network. This result is fully in line with the assumptions of existing hierarchical human navigational models 6,17,18,20 .
Figure 2. Structures behind human paths and shortest paths. Panel (a) shows how many times an edge is crossed after solving 1000 random tasks by using the shortest path between the source and target word. The almost homogeneous distribution of edge crossings suggests that the entity using these paths does not have any form of understanding or interpretation of the word-morph network; conversely, it mechanically picks paths. Human paths are quite the contrary. Panel (b) shows the edge crossings of a single human subject when solving the same 1000 random tasks. The human solution appears to be highly structured, suggesting that humans possess a characteristic concept of the word-morph network. The structure is very close to a pure hierarchy. There is a clear scaffold that guides navigation, consisting of red, orange and green edges with a high number of crossings. This scaffold shows that the human subject tends to simplify the problem and form a simpler and systematic, although not necessarily optimal, strategy. From the sides of the network, where a navigation task starts, the human subject tends towards the scaffold, where a switch is performed to other sides of the network. How this particular scaffold is built up is quite specific. Panel (c) shows the words in the middle of the scaffold. "Aim", "art", "arm" and "are" depict words where consonants and vowels can be changed very effectively. In this case, the scaffold is used to switch between regimes of the network based on the location of vowels and consonants.
Compared to shortest paths, the edges of the scaffolds are heavily used by the subjects (see Fig. 3c) with a very specific usage pattern. The scaffold has a definite core of a few nodes, between which the usage of the edges can exceed 50 in the particular example of Fig. 2b. This core behaves as a switching device among different parts of the network and abstracts the individual's concept of the structure of the whole network. The scaffold is built up in a hierarchical, tree-like fashion, as edge utilization clearly drops when receding from the core. In the course of navigating between words, subjects use the scaffold as a guiding framework. Figure 2c shows the words residing in the scaffold. In this example, the network is clearly divided into regions based on the position of consonants and vowels in the words, and the core words are picked by the human subject in order to switch effectively among these regions. Our results show that although these individual scaffolds may have some similarities, every subject used a fairly unique set of nodes and edges forming their own hierarchical scaffolds (see Supplementary Fig. 1 for additional examples of personal scaffolds). This finding is readily supported by Fig. 3d, which shows the percentage of overlap between all possible pairs of scaffolds. The overlap for scaffolds i and j is computed according to the Jaccard index over the sets of edges, |E(S_i) ∩ E(S_j)| / |E(S_i) ∪ E(S_j)|, i.e., the ratio of the edges present in both scaffolds (E(S_i) denotes the set of edges contained in scaffold i) to the edges in the union of the scaffolds. Thus a network's overlap with itself is practically 100%. One can see that in the case of the scaffolds of the subjects, the average overlap is very small, approximately 2.6%, and the maximum overlap is only 7%. To quantify the statistical significance of the results regarding the scaffolds, we tested the null hypothesis that human paths can be explained by the shortest path algorithm. To test this hypothesis, we generated 500 solutions with the random shortest path algorithm over the same set of puzzles that the subjects solved. We found that the distribution of scaffold sizes and usage can be nicely estimated with a Weibull distribution (see Methods) in the case of all subjects. Table 1 shows the parameters of the Weibull distributions fitted to the scaffold sizes and usages plus the p-value indicating the tail probability that a scaffold of similar size and usage to the human solution could be derived from randomly chosen shortest paths. The p-values never exceed the alpha level of 0.05 and are extremely small in most of the cases, meaning that we have to reject the null hypothesis with high statistical significance. This substantiates the conclusion that the behaviour of the human subjects cannot be explained based on the shortest path algorithm.
Figure 3. Compared to the shortest path case, the human subjects' behaviour clearly deviates from the shortest path algorithm, as they form sizeable navigational scaffolds compared to shortest paths. The average degree of the scaffolds is close to approximately 2, as shown in panel (b); thus, the structure is very close to trees. Panel (c) confirms that the scaffold is heavily used by human subjects when completing the navigation tasks. We define usage simply as the sum of intersections between the subject's paths and the scaffold. If we denote the solutions of the subject as P_1, P_2, …, P_K, where K is the number of puzzles solved by the subject, then the usage of the scaffold S is computed as Σ_{i=1..K} |E(P_i) ∩ E(S)|, where E(P_i) denotes the set of edges contained in P_i, while E(S) is the set of edges in the scaffold. Panel (d) shows that the individual human scaffolds are indeed "individual", as the observed overlap between the subjects' scaffolds is only 2.6% on average.
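The scaffold extraction and the edge-set Jaccard overlap described above can be sketched in a few lines of Python; the usage threshold min_usage is an assumed stand-in for the paper's randomness-based cut-off for rarely used edges:

```python
from collections import Counter
import networkx as nx

def extract_scaffold(paths, min_usage=3):
    """Weight each edge by how often it occurs in a subject's solutions,
    drop rarely used edges, and keep the largest connected component."""
    usage = Counter()
    for path in paths:
        for u, v in zip(path, path[1:]):
            usage[tuple(sorted((u, v)))] += 1
    H = nx.Graph(e for e, c in usage.items() if c >= min_usage)
    if H.number_of_nodes() == 0:
        return H
    giant = max(nx.connected_components(H), key=len)
    return H.subgraph(giant).copy()

def scaffold_overlap(S_i, S_j):
    """Jaccard index over edge sets, as in the text:
    |E(S_i) intersect E(S_j)| / |E(S_i) union E(S_j)|."""
    E_i = {tuple(sorted(e)) for e in S_i.edges()}
    E_j = {tuple(sorted(e)) for e in S_j.edges()}
    return len(E_i & E_j) / len(E_i | E_j)
```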
The identification of the individual scaffold hierarchies as core switching devices in the human interpretation of the word-morph network poses an intriguing question: Why do we use them even after mastering our ability in the navigation task? Why do we tolerate sub-optimal paths through these scaffold hierarchies and not strive for shorter paths? Recall that detours in the subjects' paths persisted even after completing 100 navigation tasks. We argue that the reason behind this is related to our information encoding and processing capabilities. In short, we build scaffold hierarchies while being satisfied with sub-optimal paths because this way we do not have to process every bit of information about a large and complex system, and we can get away with an interpretation that is an order of magnitude simpler. To show this, we use the following minimalist information-theoretic model inspired by our results above. The word-morph network is represented by a graph G(N,E) defining its nodes N and edges E. For modelling human behaviour, we use a simple tree hierarchy as a scaffold for navigation. The construction of the hierarchy proceeds by picking the node with the highest closeness centrality 27 and building the breadth-first search (BFS) tree emanating from it. This BFS tree will be used as the scaffold. Inspired by the information exchange algorithm well-fitted for hierarchically structured organizations 17 , we define human navigation based on the scaffold hierarchy as follows: (i) if the destination node is below the current node or its neighbours in the hierarchy, then we step to its closest superior or to the destination itself, provided that the destination and the current nodes are connected; (ii) if the destination node is not below the current node in the hierarchy, then we step to the current node's direct superior in the hierarchy. As an analogy, this simple navigation mechanism captures that if somebody is my subordinate in the hierarchy or the subordinate of someone that I know, then I know who is the closest to them among my acquaintances. If I know nothing about the target, then I turn to my direct superior. Note that this extremely simple process models only a possible way of using a very artificial scaffold, the BFS tree. Our goal with analysing this simplified navigation process is to enable the information-theoretic analysis of the paths formed by the usage of scaffolds.
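The two navigation rules can be rendered in Python roughly as follows; this is one possible reading of rules (i) and (ii), with hypothetical helper names, and it deliberately uses exact closeness centrality and subtree tests for clarity rather than efficiency:

```python
import networkx as nx

def build_bfs_scaffold(G):
    """Root the scaffold at the node with the highest closeness
    centrality and take the BFS tree emanating from it."""
    cc = nx.closeness_centrality(G)
    root = max(cc, key=cc.get)
    return nx.bfs_tree(G, root)  # directed tree, edges point away from root

def navigate(G, tree, src, dst, max_steps=10000):
    """Hierarchy-guided navigation: step toward a neighbour whose subtree
    contains the destination (rule i), otherwise climb to the direct
    superior (rule ii)."""
    parent = {v: u for u, v in tree.edges()}
    path, cur = [src], src
    for _ in range(max_steps):
        if cur == dst:
            break
        if dst in G[cur]:            # destination is directly connected
            nxt = dst
        else:
            down = [n for n in G[cur] if dst in nx.descendants(tree, n)]
            # prefer the deepest such neighbour (smallest subtree) so that
            # every downward step makes strict progress toward dst
            nxt = (min(down, key=lambda n: len(nx.descendants(tree, n)))
                   if down else parent.get(cur))
        if nxt is None:              # reached the root without a way down
            break
        path.append(nxt)
        cur = nxt
    return path
```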
Paths emanating from this simple model will clearly not match the paths used by any of the subjects, for multiple reasons. First, although scaffolds built by humans are very similar to trees, they are not trees in many of the cases (see Fig. 3b). Second, human scaffolds vary subject by subject and have only an extremely small overlap across subjects (see Fig. 3d) and with the BFS hierarchy. To characterize the complexity of implementing the paths provided by the shortest path algorithm and human navigation, we approximate the required minimum information in every node to decide which next step to take towards all destinations in the word-morph network. Let us assign positive integers, i.e., 1, 2, 3…, as IDs to the nodes of the network. At each node x, we can represent the amount of information needed to make the right choice by a node table T_x. At node x, this node table has |N|−1 entries (where |N| is the number of nodes in the word-morph network) belonging to all the nodes other than x, and each entry contains the ID of a neighbour to take next towards a given destination. For example, a node table T_5 = (1, 2, 1, 2) tells us that at node 5, if we want to go toward node 1, 2, 3, 4, we should take nodes 1, 2, 1, 2 as next steps, respectively. This node table implicitly tells us that node 5 is connected to nodes 1 and 2 and that, in this example, the network has five nodes. Supplementary Note 1 provides a more detailed example of how to compute these node tables for a concrete network and set of paths. Now, the tables T_x, ∀x ∈ N, contain all the information required to implement the given paths between arbitrary pairs of nodes in the word-morph network. To approximate how many bits of information are needed to store these tables in memory, we compute the empirical Shannon entropy 28 H_0(T_x) of each table T_x. Then, (1/|N|) Σ_{x∈N} H_0(T_x) yields the global per-node entropy to implement the paths. In Supplementary Note 2, we supply asymptotically optimal results for the empirical entropy of some well-known graph families. In Fig. 4, the required information for implementing shortest paths and hierarchical paths in the word-morph network is shown. Shortest paths clearly have a stretch of one, but the price of this is a high entropy, as approximately 3.18 bits per node are required to store the shortest paths in the node tables (see the Shortest Path column on the left of Fig. 4). Navigation with the simple BFS scaffold has an order of magnitude less entropy (approximately 0.83 bits per node; see the Hierarchy 1 column of Fig. 4), but hierarchically guided paths are much longer; they have a stretch of 1.46. Recall that our results with human subjects indicate a stretch slightly below 1.2. The Hierarchy 2, 3 and 7 columns in Fig. 4 stand for a slightly modified version of the BFS hierarchy in which we do not have strictly 1 direct superior but can have links to at most 2, 3 and 7 superiors in the BFS tree, respectively. These hierarchies are no longer trees, but they are still as sparse as the human scaffolds. These modifications readily illustrate that there is a clear tradeoff between stretch and entropy. Having to remember more superiors reduces the stretch but surely increases the complexity. Nevertheless, with Hierarchy 7, a stretch of 1.14 is achievable at the cost of only 2.32 bits of memory per node. These results readily illustrate that even the most rudimentary scaffold guiding navigation can achieve an effective stretch-entropy tradeoff.
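The node-table entropy accounting can be made concrete with a short Python sketch; the T_5 example below is the one from the text:

```python
import math
from collections import Counter

def table_entropy(table):
    """Zeroth-order empirical Shannon entropy H0(T_x) of one node table,
    i.e., of the list of next-hop IDs toward every other destination."""
    n = len(table)
    return -sum(c / n * math.log2(c / n) for c in Counter(table).values())

def per_node_entropy(tables):
    """(1/|N|) * sum over x of H0(T_x): the average per-node memory,
    in bits, needed to implement the given path system."""
    return sum(table_entropy(t) for t in tables.values()) / len(tables)

# T_5 = (1, 2, 1, 2) from the text: two next hops, used equally often
print(table_entropy([1, 2, 1, 2]))  # 1.0 bit
```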
However, BFS scaffolds are constructed in a centralized fashion and rely upon global information about the network, which is not realistic. A more realistic decentralized scaffold with only one direct superior yields a sweet spot in this tradeoff space while being computable with local algorithms 29 . In this hierarchy, called In-or-Out, every node's superior is the neighbour lying in the most central location in the network in terms of closeness centrality. This simple, local strategy can provide a very low stretch for an order of magnitude less entropy compared to shortest paths. This is because the In-or-Out hierarchy is aware of the neighbours' centrality; thus, every node's direct superior is a neighbour that is closest on average to any other node in the network. Interestingly, the In-or-Out hierarchy stretch is close to what we have observed with human subjects. In addition to simplifying the process of navigation, scaffold hierarchies can boost learning the structure of a totally unknown network by observing its paths. To show this, we use a very simple incremental model where, in every step, we show a single path connecting randomly chosen nodes and compare the reconstructed network structure and the efficiency of navigation based solely on the given paths to the original network. Figure 5 illustrates the steps of this learning process for the cases in which we show paths according to shortest or hierarchical scaffolds from the word-morph network. In the first case, we show the shortest paths between the words "aye" and "pit" (green) and between "pit" and "emu" (olive), and based solely on this knowledge, one may implicitly deduce a path from "aye" to "emu" traversing 6 nodes. Alternatively, showing paths using a hierarchical scaffold yields somewhat longer paths (red). However, one can see that the newly gained path between "aye" and "emu" leads to a substantially shorter path requiring only 3 intermediate nodes. In Fig. 6, the integrity and the stretch and entropy footprint of the various learning scaffolds are shown when we continue simulating the learning process up to 2000 paths with a computer program (see Methods for details). In panel (a), the size of the giant component in the network reconstructed from the paths is shown as a function of learned paths. The shortest path scaffold provides only very sporadic knowledge about the network in the initial (0-120) learning steps, as the size of the giant component hardly grows with the number of learned paths. The most integrated knowledge is provided by the most simple scaffold of Hierarchy 1. In panel (b), we can clearly distinguish between two phases of the learning process. Until approximately 700 paths, rough exploration of the nodes and possible connections in the network occurs. According to the inset of the panel, by the end of this exploration phase, one can connect more than 90% of all possible node pairs in the case of all scaffolds. Using the shortest paths as learning scaffolds, we can find only very long paths in the exploration phase, as the average stretch can exceed even 3.
Figure 4. The decentralized In-or-Out hierarchy with one direct superior, based on the highest closeness centrality, is a sweet spot in this tradeoff space. This simulates the case when people know all subordinates in the network but remember only one superior closest to the centre of the network. It provides a realistic stretch, but the required entropy is an order of magnitude lower than that in the shortest path case.
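Choosing every node's single superior in the In-or-Out manner is essentially a one-liner per node; note that the sketch below computes closeness centrality exactly for simplicity, whereas the cited construction is meant to work with local algorithms:

```python
import networkx as nx

def in_or_out_superiors(G):
    """In-or-Out scaffold: every node's direct superior is its neighbour
    with the highest closeness centrality, i.e., the neighbour lying
    closest on average to all other nodes in the network."""
    cc = nx.closeness_centrality(G)
    return {v: max(G[v], key=cc.get) for v in G}
```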
Interestingly, if paths are picked according to a hierarchical scaffold, we can obtain paths with a lower stretch as the scaffold becomes increasingly simpler, i.e., as the number of direct superiors decreases. In the simplest one-superior case, the stretch is very stable at approximately 1.5. Therefore, in the exploration phase, one can learn reasonable paths much faster if paths are given according to a hierarchical scaffold. After the exploration phase, we do not explore new territories of the word-morph network; what we do is only improve our knowledge. In this improvement phase, the shortest path scaffold takes the lead over the hierarchical scaffolds, yielding the best stretch values. The price of being better in stretch is a higher entropy, as can be seen in panel (c): the entropy of the scaffolds is similar in the exploration phase; however, as the number of paths learned increases, the entropy of the simplest Hierarchy 1 scaffold starts to decrease substantially, while that of the shortest path scaffold continues to increase almost linearly.
Figure 5. Shortest and hierarchically guided paths in the word network. Learning only the shortest paths between the words "aye" and "pit" and between "pit" and "emu" makes us conclude that the word aye is 7 nodes away from emu. However, with a hierarchical scaffold, a four-node path between aye and emu can be found even though both of the paths between pit and aye and between pit and emu are longer through the scaffold than through the shortest possible path.
Figure 6 (caption fragment). After learning only approximately 700 paths, one can infer valid paths between 90% of all possible node pairs using either the shortest path or hierarchical scaffolds. In this exploration phase, learning based on shortest paths seems to be quite inefficient, as the stretch can even reach 3. In this phase, the simplest hierarchical scaffold yields the shortest established path on average. Only in the improvement phase, in which no significant new parts of the word-morph network are explored, is the relation reversed. The entropy of the paths is shown in panel (c). The exploration phase shows no difference among the scaffolding schemes; however, in the improvement phase, the entropy of the hierarchical scaffolds is much lower compared to the shortest paths.
Discussion Although this study concentrates on a networked system, the underlying problem of human navigation in the word-morph network seems even more interesting in light of the fact that the current explanations of physical navigation tend to apply models considering the graph-like abstraction of the surrounding physical environment. In fact, there is an ongoing debate about whether we build a detailed cognitive map or a much simpler cognitive graph of the possible physical choice points 30,31 inside our head. Furthermore, recent studies reported major correlations between the navigation and learning skills of humans 32,33 , while others went even further and investigated the possibility that navigation in cognitive spaces may lie at the core of any form of organized knowledge and thinking [34][35][36] . The word-morph network is a special mixed system over which navigation relies strongly on domain-general mechanisms, since both spatial dimensions, manifested in the Hamming distance between words, and cognitive dimensions, i.e., the function and meaning of the words, contribute to the formation of paths. Thus a promising speculation is that the identification of individual scaffolds guiding human navigation in the word-morph network may contribute to a better understanding of how humans structure, encode and navigate through cognitive spaces. The empirical confirmation of individual scaffold hierarchies may also help resolve known anomalies in modelling human navigation behaviour in networks. Human paths over networks are reported to exhibit non-negligible memory 24,37,38 , which leads to problems when applying first-order Markov chains to approximate paths in spreading dynamics and community detection 24 . Individual scaffold hierarchies explain the source of these anomalies, as the next step of hierarchically guided paths clearly depends on nodes visited previously by the given individual.
Building on the assumption of hierarchical scaffolds behind network paths, we may be able to refine higher-order Markov models, which may bring us closer to a better understanding of how real systems are organized and function. Methods Dataset. For our study, we have used the dataset collected by a smartphone application called "fit-fat-cat" running on the Android platform. The dataset 11 is published in Scientific Data, with the appropriate ethical consent. Here, we summarize the data collection process; for a detailed description of the experiment, consult 11 . The application is available from the Google Play store 26 . When a subject starts a navigational task, the source and destination words are generated randomly from all possible three-letter English words. The source and destination words are displayed in a box (see Fig. 7). Below this box, the list of words that the subject has visited so far in that particular task is shown. When starting a new task, the list contains only the source word. The subject can enter the consecutive words in a user-friendly manner by using the virtual keyboard of the phone. First, the subject selects the letter to change, then chooses the new letter with the keyboard. After changing a letter, the app automatically adds the new word to the list. In this way, the subjects can see which words they have already tackled when solving a particular navigation task. A task may end in three ways. First, if the subject reached the target word through such one-letter transformations, then the task is solved. In this case, the word becomes green-coloured to show the end of the task. Second, the subject can give up the task by pressing the "new game" button. In this case, the subject acquires the next task automatically. Finally, the subject can press the "magic wand" button. In this case, a possible (shortest path) solution of the task is shown before starting a new task. No matter how the task is ended, the list of words is anonymously submitted to our database stored in the cloud. Due to the scale of the experiment, we couldn't control the external conditions under which the subjects carried out the solutions, apart from standard software checking of the validity of the subjects' inputs. For more details, see 11 . Detecting an individual scaffold requires a relatively high number of completed navigation tasks. Completing many puzzles can be a very tedious and repetitive task. Doing this in a single sitting (e.g. in a paid, controlled experiment during which the subject has to concentrate from the beginning to the end) is arguably unfeasible.
Luckily, 9 of the subjects found the game interesting enough to solve more than 200 puzzles. Thus it is not the number of subjects that is uniquely large in this dataset, but the number of paths collected from a single subject. Path filtering. Instead of focusing on the dynamic process of how we learn to navigate, i.e., how we learn an approximate picture of the network by exploration, we concentrate on the way people routinely choose paths in a network after they have developed their individual path selection strategy. In this steady state, subjects do not explore the network or wander around; they simply solve the puzzle by routine. To analyse this steady-state behaviour, we have to drop all unfinished paths, paths taking too much time to complete, and loops from the dataset. Of the recorded 19828 paths, we dropped 8177 because they did not reach the target for some reason, 712 paths because the time to solve the puzzle was unusually large (>300 seconds), which raises the question of whether the subjects were concentrating on the puzzle, and only 352 paths (1.7% of the total) because they contained loops. Weibull fitting to the random shortest path algorithm. The scaffold sizes and usages of the random shortest path algorithm can be well estimated with a two-parameter Weibull distribution. As an illustration, we verify the goodness of the fit for the puzzle set of subject 4 in Fig. 8. The results for the other subjects are highly similar. Computer simulations. For the investigation of the incremental learning of a network via its paths, we have written a simulator in the Python programming language. In the beginning, the simulator reads the network N. After that, it iteratively picks random pairs from the network and computes the shortest and hierarchical paths between them according to the given BFS hierarchy. At each iterative step, the current knowledge about the network is the union of the nodes and edges contained in the previous iterations. Therefore, at step t, the knowledge about the network is a graph G_t(V, E); then, after adding a path P_t, it is extended to G_{t+1}, the union of the nodes and edges of G_t and P_t. The simulator computes the required entropy and stretch of the paths in G_t compared to the shortest paths in N every 50 steps. We note that we have run the simulations beyond 2000 paths, but the relative positions of the stretch and entropy plots of the algorithms remain the same in that regime. Data availability The data supporting the findings of this study are available from the "fit-fat-cat" public Open Science Framework data repository 39 and described in detail in 11 .
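The three filtering criteria translate directly into code; the record schema below (path, target, seconds) is a hypothetical stand-in for the published dataset's actual field names:

```python
def keep_steady_state(records, max_seconds=300):
    """Keep only steady-state navigation, mirroring the paper's criteria:
    drop unfinished tasks, tasks taking more than 300 seconds, and paths
    containing loops (i.e., a repeated word)."""
    kept = []
    for r in records:
        path = r["path"]
        finished = bool(path) and path[-1] == r["target"]
        quick = r["seconds"] <= max_seconds
        loop_free = len(set(path)) == len(path)
        if finished and quick and loop_free:
            kept.append(r)
    return kept
```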
7,723.8
2019-04-17T00:00:00.000
[ "Computer Science", "Biology" ]
A FAMILY OF EXPONENTIALLY FITTED MULTIDERIVATIVE METHODS FOR STIFF DIFFERENTIAL EQUATIONS In this paper, an A-stable exponentially fitted predictor-corrector scheme based on a multiderivative linear multistep method for solving stiff differential equations is developed. The method, which is a two-step third derivative method of order five, contains free parameters. The numerical stability of the method is analysed, and the method is found to be A-stable. Numerical examples are provided to show the efficiency of the method when compared with existing methods in the literature that have solved the same set of problems. INTRODUCTION In order to solve stiff initial value problems in ordinary differential equations efficiently, many new methods have been developed in recent years which satisfy certain stability requirements. The property of A-stability is desirable in formulas to be used in the solution of stiff systems of differential equations, especially those arising from chemical kinetics and the discretisation of partial differential equations. Dahlquist [7] proved that A-stable linear multistep formulas must be implicit, that their maximum order is two, and that, of those of second order, the one with the smallest truncation error coefficient is the trapezoidal rule. In nearly all linear systems of differential equations that have widely dispersed eigenvalues, high order A-stable formulas are particularly appropriate since they allow integration to proceed with a larger step size. Thus, the need to develop high order A-stable implicit multistep formulas which use linear combinations of derivatives higher than the first gave rise to the development of multiderivative multistep formulas. Lambert [11] and Enright [8] pointed out that multiderivative methods give high accuracy and possess good stability properties when used to solve first order initial value problems in ordinary differential equations. THE GENERAL MULTIDERIVATIVE MULTISTEP METHOD The general multiderivative multistep method is given by

∑_{j=0}^{k} α_j y_{n+j} = ∑_{i=1}^{ℓ} h^i ∑_{j=0}^{k} β_{ij} y^{(i)}_{n+j},   (2.1)

where y^{(i)}_{n+j} is the i-th derivative of y evaluated at x_{n+j}, α_j and β_{ij} are real constants with α_k ≠ 0, and y_{n+j} is the appropriate numerical solution evaluated at the point x_{n+j}. In order to remove the arbitrary constant in (2.1), we shall always assume that α_k = 1, and that ∑_j |α_j| ≠ 0 and ∑_j |β_{ij}| ≠ 0. DEVELOPMENT OF NEW EXPONENTIALLY FITTED MULTIDERIVATIVE METHODS A numerical integration formula is said to be exponentially fitted at a (complex) value q_0 if, when the method is applied to the scalar test problem y′ = λy with exact initial condition, the characteristic equation π(r, q) = 0 satisfies the relation π(e^{q_0}, q_0) = 0, where q = λh. However, the idea of using exponentially fitted formulas for the appropriate numerical integration of certain classes of stiff systems of first order ordinary differential equations of the form y′ = f(x, y), y(x_0) = y_0, which was originally proposed by Liniger and Willoughby [12], is to derive integration formulas containing free parameters (other than the step length of integration) and then to choose these parameters so that a given function y = e^{λx}, where λ is real, satisfies the integration formula exactly. Liniger and Willoughby [12] derived three 1-step integration formulas with orders ranging from 1 to 3. Their results revealed that for all choices of the fitting parameter, their formulas are A-stable. DERIVATION OF A FAMILY OF TWO-STEP EXPONENTIALLY FITTED METHODS The objective of this paper is to develop two-step, third derivative exponentially fitted multiderivative formulas (i.e. k = 2 and ℓ = 3 in (2.1)). For this purpose, equation (2.1) is specialised to involve y′, y″ and y‴, which are respectively the first, second and third derivatives of y.
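The fitting idea is easiest to see on the simplest member of the Liniger-Willoughby family, the one-parameter one-step formula; the Python sketch below is an illustration of exponential fitting on that formula, not the fifth-order method derived in this paper:

```python
import math

def fitted_mu(q):
    """Fitting parameter for the one-step formula
       y_{n+1} = y_n + h*((1 - mu)*f_n + mu*f_{n+1}),
    chosen so that y = exp(lambda*x) satisfies the formula exactly
    at q = lambda*h."""
    if abs(q) < 1e-8:
        return 0.5                         # q -> 0 limit: trapezoidal rule
    return 1.0 / q - 1.0 / math.expm1(q)   # expm1(q) = exp(q) - 1

def solve_test_problem(lam, y0, h, steps):
    """Apply the fitted formula to the stiff test problem y' = lam*y;
    the implicit step is solved in closed form for this linear problem."""
    mu, y = fitted_mu(lam * h), y0
    for _ in range(steps):
        y = y * (1 + (1 - mu) * lam * h) / (1 - mu * lam * h)
    return y

# With the fitted mu, each step reproduces exp(lam*h) exactly:
print(solve_test_problem(-50.0, 1.0, 0.1, 10), math.exp(-50.0))
```

For real λh < 0 the fitted parameter μ lies between 1/2 and 1, the A-stable range of this one-step formula, which illustrates how fitting the free parameter to the stiff component buys stability without sacrificing accuracy on that component.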
When deriving exponentially fitted multistep methods, the approach is to allow both (2.3) and (2.4) to possess free parameters other than the mesh size, which allows them to be fitted automatically to an exponential function. DERIVATION OF THE FIFTH ORDER FORMULAS The derivation of the predictor-corrector integration formulas of order 5 involves two stages, as was done for the higher orders. First, we derived the order four predictor from equation (2.3), obtaining a set of five simultaneous equations in 12 unknown parameters. Solving these equations gives the values of the parameters which, when substituted into (2.3), yield the predictor formula (2.7). Now, for exponential fitting purposes, we apply (2.7) to the scalar test function to obtain equation (2.8). Again, to obtain the corresponding order 5 corrector formula, we derive a set of six simultaneous equations from (2.4). We impose the same conditions as for the predictor and, in addition, retain some coefficients as free parameters; the values of the unknown parameters are then obtained from (2.11). When these values are substituted into (2.4), we obtain the fifth order corrector formula. STABILITY CONSIDERATION OF THE METHOD To examine the stability conditions required by this method, it is expected by the maximum modulus theorem that the stability function of the method given by (2.16) satisfies |R(q)| ≤ 1. In order to determine the interval of absolute stability of the method, we find the limits of both a and b as q → 0 and as q → −∞; similarly, from (2.16), we obtain the corresponding limiting values. Now, we further verify analytically that the ranges of values of a and b represent the region of absolute stability of the new method. We illustrate this by taking a large sample S, as shown in Table 3.1 below. For the purpose of a comparative analysis of the performance of the new scheme, we denote the new method by AL5; CH4 and CH5 denote the Cash [6] methods of order 4 and 5, respectively; J-K denotes Jackson and Kenue [10]; OK6 denotes Okunuga [13]; AB7, AB8 and NM9 represent the Abhulimen and Otunta [3] methods of order 7, 8 and 9, respectively; F5 denotes the Abhulimen [4] method of order 5; AF5 denotes Abhulimen and Okunuga [1]; and AG6 denotes Abhulimen and Omeike [2]. Example 1: Non-linear stiff problems (Enright and Pryce) [9]. The system (4.2) can be rewritten as a first-order system; thus we obtain a 2 × 2 system of stiff IVPs. The eigenvalues of the Jacobian matrix determine the stiffness, and the general solution of (4.2) follows; imposing the initial conditions gives the exact solution. The results for this problem using the newly derived method are given in Table 4.2 below. For comparison purposes, AB7, AB8 and NM9 represent Abhulimen and Otunta's two-step third derivative methods of order seven, eight and nine, respectively, and F5 denotes a three-step second derivative scheme. As shown in Table 3.1 above, the method proposed in this paper performs better than existing methods in terms of accuracy. CONCLUSION The aim of this paper was to develop a numerical method which provides solutions to initial value problems with stiff differential equations via exponentially fitted integrators. Numerical experiments have been carried out using the appropriate step size required by each problem; such problems, being stiff, require a small step size before the solution can be smooth.
In general, the results from the numerical experiments presented in this paper show that the new method performs effectively when compared with similar methods in the literature. Hence the aim and objectives of this paper have been achieved.
1,582
2017-04-06T00:00:00.000
[ "Mathematics" ]
Assessing the Helpfulness of Learning Materials with Inference-Based Learner-Like Agent Many English-as-a-second-language learners have trouble using near-synonym words (e.g., small vs. little; briefly vs. shortly) correctly, and often look for example sentences to learn how two nearly synonymous terms differ. Prior work uses hand-crafted scores to recommend sentences but has difficulty in adapting such scores to all near-synonyms, as near-synonyms differ in various ways. We notice that the helpfulness of the learning material is reflected in the learners' performance. Thus, we propose the inference-based learner-like agent to mimic learner behavior and identify good learning materials by examining the agent's performance. To enable the agent to behave like a learner, we leverage entailment modeling's capability of inferring answers from the provided materials. Experimental results show that the proposed agent is equipped with good learner-like behavior and achieves the best performance in both the fill-in-the-blank (FITB) and good example sentence selection tasks. We further conduct a classroom user study with college ESL learners. The results of the user study show that the proposed agent can find example sentences that help students learn more easily and efficiently. Compared to other models, the proposed agent improves the scores of more than 17% of students after learning. Introduction Many English-as-a-second-language (ESL) learners have trouble using near-synonyms correctly (Liu and Zhong, 2014; Liu, 2013). "Near-synonym" refers to a word whose meaning is similar but not identical to that of another word, for instance, establish and construct. An experience common to many ESL learners is looking for example sentences to learn how two nearly synonymous words differ (Liu, 2013; Liu and Jiang, 2009). To facilitate the learner's learning process, our focus is on finding example sentences to clarify English near-synonyms.
Figure 1: The Learner-Like Agent mimics learners' behavior of performing well when learning from good material and vice versa. We utilize such behavior to find helpful learning materials.
In previous work, researchers developed linguistic search engines, such as Linggle (Boisson et al., 2013) and Netspeak 1 , to allow users to query English words in terms of n-gram frequency. However, these tools can only help people investigate the difference: learners are required to make assumptions about the subtlety and verify them with the tools, and the tools cannot point out the difference proactively. Other work attempts to automatically retrieve example sentences for dictionary entries (Kilgarriff et al., 2008); however, finding clarifying examples for near-synonyms is not the goal of such work. In a rare exception, Huang et al. (2017) retrieve useful examples for near-synonyms by defining a clarification score for a given English sentence and using it to recommend sentences. However, the sentence selection process depends on hand-crafted scoring functions that are unlikely to work well for all near-synonym sets. For example, the difference between refuse and reject lies in their grammatical usage, where we would say "refuse to verb" but not "reject to verb"; such a rule, however, is not applicable to delay and postpone, as they differ in sentiment, with delay expressing a more negative feeling. Though Huang et al. (2017) propose two different models to handle these two cases respectively, there is no clear way to automatically detect which model we should use for an arbitrary near-synonym set.
In the search for a better solution, we noted that ESL learners learn better with useful learning materials, as evidenced by their exam scores, whereas bad materials cause confusion. Such behavior can be used to assess the usefulness of example sentences, as shown in Figure 1. Therefore, we propose a Learner-Like Agent which mimics human learning behavior to enable the ability to select good example sentences. This task concerns the ability to answer questions according to the example sentences provided for learning. As such, we transform this research problem into an entailment problem, where the model needs to decide whether the provided example sentence entails the question or not. Moreover, to encourage learner-like behavior, we propose perturbing instances for model training by swapping the target confusing word with its near-synonym. We conduct a lexical choice experiment to show that the proposed entailment modeling can distinguish the differences between near-synonyms. A behavior check experiment is used to illustrate that perturbed instances do encourage learner-like behavior, that is, inferring answers from the provided materials. In addition, we conduct a sentence selection experiment to show that such learner-like behavior can be used for identifying helpful materials. Last, we conduct a user study to analyze near-synonym learning effectiveness when deploying the proposed agent with students. Our contributions are three-fold. We (i) propose a learner-like agent which perturbs instances to effectively model learner behavior, (ii) use inference-based entailment modeling instead of context modeling to discern nuances between near-synonyms, and (iii) construct the first dataset of helpful example sentences for ESL learners. 2 Related Works This task is related to (i) learning material generation, (ii) near-synonym disambiguation, and (iii) natural language inference. (Dataset and code are available here: https://github.com/joyyyjen/Inference-Based-Learner-Like-Agent) Learning Material Generation. Collecting learning material is one of the hardest tasks for both teachers and students. Researchers have long been looking for methods to generate high-quality learning material automatically. Sumita et al. (2005) and Sakaguchi et al. (2013) proposed approaches to automatically generate fill-in-the-blank questions to evaluate students' language proficiency. Lin et al. (2007), Susanti et al. (2018) and Liu et al. (2018) worked on generating good distractors for multiple-choice questions. However, there are only a few works on automatic example sentence collection and generation. Kilgarriff et al. (2008) and Didakowski et al. (2012) proposed sets of criteria for good example sentences, and Tolmachev and Kurohashi (2017) used sentence similarity and quality as features to extract high-quality examples. These works only focused on the quality of a single example sentence, whereas our goal in this paper is to generate an example sentence set that clarifies near-synonyms. The only existing work is from Huang et al. (2017), who designed a fitness score and a relative closeness score to represent a sentence's ability to clarify near-synonyms. Our work enables the models to learn the concept of "usefulness" directly from data to reduce the possible issues of human-crafted scoring functions. Near-synonym Disambiguation. Unlike the language modeling task that aims at predicting the next word given the context, near-synonym disambiguation focuses on differentiating the subtleties of near-synonyms.
Edmonds (1997) first introduced a lexical co-occurrence network with second-order co-occurrence for near-synonym disambiguation. Edmonds also suggested a fill-in-the-blank (FITB) task, providing a benchmark for evaluating lexical choice performance on near-synonyms. Islam and Inkpen (2010) used the Google 5-gram dataset to distinguish near-synonyms using language modeling techniques. Wang and Hirst (2010) encoded words into vectors in a latent semantic space and applied a machine learning model to learn the differences. Huang et al. (2017) applied BiLSTM and GMM models to learn the subtle context distributions. Recently, BERT (Devlin et al., 2018) brought big successes in nearly all Natural Language Processing tasks. Though BERT is not designed to differentiate near-synonyms, its powerful learning capability can be used to understand the subtlety that lies in near-synonyms. In this paper, our models are all designed on top of the pre-trained BERT model. Natural Language Inference. Our proposed model directly learns the difference and sentence quality by imitating human reactions to learning material and the behavior of learning from example sentences. The idea of learning from examples is similar to the natural language inference (NLI) task and the recognizing question entailment (RQE) task. There are various NLI datasets varying in size, construction, genre, and label classes (Bowman et al., 2015; Williams et al., 2018; Khot et al., 2018; Lai et al., 2017). In the NLI task, each instance consists of two natural language texts, a premise and a hypothesis, and a label indicating whether the premise entails the hypothesis. RQE, on the other hand, identifies entailment between two questions in the context of question answering. Abacha and Demner-Fushman (2016) used the definition of question entailment: "a question A entails a question B if every answer to B is also a complete or partial answer to A." Though NLI and RQE research has achieved a lot of success, to the best of our knowledge, we are the first to attempt using these two tasks on language learning problems. Poliak et al. (2018)'s recast version of the definite pronoun resolution (DPR) task inspired us to build learner-like agents with entailment modeling. In the original DPR problem, sentences contain two entities and one pronoun, and the mission is to link the pronoun to its referent (Rahman and Ng, 2012). In the recast version, the premises are the original sentences, and the hypothesis is the same sentence with the pronoun replaced with its correct (entailed) or incorrect (not-entailed) reference. We believe our proposed entailment modeling can help the model to understand the relationship between the given example sentence and the question for the target near-synonym. Thus entailment modeling enables the learner-like agent to mimic human behavior through inference. Method In this paper, we use learner-like agent to refer to a model that answers questions given examples. The goal of the learner-like agent is to answer fill-in-the-blank questions on near-synonym selection. However, instead of answering the question from the agent's prior knowledge, the agent needs to answer the question using the information from the given examples. That is, if the given examples provide incorrect information, the agent should come up with the wrong answer. This process simulates the learner behavior illustrated in Figure 1.
Since the model is required to infer the answer, we further formulate the task as an entailment modeling problem to enable the model's capability of inference. In this section, we (i) define the proposed learner-like agent, (ii) describe how to formulate it as an entailment modeling problem, and (iii) introduce the perturbed instances used to further enhance the agent's learner behavior. Learner-Like Agent The overall structure of a learner-like agent is as follows: given six example sentences E (3 sentences for each word) and a fill-in-the-blank question Q as an input instance, the model is to answer the question based on the example hints. We adopt BERT (Devlin et al., 2018) and fine-tune the task-specific layer of the proposed learner-like agent using our training data, equipping the learner-like agent with the ability to discern differences between near-synonyms. The input of our model contains the following: • A question Q = (q_1, …, q_m), where m is the length of the sentence, and Q contains a word w_i from the near-synonym pair, where i ∈ {1, 2} denotes word 1 or word 2; • Six example sentences E = [E^{w_1}_1, E^{w_1}_2, E^{w_1}_3, E^{w_2}_1, E^{w_2}_2, E^{w_2}_3], where E^{w_i} denotes a sentence containing w_i; • A [CLS] token for the classification position, and several [SEP] tokens used to label the boundary of the question and the example sentences, following the BERT settings. The output is the correct word for the input question, namely, w_1 or w_2. We specifically define E[w_j]_i, where i, j ∈ {1, 2}, as the sentence in which w_j fills the position of w_i, E[ ]_i being the context of w_i. The example sentence of case (2) in Table 1 shows a case of E[w_1]_1, where the target word w_1 is little and the rest of the sentence is called the context E[ ]_1. When we change little to small to create case (9), it is described as E[w_2]_1, meaning an example sentence where w_2 fills the position of w_1 in sentence E^{w_1}. This notation also applies to the question input Q[w_j]_i.
Table 1 (caption): (9) and (14) are the perturbed instances. The inappropriate examples are used in Section 4 for the behavior check.
Inference-based Entailment Modeling We apply the NLI and RQE tasks in the learner-like agent question design. The goal of the Entailment Modeling Learner-like Agent (EMLA) is to answer entailment questions given example sentences. We transform the original fill-in-the-blank question into an entailment question where the EMLA answers whether the given example sentence E entails the question sentence Q. If the word usage in the question sentence matches the word usage in the example sentence, the EMLA answers entail, or ¬entail otherwise. The EMLA M_e is described as

M_e(E^i_k, Q_j) → ans,   (1)

where ans (either entail or ¬entail) is the prediction of the inference relationship between one of the six example sentences E^i_k, where k ∈ {1, 2, …, 6}, and Q_j. To fill all the context possibilities of Q[ ]_j for the same word in E^{w_i}, an example has the following four cases:

(E[w_1]_1, Q[w_1]_1) → entail,   (2)
(E[w_1]_1, Q[w_2]_2) → ¬entail,   (3)
(E[w_1]_1, Q[w_1]_2) → ¬entail,   (4)
(E[w_1]_1, Q[w_2]_1) → ¬entail.   (5)

From the input and output of the instances (equations 2 to 5), we see that the target word and its context in Q_j for all cases except equation 2 do not follow the example word usage. Examples of the instances are shown in Table 1. Equations 3 and 4 tell us that an example sentence of w_1 does not provide any information for the model to infer anything about w_2, so both of them result in ¬entail. The question of equation 5 is incorrect, as shown in Table 1 case (5), so it also leads to ¬entail. After training the EMLA to understand the relation between example and question, we can convert its prediction {entail, ¬entail} back into the fill-in-the-blank task by looking into the model predictions.
Given the probability of {entail, ¬entail}, we know which term in the near-synonym pair is more appropriate in the context of the question. If the question context and the example context match, then the word with the higher entail probability is the answer. If they do not match, the word with the higher ¬entail probability is the answer. Perturbed Instances To encourage learner-like behavior, i.e., good examples lead to the correct answer, and vice versa, we propose introducing automatically generated perturbed instances into the training process. A close look at the input and output of the instances (equations 2 to 5) shows that they consider only correct examples and their corresponding labels. We postulate that wrong word usage yields inappropriate examples; thus we perturb instances by swapping the current confusing word with its near-synonym:

(E[¬w_i]^{w_i}_k, Q_j) → ¬ans,

where ¬ans is {entail, ¬entail} − ans and E[¬w_i]^{w_i}_k is the example sentence in which the contexts of w_1 and w_2 are swapped. The corresponding perturbed instances from equations 2 to 5 are equations 6 to 9, respectively, in which w_2's context becomes E[ ]_1. Again, only equation 9, where both the context and the word usage match, is entail. The example instance is shown in Table 1 case (9). Experiments We conducted three experiments: lexical choice, behavior check, and sentence selection. The lexical choice task assesses whether the model differentiates confusing words, the behavior check measures whether the model responds to the quality of learning material as learners do, and sentence selection evaluates the model's ability to explore useful example sentences. Lexical Choice Lexical choice evaluates the model's ability to differentiate confusing words. We adopted the fill-in-the-blank (FITB) task, where the model is asked to choose a word from a given near-synonym word pair to fill in the blank. Baseline Context modeling is a common practice for near-synonym disambiguation in which the model learns the context of the target word via the FITB task. For this we use a Context Modeling Learner-like Agent (CMLA) as the baseline, based on BERT (Devlin et al., 2018) as a two-class classifier to predict which of w_1 or w_2 is more appropriate given a near-synonym word pair. The question for CMLA is a sentence whose target word, i.e., one of the confusing words, is masked; the model is to predict the masked target word. The CMLA M_c is then described as

M_c(E, Q[MASK]_i) → ans,

where Q[MASK]_i fills the position of w_i with MASK, ans ∈ {w_1, w_2} is the prediction of [MASK] in the question, and E are the six example sentences. Q[MASK]_i is a question with the context of either w_1 or w_2. This raises the problem of the model deriving the answer only from Q_i: equations 12 and 13 risk the model selecting w_i given Q_i. To encourage learner-like behavior, we incorporate perturbed instances into the training process corresponding to equations 12 and 13, in which the example sentences E are replaced by their perturbed counterparts E[¬w_i]^{w_i} (equations 14 and 15). For context modeling, the perturbed instance has the additional benefit that it forces the model to make inferences based on the given example sentences, as illustrated in Table 1 case (14).
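A minimal sketch of the perturbation and of converting entailment scores back to a FITB answer follows; the function names, label strings, and whole-word swap are simplifying assumptions, not the paper's exact implementation:

```python
def perturb(sentence, label, w1, w2):
    """Perturbed instance: swap the confusing word for its near-synonym
    and flip the entailment label, as in equations 6-9."""
    swapped = [w2 if t == w1 else w1 if t == w2 else t
               for t in sentence.split()]
    flipped = "not_entail" if label == "entail" else "entail"
    return " ".join(swapped), flipped

def fitb_answer(p_entail_w1, p_entail_w2, contexts_match=True):
    """Convert EMLA entailment probabilities into a FITB choice: with
    matching contexts the higher entail probability wins; otherwise the
    higher not-entail probability (1 - p_entail) decides."""
    if contexts_match:
        return "w1" if p_entail_w1 > p_entail_w2 else "w2"
    return "w1" if p_entail_w1 < p_entail_w2 else "w2"

print(perturb("she has a little dog", "entail", "little", "small"))
# -> ('she has a small dog', 'not_entail')
```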
Dataset and Settings We collected a set of near-synonym word pairs from online resources, including BBC 3 , the Oxford Dictionary 4 , and a Wikipedia page about commonly misused English words 5 . An expert in ESL education manually selected 30 near-synonym word pairs as our experimental material. We collected our data for both training and testing from Wikipedia on January 20, 2020. Words in a confusing word pair are usually of a specific part of speech. This guaranteed that the part of speech of the confusing word in the sentence pool matched that in the target near-synonym word pair. To construct a balanced dataset, we randomly selected 5,000 sentences for each word; 4,000 sentences for each word in a near-synonym word pair were used to train the learner-like model and 1,000 sentences were used for testing. For comparison, we trained four learner-like agents: EMLA, CMLA, EMLA without perturbed instances, and CMLA without perturbed instances. For the best learning effect, we empirically set the ratio of normal to perturbed instances to 2 : 1. The agents were trained using the Adam optimizer with a 30% warm-up ratio and a 5e-5 learning rate. The maximum total input sequence length after tokenization was 256; other settings followed the BERT configuration. Results and Discussion We compared the EMLA and CMLA, and Figure 2 shows the model performance on the 30 word pairs. The average accuracy of EMLA and CMLA is 0.90 and 0.86, while that excluding perturbed instances is 0.80 and 0.86, respectively. On average, EMLA performs the best; when perturbed instances are not included in the training, its performance for lexical choice drops. We expected training with perturbed instances to worsen model performance in exchange for learner-like behavior. However, the results show that the perturbed instances enhance the inference ability of EMLA. Also, the CMLA models seem to be unaffected by perturbed instances (yellow vs. green lines); this could be because CMLA tends to memorize the input context instead of making an actual inference, which in NLI is recognized as bias (Chien and Kalita, 2020). Behavior Check The behavior check evaluates whether the agent learns as learners do; that is, a learner-like agent should perform well on FITB questions when the given learning materials are helpful, and should perform poorly when the materials are not helpful. In this experiment, all models complete two FITB quizzes. For the first quiz, authentic sentences are provided as appropriate learning materials; for the second quiz, inappropriate learning materials are provided. These materials are considered inappropriate because they are automatically generated from the authentic sentences by replacing their target words with near-synonyms, resulting in confusion and wrong word usage, as illustrated in Table 1 (see the last two "Inappropriate example" rows). In other words, given inappropriate example sentences, if the model is truly inferring answers from the examples, the model should select the other choice for the same quiz question. Results and Discussion We recorded the accuracy of every question and combined the 30 near-synonym word pairs from the same model into one graph. As shown in Figure 3, even without perturbed instances, the learning effect of EMLA corresponds to the learning material quality. In contrast, CMLA without perturbed instances, as in the lexical choice task, is no worse when given inappropriate examples. To determine whether the results of the two fill-in-the-blank quizzes are significantly different when given appropriate and inappropriate examples, we conducted a t-test. Table 2 shows that learner-like behavior is enabled in CMLA with perturbed instances, whereas EMLA learns like learners even without perturbed instances.
This result conforms to that shown in Figure 3: the quiz results for both EMLA models can be clearly distinguished, and adding the perturbed instances to EMLA slightly magnifies their difference. However, CMLA still relies on perturbed instances to learn the difference. Looking more closely, we present Table 3, in which ∆ is the difference in accuracy between the two quizzes. The higher ∆ is, the better the model differentiates confusing words. We measure the correlation between the lexical choice accuracy and ∆ with the Pearson correlation coefficient and obtain a value of 0.87, which demonstrates a strong positive correlation.

Sentence Selection. In the sentence selection experiment, we evaluate the ability of the learner-like agent to select useful example sentences. Our assumption is straightforward: we give the agent a set of example sentences and evaluate its performance on a number of quizzes. If it does well on many quizzes, the example sentences are deemed helpful for learning confusing words.

Baseline. We compared the agents with an implementation of Huang et al. (2017)'s Gaussian mixture model (GMM), which learns the distribution and semantics of the context. We set the number of Gaussian mixtures to 10 and trained the GMM on the dataset proposed here. In the testing phase, we retrieved the top three recommended sentences for each word in the confusing word pair and compared them to the expert's choices.

Evaluation Dataset. To evaluate sentence selection, we employed an ESL teacher as an expert to carefully select the three best example sentences out of ten randomly selected, grammatically and pragmatically correct examples for each word in all confusing word pairs. Specifically, the evaluation dataset had a total of 600 example sentences. For each near-synonym pair, three sentences for each word were labeled as helpful example sentences. To select sentences that clearly clarify the semantic difference between near-synonyms, the ESL expert considered suitability, informativeness, diversity, sentence complexity, and lexical complexity during selection. For suitability, the expert considered whether the two words in a confusing word pair were interchangeable in the current sentence. Diversity was considered when constructing the selected pool. Suitability and diversity follow the conclusions of Huang et al. (2017); the other criteria come from Kilgarriff's notion of a good example sentence (Kilgarriff et al., 2008).

Selection Method. For the proposed good example sentence set, we selected the example sentence combination that helps EMLA or CMLA achieve the highest accuracy on the quiz, that is, the example sentence set that leads to the highest learning performance. Each of a total of 14,400 ($\binom{10}{3} \times \binom{10}{3}$) example sentence sets, each containing six example sentences, was provided to the models to evaluate its helpfulness (see the search sketch below). Each example sentence set was used to answer a quiz composed of k questions. Here, k determines the representativeness and consistency of the testing result from each quiz. We used five independent quizzes to find a reliable k by calculating the correlation of their testing results. Finally, we empirically set k to 100, where the lowest correlation among the 30 word pairs was 0.24 and the median was 0.67. That is, each quiz contained 100 questions. When testing example sentence sets, multiple sets could achieve the same highest accuracy on the quiz. We considered them equally good, so the sentences in all of these sets were treated as selected.
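A brute-force search over the 14,400 candidate sets can be sketched as follows; the `quiz_accuracy` hook, which would run the trained agent over a 100-question quiz, is a hypothetical stand-in.

```python
from itertools import combinations

def best_sentence_sets(cand_w1, cand_w2, quiz_accuracy):
    """Exhaustively score all C(10,3) x C(10,3) = 14,400 six-sentence sets.

    cand_w1 / cand_w2: ten candidate example sentences per word.
    quiz_accuracy: callable scoring a six-sentence set on the quiz.
    """
    best_score, best_sets = -1.0, []
    for set1 in combinations(cand_w1, 3):
        for set2 in combinations(cand_w2, 3):
            score = quiz_accuracy(list(set1) + list(set2))
            if score > best_score:
                best_score, best_sets = score, [(set1, set2)]
            elif score == best_score:
                best_sets.append((set1, set2))  # ties are all kept as selected
    return best_score, best_sets
```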
Thus, our method may suggest more example sentences than there are gold labels. Table 4 shows the results of sentence selection. EMLA significantly outperforms CMLA and Huang's GMM in sentence selection. The improvement comes from the increased recall, indicating that the proposed learner-like agent manages to find helpful example sentences for ESL learners.

Learner Study. We conducted a user study to see the effect of learning from example sentences selected by EMLA, CMLA, and a random baseline. In this learner study, a total of 29 Chinese-speaking college freshmen majoring in English were recruited. All the participants were aged between 18 and 19. A proficiency test (Chen and Lin, 2011) was given before the study to identify their English level for further analysis.

Experimental Design and Material. We followed Huang et al. (2017)'s learner study design with some modifications. The whole test consisted of a pre-test and a post-test section lasting a total of 80 minutes. Fill-in-the-blank multiple-choice questions were used in both tests to examine the students' understanding of near-synonyms. A total of 30 word pairs were used to create 30 question sets, where each set contained three questions. The question interface, shown in Figure 4(B), was presented to the students. The students were asked to finish 15 randomly assigned question sets in the pre-test together with a background questionnaire. During the post-test section, example sentences generated by EMLA, CMLA, or the random baseline were presented in the example panel, as shown in Figure 4(A). A maximum of three example sentences for each word could be obtained by clicking the readme button, which also let us track how many example sentences were used for learning. Note that the students were asked to answer the same question sets in the post-test so that we could measure the improvement they made between the pre-test and the post-test. For each question set, the model used for sentence selection was also randomly assigned, in order to prevent learners from tiring of useless example sentences. Different from the sentence selection in Section 4.3, where all the combinations with the highest quiz score are selected, here we picked the three most common example sentences from those combinations to fulfill the experimental design (see the tally sketch below). We assume the three most common sentences for each word are the best candidates among all the combinations.

Results and Discussion. When learning from example sentences selected by EMLA, 16 students improved. Only 12 and 11 students improved when learning from CMLA and the random baseline, respectively, suggesting that EMLA helped more. Figure 5 shows the students' improvement scores versus their proficiency scores.

Figure 5: Improvement of the 29 learners' scores with respect to entailment modeling, context modeling, and the random baseline. A total of 16 learners improved when learning from the material generated by entailment modeling.

Table 5: Analysis of the two groups. Above and Below stand for the above-average group and the below-average group, respectively. EMLA helps the above-average group the most. We also find that the above-average group reads significantly fewer sentences than the below-average group. However, the below-average group rates the example sentences easier (scores range from 1 to 4, with 1 being "too difficult").

To further understand the students' behaviors, we separated the students into two groups using their English proficiency test scores.
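Assuming the `best_sets` output of the search sketched in the previous subsection, the most-common-three selection could look like this minimal tally:

```python
from collections import Counter

def most_common_three(best_sets):
    """Tally sentences appearing in all top-scoring six-sentence sets and
    keep the three most frequent ones per word (set1 for w1, set2 for w2)."""
    tally_w1, tally_w2 = Counter(), Counter()
    for set1, set2 in best_sets:
        tally_w1.update(set1)
        tally_w2.update(set2)
    pick = lambda tally: [s for s, _ in tally.most_common(3)]
    return pick(tally_w1), pick(tally_w2)
```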
Students whose test scores were lower than the average score were grouped into the below-average group and were considered to have lower English proficiency, and vice versa. The above-average group and the below-average group had 12 and 17 students, respectively. The average improvement scores of the two groups are shown in Table 5. We can see that the above-average students benefit more from example sentences, while below-average students benefit less or are even confused by the example sentences. Again, EMLA helps above-average students the most. The random baseline gives mixed results, and even the above-average students were affected. This echoes results from Huang et al. (2017), where students could still learn from random example sentences, but more effort was needed to fully understand the near-synonym and the outcome was unstable. In Figure 5, we can see that there are two outliers in the random baseline: the one who improved a lot is from the below-average group, and the one who worsened a lot is from the above-average group. This evidence shows the uncertainty of the random baseline. We also investigated the learners' behavior during the post-test and their questionnaire responses on example difficulty. The results are also shown in Table 5. The above-average students read significantly fewer examples, while they also rated the examples as more difficult. On the other hand, most of the below-average students read all six examples and rated them relatively easier. Though many above-average students improved in the post-test, we found that two of them read fewer than three examples and thus performed worse in the post-test. Such cases suggest that reading a fair number of example sentences is required to fully understand a near-synonym.

Conclusion. We introduce the learner-like agent, in particular EMLA, which differentiates the helpfulness of learning materials using inference. Entailment modeling, unlike common context-based near-synonym disambiguation, makes inferences to learn the relationship between the example sentences and the question, similar to human behavior. Context modeling in the learner-like agent relies upon additional perturbed examples to mimic human behavior, whereas EMLA already has this ability. The agent can be used to evaluate the helpfulness of learning materials, or, more interestingly, to select the best materials from a large candidate pool. We select good example sentences in practice, which confirms the usefulness of modeling learner behavior. Using the EMLA learner-like agent, we find more helpful learning material for learners, as demonstrated by the learner study. These results demonstrate the usefulness of modeling learner behavior using an inference approach. In the future, we would like to explore whether the learner-like agent can be extended to materials and data beyond example sentences for near-synonyms.
6,801.2
2020-10-05T00:00:00.000
[ "Computer Science", "Education", "Linguistics" ]
Research and Application of the Beijing Road Traffic Prediction System. As an important part of urban Advanced Traffic Management Systems (ATMS) and Advanced Traveler Information Systems (ATIS), short-term road traffic prediction systems have received special attention in recent decades. The success of ATMS and ATIS technology deployment is heavily dependent on the availability of timely and accurate estimation or prediction of prevailing and emerging traffic conditions. We studied a real-time road traffic prediction system developed for Beijing based on various traffic detection systems. The logical architecture of the system is presented, including the raw data level, the data processing and calculation level, and the application level. Four key function servers are introduced, namely, the database server, calculation server, Geographic Information System (GIS) server, and web application server. The functions, function modules, and data flow of the proposed traffic prediction system are analyzed, and the prediction models used in the system are subsequently described. Finally, the prediction performance of the system in practice is analyzed. The application of the system in Beijing indicated that the proposed and developed system is feasible, robust, and reliable in practice.

Introduction. Along with ever-increasing motorization in China, urban road traffic systems are facing serious congestion issues, especially in the larger cities. The development of Intelligent Transportation Systems (ITS), in particular the Advanced Traffic Management System (ATMS) and the Advanced Traveler Information System (ATIS), plays an increasingly important role in urban traffic management. They provide various levels of traffic information and trip advisories to system users, including many ITS information service providers, enabling travelers to make appropriate and informed travel decisions. The success of ATMS and ATIS technology deployment is heavily dependent on timely and accurate estimates of the prevailing and emerging traffic conditions. To implement ATMS and ATIS to meet various traffic control, management, and operation objectives, it is necessary to develop a road traffic prediction system that utilizes advanced traffic prediction models to analyze data, especially real-time traffic data from different sources, to estimate and predict traffic conditions. In the past few years, real-time traffic prediction systems have been studied and developed in certain cities and regions [1,2], based on simulation or real-time traffic detection data. The Traffic Estimation and Prediction System (TrEPS), developed in a dynamic traffic assignment (DTA) research project initiated by the US Federal Highway Administration (FHWA), is a typical traffic prediction system based on simulation. The system is expected to be capable of estimating and predicting traffic information for real-time traffic management and control purposes to meet the information needs in the ITS context [3,4]. Together with IBM, the Singapore Land Transport Authority (LTA) ran a pilot project from December 2006 to April 2007, with a traffic prediction tool based on historical traffic data and real-time feeds of traffic flow conditions from several sources, to predict the levels of congestion up to an hour in advance. The pilot showed overall prediction results with above 85% accuracy. Furthermore, when more data was available at peak hours, average accuracy reached 90% [5].
The CAPITALS project was initiated in five European cities (Brussels, Berlin, Paris, Madrid, and Rome), using and improving existing data resources to establish a platform for information and traffic management services for administrations and travelers. A traffic prediction tool was tested, and the harmonisation of traffic information in Paris was completed. The five cities extended their information platforms towards integrated mobility service platforms, in which prediction tools were developed in Paris, Madrid, and Berlin. In Madrid, estimation of travel times on the M30 motorway ring road was based on the collection of real-time traffic information from the network through detectors and TV cameras, together with a short-term prediction for congestion analysis. This information was processed in the M30 Traffic Control Centre and communicated to travelers via Variable Message Sign (VMS) panels [6]. As a key element of the Government's Transport 2010 Ten-Year Plan for developing and modernizing the transport system, England's National Traffic Control Centre has gathered real-time information from across the motorway network, improving driving conditions for road users by keeping them better informed and making journey times more reliable. From its website, users obtain prediction information through the traffic forecaster [7]. The BAYERN ONLINE project launched by the Bavarian State Government in Germany developed the BayernInfo website [8], one of whose main functions is providing short-term, mid-term, and long-term traffic predictions for travelers using a traffic model called "ASDA-FOTO" [9]. Short-term prediction depends on real-time traffic, mid-term prediction depends on traffic events, and long-term prediction depends on traffic demand forecasts. For roads without detectors, so-called assignment-based methods are applied. Traffic prediction systems are also under research or construction for some Interstate highways in America, a case in point being the I-4 Interstate Highway in Orlando, Florida [10]. In addition, most of the developments conducted to date have been carried out in developed countries. In the last decade, many studies have been conducted on short-term traffic flow prediction models and system research in China [11-13], but, according to the literature, no practical system had been successfully implemented to assist real-time traffic operations in cities or on highways in China. To improve traffic management efficiency, the Beijing Traffic Management Bureau (BTMB) launched several ITS systems, including the Beijing Road Traffic Prediction System (BRTPS). In this study we analyzed the development and performance of BRTPS. The system architecture is presented and analyzed in the second section, followed by the main functions of BRTPS in the third section. Three key prediction models used in the BRTPS are introduced in the fourth section, and the performance analysis is given in the fifth section. The final section gives a brief conclusion.

System Architecture. 2.1. Logical Architecture. According to the system requirements and the existing devices and data resources, the logical architecture of the system is shown in Figure 1. The three-level logical architecture includes the following three levels.
2.1.1. Data Resource Level. The data resource level provides the BRTPS with data from various existing urban traffic detection systems in Beijing, including the loop detectors of the traffic signal control system (covering about three hundred intersections within the second ring expressway), the travel time detection system (covering 139 intersections within the fifth ring expressway with vehicle number plate recognition video), the microwave traffic flow detection system (covering all expressways in Beijing, with detector spacing of about 300-800 m), the probe vehicle detection system (about 20,000 taxis in Beijing), the traffic accident reporting system of the Beijing Traffic Control Center, and other data resources.

Data Processing and Prediction Level. The data processing and prediction level is the core of the BRTPS. It is composed of the following parts. The data processing module provides real-time reliable data for the integrated database via cleaning, coding, and preparation of different data from different sources. The integrated database stores and processes the data required by the system, including historical data, real-time processed detection data, prediction data, and statistical analysis results. The model library stores various traffic flow prediction models, traffic accident duration prediction models, capacity models for intersections and road segments, and analysis models. The knowledge base stores the temporal-spatial relationships produced by traffic flow pattern recognition models and provides basic parameter configuration for the prediction models. The GIS platform displays all necessary spatial data and spatial attributes of the system. The main products of the data processing and prediction level are the predicted values of various traffic flow parameters at different time intervals.

Application Level. The application level is composed of the application systems supported by the BRTPS, including the Personalized Trip Planning and Guiding System, the traffic management system of the traffic control center, and information service providers.

Physical Architecture. Based on the Microsoft .NET Remoting technique, the distributed physical architecture of the system is presented in Figure 2. The main components of the physical architecture are the four servers, which perform the core functions of the system.

Database Server. The database server keeps the integrated database running, with the following main functions: (1) obtaining raw data from the existing data center, performing data processing, which transforms the raw data into the standardized basic data required by the system, and storing the basic data in the integrated database; (2) storing all necessary basic data and traffic flow condition results required by the system; and (3) responding to requests to read, write, and update traffic flow condition data from the other three servers.

Calculation Server.
The calculation server runs the various prediction models used in the system, with the following main functions: (1) obtaining basic data from the database server; calculating traffic flow predictions, road network level-of-service evaluation, congestion evaluation, incident warnings, and temporal-spatial influence analysis based on those data; and then sending the prediction results to the database server; (2) responding to control requests from the web application server by performing the requested configuration and thus changing the calculation logic; and (3) responding to calculation requests from the web application server by performing the requested calculations and then sending the results to the web application server.

GIS Server. The main functions of the GIS server include (1) storing the urban road network geographical data required by the system; (2) responding to requests from the web application server by analyzing the requirements for GIS data and traffic flow data, obtaining the latter from the database server and combining it with the GIS data to obtain visualization information, and then sending the visualization information to the web application server; and (3) responding to requests from the web application server to modify GIS information.

Web Application Server. The web application server deals with requests from the other terminals on the network by interpreting them as requests for GIS data, traffic flow data, and calculations, sending these requests to the other three servers accordingly, and providing user web information based on the information returned from the other servers. The system provides service via its graphical user interface: system users visit the web application server from their terminals and send requests from the browser; these requests are analyzed and interpreted by the web application server and sent to the other three servers, which then return the results to the web application server for final processing and display on the website. The system also provides service by delivering results to other application systems: based on the requirements of these systems, it sends them the prediction results at the same time as storing the results in its own integrated database, or the other systems regularly obtain the prediction results from the integrated database before performing their own processing and application according to their own needs.

System Functions. The system mainly consists of the following functions.

Traffic Prediction under Normal Conditions and Prediction Model Update. Based on the integrated database, the system predicts traffic flow conditions at different intervals using various traffic prediction models. Every five minutes, traffic flow parameters, including flow volume, speed, occupancy, and travel time, are predicted at time intervals of 5 min, 15 min, 30 min, 1 h, and 2 h. The traffic flow prediction models are updated online in accordance with the operation of the system. The correction factors in the various prediction models, such as the weight factors in the combined prediction model, are continuously adjusted according to the prediction performance or changes in traffic conditions, to improve prediction accuracy and the models' adaptability to various traffic conditions.
Temporal-Spatial Influence Analysis and Prediction of Traffic Accidents. Based on real-time detected traffic flow data and accident information from the traffic accident reporting system, the system analyzes the temporal-spatial influence of traffic accidents in the Beijing road network. It provides the predicted duration and influence scope of an accident for urban road traffic management administrators.

Traffic Flow Condition Analysis and Evaluation. The system also analyzes and evaluates urban road traffic conditions at the road section, intersection, and region level by adjusting traffic condition evaluation factors and assessing the transport level of service. It also analyzes the detected and predicted data to evaluate the level of traffic congestion.

Urban Road Traffic Changing Trend Analysis. The system can identify traffic flow trends both temporally and spatially, using the immense amount of traffic flow data stored in the system's database. It analyzes the characteristics and trends of traffic flow in different regions, intersections, and sections, as well as the correlation of traffic flow between them, to provide support for urban road traffic management administrators.

Traffic Information Service. The system can generate traffic flow condition assessment and prediction information, which may be provided to other urban road traffic management systems, organizations, or interested individuals, for example, information service providers. Additionally, it can disseminate prediction information to public travelers through VMS or the internet.

Key Prediction Models. To develop a practical system that can be deployed in the BTMB traffic control center, we presented and modified several models, including the traffic flow parameter correlation model, the capacity calculation model for expressways, urban arterials, and intersections, the traffic flow parameter prediction models under normal traffic flow conditions, the Automatic Incident Detection (AID) model, and the accident temporal-spatial influence analysis model [14]. Here we introduce two traffic flow parameter prediction models under normal traffic flow conditions and the accident duration prediction model.

Combined Traffic Flow Prediction Model. To find the most suitable prediction model for Beijing's traffic flow conditions, various short-term traffic flow prediction models were proposed for detected and non-detected roads, including the combined traffic flow prediction model [15], the nonparametric regression model [16], and the combined neural network prediction model [17]. The former two models were applied in the system in consideration of computation efficiency and prediction accuracy. The combined prediction model for the BRTPS was composed of the Discrete Fourier Transform model (DFT), the Autoregressive model (AR), and the Neighborhood Regression model (NR). For convenience, we denote DFT-AR-NR as the DAN model [15]. Traffic prediction for a road section is associated not only with the historical and recent data of the road section of interest but also with data from adjacent sections. Therefore, a basic form of the DAN model can be represented as [15]

$\hat{v} = \alpha \hat{v}^{+} + \beta \hat{v}^{\wedge} + \gamma \hat{v}^{*},$

where $\hat{v}^{+}$, $\hat{v}^{\wedge}$, and $\hat{v}^{*}$ denote the prediction results of the three submodels, respectively, and $\alpha$, $\beta$, and $\gamma$ are the weight coefficients of the three submodels. Adjusting the values of these weight coefficients can strengthen or weaken the role of any of the submodels. The DAN model was mainly used for detected road segments.
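A minimal sketch of the DAN combination is given below. The DFT, AR, and NR submodels here are simplified stand-ins (a truncated spectrum, a least-squares AR(2) fit, and a plain neighbor mean), and the fixed weights are illustrative; the paper's actual submodels and online weight adjustment are more elaborate.

```python
import numpy as np

def dan_predict(history, neighbors, weights=(0.4, 0.3, 0.3)):
    """Weighted DFT-AR-NR (DAN) combination, a minimal sketch.

    history: recent speed series for the target road section.
    neighbors: current speeds on adjacent sections (for the NR term).
    weights: (alpha, beta, gamma); assumed normalized and tuned online.
    """
    history = np.asarray(history, dtype=float)
    alpha, beta, gamma = weights

    # DFT submodel: keep a few dominant low-frequency components and
    # read off the last reconstructed point (illustrative only).
    spectrum = np.fft.rfft(history)
    spectrum[3:] = 0
    dft_pred = np.fft.irfft(spectrum, n=len(history))[-1]

    # AR submodel: one-step prediction from a least-squares AR(2) fit.
    X = np.column_stack([history[1:-1], history[:-2]])
    coef, *_ = np.linalg.lstsq(X, history[2:], rcond=None)
    ar_pred = coef @ history[-1:-3:-1]

    # NR submodel: regression on neighboring sections, here a plain mean.
    nr_pred = float(np.mean(neighbors))

    return alpha * dft_pred + beta * ar_pred + gamma * nr_pred
```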
Nonparametric Regression Model. The short-term traffic flow forecasting framework based on nonparametric regression is shown in Figure 3 [16]. The whole process is as follows. (1) The system input variable sets are determined by the selection algorithm for the current flow states. (2) The input variable set is matched against the flow states stored in the database to find the optimal matching states. If forecasting time is ample, the linear matching algorithm is the best choice; otherwise we resort to a nonlinear matching algorithm and complex data structures, for example, binary trees and R-trees. (3) The successfully matched states are averaged to obtain the forecast values. (4) The forecasting error is fed into the feedback regulation module to adjust the input variable set and the matching algorithm. The nonparametric-regression-based model was mainly used for non-detected road segments.

Traffic Accident Duration Prediction Model. For traffic accident duration prediction, a model based on the decision tree algorithm Classification and Regression Tree (CART) was presented and applied [18]. The model was developed from accident records extracted from the accident reporting system of the Beijing Traffic Management Bureau. When an accident occurs, this model is used to predict its duration.

System Deployment and Performance Analysis. 5.1. System Deployment. Based on the above models and various data resources, the Beijing Road Traffic Prediction System was developed in the following environment: database system: Oracle 10g; web server: IIS 6; WebGIS development and operating platform: ArcGIS Server 9.0 from ESRI. The client uses Windows 98 or above and the IE 6.0 web browser or above. Before the 2008 Olympic Games, version 1.0 of BRTPS mainly covered 14 detected expressways and arterial streets within the second ring expressway and was deployed in the traffic control center of the BTMB for normal traffic conditions. In 2011, the system was updated to cover all expressways and arterial streets within the fifth ring expressway, for normal and event traffic conditions. Figure 4 shows the BRTPS interface. The data used in the system mainly come from the expressway traffic flow detection system (microwave detectors), the travel time detection system based on vehicle number plate recognition, the traffic signal control system detectors, the taxi-based floating car system, and the accident reporting system, as mentioned above. The system predicts traffic parameters such as flow, speed, and occupancy at 5 min, 15 min, 30 min, 1 h, and 2 h intervals.

System Performance Analysis. To understand the prediction performance of the practical system, a prediction error analysis was carried out during November 2012.
Fifteen sites were selected for the application of the DAN model, spanning ten different expressways in Beijing. Most sites are very congested during the morning and evening peak hours. Ten days were selected as test days for all fifteen sites, namely, November 12-16, 2012, and November 26-30, 2012. From 7:00 to 13:00 and from 14:00 to 19:00 every day, we sampled the detected data and the predicted data hourly. The predicted data included the predicted values of traffic flow, speed, and occupancy at 5 min, 15 min, and 30 min intervals. We mainly analyzed the error performance of speed prediction, which was the most precise among the three traffic flow parameters of volume, speed, and occupancy. To analyze the system's prediction performance, the mean absolute percentage error (MAPE) and the mean absolute error (MAE) were selected to reflect the accuracy of the predictor:

$\mathrm{MAPE} = \frac{1}{n}\sum_{t=1}^{n}\frac{\left|v(t+1)-\hat{v}(t+1)\right|}{v(t+1)} \times 100\%, \qquad \mathrm{MAE} = \frac{1}{n}\sum_{t=1}^{n}\left|v(t+1)-\hat{v}(t+1)\right|,$

where $v(t+1)$ is the observed traffic flow speed for time interval $t+1$, $\hat{v}(t+1)$ is the predicted traffic flow speed for time interval $t+1$, and $n$ is the number of prediction intervals.

The MAPE of speed prediction at different intervals for the fifteen sites over the ten days is shown in Figure 5. From Figure 5, the average MAPE of speed prediction over the ten days increased slowly with increasing prediction interval, specifically 14.5% for the 5 min interval, 16.4% for 15 min, and 16.8% for 30 min. Eleven sites had a speed prediction MAPE below 20% at the 5 min interval, eleven at the 15 min interval, and ten at the 30 min interval. Thus, the speed prediction performance at most of the selected sites was satisfactory. The MAPE and MAE of speed prediction at different hours are shown in Figures 6 and 7, respectively. There were no apparent differences in performance across hours, except for the afternoon peak hours of 17:00 and 18:00. Both MAPE and MAE during the afternoon peak hours were larger than during other hours. The larger errors during the afternoon peak hours indicate that the models deployed in the system may need to be improved for congestion conditions, or for some road segments, in the future. The MAPE of speed prediction at the selected sites shows that the accuracy of BRTPS is similar to that of some other systems, for example, the traffic prediction tool developed by IBM Research for Singapore, in which the overall prediction results were well above the target accuracy of 85 percent [5]. The ten speed prediction values with the largest errors among the 5,400 data points are listed in Table 1, in which the same hour for the same site ID indicates different days. From Table 1, we can see that the ten cases with large prediction errors almost all occurred under undersaturated conditions, as shown by the speed and occupancy values. These large speed prediction errors may result from two causes. The first is that the prediction model cannot handle traffic conditions changing from undersaturated to oversaturated. For example, at 16:45 the traffic is in free flow and at 17:00 it suddenly becomes congested; the combined prediction cannot adapt well to this change. On the other hand, the congested condition at 17:00 may be caused by an event, for example an accident, and the current application system did not consider the effect of special events in the prediction model before the event occurred.
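The two error metrics used above can be computed directly from paired series of observed and predicted speeds, as in this minimal sketch:

```python
import numpy as np

def mape(observed, predicted):
    """Mean absolute percentage error of speed prediction (in percent)."""
    observed, predicted = np.asarray(observed), np.asarray(predicted)
    return 100.0 * np.mean(np.abs(observed - predicted) / observed)

def mae(observed, predicted):
    """Mean absolute error of speed prediction (same unit as input, e.g. km/h)."""
    observed, predicted = np.asarray(observed), np.asarray(predicted)
    return float(np.mean(np.abs(observed - predicted)))
```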
Conclusions. Real-time traffic prediction systems are one of the foundations of ATMS and ATIS. We studied the logical structure, physical structure, and main functions of the Beijing Road Traffic Prediction System deployed in the control center of the BTMB. The key prediction models and the online prediction performance were also introduced. The performance analysis indicated that the system satisfied the prediction accuracy requirements most of the time for expressways. As discussed, however, during the application period the current system may sometimes produce larger prediction errors, especially during the transition from free-flow to congested traffic or under congestion conditions. Future prediction accuracy may be improved by refining the developed model based on detected data or by integrating other prediction models based on real-time dynamic traffic assignment.

Figure 2: Physical architecture of the system. Figure 5: MAPE of speed prediction at different intervals for fifteen sites over ten days. Figure 6: MAPE of speed prediction at different hours. Figure 7: MAE of speed prediction at different hours. Table 1: The speed predictions with the largest errors.
4,930.8
2014-02-18T00:00:00.000
[ "Engineering", "Computer Science" ]
Image Compression Based on Clustering Fuzzy Neural Network

Abstract: The problems and obstacles that accompany any digital image are that it requires a large bandwidth for transmission from one place to another, as well as a large storage space. These obstacles have led to the search for improvements to compression algorithms that reduce the amount of transmitted data without affecting the real image data and with the best possible quality. In this research, a new method for image compression based on clustering is presented. The new compression method includes a new objective function whose value is minimized by the energy function of an unsupervised two-dimensional fuzzy Hopfield artificial neural network. The new objective function is formed by combining the classification entropy function and the average distance between image pixels and cluster centers. The new method was applied to gray-scale sample images with various numbers of cluster centers, and the best compression ratio was obtained. The new method is also a strong new method for clustering image pixels.

Introduction. Image compression has been pushed to the forefront of the image processing field. This is largely a result of the rapid growth in computer power, the corresponding growth in the multimedia market, and the advent of the World Wide Web, which makes the Internet easily accessible for everyone. Additionally, advances in video technology, including high-definition television, are creating a demand for new, better, and faster image compression algorithms. The storage and transmission of such data require large capacity and bandwidth, which can be very expensive. Image data compression techniques are concerned with reducing redundancies in data representation in order to decrease data storage requirements and hence communication costs. Reducing the storage requirements is equivalent to increasing the capacity of the storage medium and hence the communication bandwidth. Thus the development of efficient compression techniques will continue to be a design challenge for future communication systems and advanced multimedia applications [1,2]. Clustering is a useful approach in several exploratory pattern-analysis, grouping, and machine-learning situations, including data mining, document retrieval, image segmentation, and pattern classification [3,4]. In image segmentation coding techniques, the image is segmented into different regions separated by contours and coded with different coding techniques. Region growing, k-means, c-means, and split-and-merge methods are generally used for image segmentation. Besides these crisp classical segmentation methods, fuzzy logic segmentation methods have also proven very effective for coding [5,6]. The Hopfield neural network is a well-known technique for solving optimization problems based on an energy function [7]. In this study, a new image clustering and compression method based on a fuzzy Hopfield neural network is introduced for gray-scale images. This new approach includes a new objective function, minimized by the energy function of an unsupervised two-dimensional fuzzy Hopfield neural network. After applying the new method to gray-scale sample images with different numbers of clusters, a better compression ratio was observed.

Classification of compression algorithms. In an abstract sense, we can describe data compression as a method that takes input data D and generates a shorter representation of the data, c(D), with fewer bits than D.
The reverse process is called decompression, which takes the compressed data c(D) and generates or reconstructs the data D' as shown in Figure 1. Sometimes the compression (coding) and decompression (decoding) systems together are called a CODEC. The reconstructed data D' can be identical to the original data D or an approximation of it, depending on the reconstruction requirements. If the reconstructed data D' is an exact replica of the original data D, we call the algorithm applied to compress D and decompress c(D) lossless. On the other hand, we say the algorithms are lossy when D' is not an exact replica of D. Hence, as far as reversibility of the original data is concerned, data compression algorithms can be broadly classified into two categories, lossless and lossy; we will focus our discussion on lossless coding [2,8].

Coding (Compression) Method. The neighboring pixels in a typical image are highly correlated with each other. It is often observed that consecutive pixels in a smooth region of an image are identical, or that the variation among neighboring pixels is very small. Run length coding (RLC) is an image compression method that works by counting the number of adjacent pixels with the same gray level value. This count, called the run length, is then coded and stored. Run length coding is a simple approach to source coding when there exist long consecutive runs of the same value in a data set. As an example, consider the data set represented by a matrix d whose entries, read in sequence, are seven 5s, twelve 19s, eight 0s, one 8, and six 23s. The data d can then be run length encoded as (5 7) (19 12) (0 8) (8 1) (23 6). For ease of understanding, we have shown a pair in each set of parentheses; the first value represents the pixel value, while the second indicates the length of its run [1,2]. In some cases, the appearance of runs of symbols may not be very apparent, but the data can be processed in order to aid run length coding. Here we apply a classical clustering algorithm, a fuzzy clustering algorithm, and a fuzzy neural network to the gray-level image. Then, after obtaining the cluster centroids, the clustered image is created and coded by the run length coding algorithm in one or two dimensions. When applying two-dimensional run length coding, we use zig-zag ordering of the coefficients of the clustered image, as shown in Figure 2 (a coding sketch is given below).
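A minimal sketch of run length coding, together with a zig-zag traversal of a two-dimensional block, is given below; it reproduces the (value, run length) pairs of the example above. The traversal order is one standard zig-zag convention, not necessarily the exact one used in the paper.

```python
import numpy as np

def zigzag(block: np.ndarray):
    """Traverse a square 2-D block in zig-zag order, returning a 1-D sequence."""
    n = block.shape[0]
    order = sorted(((r, c) for r in range(n) for c in range(n)),
                   key=lambda rc: (rc[0] + rc[1],
                                   rc[1] if (rc[0] + rc[1]) % 2 else rc[0]))
    return [block[r, c] for r, c in order]

def rle_encode(seq):
    """Run length coding: (value, run_length) pairs over a 1-D sequence."""
    pairs = []
    for v in seq:
        if pairs and pairs[-1][0] == v:
            pairs[-1][1] += 1
        else:
            pairs.append([v, 1])
    return [tuple(p) for p in pairs]

# Example from the text: seven 5s, twelve 19s, eight 0s, one 8, six 23s.
d = [5] * 7 + [19] * 12 + [0] * 8 + [8] + [23] * 6
assert rle_encode(d) == [(5, 7), (19, 12), (0, 8), (8, 1), (23, 6)]
```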
Fidelity Criteria. In some image transmission systems, some errors in the reconstructed image can be tolerated. In this case a fidelity criterion can be used as a measure of system quality [16]. After completing the decoding process, the root mean square error $e_{RMS}$, the root mean square signal-to-noise ratio $SNR_{RMS}$, and the peak signal-to-noise ratio $SNR_{PEAK}$ should be calculated between the reconstructed image and the original image to verify the quality of the decoded image with respect to the original one. The root mean square error is found by taking the square root of the squared error divided by the total number of pixels in the image:

$e_{RMS} = \sqrt{\frac{1}{N^2}\sum_{r=0}^{N-1}\sum_{c=0}^{N-1}\left[\hat{I}(r,c)-I(r,c)\right]^2}.$

The smaller the value of the error metrics, the better the compressed image represents the original image. Alternately, with the signal-to-noise (SNR) metrics, a larger number implies a better image. The SNR metrics consider the decompressed image $\hat{I}(r,c)$ to be the 'signal' and the error to be 'noise'. We can define the root mean square signal-to-noise ratio as

$SNR_{RMS} = \sqrt{\frac{\sum_{r=0}^{N-1}\sum_{c=0}^{N-1}\hat{I}(r,c)^2}{\sum_{r=0}^{N-1}\sum_{c=0}^{N-1}\left[\hat{I}(r,c)-I(r,c)\right]^2}},$

and the related peak signal-to-noise ratio as

$SNR_{PEAK} = 10\log_{10}\frac{(L-1)^2}{\frac{1}{N^2}\sum_{r=0}^{N-1}\sum_{c=0}^{N-1}\left[\hat{I}(r,c)-I(r,c)\right]^2},$

where L is the number of gray levels (e.g., for 8 bits L = 256). To check the compression performance, the compression ratio (CR) and the bit-per-pixel (Bpp) rate are calculated. The compression ratio is the amount of compression, while the Bpp rate is the number of bits required to represent each pixel value of the compressed image. The compression ratio is defined by

$CR = \frac{\text{uncompressed file size}}{\text{compressed file size}},$

and the bits per pixel for an N×N image is

$Bpp = \frac{\text{number of bits in the compressed file}}{N \times N}.$

K-means Clustering Algorithm. The standard k-means clustering algorithm is a well-known and well-understood algorithm. Its computational complexity is O(n), where n is the number of data points (feature vectors) to be clustered. K-means is one of a group of algorithms that aim to minimize an objective function [2,9]. Although it can be proved that the procedure always terminates, the k-means algorithm does not necessarily find the optimal configuration corresponding to the global objective function minimum. The algorithm is also significantly sensitive to the initial, randomly selected cluster centers; it can be run multiple times to lessen this effect [10,11]. K-means is a simple algorithm that has been adapted to many problem domains, and it is a good candidate for extension to work with fuzzy feature vectors.

Fuzzy Approach to Pixel Classification. The basic postulate of fuzzy clustering is that a member may have partial membership grades in several fuzzy clusters. A membership value in the interval [0,1] is assigned to each sample in every cluster, based on certain measurements [2,10]. In fuzzy clustering a pattern is assigned a degree of belongingness to each cluster in a partition. Here we present the most popular fuzzy clustering algorithm, known as the fuzzy c-means algorithm [12,13].

Fuzzy C-Means. Fuzzy c-means is a commonly used clustering approach. It is a natural generalization of the k-means algorithm, allowing for soft segmentation based on fuzzy set theory. As in the hard k-means algorithm, the fuzzy c-means algorithm is based on the minimization of a criterion function. The following criterion function may be chosen, which differs from the k-means objective function by the addition of the membership values $u_{ik}$ and the fuzzifier m:

$J_m(U,V) = \sum_{i=1}^{c}\sum_{k=1}^{n} u_{ik}^{m}\,\lVert x_k - v_i\rVert^2.$

Here $U = \{u_{ik}\}$ is a $c \times n$ matrix, where $u_{ik}$ is the membership value of the k-th input sample $x_k$ in the i-th cluster, and m is an exponent weight factor. There is no fixed rule for choosing the exponent weight factor; however, in many applications m = 2 is a common choice. The membership values satisfy the following conditions: $u_{ik} \in [0,1]$; $\sum_{i=1}^{c} u_{ik} = 1$ for every k; and $0 < \sum_{k=1}^{n} u_{ik} < n$ for every i. These three conditions imply the following: the membership values of each sample $x_k$ in a particular cluster lie between 0 and 1; each sample $x_k$ must belong to at least one cluster; and each class must have at least one sample, while all the samples cannot belong to a single class. The objective function in this case is the sum of the squared Euclidean distances between each input sample and its corresponding cluster center, weighted by the fuzzy membership values. The algorithm iteratively updates the cluster centers using the expression

$v_i = \frac{\sum_{k=1}^{n} u_{ik}^{m} x_k}{\sum_{k=1}^{n} u_{ik}^{m}}, \quad (1)$

and the fuzzy membership of the k-th sample $x_k$ in the i-th cluster is given by

$u_{ik} = \left[\sum_{j=1}^{c}\left(\frac{\lVert x_k - v_i\rVert}{\lVert x_k - v_j\rVert}\right)^{2/(m-1)}\right]^{-1}. \quad (2)$

It can be noted that the weight factor m reduces the influence of small membership values.
The fuzzy c-means algorithm is thus summarized as follows (a compact sketch of this loop is given at the end of this section). Step 1: Initialize $U^{(0)}$ randomly or based on some approximation, or initialize $V^{(0)}$ and calculate $U^{(0)}$ from it. Set the iteration counter t = 1. Select the number of cluster centers c and choose the value of m. Step 2: Compute the cluster centers: given $U^{(t)}$, calculate $V^{(t)}$ according to Eq. (1). Step 3: Given $V^{(t)}$, update the membership values to $U^{(t+1)}$ according to Eq. (2). Step 4: If $\lVert U^{(t+1)} - U^{(t)}\rVert \le \varepsilon$, where ε is a small positive number, stop. Step 5: Otherwise, increment the iteration counter, t = t + 1, and go to Step 2. It may be noted that we are applying fuzzy c-means to an image: the data points or sample points $x_1, x_2, \ldots, x_n$ are the pixel gray values, so n represents the total number of pixels in the image [2,10,12].

Hopfield Neural Network. Artificial neural networks mimic the neurophysiology of the human brain. They have the ability to learn from examples in order to find patterns in data or to classify data, and once trained they can make predictions on new data. They perform a global search on the data; however, their shortcoming is that they are a kind of black box, where a user can hardly understand the underlying principles used to classify the data. On the other hand, they can perform well at recognizing images and similar tasks. One well-known neural network model is the Hopfield network [14]. The Hopfield neural network can be used as a content-addressable memory. Knowledge and information can be stored in a single layer of interconnected neurons (nodes) and weighted synapses (links) (as shown in the following figure), and can be retrieved by the network's parallel relaxation method: nodes are activated in parallel and updated until the network reaches a stable state (convergence). It has been used for various classification tasks and for global optimization [15]. In this study, a new image clustering and compression method based on a fuzzy Hopfield neural network is introduced for gray-scale images.
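The loop of Steps 1-5 can be sketched compactly for a one-dimensional array of pixel gray values; the random initialization and the small constant guarding against division by zero are implementation choices, not prescriptions from the text.

```python
import numpy as np

def fuzzy_c_means(x, c=4, m=2.0, eps=1e-4, max_iter=100, rng=None):
    """Fuzzy c-means on a 1-D array of pixel gray values, a minimal sketch.

    Returns the cluster centers v (shape (c,)) and memberships u (shape (c, n)).
    """
    rng = np.random.default_rng(rng)
    x = np.asarray(x, dtype=float)
    u = rng.random((c, x.size))
    u /= u.sum(axis=0)                      # each pixel's memberships sum to 1
    for _ in range(max_iter):
        um = u ** m
        v = (um @ x) / um.sum(axis=1)       # Eq. (1): weighted cluster centers
        d = np.abs(x[None, :] - v[:, None]) + 1e-12  # distances |x_k - v_i|
        u_new = d ** (-2.0 / (m - 1))
        u_new /= u_new.sum(axis=0)          # Eq. (2): normalized memberships
        converged = np.abs(u_new - u).max() <= eps   # Step 4: convergence check
        u = u_new
        if converged:
            break
    return v, u

# Example: cluster the gray values of an 8-bit image flattened to 1-D.
# pixels = image.astype(float).ravel(); v, u = fuzzy_c_means(pixels, c=4)
```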
The Proposed Method. The Hopfield neural network is a well-known technique for solving optimization problems based on an energy function. In this method, the two-dimensional Hopfield neural network consists of N×c fully interconnected neurons. The total weighted input for neuron (x,i) is given as

$Net_{x,i} = \sum_{y=1}^{N}\sum_{j=1}^{c} W_{x,i;y,j}\, V_{y,j} + I_{x,i},$

where N is the number of data points, c is the number of clusters, $V_{y,j}$ denotes the binary state of neuron (y,j), $W_{x,i;y,j}$ is the interconnection weight between neuron (x,i) and neuron (y,j), and $I_{x,i}$ is the external bias for neuron (x,i). The energy function of the two-dimensional Hopfield neural network is given as

$E = -\frac{1}{2}\sum_{x=1}^{N}\sum_{i=1}^{c}\sum_{y=1}^{N}\sum_{j=1}^{c} W_{x,i;y,j}\, V_{x,i} V_{y,j} - \sum_{x=1}^{N}\sum_{i=1}^{c} I_{x,i} V_{x,i}.$

The neural network reaches a stable state when the energy function is minimized. The optimization problem can be mapped onto a two-dimensional fully interconnected Hopfield neural network with a fuzzy reasoning strategy. Instead of using a competitive learning strategy, the fuzzy Hopfield neural network uses the fuzzy reasoning algorithm to eliminate the need for finding weighting factors in the energy function. Iterative minimization of the new objective function consists of the following steps. Step 1: Choose the number of clusters c, the iteration criterion ε, the fuzzification parameter m (chosen to be 2), and the initial centroids $v^{(0)}$. Step 2: Compute the initial membership values, in the same form as Eq. (2), where m is the fuzzification parameter, the membership value $u_{x,i}$ is the output state of neuron (x,i), and $z_x$ is the gray value of pixel x. A neuron (x,i) in a maximum membership state indicates that pixel $z_x$ belongs to class i. The sum of the membership values of each pixel over the different classes equals 1, and the total membership over the N image pixels equals N; v represents the cluster centers. Step 3: Compute new membership values from the net input to neuron (x,i). The new objective function consists of an equally weighted combination of the classification entropy function and the average distance between image pixels and cluster centroids, for separate and compact clustering; the subsequent steps recompute the centroids and the new objective function $J_k$ in each iteration, which is effective in minimizing the new objective function, until the change falls below ε. Step 8: The segmented or clustered image is created once the membership values and the cluster centroids are obtained; after this it is coded by run length coding in one or two dimensions. The block diagram explains the flow of the new image clustering and compression method based on the fuzzy Hopfield neural network.

Results and Conclusions. The new fuzzy Hopfield neural network method was applied to four 256×256 grayscale sample images and compared with the k-means and fuzzy c-means algorithms, and also with plain run length coding in one and two dimensions. The comparison parameters are signal-to-noise ratio, compression ratio, and bits per pixel; comparison results are given in Table 1. The original images and the images reconstructed by the k-means, fuzzy c-means, and fuzzy Hopfield neural network methods are given in Figures 5, 6, 7, and 8, corresponding to the different sample images. According to the results, the fuzzy Hopfield neural network method provides better image compression than the other methods. The importance of image clustering and compression methods is increasing nowadays. The new image clustering and compression method based on the fuzzy Hopfield neural network provides a better compression ratio. Additionally, this method can be used for pattern recognition, because it provides a good validity measure and is unlikely to converge to incorrect results, whereas methods such as k-means have a high probability of getting stuck in a local minimum, depending on the selection of the initial values, and may not give correct results. The fuzzy Hopfield neural network can also provide a more efficient mechanism. This new method is a good alternative for image clustering and compression.

Fig. 2: Zig-zag ordering of pixels of an image. Fig. 7: Original image; reconstructed image using k-means; reconstructed image using fuzzy c-means; reconstructed image using fuzzy Hopfield neural network.
3,665.2
2007-12-01T00:00:00.000
[ "Computer Science" ]
Provenance and tectonic implications of the Yanshi bauxite area in Western Henan, China. The bauxite layer in Western Henan supplies a large number of bauxite ores and is useful for studying tectonic movement. In this paper, bauxite samples were selected for LA-ICP-MS detrital zircon U-Pb dating and Hf isotope testing. The results indicated that the detrital zircons with Early Paleozoic ages were mainly derived from the North Qinling Orogenic Belt. The detrital zircons of Precambrian age may be derived mainly from the basement of the North China Block and the North Qinling Orogenic Belt. The results of this study support the view that the North Qinling Orogenic Belt had been uplifted by ~310 Ma and that the surface of the southern craton had an overall north-dipping topography at this time.

Introduction. Yanshi County of Henan Province is located in the southern part of the North China Craton (NCC) and the eastern part of the Mesozoic-Cenozoic Luoyang Basin. It is the area where bauxite deposits were first discovered and is part of the metallogenic belt in Western Henan [1]. The occurrence horizon of the bauxite is the Upper Carboniferous Benxi Formation. A sedimentary provenance area can provide important information about tectonic activity on the periphery of sedimentary basins, especially during periods of tectonic conversion [2-10]. In this study, LA-ICP-MS detrital zircon U-Pb dating of bauxite in Yanshi County, Henan Province, on the southern North China Craton was carried out. The provenance and properties of the ore-forming materials of the bauxite in this area were analyzed. This is expected to help clarify the genesis of the Benxi Formation bauxite and the geological information carried by the bauxite.

Geological background. Within the NCC, one of the most prominent sedimentological features of the Phanerozoic is the parallel unconformity between the Upper Carboniferous-Lower Permian and the underlying Lower Paleozoic carbonate layers, representing a hiatus of about 150 million years [2,11,12]. Except for the basal conglomerate with high compositional and textural maturity that occurs in the northernmost part, in most areas the bottom of the Upper Carboniferous-Lower Permian strata is dominated by bauxitic mudstone, with localized oolitic bauxite [13,14]. The bauxite represents the earliest redeposited formation after the long depositional break of the NCC. According to the detrital zircons of the original rocks and the Paleozoic tectonics and orogeny, the bauxite in the NCC has two provenances: the Bayan Obo-Chifeng fault area, adjacent to the Xingmeng Orogenic Belt, and the Luanchuan Fault Zone, adjacent to the Qinling Orogenic Belt. The Qinling Orogenic Belt is composed of structural units that formed in different periods and tectonic settings; its suture zone is the Shangdan Suture (Fault) Zone formed in the Early Paleozoic [13-16]. The area north of the Shangdan Fault Zone belongs to the North Qinling Orogenic Belt (NQOB), whose sedimentary strata were metamorphosed at different stages and to different degrees. Zircon U-Pb age measurements were completed at the Experimental Center of the Resources and Environmental Engineering Institute, Hefei University of Technology. The laser ablation system used for the zircon U-Pb LA-ICP-MS measurements was a GeoLas 2005.

U-Pb ages. Ninety-three out of one hundred zircon grains [18] were analysed, and the concordant ages of these ninety-three grains (90% confidence) (Figure 2a) were divided into two groups.
The first group included sixty-six zircon grains, accounting for 71% of the total, with ages between 378 Ma and 544 Ma (mainly Early Paleozoic) and a peak at ~444 Ma. The second group included twenty-seven zircon grains, accounting for 29% of the total, with ages between 629 Ma and 3116 Ma (Precambrian) (Figure 2b; data modified from [18]).

Provenance. The εHf(t) values fell within the range of εHf(t) values (at an age of ~450 Ma) of the NQOB. Therefore, the magmatic zircons with a peak at ~450 Ma were mainly derived from the NQOB. The detrital zircons in the study area may not have been provided only by the basement of the NCC; they may be derived mainly from the metamorphic strata exposed in the North Qinling area.

Tectonic implications. There is a change in the provenance of the detrital zircons and in the material sources of the North China Craton [2,16]. The provenance analyses and the comparison of the age-probability curves of samples across the NCC indicate tectonic movement. At ~600 Ma, the Shangdan Ocean separated the South China Craton from the NCC (Figure 3a). At ~515 Ma, the Erlangping back-arc basin subducted toward the south, beneath the North Qinling terrane (Figure 3b). The Erlangping back-arc basin appeared because the Shangdan oceanic crust might have subducted, which would have occurred at ~540 Ma; meanwhile, the North Qinling Belt island arc also appeared [16]. At ~450 Ma, the Erlangping back-arc basin closed and the Erlangping Suture began to form, while the Shangdan Ocean might still have existed and continued to subduct (Figure 3c) [16]. At ~310 Ma, the Shangdan Ocean was already closed (Figure 3d). The formation and rapid uplift of the North Qinling Belt suggest that the surface had an overall north-dipping topography, which provided the material sources for the NCC. The results show that the provenance of the detrital zircons in the southern NCC changed [2].

Conclusion. The detrital zircons with Early Paleozoic ages were mainly derived from the NQOB. The detrital zircons of Precambrian age may be derived mainly from the NQOB and the basement of the North China Block. During the sedimentary period, the surface had an overall north-dipping topography, which provided the material sources for the NCC and caused the change in provenance.
1,302
2021-01-01T00:00:00.000
[ "Geology" ]
Voltage profile and power quality improvement in photovoltaic farms integrated medium voltage grid using dynamic voltage restorer. Received Oct 9, 2019; Revised Nov 9, 2019; Accepted Feb 15, 2020. In this paper, we present a simulation study to analyze the power quality of a three-phase medium-voltage grid connected with distributed generation (DG), namely photovoltaic (PV) farms, and its control schemes. The system uses a two-stage energy conversion topology composed of a DC-DC boost converter for extracting the maximum power available from the solar PV system, based on the incremental conductance technique, and a three-level voltage source inverter (VSI) to connect the PV farm to the power grid. To maintain the grid voltage and frequency within tolerance following disturbances such as voltage swells and sags, a fuzzy-logic-based Dynamic Voltage Restorer (DVR) is proposed. The role of the DVR is to protect critical loads from disturbances coming from the network. Different fault condition scenarios are tested, and results such as voltage stability, real and reactive powers, current, and power factor at the point of common coupling (PCC) are compared with and without the DVR system.

INTRODUCTION. Renewable energy capacity is set to increase by 50% between 2019 and 2024 according to the latest five-year forecast of the International Energy Agency, and solar PV accounts for the largest proportion of this growth. This represents an increase of 1,200 GW, which corresponds to the total installed capacity of the United States today; solar PV alone represents about 60% of the projected growth [1]. In 2016, nearly 80 GW of PV panels were installed worldwide [2]. This corresponds, on average, to the installation of more than 31,000 PV panels per hour and represents a growth of 48% compared to 2015. The global installed capacity of solar PV reached 303 GW in 2016. The orientation of China's energy policies towards renewable energies has made it the world leader in solar PV, with 45 GW installed in 2018 (cumulative capacity of 176 GW); India is the second global leader with 11 GW and the United States comes third with 10.6 GW, closely followed by Japan with a cumulative capacity of 56 GW; Germany is fourth with 45.4 GW [1,3]. Solar is an inherently time-varying source of energy due to the variability of the sun's irradiance throughout the day and across the seasons. Thus, the integration of such stochastic and unpredictable renewable energy sources into the network poses new challenges to grid operators in maintaining a stable, secure energy supply. It can cause power quality issues through phenomena like flicker, fault ride-through, voltage dips/swells, high-voltage ride-through (HVRT) and low-voltage ride-through (LVRT) events, harmonic resonance, phase imbalance, or low power factor, which are among the major concerns of power utilities and regulators. Power quality issues will become crucial as the penetration of renewable energy sources increases [3-5]. In practice, voltage sags and harmonics are the major problems in a power system; they can cause malfunctioning or tripping of equipment and many other problems. Electricity generation from solar energy has been one of the fastest-growing technologies and has become, globally, the most promising renewable energy resource [3].
In [6], the authors applied a Dynamic Voltage Restorer (DVR) to enhance the power quality and the low-voltage ride-through (LVRT) capability of a hybrid distributed generation (DG) system connected to a three-phase medium-voltage network. In [7], a comprehensive review of several control schemes to enhance the LVRT capability of grid-feeding converters is presented; the paper also discusses the respective advantages and limitations of each control strategy. The authors in [8][9] discussed the use of a PV-based DVR to compensate and safeguard the power quality and maintain voltage stability between the PCC and the distribution network; a novel control strategy of the DVR is proposed for the mitigation of voltage disturbances such as sags and swells. In [10][11][12][13][14], the implementation of a Dynamic Voltage Restorer for voltage quality improvement in systems integrated with distributed generation (DG) is presented; the authors highlight ways to speed up the technology development towards the extensive integration of the DVR in the near future. As mentioned above, the DVR can be integrated into the network in several control configurations to overcome problems related to power quality. In this work, the DVR is integrated into a power grid connected to a PV farm in order to mitigate the intermittency and variability of solar energy and to ride through grid faults caused by voltage sags and swells at the PCC. The proposed DVR control scheme employs a fuzzy logic controller and an in-phase compensation technique. The designed DVR and the electric system are evaluated under various fault conditions. The remainder of the paper is organized as follows: Section 2 describes the proposed topology of the PV farms connected to the DVR and tied to the grid. In Section 3, the structures of the DC-DC and DC-AC converter models are developed. The DVR topology and its basic control scheme are described in Section 4. Section 5 presents a series of simulation results to demonstrate the improvement of voltage stability and power controllability with the proposed DVR circuit. Conclusions are summarized in Section 6. PROPOSED SIMULATED SCENARIOS AND NETWORK TOPOLOGY The proposed power system model is shown in Figure 1. It is composed of five PV farms of 100 kW each. The PV farms are interfaced to the distribution grid through a three-phase PWM inverter and a three-phase AC choke filter. MODELING OF THE PV CELL AND CONVERTERS The PV cell model used in this paper is based on the two-diode equivalent circuit shown in Figure 3. The I-V relationship of this circuit is expressed in terms of the total cell current [15]: I = I_ph − I_01[exp((V + I·R_s)/(η_1·V_T)) − 1] − I_02[exp((V + I·R_s)/(η_2·V_T)) − 1] − (V + I·R_s)/R_p, where η is the ideality factor (η_1 and η_2 for the two diodes) and V_T is the thermal voltage. The PV module and PV field are modeled by assuming that all the PV cells are identical and experience the same ambient conditions. If there are N_s cells connected in series and N_p in parallel, then the series resistance R_s and parallel resistance R_p are scaled by a factor of N_s/N_p, as shown in equation (3) below [15]: R_s,field = (N_s/N_p)·R_s and R_p,field = (N_s/N_p)·R_p. The PV power conversion is controlled by a Maximum Power Point Tracking (MPPT) algorithm that extracts the maximum power via a high-efficiency DC-DC converter, which acts as an optimal electrical load for a PV cell (most often for an array or solar panel) and converts the power to a voltage or current level better suited to the load that the system is designed to supply.
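Because the two-diode equation above is implicit in the current I, it is usually evaluated numerically. The following minimal Python sketch solves it by fixed-point iteration; all parameter values are illustrative assumptions, not taken from the paper.

import math

# Standard two-diode PV cell model (assumed form); parameter values are
# illustrative placeholders, not the paper's.
def cell_current(V, Iph=8.0, I01=1e-10, I02=1e-6, n1=1.0, n2=2.0,
                 Rs=0.005, Rp=50.0, Vt=0.02585, iters=200):
    I = Iph                        # initial guess: the photocurrent
    for _ in range(iters):         # fixed-point iteration on the implicit I
        Vd = V + I * Rs            # voltage across the diode branches
        I = (Iph
             - I01 * (math.exp(Vd / (n1 * Vt)) - 1.0)
             - I02 * (math.exp(Vd / (n2 * Vt)) - 1.0)
             - Vd / Rp)
    return I

# Trace part of the I-V curve of a single cell
for V in (0.0, 0.2, 0.4, 0.5):
    print(f"V = {V:.2f} V -> I = {cell_current(V):.3f} A")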
PV cells have a single operating point where the current (I) and voltage (V) values of the cell result in maximum output power [16]. Figure 4 shows the I-V characteristic of the PV module; the short-circuit current, open-circuit voltage, and maximum power points are highlighted on the I-V curves. The current I_MP is then determined by evaluating Equation (1) at V = V_MP [17]. The PV array considered in this simulation study consists of seven (7) modules in series and forty-seven (47) parallel strings in order to generate 100 kW at a solar irradiance of 1000 W/m² and an output DC voltage of 380 V. There are several MPPT methods available in the literature [18]. In this work, the Incremental Conductance (InC) algorithm, which can be regarded as an improved version of the popular P&O method, is employed [15]; this method was proposed to handle rapidly changing atmospheric conditions [19]. The gradient of the power curve is: dP_PV/dV_PV = d(V_PV·I_PV)/dV_PV = I_PV + V_PV·(dI_PV/dV_PV). Multiplying both sides by 1/V_PV leads to: (1/V_PV)·(dP_PV/dV_PV) = I_PV/V_PV + dI_PV/dV_PV = G + dG, where G and dG denote the conductance and incremental conductance, respectively; at the maximum power point the gradient is zero, so dG = −G. The DC-DC converter is used to change the voltage level of a DC source. The inductor and capacitor values in the boost converter circuit are: C = 0.1 mF, C1 = C2 = 12 mF, L1 = 5 mH. Based on the instantaneous values of the current and voltage, the duty cycle of the boost DC-DC converter is continuously adjusted by the MPPT controller to ensure that the PV generator always operates at its MPP for any irradiance and temperature conditions [15]. In high-voltage, high-power applications, it is preferable to operate at high voltages to keep the currents within reasonable levels. This requires the DC bus voltage V_d to exceed the voltage ratings of the converter power switches [20]. The integration of renewable energy can also cause serious power quality issues; among these, the harmonics generated by inverters and injected into the grid are of major concern [21][22][23]. For linear modulation (i.e., for an amplitude modulation factor m_a ≤ 1), the amplitude of the fundamental harmonic changes linearly with the amplitude modulation factor, so the fundamental component of the phase voltage has the form: V_AN,1 = m_a·(V_d/2).
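A minimal sketch of the InC decision rule derived above, assuming a boost converter in which raising the duty cycle lowers the PV-side operating voltage; the function name, step size, and tolerances are illustrative assumptions, not values from the paper.

def inc_mppt_step(v, i, v_prev, i_prev, duty, step=0.005, eps=1e-6):
    # At the MPP, dP/dV = 0, i.e. dI/dV = -I/V (dG = -G).
    dv, di = v - v_prev, i - i_prev
    if abs(dv) < eps:              # voltage unchanged: fall back on dI alone
        if di > eps:
            duty -= step           # current rose (irradiance up): raise V_PV
        elif di < -eps:
            duty += step
    else:
        g = i / max(v, eps)        # instantaneous conductance I/V
        dg = di / dv               # incremental conductance dI/dV
        if dg > -g + eps:          # left of the MPP: raise the PV voltage
            duty -= step
        elif dg < -g - eps:        # right of the MPP: lower the PV voltage
            duty += step           # (otherwise we are at the MPP: hold duty)
    return min(max(duty, 0.0), 1.0)  # clamp the duty cycle to [0, 1]

Each control period the routine would be called with the latest voltage and current samples, e.g. duty = inc_mppt_step(v_k, i_k, v_prev, i_prev, duty), and the result drives the boost converter's PWM.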
DVR TOPOLOGY The Dynamic Voltage Restorer consists of a Voltage Source Converter (VSC), a switching control scheme, an energy storage device, and a coupling transformer connected in series with the AC system. The DVR can be applied to a variety of power quality and reliability problems, including voltage dip compensation, voltage unbalance, voltage regulation, harmonic isolation, power factor correction, and power outages; it can therefore provide protection against sags, swells, and large fluctuations in the AC line voltage [24]. The DVR injects a three-phase AC voltage in series with, and synchronized to, the distribution feeder voltages of the AC power system. The amplitude and phase of the injected voltage can be varied to regulate the exchange of active and reactive power between the DVR and the power system within predetermined negative (power absorption) and positive (power injection) limits [24]. The DVR can provide harmonic isolation to prevent harmonics in the source voltage from reaching the load; in addition, it also provides voltage balancing and voltage regulation [24]. The injected DVR voltage is V_DVR = V_Lo + Z_TH·I_Lo − V_TH, where V_Lo is the load voltage magnitude, Z_TH is the system (Thévenin) impedance, V_TH denotes the system voltage during the fault condition, and I_Lo represents the load current, which is given by: I_Lo = [(P_L + j·Q_L)/V_Lo]*. When V_Lo is taken as a reference, the equation can be rewritten as V_DVR∠α = V_Lo∠0 + Z_TH·I_Lo∠(β − θ) − V_TH∠δ. The complex power injection of the DVR can be written as: S_DVR = V_DVR·I_Lo*. When only reactive power is required, the DVR itself can provide it [24]. VOLTAGE STABILITY AND POWER CONTROLLABILITY: SIMULATION RESULTS To evaluate the contribution of D-FACTS devices to PV farms tied to the grid, we have chosen the DVR as the case study. The DVR has a power rating of 4 MVA and is used to regulate the voltage of a 30 kV distribution grid connected to bus B2. One feeder transmits power to a local load connected at bus B3, which represents a plant continuously absorbing fluctuating currents and thereby causing voltage flicker. An appropriate voltage is injected by the DVR in order to regulate the voltages of buses B1 and B3. This voltage transfer is accomplished through the reactance of the coupling transformer by producing a secondary voltage in phase with the primary (grid-side) voltage. The simulation scenario considered in this case study consists of creating two faults of 0.3 s duration each during a simulation time of three seconds. The first fault is a voltage swell created between 0.8 and 1.1 s, and the second is a voltage sag set between 1.25 and 1.55 s, as shown in Figure 5. The swell fault is simulated as an increase of 20% above the nominal voltage, while the sag was set to a decrease of 10% below the nominal voltage. The active and reactive powers are shown in Figures 9 and 10, respectively. In Figure 9, during the swell fault we can observe small overshoots and oscillations in the active power lasting 2 cycles of transient when the DVR is not in operation, but the oscillations are completely damped when the DVR is switched on. For the sag fault without the DVR, the grid voltage exhibits a large transient lasting 9 cycles which, again, is completely damped when the DVR is introduced. After a short oscillation appearing at the onset of the fault, the reactive power exhibits a large overshoot at 1.39 s, corresponding to an injection of +181.2 kVar. The reactive power then suddenly decreased to −139.1 kVar at 1.535 s; after several oscillations, normal operation was restored at the end of the sag fault at 1.957 s, compared with the case with DVR contribution, where the oscillations were completely damped by 1.699 s. Figure 11 shows the total current supplied from the PV farms to the load and the grid. It can be observed that during the voltage swell the current decreased from 11.26 A to 9.16 A in the case where the DVR is not in operation. There is a reduction in the current despite a voltage increase (swell fault); this action is due to the PV system inverter regulating its voltage at bus B1. Similarly, the same contribution of the PV system inverter can be seen during the voltage sag fault, when the current increased from its normal value of 12.32 A to 14.72 A during the fault, suddenly collapsed to 4 A, and finally stabilized after several oscillations. The effective contribution of the DVR in stabilizing the current at its normal value during faults is achieved rapidly and relatively smoothly.
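To illustrate the in-phase compensation technique employed by the proposed DVR control scheme, the sketch below computes the series voltage phasor that restores the load-bus magnitude to its reference; the per-unit reference and function names are assumptions for the example, not quantities from the paper.

import cmath

V_REF = 1.0  # desired load voltage magnitude in per unit (assumed)

def dvr_injection(v_supply: complex, v_ref: float = V_REF) -> complex:
    # In-phase compensation: inject along the measured supply phasor so the
    # load sees v_ref pu; the result is positive for sags, negative for swells.
    mag = abs(v_supply)
    if mag < 1e-9:
        return 0.0 + 0.0j          # dead bus: nothing sensible to track
    unit = v_supply / mag          # unit phasor in phase with the supply
    return (v_ref - mag) * unit

# Example: a 10% sag and a 20% swell, both at a 30-degree supply angle
for m in (0.9, 1.2):
    vs = cmath.rect(m, cmath.pi / 6)
    vinj = dvr_injection(vs)
    print(f"|Vs| = {m:.2f} pu -> inject {abs(vinj):.2f} pu "
          f"at {cmath.phase(vinj):+.2f} rad")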
Figure 12 illustrates the power factor (PF) behavior at the PCC. Under normal operation the PF is kept at unity by placing a 10 kVar capacitor bank, which also filters harmonics, at the VSC output of the PV farms, so that the active power generated by the PV farms is transferred entirely to the load and the grid. Figure 12 shows that in the case of the voltage swell fault the system remains within acceptable operating limits. However, without the DVR, the voltage sag caused a significant drop in the PF, which reached 0.5 at the end of the disturbance. It can be concluded that the power factor at the point of common coupling is effectively controlled by the DVR during the voltage fault scenarios considered. The negative impact of renewable energies on power quality arises mainly from two typical characteristics of renewable energy sources, namely their random variability and the presence of a static converter interfacing the generating plants to the grid (with the exception of hydroelectric) [3]. These devices inject harmonics into the system and are also very sensitive to distorted voltage waveforms. In normal operation, the inverter connecting the Renewable Energy Conversion Systems (RECS) to the electrical network adjusts its output voltage to regulate the active and reactive currents exchanged between the inverter and the grid and to prevent instability. Because of the widespread deployment of RECS, independent system operators (ISOs) require RECS to comply with strict grid codes so as to remain connected to the network and provide the expected reactive current to support the electric system during network faults [6], [25]. CONCLUSION The results show that the DVR interfaced to photovoltaic systems tied to a medium-voltage grid is effective in mitigating voltage sags and swells, with improved voltage regulation capability and flexibility for power factor correction. Our simulation model captures the dynamic interaction between the DVR converters and the converters of the PV system. The dynamic voltage restorer is one of the fastest and most effective custom power devices and has proven its effectiveness for the mitigation of voltage sags and swells. The simulation study presented in this work has demonstrated that the DVR is a potential power quality improvement device.
3,768.8
2020-09-01T00:00:00.000
[ "Engineering", "Environmental Science" ]
Isolation and Characterization of PBP, a Protein That Interacts with Peroxisome Proliferator-activated Receptor* In an attempt to identify cofactors that could possibly influence the transcriptional activity of peroxisome proliferator-activated receptors (PPARs), we used a yeast two-hybrid system with Gal4-PPARγ as bait to screen a mouse liver cDNA library and have identified steroid receptor coactivator-1 (SRC-1) as a PPAR transcriptional coactivator. We now report the isolation of a cDNA encoding a 165-kDa PPARγ-binding protein, designated PBP, which also serves as a coactivator. PBP also binds to PPARα, RARα, RXR, and TRβ1, and this binding is increased in the presence of specific ligands. Deletion of the last 12 amino acids from the carboxyl terminus of PPARγ results in the abolition of the interaction between PBP and PPARγ. PBP modestly increased the transcriptional activity of PPARγ, and a truncated form of PBP (amino acids 487–735) acted as a dominant-negative repressor, suggesting that PBP is a genuine coactivator for PPAR. In addition, PBP contains two LXXLL signature motifs considered necessary and sufficient for the binding of several coactivators to nuclear receptors. In situ hybridization and Northern analysis showed that PBP is expressed in many tissues of adult mice, including the germinal epithelium of testis, where it appeared most abundant, and during ontogeny, suggesting a possible role for this cofactor in cellular proliferation and differentiation. The PPAR isotypes appear to exhibit distinct patterns of tissue distribution and differ considerably in their ligand binding domains, suggesting that they possibly perform different functions in different cell types (7,13,14). Indeed, of the three isotypes, PPARα expression is relatively high in hepatocytes, enterocytes, and the proximal tubular epithelium of kidney when compared with other cell types (13,14), and evidence derived from mice with PPARα gene disruption indicates that this receptor is essential for the pleiotropic responses induced by peroxisome proliferators (15). Several structurally diverse peroxisome proliferators, specific fatty acids, and eicosanoids act as ligands for PPARα (4,16–19). Although the PPARδ isotype is ubiquitously expressed and binds the same ligands as PPARα (18,19), its functional significance remains largely elusive. PPARγ exists as two isoforms, PPARγ1 and PPARγ2, as a consequence of alternate promoter usage in the gene encoding this receptor (8,20,21). While PPARγ1 isoform expression is restricted to liver and a few other organs (8,14), the PPARγ2 isoform, which plays an important role in adipocyte differentiation, is predominantly expressed in adipose tissue (8,14). Forced expression of the PPARγ1 or PPARγ2 isoforms in fibroblasts has been shown to convert these cells into adipocytes, suggesting that PPARγ exerts a pivotal role in adipocyte development and lipid homeostasis (20,22). PPARγ is activated by the arachidonate metabolite 15-deoxy-Δ12,14-prostaglandin J2, which appears to function as a natural ligand for this receptor, as well as by the thiazolidinedione class of antidiabetic drugs (23,24). Like other members of the nuclear receptor superfamily, PPARs possess a central DNA-binding domain that recognizes PPAR-response elements (PPREs) in the promoter regions of their target genes (1,7,25). PPARs heterodimerize with RXR (the receptor for 9-cis-retinoic acid), and the transcriptional regulation of target genes by PPARs is achieved through the binding of these PPAR-RXR heterodimers to PPREs (3,25,26).
RXR also forms heterodimers with other members of the nuclear receptor superfamily, and these interactions appear to influence PPAR-regulated transcriptional activation because of the competition among various RXR heterodimerization partners for RXR (27,28). In addition, tissue and species responses to peroxisome proliferators and other natural PPAR ligands may depend upon pharmacokinetics and/or metabolism, the relative abundance of the PPAR isotype and its heterodimerization partner RXR, the structural features of PPREs and flanking sequences, and to some extent hormone levels and dietary composition (3,28,29). Transcriptional regulation by nuclear hormone receptors involves the participation of basal transcription factors, including TATA-binding protein and TFIIB, and other cofactors, known as nuclear transcriptional coactivators or corepressors, that bridge the association between nuclear receptors and the basal transcription machinery (30,31). The cofactors identified in recent years include CBP/p300 (32,33), SRC-1/NCoA-1 (34,35), TIF-2/GRIP-1/NCoA-2 (36,37), p/CIP (37), N-CoR (38), SUG1/TRIP1 (39), SMRT (40), and RIP140 (41), among others. Of these, CBP/p300, as well as SRC-1/NCoA-1, TIF-2/GRIP-1/NCoA-2, p/CIP, and RIP140, function as nuclear receptor coactivators, whereas N-CoR and SMRT function as transcriptional corepressors. In an effort to understand possible tissue- and species-specific differences in the transcriptional activity of PPAR isotypes, we initiated studies to identify cofactors that influence PPAR transcriptional activity. Using the PPARγ ligand binding domain as the bait in a yeast two-hybrid system to screen a mouse liver cDNA library, we previously identified SRC-1 as a PPAR coactivator (42). Here we report the cloning and characterization of PBP, a new PPAR-binding protein. In addition, we show that PBP also binds to TRβ1, RARα, and RXRα. Functional studies reveal that PBP modestly increases the transcriptional activity of PPARγ, and a truncated PBP (amino acids 487–735), which contains the putative PPAR binding region (amino acids 626–686), acts in a dominant-negative fashion, causing a decrease in the transcriptional activity of PPARγ. MATERIALS AND METHODS Yeast Two-Hybrid Screening-To isolate cDNAs encoding proteins that specifically interact with PPARγ, the yeast two-hybrid screening procedure was used as described elsewhere (42). Briefly, this screening system employed GAL4-PPARγ (expressing a GAL4 DNA-binding domain and mPPARγ ligand binding domain fusion protein), which was cotransformed into yeast with a second vector that expressed fusion proteins between the GAL4 activating domain and mouse liver cDNA. Of the 13 clones that exhibited positive interaction with PPARγ, two were identified previously as mSRC-1 (42). Of the remaining 11 clones, two revealed overlapping cDNA sequences. As these positive clones contained only partial cDNA sequences, we used RACE PCR to obtain the remaining 5′-end and 3′-end sequences. Briefly, for 5′-RACE PCR, the first amplification was performed using adapter primer 1 and the gene-specific primer (5′-CAATGAGAGACAGTGCTGGGGTGT-3′) for 20 cycles. Each cycle consisted of 20 s at 94 °C, 30 s at 60 °C, and 4 min at 68 °C; 1 µl of the PCR product was used as the template for the second amplification with adapter primer 2 and the nested gene-specific primer (5′-AGCCTGTATGGTTTCAGCCTTCCTC-3′) for 20 cycles, essentially using the same conditions as those used for the first amplification.
The PCR products were cloned into pGEM-T (Promega), and three independent clones were sequenced. For 3′-RACE PCR, the sequences of the gene-specific and nested gene-specific primers were 5′-CATCCTCTCAGAATCAACATGGCAG-3′ and 5′-CCAAAGGGAAATCTCCCAGTAGG-3′, respectively. These PCR amplifications were performed using mouse liver Marathon-Ready cDNA (CLONTECH) and rTth DNA polymerase. The full-length cDNA we cloned has been designated PBP to reflect its ability to bind PPARs. Plasmids-The yeast expression vector Gal4-PPARγ, GST-PPARγ, GST-RXRα, PCMV-PPARγ, PPRE-TK-LUC, and GAL-TK-LUC have been described elsewhere (42). The vectors for in vitro transcription and translation of RARα and TRβ1 were provided by Dr. L. Madison (Northwestern University Medical School). The construction of pSV-sport-PPARα for in vitro translation was described previously (13,43). GAL-PPARγΔ12 was constructed by inserting a PCR-amplified cDNA fragment encoding amino acids 174–463 of mPPARγ into the EcoRI/SalI site of PGBT9 (CLONTECH). This GAL-PPARγΔ12 construct does not include the last 12 amino acids at the carboxyl terminus of the mPPARγ ligand binding domain (amino acids 174–475). PCMV-PBP was generated by inserting the full-length coding region of PBP cDNA into the NotI site of PCMV-FLAG-2 (Eastman Kodak Co.). PCMX-PBP, for in vitro translation, was constructed by inserting the full-length PBP cDNA into the BamHI/SalI site of PCMX. To construct SK-PBT (truncated PBP consisting of residues 487–735), GST-PBP-T, and PCMV-PBP-T, the partial PBP cDNA fragment encoding amino acids 487–735 was released from the PGADH10-PBT clone isolated by the yeast two-hybrid system and subcloned into the BamHI site of pBLUESCRIPT SK, the NotI site of the modified PCMV-FLAG2, which contains the nuclear targeting signal peptide PKKKRKV, and the BamHI site of pGEX-5X-2. Quantitative β-Galactosidase Assays-For quantitative characterization of the interaction of PPARγ with PBP, appropriate plasmids were cotransformed into yeast HF7C, plated on selective media (containing the PPARγ ligand BRL49653 at a concentration of 10⁻⁵ M, or no ligand), and the plates were incubated for 4 days at 30 °C. For each assay, five colonies were suspended in 150 µl of buffer Z (60 mM Na2HPO4, 40 mM NaH2PO4, 10 mM KCl, 1 mM MgSO4, 35 mM 2-mercaptoethanol). The cell suspension (10 µl) was diluted in 190 µl of buffer Z, and the A600 was measured to estimate cell density. The remaining cell suspension was pelleted by centrifugation, and β-galactosidase activity was determined by a chemiluminescent reporter protocol (Galacto-Light kit, Tropix, Bedford, MA). To assay the binding of PPARα, RARα, and TRβ1 to PBP, a truncated fragment of PBP (amino acids 487–735), which contains the putative PPARγ-binding domain, was generated in E. coli using the expression plasmid GST-PBPT and allowed to interact with [35S]methionine-labeled PPARα, RARα, or TRβ1 produced in vitro using the rabbit reticulocyte translation system. The binding was assayed in the presence or absence of specific ligands: Wy-14643 (1 × 10⁻⁴ M) for PPARα, 9-cis-retinoic acid (1 × 10⁻⁶ M) for RARα, and T3 (1 × 10⁻⁶ M) for TRβ1. Bound proteins were washed three times with binding buffer (NETN), eluted, and subjected to SDS-PAGE as described above. Northern Blot Analysis and in Situ Hybridization-Fifty nanograms of PBP cDNA were random-primed and used as a probe to hybridize a mouse multiple tissue Northern blot (CLONTECH).
For in situ hybridization, mouse embryos from embryonic day (E)9.5 and E13.5 and adult mouse tissues were immersed in 4% paraformaldehyde for 16–20 h at 4 °C and processed as described previously (44). Both sense and antisense PBP riboprobes were generated with [α-35S]dUTP (Amersham Corp.) using T3 and T7 polymerases from pSKPBPT. Hybridization, washing, and examination of results were as described elsewhere (44). RESULTS Cloning of PBP-To identify novel factors involved in PPAR signaling, we employed a yeast two-hybrid assay that would detect proteins interacting with PPARγ. Of the 13 positive clones that interacted with PPARγ, two represented overlapping sequences of mSRC-1 (42). Cloning and characterization of the full-length mSRC-1 cDNA showed that SRC-1 acted as a PPARγ coactivator (42). In this report, we describe the characterization of two other overlapping clones, 759 and 2116 bp in size, respectively, that represented a second cDNA. Since these two clones did not contain the full-length cDNA, we used RACE PCR to obtain the remaining 5′-end and 3′-end sequences. The nucleotide sequence of the putative full-length composite cDNA is shown in Fig. 1. The cDNA, 5676 bp in length, contains a short 5′ (120 bp) and a long 3′ (996 bp) untranslated region and an open reading frame of 4680 bp, which encodes a peptide of 1560 amino acids with a predicted molecular mass of 165 kDa. This protein was designated PBP (PPAR-binding protein) to signify its ability to bind PPARs. The start of the coding sequence was defined by the first ATG downstream of an in-frame stop codon at position −66. The sequences (GTAAGATGAGCTCC) surrounding the ATG essentially conform to the consensus sequence for a translation initiation site (45). The two partial cDNAs isolated by the two-hybrid system represented amino acid residues 487–739 and 626–1297 of PBP, respectively. Comparison of the deduced primary structure of the PBP protein with sequences in the database revealed that the central domain of the full-length PBP cDNA shows 90% similarity to TRIP2, a 250-bp cDNA fragment isolated from a human HeLa cDNA library by the two-hybrid system using TRβ1 as bait (46). Information on the full-length cDNA sequence of TRIP-2 and its role in receptor signaling is not available. The TRIP-2 cDNA fragment corresponds to the 607–686 amino acid stretch of the full-length mouse PBP. PBP contains two LXXLL (where L is leucine and X is any amino acid) motifs, at amino acids 589–593 and 630–634, respectively. LXXLL has recently been identified as a signature motif in transcriptional coactivators that mediates binding to nuclear receptors (37,47). A third motif in reverse orientation (LLXXL) is present at amino acids 4–8 (Fig. 1). Interaction of PBP with PPARγ in Yeast-The influence of the PPARγ ligand BRL49653 on the interaction between PPARγ and PBP was examined in yeast. PGADH10-PBPT, which was isolated by the two-hybrid system and expressed in yeast as a fusion protein between the GAL4 activation domain and truncated PBP (amino acids 487–739), was cotransformed with GAL-PPARγ into yeast HF7C, and the β-galactosidase activity was measured as an indication of the relative strength of the interaction in the presence or absence of ligand. The interaction between PPARγ and PBP resulted in a 48-fold increase in β-galactosidase activity, and this interaction was enhanced approximately 2-fold in the presence of ligand (Fig. 2).
Thus, the ligand can moderately increase the affinity of the interaction. The extreme carboxyl-terminal region of the ligand binding domain, conserved among the nuclear receptors, appears essential for ligand-dependent transcriptional activation; deletion of this region reduces transcriptional activation but does not affect ligand binding activity. To ascertain whether this region is important for the binding of mPPARγ to PBP, we used GAL-PPARγΔ12, which lacks the last 12 amino acids at the carboxyl terminus of the PPARγ ligand binding domain, to cotransform with PGADH10-PBPT into yeast HF7C. As shown in Fig. 2, the presence of PBP did not increase the β-galactosidase activity over the control in the presence or absence of ligand, suggesting that the mutant PPARγ is unable to bind PBP. In Vitro Binding of PBP to Different Nuclear Receptors-To determine whether PBP directly interacts with PPARγ, in vitro binding was assayed with a bacterially generated fusion protein of GST with PPARγ (GST-PPARγ) and in vitro translated PBP. Matrix-bound GST-PPARγ, but not GST alone, retained radiolabeled PBP in the presence or absence of PPARγ ligand (Fig. 3A). The presence of BRL49653, a PPARγ ligand, in the assay mixture increased the physical interaction (Fig. 3). We also examined the ability of PBP to bind to the PPAR heterodimerization partner, RXR. As shown in Fig. 3A (lanes 3 and 5), a matrix-bound fusion protein of GST-RXRα, but not GST alone, retained [35S]methionine-labeled PBP, and the presence of the RXR ligand 9-cis-retinoic acid enhanced this interaction. To ascertain whether PBP interacts with some other nuclear receptors, we used a truncated PBP (amino acids 487–739, designated PBPT) that was capable of binding to PPARγ. A fusion protein of GST with PBPT was bacterially produced and used for binding assays with [35S]methionine-labeled in vitro translated PPARα, RARα, and TRβ1. All three receptors bind to PBP, and the interaction is stronger in the presence of the respective ligands (Fig. 3B). It appears that the PPARα and RARα interactions with PBP are more prominent in the presence of their ligands when compared with the ligand-influenced interaction of PBP with PPARγ, RXRα, or TRβ1. (Fig. 3 legend: The ligand for PPARγ was BRL49653 and the ligand for RXR was 9-cis-retinoic acid. The bound proteins were eluted, analyzed by SDS-PAGE, and autoradiographed. Note that PBP binds to both GST-PPARγ and GST-RXRα with or without the ligand, but the level of interaction is increased in the presence of ligand. No binding is seen to GST alone, which served as control. B, interaction of PBP with PPARα, RARα, and TRβ1 in vitro. [35S]Methionine-labeled PPARα, RARα, or TRβ1, generated by in vitro transcription and translation, were incubated with glutathione-Sepharose beads bound with either purified E. coli-expressed GST-PBPT or GST, in the presence (+) or absence (−) of ligand. The ligands used were Wy-14643 for PPARα, 9-cis-retinoic acid for RARα, and T3 for TRβ1. The bound proteins were eluted, analyzed using 10% SDS-polyacrylamide gel electrophoresis, and autoradiographed. In lanes referred to as input, 1 µl of translated receptor proteins was used as a control.) PBP Modestly Increases the Transcriptional Activity of PPARγ-To investigate the functional relevance of the binding of PBP to PPARγ, PBP and PPARγ were transiently coexpressed with the reporter luciferase gene PPRE-TK-LUC.
PBP did not affect the transcriptional activity of PPARγ in HeLa, Chinese hamster ovary, or CV-1 cell lines, either in the presence or absence of the ligand BRL49653 (data not shown). Nonetheless, in NIH 3T3 cells, PBP reproducibly increased the transcription of the luciferase gene by about 1.7-fold in the presence of the ligand BRL49653 (Fig. 4). Truncated PBP Decreases the Transcriptional Activity of PPARγ-Since the increased expression of PBP affected the function of PPARγ only moderately and in only one of the cell lines tested, it is possible that the concentration of PBP in these cells is not a limiting factor for the function of PPARγ. We then examined whether the short truncated PBP (amino acids 487–739), which was capable of binding PPARγ, would compete with the wild-type PBP and influence PPARγ activity. When the truncated PBP was cotransfected with PPARγ into CV-1 cells, the transcriptional activity of PPARγ markedly decreased in the presence of BRL49653 (Fig. 5), whereas no significant change was noted in the absence of the ligand (Fig. 4). Truncated PBP also exerted a similar inhibitory effect on the transcriptional activity of PPARα and TRβ1 (data not shown). Tissue Distribution of PBP Transcripts-The PBP mRNA, which is approximately 8 kb in length, is expressed in all tissues examined, with higher levels in liver, kidney, testis, and lung (Fig. 6). In the testis, a second transcript, approximately 2.7 kb in length, is present (Fig. 6); this may represent either an alternatively spliced form of PBP or a closely related, but different, isotype. The expression of PBP during ontogeny, as well as in adult tissues, was examined using RNA in situ hybridization. At E9.5, PBP message was strongly expressed primarily in the neural tissues throughout the embryo, branchial arch 1, and the primitive gut (Fig. 7A). At E13.5, PBP expression was ubiquitous, with high levels in the roof of the forebrain and midbrain and in the developing liver, gut, kidney, tongue, lower jaw, thymus, genital tubercle, and lung (Fig. 7B). Hybridization with the sense-strand probe for PBP gave no appreciable signal (Fig. 7C). In the adult mouse, expression of PBP was observed in liver, bronchial epithelium in the lung, intestinal mucosa, kidney cortex, thymic cortex, splenic follicles, and the seminiferous epithelium in testis (Fig. 7D). No signal was seen when the adult tissues were hybridized with the sense-strand probe of PBP (data not shown). DISCUSSION In a previous study, using the yeast two-hybrid system, we isolated and characterized mouse SRC-1 as a PPAR coactivator (42). The data presented in this report demonstrate that PPAR is capable of interacting with factor(s) other than SRC-1. Our data suggest that PBP, a 165-kDa protein that interacts with PPARγ, serves as a coactivator. We also show that PBP binds to PPARα, RXRα, RARα, and TRβ1 and that this binding is increased in the presence of their respective ligands. It is pertinent to note that the ligands for PPARα and RARα were effective in enhancing the interactions between PBP and PPARα and between PBP and RARα, respectively. On the other hand, there was only a modest increase in the interaction between PBP and the other receptors, namely TRβ1, PPARγ, and RXRα, in the presence of their respective ligands. The significance of the differential influence of ligands on the interaction remains to be explored. Further studies are also needed to determine whether PBP is capable of interacting with steroid receptors, such as estrogen, growth hormone, and progesterone receptors.
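As a minimal illustration of the LXXLL motif searches described under "Cloning of PBP" above, the following Python sketch scans a protein sequence for LXXLL and reversed LLXXL arrangements with a regular expression; the sequence shown is a made-up fragment, not the actual PBP sequence.

import re

seq = "MSLLEKLAAAPLHRLLDVLQKLLSALL"  # hypothetical one-letter sequence

def find_motifs(sequence, pattern):
    # A zero-width lookahead allows overlapping matches; positions are 1-based.
    return [(m.start() + 1, sequence[m.start():m.start() + 5])
            for m in re.finditer(rf"(?={pattern})", sequence)]

print("LXXLL:", find_motifs(seq, r"L..LL"))
print("LLXXL:", find_motifs(seq, r"LL..L"))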
(Fig. 6 legend: Northern blot analysis of PBP mRNA. A mouse multiple tissue Northern blot (CLONTECH) containing 2 µg of poly(A) RNA for each tissue was probed with 32P-labeled PBP cDNA. The PBP-hybridized blots were exposed to film at −80 °C with intensifier screens for 24 h. The transcript size of PBP is 8 kb in all tissues examined. An additional 2.7-kb transcript is present in testis.) Overexpression of PBP exerted only a modest influence on the transcriptional activity of PPARγ, implying that this protein does not appear to be a rate-limiting factor. This is in contrast to the coactivator activity of CBP/p300 and SRC-1, which either singly or together are able to markedly increase transcriptional activation by several nuclear receptors, including PPARγ (31,48–52). Nonetheless, we have shown that the truncated form of PBP (amino acids 487–735), which contains the putative receptor-binding domain, acts as a dominant-negative repressor, suggesting that PBP is a genuine coactivator for PPAR-mediated gene expression. Northern blot analysis reveals that the gene encoding PBP is widely expressed, but at different levels in various tissues of the adult, the most prominent being the testis. The distribution of PBP in adult tissues, in general, parallels the expression of PPARs (14). The abundance of expression in the testis suggests a possible role for PBP in cellular division and differentiation. In situ hybridization data on the developing mouse embryo revealed widespread expression of PBP, suggesting that this gene may play an important role in development and differentiation, which is consistent with the function of coactivators. PBP is detectable as early as E9.5, with strong expression in neural epithelium, primitive gut, and branchial arch, suggesting its possible biological involvement in the genesis of their derivatives. The in situ hybridization data of the E13.5-day embryo, which reveal expression in tongue, lower jaw, and other organs, further support this assumption. The finding that PBP failed to interact with PPARγ lacking the last 12 amino acids at the extreme carboxyl terminus of the ligand binding domain is of interest, in that this domain may be critical for the interaction of coactivators. For example, RIP140 and TIF1 are incapable of binding to nuclear receptors that lack this domain (36,41). This region is important for the transcriptional activation function of the nuclear receptors but dispensable for hormone binding and heterodimerization (27). Two recently published studies point out that the sequence motif LXXLL is necessary and sufficient for the binding of several cofactors to nuclear receptors (37,47). PBP contains two copies of this motif, located at residues 589–593 and 630–634, respectively. Based on the sequence data of the overlapping regions of the two partial PBP cDNA clones isolated using PPAR as bait, we suggest that the single LXXLL motif at residues 630–634 is sufficient for the binding of PBP to PPAR and to the other receptors analyzed for binding in this study. Nonetheless, detailed mutational analysis of this motif, as well as detailed mapping of binding sites, is needed to ascertain other regions in the PBP protein that might play a role in protein-protein interactions. There is increasing evidence for the participation of multiple molecular partners in determining the transcriptional outcome of nuclear receptors in response to ligands (30,31,34,37).
We have shown that both SRC-1 and PBP act as coactivators for PPAR target gene expression. We also ascertained that SRC-1 and PBP do not interact with each other.² Since SRC-1 interacts with CBP/p300 to augment transcription by nuclear receptors (31), it remains to be established whether PBP is capable of binding or interacting with CBP/p300. As the transcriptional activity of the nuclear receptors appears to vary depending on the cell type and the nature of the response elements in the target gene promoter, there is a need to fully dissect the role and availability of different combinations of cofactors and corepressors in cell-specific target gene expression. In an earlier study, we demonstrated that deoxyuridine triphosphatase serves as a corepressor of PPAR target gene transcription (43), suggesting a role for both coactivators and corepressors in PPAR-mediated transcription. In summary, the availability of different cofactors to a specific gene promoter may determine the specificity of gene expression.
5,464.2
1997-10-10T00:00:00.000
[ "Biology", "Chemistry" ]
The Effect of Visible Light on Cell Envelope Subproteome during Vibrio harveyi Survival at 20 °C in Seawater A number of Vibrio spp. belong to the well-studied model organisms used to understand the strategies developed by marine bacteria to cope with adverse conditions (starvation, suboptimal temperature, solar radiation, etc.) in their natural environments. Temperature and nutrient availability are considered to be the key factors that influence Vibrio harveyi physiology, morphology, and persistence in aquatic systems. In contrast to the well-studied effects of temperature and starvation on Vibrio survival, little is known about the impact of visible light, which is able to cause photooxidative stress. Here we employ V. harveyi ATCC 14126T as a model organism to analyze and compare the survival patterns and changes in the protein composition of its cell envelope during the long-term permanence of this bacterium in a seawater microcosm at 20 °C in the presence and absence of illumination with visible light. We found that exposure of V. harveyi to visible light reduces cell culturability, likely inducing entry into the Viable but Non-Culturable (VBNC) state, whereas populations maintained in darkness remained culturable for at least 21 days. Despite these differences, the starved cells in both populations underwent morphological changes, reducing their size. Moreover, further proteomic analysis revealed a number of changes in the composition of the cell envelope potentially accountable for the different adaptation patterns manifested in the absence and presence of visible light. Introduction Vibrio species are frequently used as model organisms to study the strategies developed by marine bacteria to cope with adverse and changing environments. A large number of studies have demonstrated that the survival of vibrios in the natural environment is largely determined by temperature, and some authors [1,2] have indicated that these bacteria represent an important and tangible barometer sensing the impact of climate change in marine ecosystems. These effects are more profound during the summer seasons, which are also characterized by more intensive solar radiation. Several studies have reported the complex responses of aquatic bacteria exposed to photosynthetically active radiation (PAR; visible light, 400 to 700 nm). While it positively affects the physiology of autochthonous marine bacteria, it can also cause photooxidative stress. Vibrio harveyi Strain and Inocula Preparation A V. harveyi strain ATCC 14126 T was used throughout this study. For inocula preparation, cells were cultured aerobically in marine broth (MB, PanReac AppliChem, Barcelona, Spain) at 26 °C with shaking (120 rpm) until they reached the stationary phase. The cells were harvested by centrifugation (4000 × g, 4 °C, 20 min), washed three times with sterile saline solution (1.94% NaCl, w/v), and then suspended in sterile saline solution. Survival Experiments All the glass flasks used for handling V. harveyi cultures were cleaned beforehand with H2SO4 (96%, v/v), rinsed with deionized water, and kept at 250 °C for 24 h to remove residual organic matter. Erlenmeyer flasks containing 2 L of filtered and autoclaved seawater, collected from the Port of Armintza in the north of Spain (43°26′24″ N and 2°54′24″ W), were inoculated with stationary-phase V. harveyi cells to reach a density of 10⁸ cells mL⁻¹ and incubated at 20 °C with shaking (120 rpm), in darkness or exposed to photosynthetically active radiation (PAR), for up to 21 days.
Illumination was provided by five Sylvania Standard F25W/30 lamps emitting in the 400 to 700 nm range. The populations received a light intensity of 15.93 W m⁻². Periodically, samples were collected in triplicate for bacterial counts, determination of cell size, and extraction of membrane proteins. All the experiments were performed three times. The values presented in the datasets are the means of three experiments, and the standard deviations between replicates were less than 12%. The differences between the means were assessed by one-way analysis of variance; probabilities less than or equal to 0.05 were considered significant. Cell Counting and Estimation of Bacterial Size The total number of bacteria (TNB) was determined according to the procedure described by Hobbie et al. [31]. Viable bacteria, estimated as bacteria with intact cytoplasmic membranes (MEMB+), were counted with the Live/Dead BacLight™ kit (Thermo Fisher Scientific Inc., Madrid, Spain) as described by Joux et al. [32]. The number of culturable bacteria (CFU) was determined by spreading cell suspensions on marine agar (MA, PanReac AppliChem, Barcelona, Spain), followed by incubation for 24 h at 26 °C and colony counting. The length variations of V. harveyi cells during their survival at 20 °C were estimated via image analysis of epifluorescence preparations [33] using an image analysis system that included a high-resolution video camera (Hamamatsu 2400, Hamamatsu Photonics, Hamamatsu City, Japan). Digitized images of microscopic fields were analyzed with Scion Image 1.62a software. In total, 200 cells were measured in each sample. The mean value (x) and the corresponding standard deviation (SD), which defined the size of the cells in the initial inoculum, were used to establish three arbitrary ranges of cell size (≤ x−SD; > x−SD and ≤ x+SD; > x+SD), subsequently used to present the time-dependent changes of cell size in V. harveyi populations [5]. Isolation of Membrane Proteins by Sodium Carbonate Extraction To extract membrane proteins, cells were harvested by centrifugation (8000 × g, 40 min, 4 °C) and the pellet obtained was suspended in 10 mL of Tris-buffered saline (TBS, pH 8). The cells were collected by centrifugation again (8000 × g, 20 min, 4 °C) and the cell pellet was briefly washed with TBS, which was decanted; the cell pellet was then suspended in 10 mL of TBS, followed by the addition of 250 µL of Protease Inhibitor Cocktail (Sigma-Aldrich, Madrid, Spain) per g of cell pellet and 90 µL of 2 mM phenylmethylsulfonyl fluoride (PMSF, PanReac AppliChem, Barcelona, Spain); the suspensions were frozen in liquid nitrogen and stored at −80 °C. Cells were disrupted by intermittent sonication (SONICS VibraCell™ VCX130 Ultrasonic Cell Disruptor, Sonics & Materials Inc., Newtown, CT, USA) using a 6-mm-diameter probe (65% amplitude setting, 30 s on/45 s off cycles, 3 min in total). Unbroken cells and cellular debris were removed by centrifugation at 6000 × g for 20 min at 4 °C. The supernatant fractions were stored on ice while the pellets were suspended in 10 mL of TBS and sonicated under the same conditions as specified above. This procedure was repeated at least three times. The supernatants obtained from the same samples were combined, diluted (1:1) with 0.2 M sodium carbonate solution, incubated on ice for 1 h with gentle shaking, and ultracentrifuged at 115,000 × g for 1 h at 4 °C. The supernatants were discarded and the protein pellets containing membrane proteins were suspended in 1 mL of TBS.
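The three arbitrary size classes described under "Cell Counting and Estimation of Bacterial Size" above amount to a short computation: the class boundaries are set from the mean and SD of the initial inoculum, and every later sample is binned against them. A minimal Python sketch, with made-up lengths in µm:

from statistics import mean, stdev

inoculum = [1.6, 1.9, 2.1, 2.0, 1.8, 2.2, 1.9]   # lengths (µm) at time P0
x, sd = mean(inoculum), stdev(inoculum)
lo, hi = x - sd, x + sd                          # class boundaries

def size_class(length):
    if length <= lo:
        return "small (<= x-SD)"
    return "medium (x-SD..x+SD)" if length <= hi else "large (> x+SD)"

sample = [0.9, 1.0, 1.5, 1.8, 2.3]               # lengths at a later time
counts = {}
for length in sample:
    label = size_class(length)
    counts[label] = counts.get(label, 0) + 1
for label, n in counts.items():
    print(f"{label}: {100 * n / len(sample):.0f}%")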
Protein Identification and Quantification Analysis of the protein samples containing membrane proteins was performed in the Proteomics Core Facility-SGIKER at the University of the Basque Country, using the protocol previously described by González-Fernández et al. [34]. Briefly, 50 µg of total protein were precipitated using a 2-D Clean-Up kit (GE Healthcare, Bilbao, Spain) according to the manufacturer's instructions. The pellet was suspended in 0.2% RapiGest solution (Waters Corporation, Cerdayola del Vallès, Spain), heated (85 °C, 15 min), reduced with DL-dithiothreitol (DTT, 5 mM), alkylated with iodoacetamide (15 mM), and digested with trypsin (Roche Diagnostics, Leganés, Spain; 2 µg per sample) overnight at 37 °C. RapiGest was inactivated by the addition of HCl to a final concentration of 0.5% and incubation at 37 °C for 40 min. Samples were centrifuged at 16,000 × g for 10 min, the supernatant was collected, and MassPREP Enolase Digestion Standard (Waters Corporation, Cerdayola del Vallès, Spain) was added as an internal standard for absolute protein quantification. Data-independent acquisition analyses were performed on a NanoAcquity UPLC system coupled to a SYNAPT HDMS (Waters Corporation, Cerdayola del Vallès, Spain). A final amount of 0.5 µg (containing tryptic peptides and 100 fmol of MassPREP Enolase Digestion Standard) was loaded onto a Symmetry 300 C18, 180 µm × 20 mm precolumn (Waters Corporation). The precolumn was connected to a BEH130 C18 column (75 µm × 200 mm, 1.7 µm; Waters Corporation, Cerdayola del Vallès, Spain), and peptides were eluted with a 120 min linear gradient (3 to 40%) of acetonitrile (v/v) followed by a 15 min linear gradient (40 to 60%) of acetonitrile (v/v). Mass spectra (MS) were acquired using the data-independent acquisition mode (MSE) described by Silva et al. [35]. Briefly, 1 s alternate MS acquisitions were performed at low (6 eV) and high (12-35 eV ramping) collision energies, and the radio frequency (RF) offset was adjusted so that the MS data were acquired from m/z 350 to 1990. [Glu1]-fibrinopeptide B (Sigma-Aldrich) at a concentration of 100 fmol/µL was sprayed through the NanoLockSpray source and sampled every 30 s. The obtained spectra were processed with ProteinLynx Global Server v2.4 Build RC7 (Waters Corporation) using the doubly protonated monoisotopic ion of [Glu1]-fibrinopeptide B for mass correction. Protein identification was carried out using the embedded database search algorithm of the program [36] and a Vibrio harveyi UniProt Knowledgebase (UniProtKB) database (version 2020_06, 4950 sequences). For protein identification, the following parameters were adopted: carbamidomethylation of C as a fixed modification; N-terminal acetylation, N and Q deamidation, and M oxidation as variable modifications; 1 missed cleavage; and default automatic precursor and fragment error tolerances. A maximum false-positive rate of 5% was allowed. Absolute protein quantification based on the peak area intensity of peptide precursors was automatically calculated by ProteinLynx Global Server using Enolase peptides as an internal standard [37]. A total of 648 proteins were confirmed by finding at least three protein-derived peptides in the tryptic digest; 327 proteins were detected in at least two biological replicates and were subsequently used for absolute quantification. Individual absolute quantification values were normalized versus the total protein amount present in the sample.
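A minimal sketch of the detection and normalization rules just described: a protein is retained if it is supported by at least three peptides and detected in at least two biological replicates, and each quantification value is then normalized to the total protein amount in its sample. The data structures and numbers below are assumptions for illustration.

runs = {  # hypothetical per-replicate results: protein -> (peptides, fmol)
    "rep1": {"OmpU": (5, 120.0), "FtsH": (4, 30.0), "YhcB": (2, 8.0)},
    "rep2": {"OmpU": (6, 110.0), "FtsH": (3, 25.0)},
}

detections = {}
for rep, prots in runs.items():
    for name, (n_pep, amount) in prots.items():
        if n_pep >= 3:                       # >= 3 peptides in the digest
            detections.setdefault(name, []).append(rep)

quantified = {p for p, reps in detections.items() if len(reps) >= 2}

for rep, prots in runs.items():
    total = sum(amount for _, amount in prots.values())
    for name in sorted(quantified):
        if name in prots:
            norm = prots[name][1] / total    # fraction of total protein
            print(f"{rep} {name}: {norm:.3f}")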
Proteins with a significant (p < 0.05, t-test) increase (>1.5-fold) or decrease (<0.5-fold) in their relative abundance with respect to the initial time (P0) were considered to be differentially affected by the survival conditions. The UniProt (http://www.uniprot.org/) and KEGG: Kyoto Encyclopedia of Genes and Genomes (http://www.genome.jp/kegg/) databases were used to verify the name and possible function of the proteins (accessed on 1 November 2020). The subcellular localization of many polypeptides annotated as membrane-associated proteins with known functions was further scrutinized by searching for the cognate membrane-binding domains with the PSORTb 3.0 program [38]. Analysis of V. harveyi Persistence at 20 °C The variations in integrity, viability, culturability, and cell size distribution of V. harveyi populations maintained at 20 °C under nutrient scarcity (i.e., incubation in seawater microcosms) are shown in Figure 1. The numbers of total (TNB) and viable (MEMB+) bacteria remained practically unchanged throughout the experimentation time regardless of PAR irradiation. However, the number of culturable cells (CFU) declined approximately 0.53 and 1.83 log units after 21 d of incubation in the absence and presence of illumination, respectively (Figure 1A,B). The significant loss of culturability in the population exposed to PAR, along with the preservation of cell viability, indicated that the major part of the population (98.51%) had likely acquired the VBNC phenotype by the end of the incubation time. The size of the starved V. harveyi cells varied along the survival process; in fact, the cells considerably reduced their length during incubation, from a mean length of 1.93 µm at the beginning of the experiments to 0.97 or 0.92 µm after 21 d of incubation in the absence or presence of illumination, respectively. These phenotypical changes led to the appearance of cells with the coccoid-like morphology associated with the VBNC state in Vibrio species [39]. The length reduction was more profound when the experiments were carried out under illumination. The fraction of shorter cells (length ≤ 0.91 µm) increased nearly 3.6 times during exposure to visible light and about 2.7 times in darkness when compared to the initial values (Figure 1C,D), ultimately reaching 57.5 and 43.5%, respectively. Moreover, cells with a length exceeding 1.74 µm were not found after 21 days of incubation under either condition.
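The culturability figures above follow from simple arithmetic; the sketch below reproduces the reported ~1.83 log decline and ~98.5% putative VBNC fraction from hypothetical counts chosen to match.

import math

cfu_0, cfu_21 = 1.0e8, 1.5e6   # CFU mL-1 at days 0 and 21 (hypothetical, PAR)
memb_21 = 1.0e8                # viable (membrane-intact) count at day 21

log_decline = math.log10(cfu_0 / cfu_21)
vbnc_fraction = 100.0 * (memb_21 - cfu_21) / memb_21

print(f"CFU decline: {log_decline:.2f} log units")
print(f"putative VBNC fraction: {vbnc_fraction:.2f}%")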
Changes of Membrane Subproteome during Permanence at 20 °C From survival assays carried out in darkness or upon exposure to visible light, samples were collected at different incubation times: immediately after inoculation (P0), 6 days (P1), and 21 days (P2). Proteins detected in at least two biological replicates, whose biological functions were previously defined or could be inferred by homology, were selected for further analysis. The dataset of proteins contained a high number of predicted cytosolic proteins (35.5%). Some of them belonged to cytosolic subunits of membrane protein complexes or were annotated as proteins that can transiently be associated with the membrane [5]. These properties may explain the presence of these "cytosolic" proteins in the membrane fraction. After determining the composition of the membrane subproteomes, the identified proteins were sorted according to their biological functions and grouped into the following categories of proteins involved in: (i) maintenance of cell structure, (ii) transport, (iii) bioenergetics, (iv) signal transduction, (v) protein synthesis, degradation and turnover, or (vi) miscellaneous functions.
The proteins that did not show any significant variation in abundance (i.e., they were not upregulated [>1.5-fold] or downregulated [<0.5-fold]) during the survival experiments are listed in Table 1. This group includes proteins involved in (i) maintaining the structure of the cell envelope (e.g., lipoproteins [D0XEL2_VIBH1, D0XD95_VIBH1], components of the β-barrel assembly machinery [BAM] complex, membrane protein insertase YidC and the rod shape-determining protein MreB); (ii) transmembrane transport (ion transporters such as the OmpU or D0XAK6_VIBH1 porins, vitamin B12 transporter BtuB, maltose operon periplasmic protein [MalM] and others), protein translocation (YajC, SecA, and SecD subunits) and secretion (Type II secretion system core protein G, a TolC family protein and multidrug resistance protein MdtA); (iii) bioenergetics (namely, different subunits of ATP synthase, cytochrome b and subunits of the Na(+)-translocating NADH-quinone reductase); (v) protein biogenesis (HflC and HflK proteins); and (vi) translation (elongation factors EF-Tu and EF-G).

Table 1. Membrane proteins of V. harveyi ATCC 14126 T whose level did not show a significant change (>1.5 or <0.5 fold change) after 6 days (P1) and 21 days (P2) of starvation in seawater at 20 °C with respect to initial values (P0).

In addition to the proteins listed in Table 1, there was a group of proteins whose level was affected by the experimental conditions (Table 2). In other words, we found that the level of numerous proteins was altered after 21 d of starvation at 20 °C both in the absence and presence of PAR irradiation. Namely, some components of phosphotransferase systems (PTS) (D0X8N6_VIBH1, D0XD90_VIBH1), TatA protein translocase, cytochrome c5, the so-called methyl-accepting chemotaxis proteins (D0XEK4_VIBH1, D0XEY1_VIBH1, D0X9J1_VIBH1, D0XEC5_VIBH1, D0XHW4_VIBH1, D0XGG1_VIBH1, D0X5R4_VIBH1, D0X9F5_VIBH1, and D0XCQ6_VIBH1) and flagellin became undetectable (Table 2), whereas the level of YhcB declined markedly after 21 d. In contrast, only a few proteins (e.g., the mechanosensitive ion channel MscS, bacterioferritin and catalase-peroxidase, see Table 2) undetectable in the initial inoculates (time P0) were detected after long-term starvation at 20 °C under both conditions. These proteins were detected earlier (6 d) in populations maintained under illumination.

Table 2. Changes in the level of membrane proteins of V. harveyi ATCC 14126 T subjected to starvation in seawater at 20 °C. The data are presented for the initial population (time P0) and populations analyzed after starvation of V. harveyi for 6 (P1) and 21 (P2) days in the absence (−) and presence (+) of illumination.

In addition, there was a group of membrane proteins differentially affected by incubation in the presence vs. absence of illumination. For instance, several proteins related to bioenergetics (e.g., cytochrome c oxidase subunit CcoO, cytochrome c4, the ubiquinol-cytochrome c reductase iron-sulfur subunit and ubiquinol-cytochrome c reductase cytochrome c1) and others (penicillin-binding protein activator LpoA and proteases, D0XAK5_VIBH1 and an ATP-dependent Zn protease) were downregulated only in populations that were exposed to PAR.
Vice versa, several proteins, in particular those related to the transport of phosphate and glucose, one isoform of ATP synthase subunit beta, OmpK, the OmpA-like protein D0X6J9_VIBH1, and the ATP-dependent zinc metalloprotease FtsH, were downregulated (or undetectable) only in the populations maintained in darkness. The upregulation of several proteins was likewise light-dependent. Namely, while some proteins (ATP synthase subunit c, subunit I of cytochrome d ubiquinol oxidase and NAD(P) transhydrogenase subunit beta) were upregulated in darkness, the same polypeptides were undetectable in the populations exposed to PAR. Similarly, illumination of starved V. harveyi cells for 21 days led to an increase in the level of some structural and transport-related proteins, including the outer membrane protein Slp, the YcfL peptidoglycan-associated lipoprotein D0XFJ5_VIBH1, general secretion pathway protein D, outer membrane protein TolC and the agglutination protein D0XI94_VIBH1.

Discussion
The lifestyle and persistence of microorganisms in natural aquatic systems depend greatly on diverse abiotic and biotic stress factors (e.g., suboptimal salinity and pH, temperature up- and downshifts, nutrient availability, solar radiation, predation, etc.). A number of studies aimed at addressing the effect of these environmental factors have previously employed V. harveyi as a model organism. Although the individual impact of some stress factors on V. harveyi is well characterized [40,41], little is known about their joint action. Here we studied the combined effects of nutrient limitation, temperature, and visible light on V. harveyi adaptation in seawater microcosms. A typical and well-documented survival response of several Vibrio species under nutrient limitation is the acquisition of the VBNC state [4,8,9,42,43], which is more frequently observed at low temperatures than at temperatures ranging from 13 to 22 °C [9,26,44,45]. Consistently, we found that, unlike in the experiments carried out at 4 °C [5], V. harveyi strain ATCC 14126 T populations did not acquire the VBNC state after at least three weeks of incubation in seawater (nutrient scarcity, darkness) at 20 °C. Thus, our data indicate that temperature, which plays a key role in the adaptation of V. parahaemolyticus [46] and V. vulnificus [10,47], also determines V. harveyi survival responses. Moreover, the persistence of V. harveyi populations was accompanied by a progressive reduction in cell size. This observation supports the idea that the initial response of V. harveyi to starvation leads to morphological changes rather than an immediate transition to the VBNC state [5,26]. In addition to nutrient availability and temperature, exposure to visible light is another important stress factor known for its contribution to Vibrio growth and survival [27][28][29]. To compare the long-term adaptation of V. harveyi in the absence and presence of illumination, experiments were also carried out upon exposure of V. harveyi populations to visible light. Our data demonstrate that illumination with visible light not only accelerates the bacterial size reduction but also decreases cell culturability, thus suggesting entry of the cells into the VBNC state.
However, the effect of visible light on the transition to the VBNC state might be less profound than in other bacteria, in which the acquisition of this phenotype can occur within a few days (e.g., in Escherichia coli [48,49] or Enterococcus faecalis [48,50]) or even hours (e.g., in Pseudomonas aeruginosa [51]). As exposure to visible light can provoke oxidative stress, the prolonged persistence of Vibrio spp. populations could be due to their ability to activate protective mechanisms mitigating the damaging effects of light. In fact, Rees et al. [52] speculated on the protective role of bacterial luminescence against oxidative stress. Consistently, other authors have indicated that bacterial bioluminescence may play an important role in the detoxification of reactive oxygen species [53,54] and in stimulating DNA repair [55,56]. Another important response to oxidative stress involves catalase overproduction, which has been described for different Vibrio species subjected to abiotic stress [57][58][59]. In addition, recent studies described the overexpression of other enzymes conferring protection against the toxic effects of H₂O₂ and reactive oxygen species during V. harveyi permanence in seawater at different temperatures [24,26]. Therefore, our observation that catalase-peroxidase becomes detectable earlier (i.e., at time P1) in populations exposed to visible light than in those lacking illumination could indicate a role of this enzyme in sustaining V. harveyi resistance to visible light.

V. harveyi survival is determined by the capacity of this bacterium to respond to a changing environment by reprogramming gene expression, thus affecting the entire proteome. Owing to the essential role of the cell envelope in bacterial adaptation to stress, the second part of this study focused on determining the stress-related changes in the V. harveyi membrane subproteome and discussing their possible contribution to cell resistance to stress. Analysis of the cell envelope subproteome revealed that the level of some membrane proteins with key roles in the maintenance of cellular structure, transport, and bioenergetic processes remained unchanged up to 21 d (Table 1). Moreover, some of those proteins (i.e., OmpW, maltose operon periplasmic protein, vitamin B12 transporter BtuB, several ATP synthase subunits, Na(+)-translocating NADH-quinone reductase subunits and the HflK protein) were also maintained in VBNC populations induced during exposure to cold temperature [5]. Therefore, these proteins appear to constitute a pool of proteins inherently present in viable (culturable or nonculturable) cells under starvation. Additionally, the level of many structural proteins detected in this study was preserved or even increased under stress. They include lipoproteins, BAM factors and other outer membrane proteins (e.g., the OmpW porin [60]) apparently essential for maintaining the integrity of the outer cell membrane throughout the survival process. MreB (Table 1) is another protein whose level remained nearly the same during the survival process. This protein plays an important role in cell shape maintenance and division [61]. While sequence analysis predicts that it is a cytoplasmic protein, there is some evidence suggesting the transient association of this protein with the membrane [62]. Moreover, Chiu et al. [63] demonstrated that the MreB protein could be detected close to the membrane in starved cells.
Although previous work had shown that MreB became undetectable in the membrane fraction during the first 12 h of permanence at 4 °C [5], we did not see any significant time-dependent changes in MreB levels in populations incubated at 20 °C in the presence or absence of illumination (present study). These results suggest that MreB association with the membrane (and therefore its cellular localization) in starved cells is likely temperature-dependent. The permanent presence of other membrane proteins (Table 1) is likely linked to their essential functions (e.g., protein transport) during starvation. Indeed, our data revealed the constant presence of proteins (D0X124_VIBH1, D0X520_VIBH1) that are components of the Type I and II secretion systems. Moreover, the concentration of some components (D0X523_VIBH1, D0X7A6_VIBH1, D0XI94_VIBH1) even increased in the illuminated populations after 21 days. Concerning Sec-mediated transport in V. harveyi, the abundance of the SecA, SecD, and YajC proteins also remained unaltered throughout survival, while TatA was already undetectable after 6 days of starvation, thereby suggesting that nutrient limitation favored Tat-independent secretion. Unlike the mechanisms affecting protein secretion in V. harveyi, the Brucella suis response to starvation appears to limit Sec-dependent transport, possibly to reduce overall metabolic activity and energy consumption [64]. Similarly, Campylobacter jejuni persistence in tap water at different temperatures has been reported to favor Tat-dependent (rather than Sec-dependent) transport [65]. Elongation factors are essential bacterial proteins, and EF-Tu has been described as a cytoplasmic chaperone [42,66] involved in protein synthesis and other cellular processes [67,68]. The higher abundance of EF-Tu in the subproteomes of stressed cells [5,18] and its upregulation upon exposure to stress [5,69] could imply the involvement of EF-Tu in cell adaptation. Nevertheless, the lack of significant variation in the level of these proteins in the populations examined in the present study does not support a general role of these protein factors in bacterial adaptation to stress.

Besides the continuous presence of many proteins apparently essential for both normal growth and cell survival (Table 1), there was a group of polypeptides differentially affected by starvation (Table 2). Variations in their levels could be attributable to stress adaptation induced by nutrient deprivation and subsequent energy and carbon depletion. In particular, we found that multiple methyl-accepting chemotaxis proteins became rapidly undetectable in V. harveyi populations maintained in seawater at 20 °C regardless of exposure to visible light, thus mimicking the V. harveyi response to starvation at 4 °C [5]. Similarly, the level of flagellin declined after 21 days. These results agree with previous observations obtained with starved Vibrio S14 cells by Malmcrona-Friberg et al. [70]. They found that most cells lost motility under starvation and suggested that the chemosensory system could be shut down within the first 24 h of starvation. Likewise, our findings are also consistent with the results of Stretton et al. [71], who showed the detachment of the flagellum during the first days of starvation and argued that, due to the high energy cost of flagellar synthesis, assembly, and function, the transition to a non-motile (but metabolically active) state would be more beneficial for cells under starvation.
In a similar study, Chen and Chen [72] likewise demonstrated that V. vulnificus motility diminished over the time of permanence under nutrient-scarce conditions. Moreover, some authors [73][74][75] revealed that starvation not only leads to the loss of motility but also increases cell adhesion. Therefore, the loss of chemotactic activity and motility observed in our study for V. harveyi populations under starvation could be an important strategy enabling cells to save energy and ensure survival under stress. Dissolved iron concentrations in open ocean surface waters typically stay below 0.2 nM [76], thus establishing iron-limiting conditions for marine organisms. Several authors [77,78] have indicated that the control of iron homeostasis and responses to oxidative stress are interdependent. In other words, iron is not only an essential element for bacterial growth but also a toxic metal able to promote the formation of reactive oxygen species (ROS), which cause oxidative stress and consequently elevate the level of catalase-peroxidase. In previous work, the success of V. harveyi permanence in seawater microcosms at 4 °C was linked to iron homeostasis involving bacterioferritin during entry into the VBNC state [5]. Regarding the populations exposed to starvation at 20 °C (present study), bacterioferritin, which was undetectable at the beginning of the experiments, became expressed even though no loss of culturability was detected. A similar expression pattern was observed for the mechanosensitive ion channel protein MscS. It seems likely that, in addition to its role in coping with osmotic stress, this protein is also involved in cell wall repair to protect against the sustained stress [79] stimulated by visible light.

Taken together, our results demonstrate that V. harveyi adaptation to starvation at 20 °C induces morphological changes leading to cell size reduction and acquisition of the coccoid-like morphology, apparently triggering the acquisition of the VBNC state by cells exposed to visible light. This finding suggests that exposure to visible light, along with variations in temperature, salinity, and other factors, might promote V. harveyi entry into the VBNC state in aquatic systems. Moreover, several studies have proposed that VBNC cells can potentially preserve their capacity to elicit infections [6,39,80]. Further analysis of the cell envelope subproteome revealed that a number of membrane proteins playing key roles in the maintenance of major cell envelope functions constitute a pool of proteins continuously present in viable (culturable and nonculturable) V. harveyi ATCC 14126 T populations exposed to stress. The presence of these proteins enables the cells to sustain key membrane functions, such as selective permeability and transport. In contrast, nutrient depletion leads to the loss of proteins involved in cell motility and chemotaxis. In addition, starvation could potentially affect iron homeostasis, which in the stressed populations largely depends on bacterioferritin. Likewise, as exposure to visible light potentially increases oxidative stress, catalase-peroxidase was continuously present in the long-starved cells. Taken together, our proteomic data indicate that adjustments in the cell envelope subproteome were more profound in populations exposed to visible light.

Data Availability Statement: The data presented in this study are available on request from the corresponding author.
7,932.4
2021-03-01T00:00:00.000
[ "Environmental Science", "Biology" ]
Asymmetry Optimization for 10 THz OPC Transmission over the C + L Bands Using Distributed Raman Amplification
An optimized design for a broadband Raman optical amplifier in standard single-mode fiber covering the C and L bands is presented, to be used in combination with wideband optical phase conjugation (OPC) nonlinearity compensation. The use of two Raman pumps and fiber Bragg grating reflectors at different wavelengths for the transmitted (C band) and conjugated (L band) WDM channels is proposed to extend the bandwidth beyond the limits imposed by single-wavelength pumping, for a total of 10 THz. Optimization of the pump and reflector wavelengths, as well as the pump powers, allows us to achieve low asymmetry across the whole transmission band for optimal nonlinearity compensation. System performance is simulated to estimate OSNR, gain flatness and nonlinear Kerr distortion.

Introduction
Multiple solutions have been proposed over the past decades to address the critical capacity cap imposed by Kerr nonlinearity in optical fiber communications [1]. Although digital compensation techniques have been successfully applied to the mitigation of nonlinear effects, they are inextricably associated with an increase in computational cost and energy consumption, and thus the possibility of finding a solution to the problem that works at the physical layer is a very attractive one. Using optical phase conjugation (OPC) in the middle of the optical fiber link is a particularly effective way to combat the nonlinearities [1][2][3][4][5][6][7][8][9][10][11][12][13][14], and allowed for the first demonstration of optical communications above Shannon's limit [10]. The technique is, however, not free from technical challenges in terms of implementation. To maximize its efficiency when applying it to multi-channel nonlinearity compensation, a few approaches need to be implemented on the fiber link. For example, in [15,16], fiber nonlinearity compensation using a mid-link OPC can be achieved using a symmetrical chromatic dispersion slope or effective management of the dispersion map before and after the OPC. Alternatively, in [17][18][19][20][21][22][23][24], with a purposefully designed distributed Raman amplification scheme, a symmetrical signal power profile along the fiber before and after the OPC was demonstrated to maximize the effectiveness of nonlinearity compensation with a mid-link OPC. In [24], we numerically optimized the in-span signal power asymmetry for three different advanced Raman amplification schemes using a single channel in the middle of the C band at 1545 nm and identified that second-order distributed Raman amplification based on a single-side FBG random distributed feedback laser is the most convenient design to achieve the best signal power profile symmetry. Next, in [21] we advanced our simulations to demonstrate the possibility of WDM transmission across the C band, by optimizing symmetry over a broad section (40 channels with a 25 GHz spacing). Recently, ultra-wideband or multi-band optical transmission assisted by Raman amplification has been a hot topic of discussion [25][26][27][28], as an efficient tool to fully unlock the potential transmission capacity of single mode fiber (SMF). In this context, the possibility of applying the second-order Raman amplification scheme discussed in [17,[29][30][31]] in other transmission bands, using a fiber Bragg grating in another band with appropriate Raman pump wavelengths, becomes particularly interesting.
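Before detailing the amplifier, it may help to see why symmetric power evolution matters at all. The following toy sketch is our own illustration, not the paper's simulation: it assumes pure self-phase modulation with no dispersion or loss and a perfectly symmetric link, and shows the nonlinear phase accumulated before an ideal mid-link conjugator being exactly undone after it.

```python
import numpy as np

# Toy illustration of mid-link OPC: pure self-phase modulation (SPM),
# no dispersion or loss, perfectly symmetric half-links.
# Parameter values are illustrative only.
gamma = 1.3     # nonlinear coefficient, 1/(W*km)
L_half = 60.0   # half-link length, km

def spm(E, gamma, L):
    """Apply lossless SPM over length L: phase grows with |E|^2."""
    return E * np.exp(1j * gamma * np.abs(E) ** 2 * L)

rng = np.random.default_rng(0)
E0 = rng.normal(size=8) + 1j * rng.normal(size=8)  # arbitrary field samples

E_mid = spm(E0, gamma, L_half)       # first half of the link
E_conj = np.conj(E_mid)              # ideal optical phase conjugation
E_out = spm(E_conj, gamma, L_half)   # second half of the link

# With a perfectly symmetric power profile, the SPM phase of the first
# half is exactly undone, and the output is the conjugate of the input.
print(np.allclose(E_out, np.conj(E0)))  # True
```

Any mismatch between the power profiles of the two halves leaves residual nonlinear phase, which is precisely what the asymmetry optimization in this paper targets.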
In particular, in [17] we showed a method for bandwidth extension using a fixed-wavelength Raman pump centered at 1366 nm, while using FBGs at different wavelengths for the originally transmitted and the conjugated channels. Utilizing this method, we managed transmission over 6 THz with a 5.9% average asymmetry in a 192-198 THz band using a 60 km SMF span. In [18] we summarized our experimental work reviewing several configurations of distributed Raman amplifiers designed specifically for fiber nonlinearity compensation in a mid-link optical phase conjugation system, demonstrating that, for nearly symmetrical signal power profiles, the Raman schemes in both the single-span and two-span systems provide a 9 dB enhancement of the nonlinear threshold in a 200 Gb/s DP-16QAM transmission system using a mid-link OPC.

In this paper, we make use of a random distributed feedback fiber Raman laser amplifier scheme and significantly extend the working bandwidth to the full C band (before the OPC) and L band (after the OPC), optimizing the amplifier specifically for dual-band OPC with a total bandwidth of 10 THz. Unlike in previous theoretical and experimental work, where we used a fixed single Raman pump wavelength, in this case we consider the wavelengths and optical powers of the primary Raman pumps and the wavelengths of the secondary FBG mirrors as optimizable parameters to find the best average symmetry in multi-channel C and L band transmission systems with mid-link OPC.

Amplifier Design for Optical Phase Conjugation
In the recent past, the design of optical networks was based exclusively on lumped amplification and constrained to the C band due to the convenience, reliability and cost effectiveness of Erbium-doped fiber amplifiers (EDFAs). However, the exponential increase of data traffic over the internet has pushed further bandwidth extensions towards the optical L and S bands, which are not easily achievable with doped fibers alone. In this context, Raman amplification, whether lumped or distributed, offers an attractive alternative. Higher-order Raman amplification is well known to improve transmission through better noise performance, achieving extended bandwidth even with a single pump wavelength [17]. At the same time, distributed amplification allows for precise control of the signal power variation across the transmission fiber [24], which is necessary to minimize asymmetry in the OPC system. In our previous works [17][18][19][20][21][22][23][24] we compared several designs of distributed Raman amplifiers: first order, second order and dual order, using bi-directional and backward-only pumping schemes. Bi-directionally pumped distributed Raman amplification with a single FBG at the end of the transmission span performed best in terms of asymmetry [24] and relative intensity noise (RIN) [30][31][32], which is a key design feature for data transmission in a 60 km span; hence we continue with this design, with modifications to meet our bandwidth requirement. In order for nonlinear impairments to be perfectly compensated in a system with mid-link optical phase conjugation, the following ideal condition must be fulfilled for each of the channels:

γ(z) P(z) / β2(z) = γ'(2L_OPC − z) P'(2L_OPC − z) / β2'(2L_OPC − z),

where β2 represents the dispersion coefficient at the channel wavelength, β2' is its equivalent for the conjugate channel, γ and γ' represent the nonlinear coefficients for the original and conjugate channels, and P and P' indicate the corresponding signal powers.
L_OPC indicates the position of the optical phase conjugator, and z ranges from 0 to 2L_OPC. The key to maximizing performance in OPC-assisted systems lies in reducing the signal power asymmetry between P and P'. The dispersion and nonlinearity coefficients at the wavelengths of the original and conjugated channels depend on fiber characteristics, and can be very similar in modern commercially available SMFs, so the optimization options are limited to the signal power evolution, which must be made as symmetrical as possible before and after the mid-link OPC for the original and conjugate channels. In practice, and since long-haul communications rely on the use of periodic amplification cells, the more efficient approach [21,24] is to aim for symmetric power evolution with respect to the periodic span mid-point, as well as similar power variation levels on both the original and conjugate channels, defining an asymmetry parameter to be optimized (see Section 3 below).

In our research to design an amplifier spanning a 10 THz bandwidth, we independently simulated schemes based on different wavelengths for the first- and second-order pumps (i.e., the FBG center wavelengths) with various forward and backward pump powers for a transmitted (5 THz C band) and conjugated (5 THz L band) wavelength division multiplexed (WDM) grid with a 100 GHz spacing. The schematic design of an amplifier for the OPC-based transmission system is shown in Figure 1. The primary forward and backward Raman pump frequencies ν_p1, as well as the central frequency of the FBG, ν_p2, were chosen according to the target amplification bandwidth, aiming for the best asymmetry performance in a 60 km span length. Forward pump powers P+_p1 of the first-order Raman laser for both bands were simulated from 0.7 to 1.4 W with a 100 mW step. Backward pump powers P−_p1 were simulated to give 0 dB net gain for a channel under test, and then all remaining WDM channels were simulated with fixed pump powers. The FBG (200 GHz bandwidth) located at the end of the transmission line reflects backscattered Rayleigh Stokes-shifted light from the backward pump P−_p1 and forms a random DFB laser acting as a secondary backward pump P−_p2 that amplifies the WDM signal in the C or L transmission band. The transmitted power per channel is set to −10 dBm.
To study the performance of the amplifier in the 5 THz C band (1528.77-1567.95 nm), the wavelength of the first-order pump is made to range from 1362 to 1374 nm, whereas the wavelength of the FBG ranges from 1456 to 1474 nm.

L Band: 50 Conjugated Channels with 100 GHz Spacing, 186.2-191.1 THz
To study the performance in the 5 THz L band (1568.77-1610.06 nm), we simulated wavelengths of the first-order pump ranging from 1402 to 1414 nm, and wavelengths of the FBG ranging from 1492 to 1508 nm.

Simulation Parameters
To simulate our 10 THz wideband WDM OPC system we used our model of a second-order Raman amplifier with a single FBG mirror at the end of the transmission span, derived and developed from [33]. The transmission band (C or L) was amplified by the gain from the primary Raman pump in the forward (P+_p1) and backward (P−_p1) directions, as well as by the secondary pump in the backward direction (P−_p2) generated at the wavelength of the FBG reflector, where P±_p are the powers of the forward (+) or backward (−) propagating pumps, α is the corresponding attenuation, A_eff is the effective core area, and g is the Raman gain coefficient, which depends on the frequency shift between the lasing and each WDM signal's wavelength for a standard single-mode fiber, as in Figure 2.
n+_s and n−_s are the forward and backward noise at the frequency of the signal, ν is the frequency and Δν is the bandwidth of each component: p1 (primary pump), p2 (secondary pump) and s (signal). h is Planck's constant, K_B is the Boltzmann constant and T is the absolute temperature; the Rayleigh backscattering coefficient of each component is also included. Our model also takes into account the accumulated light power P_D from all WDM channels and both pumps, which depletes the amplification gain (it is then added to or subtracted from Equations (2) and (3)), together with the amplified spontaneous emission (ASE) noise (calculated in a 0.1 nm bandwidth) from each spectral component in the transmission band. The values of the Rayleigh backscattering coefficients for the primary pump P±_p1, the lasing, and the signal channel frequencies are assumed to be 1.0 × 10⁻⁴, 6.5 × 10⁻⁵ and 4.5 × 10⁻⁵ km⁻¹, respectively. The bandwidth of the FBG P−_p2 in the simulations was set to 200 GHz. With a relatively low input power per channel (−10 dBm) and a channel spacing of 100 GHz, we do not consider cross-gain modulation in our simulations. The span length was 60 km. The asymmetry for each channel was calculated using the formula:

Asymmetry = [∫₀ᴸ |P2(L − z) − P1(z)| dz / ∫₀ᴸ P1(z) dz] × 100%,

where L is the span length, and P1 and P2 represent the signal power evolution of the transmitted and conjugated channels, respectively. The coefficients in the simulations were adjusted to match our experimental measurements of the signal power variation (SPV) in the SMF span. To measure the SPV, a laser source at 1545 nm with a launch power of 0 dBm was used to provide a probe signal whose power evolution along the 80, 100 and 120 km transmission spans was then monitored using a standard OTDR [34]. Results of the OTDR traces (noisy) and simulations (solid) are shown in Figure 3.
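As a much-reduced illustration of the propagation model (the full Equations (2) and (3) also couple the two pump orders, pump depletion, noise and Rayleigh terms), the sketch below integrates only the signal power under undepleted bidirectional first-order pumping; all coefficient values are illustrative, not the fitted parameters used in the paper.

```python
import numpy as np

# Reduced sketch: signal power evolution under undepleted bidirectional
# first-order Raman pumping. Illustrative coefficients only.
L = 60.0          # span length, km
alpha_s = 0.046   # signal attenuation, 1/km (~0.2 dB/km)
alpha_p = 0.058   # pump attenuation, 1/km (~0.25 dB/km)
g_eff = 0.4       # Raman gain efficiency g/A_eff, 1/(W*km)
Pf0, Pb0 = 0.30, 0.25   # forward / backward pump launch powers, W

z = np.linspace(0.0, L, 601)
Pf = Pf0 * np.exp(-alpha_p * z)          # forward pump decays from z = 0
Pb = Pb0 * np.exp(-alpha_p * (L - z))    # backward pump decays from z = L

# dPs/dz = -alpha_s*Ps + g_eff*(Pf+Pb)*Ps integrates in closed form:
# Ps(z) = Ps(0) * exp(-alpha_s*z + g_eff * int_0^z (Pf + Pb) dz')
pump_sum = Pf + Pb
integral = np.concatenate(
    ([0.0], np.cumsum((pump_sum[1:] + pump_sum[:-1]) / 2.0 * np.diff(z)))
)
Ps = 1e-4 * np.exp(-alpha_s * z + g_eff * integral)   # -10 dBm launch

on_off = 10 * np.log10(Ps[-1] / (Ps[0] * np.exp(-alpha_s * L)))
print(f"on-off gain: {on_off:.1f} dB, "
      f"net gain: {10 * np.log10(Ps[-1] / Ps[0]):.1f} dB")
```

For sampled power profiles, the asymmetry formula above then reduces to a normalized trapezoidal integral of the mirror-image difference; a minimal sketch on synthetic profiles (the normalization is our reading of the formula):

```python
import numpy as np

def trapz(y, x):
    """Local trapezoidal rule (avoids NumPy version differences)."""
    return float(np.sum((y[1:] + y[:-1]) / 2.0 * np.diff(x)))

def asymmetry_percent(z, P1, P2):
    """Asymmetry between the transmitted profile P1(z) and the conjugated
    profile P2 mirrored about the span, in percent."""
    P2_mirror = np.interp(z[-1] - z, z, P2)   # sample P2 at L - z
    return 100.0 * trapz(np.abs(P2_mirror - P1), z) / trapz(P1, z)

# Synthetic, illustrative 60 km profiles in linear units.
z = np.linspace(0.0, 60.0, 601)
P1 = 0.1 * np.exp(-0.02 * z) * (1.0 + 0.3 * np.sin(np.pi * z / 60.0))
P2 = P1[::-1] * 1.02          # an almost perfectly mirrored profile
print(f"asymmetry: {asymmetry_percent(z, P1, P2):.2f} %")   # ~2 %
```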
To verify the accuracy of the simulations we also measured the asymmetry using a modified OTDR system [34] in a 60 km SMF span for various forward and backward pump power ratios. The results of the simulations are shown in Figure 4 (red). There is very good agreement between the experimental measurements and the numerical simulations.

Results and Discussion
To evaluate the optimum configuration for the lowest asymmetry across the whole transmission spectrum, we verified the results obtained with each original pump wavelength against different FBGs (for original pump 1) and conjugated pump wavelengths and FBGs (for conjugated pump 2). As an example, in Figure 5 we show the optimization process for a primary pump centered at 1364 nm and an FBG ranging from 1456-1462 nm. For clarity, the results for the L band pump (pump 2) wavelength (1402-1414 nm) are already given for the optimum (best average asymmetry match) FBG (simulated from 1492-1508 nm). The asymmetry difference between the worst and best performing channels shown in Figure 6 is heavily biased by the first WDM channel in the C band (CH1), which is off the grid of the Raman amplification gain. This is explained and shown with the further results, where we present the signal power variation, asymmetry and on-off gain for each individual channel in the 10 THz band.

The best primary pump wavelength offset between the transmitted and corresponding conjugated WDM grid was found to be 48 nm: for the primary pump in the C band centered at 1364 nm, the best matching primary pump for the L band was 1412 nm. In Figure 7 we show the best average asymmetry performance for all 100 WDM channels (50 channels in the C band versus 50 in the L band) as a function of the primary pump and optimized FBG wavelengths for the conjugated L band channels. The choice of the wavelength of the FBG was previously investigated and shown in Figure 5.
In this case the best asymmetry performance was given by the FBG centered at 1458 nm, with an average asymmetry below 10%.

Figure 6. Asymmetry difference between the worst and best performing channels in a 50-channel transmission band, based on the results shown in Figure 5.

Using the same methodology, we evaluated the primary pump wavelengths for the transmitted C band channels, ranging from 1362 to 1374 nm, and for the conjugated L band channels, from 1402 to 1414 nm. Additionally, for each pump wavelength we simulated a range of different FBGs for the transmitted C band (1456 to 1474 nm) and the conjugated L band (1492 to 1508 nm), with a 2 nm step in all cases, giving approximately 1.5 × 10⁶ possible combinations (pump wavelength × FBG × pump power × possible channel optimizations (50 × 50)). Out of all available combinations, the best performing configuration, giving an average asymmetry of 8.2% across all WDM channels, was achieved with the distributed Raman amplifier settings shown in Table 1 below. The results of the best performing configuration, with the primary C band pump at 1370 nm, as a function of the primary L band pump for conjugated channels with optimized FBG, are shown in Figure 8 (red). For reference we also show the discussed results for the primary C band pump wavelength centered at 1364 nm (blue). We can notice that for a 6 nm (1364 to 1370 nm) shift in the primary pump wavelength for the C band, the choice of the best matching L band primary pump wavelength would only change by 2 nm, from 1412 nm (blue) to 1410 nm (red).
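The exhaustive evaluation described above is, structurally, a nested parameter sweep. The sketch below shows that structure only; `mean_asymmetry` is a hypothetical stand-in for the full amplifier/OPC simulation (here it just returns a synthetic number), and the ranges simply mirror those quoted in the text.

```python
import itertools

def mean_asymmetry(pump_c, fbg_c, pump_l, fbg_l, p_fwd):
    """Hypothetical stand-in: the real code would solve the pump/signal
    equations for all 100 channels and average the per-channel asymmetry.
    Here it returns a deterministic synthetic value instead."""
    return abs(hash((pump_c, fbg_c, pump_l, fbg_l, p_fwd))) % 1000 / 10.0

pump_c_nm = range(1362, 1375, 2)             # C band primary pump, 2 nm step
fbg_c_nm = range(1456, 1475, 2)              # C band FBG
pump_l_nm = range(1402, 1415, 2)             # L band primary pump
fbg_l_nm = range(1492, 1509, 2)              # L band FBG
p_fwd_w = [0.7 + 0.1 * k for k in range(8)]  # forward pump power, 0.7-1.4 W

best = min(
    itertools.product(pump_c_nm, fbg_c_nm, pump_l_nm, fbg_l_nm, p_fwd_w),
    key=lambda combo: mean_asymmetry(*combo),
)
print("best configuration:", best, f"-> {mean_asymmetry(*best):.1f} % asymmetry")
```

In practice each call to the real simulator is expensive, which is why the paper fixes the backward pump power per channel (0 dB net gain for a channel under test) before sweeping the remaining parameters.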
However, we would like to stress that the average asymmetry is highly biased by the few worst performing channels, whose performance is off by 20-30% while the rest varies by ±2%; hence the choice of optimal wavelengths for the primary pumps and FBGs is not simple, and will depend on system needs and circumstances. This issue becomes even more evident if we start weighing the negative impact of RIN on actual data transmission, due to high forward pump powers, against the benefit achieved from a lower averaged overall asymmetry. In [31], the authors show that a lower signal power variation due to stronger forward pumping does not necessarily translate to better actual transmission performance. This problem can be mitigated using a broadband forward pump [31], which justifies our choice of higher-order Raman amplification without direct forward lasing one Stokes shift down from the band of the amplified signal.

The impact of the forward pump power on the asymmetry of each WDM channel in the OPC system is shown in Figure 9. We can notice that the asymmetries of the first seven channels are practically immune to the forward pumping power and do not vary significantly. This can be explained with Figure 10, where the signal power variations (SPV) for each individual channel in the C and L bands are displayed. Higher SPV and asymmetry mismatch are directly related to the gain performance of our amplifier.
In Figure 11 we show the best possible overall on-off gain for all channels (blue), as well as the gain performance of each channel in the best asymmetry configuration (red), for the 10 THz C + L band distributed Raman amplification. The best gain flatness, with about 3 dB of gain variation, was achieved for the configuration with the primary C band pump centered at 1370 nm with the FBG at 1460 nm; the primary L band pump was set to 1408 nm with the FBG at 1498 nm. The gain performance at best asymmetry (red) is shown for the configuration of Table 1.

Figure 9. Impact of the forward pump power on the asymmetry of each channel in the best performing configuration, as in Table 1.
Figure 11. Gain performance of the proposed Raman amplifier, showing the best possible on/off gain (blue) and the actual gain at best asymmetry (red) for all 100 channels in a 10 THz transmission.

The best asymmetry performance between a transmitted and the corresponding conjugated channel was that of channel 18, giving the lowest asymmetry of 2.82%. The power profiles of both channels are shown in Figure 12. We may notice a very low signal power variation of 1.74 dB or less for the transmitted and conjugated channels across the whole 60 km raw distributed Raman transmission span. In Figure 13 we show the theoretical prediction of the four-wave mixing (FWM) power, comparing a mid-link OPC configuration (red) and a raw transmission (blue). The FWM power (defined in refs [20,35]) was in the best scenario suppressed by over 45 dB in the low frequency range and by 40 dB at its peak just below 20 GHz.
That demonstrates that the nonlinear distortion limiting the capacity of long-haul optical communication systems can be efficiently controlled with fine optimization of the mid-link OPC in real-time data transmission. FWM nonlinearity may also be limited by using various digital signal processing techniques; however, this solution is computationally expensive and time consuming, which, at the current state of the art of computational power, does not really allow for advanced real-time transmission.

Figure 13. Theoretical prediction of four-wave mixing (FWM) power as a function of frequency separation for the mid-link OPC link (red) and without OPC (blue), for the best performing channel shown in Figure 12.

Finally, in Figure 14 we show the optical signal-to-noise ratio (OSNR) performance, calculated over a 0.1 nm bandwidth as the difference between the signal power and the noise power, as well as the nonlinear phase shift (NPS) results, for all transmitted and conjugated channels in an optimized 10 THz WDM grid (186.2-196.1 THz) in a 60 km standard single mode span.
The OSNR varies from just below 39 dB to 40.5 dB, which is a very good performance across such a wide bandwidth for a raw Raman-amplified transmission. The NPS variation is also very low across the whole transmission bandwidth, with the lowest performance at the front of the C and L bands.

Conclusions
Using numerical simulations based on experimental results, we propose and demonstrate, for the first time, an amplifier design for C + L band mid-link OPC transmission achieving the lowest average asymmetry to date over a 10 THz bandwidth. Using half-open cavity random DFB Raman laser amplification with two different pump wavelengths for the transmitted and corresponding conjugated channels, in combination with different FBGs, we successfully extend the operating bandwidth of the mid-link OPC setup, obtaining very promising performance results. The optimized system is capable of 10 THz transmission with OSNR values above 38.8 dB and an average asymmetry of 8.2% for all WDM channels. The best possible configuration shows a gain flatness below 3 dB across the 10 THz grid in a raw Raman transmission without any gain flattening filters applied.
8,461.6
2023-03-01T00:00:00.000
[ "Physics", "Engineering" ]
Some Reflections About the Success and Bibliographic Impact of the Dynamic Geometry System GeoGebra
The authors were surprised by the number of articles that used or cited the computer algebra system DERIVE more than 10 years after it was discontinued, and developed a small bibliographic study about it, published in 2019. Now they address in a similar way the very successful dynamic geometry system GeoGebra which, although created 20 years ago, later than the other great dynamic geometry systems (Cabri Geometry II, The Geometer's Sketchpad and Cinderella), now has tens of millions of users around the world. Not surprisingly, the citations of GeoGebra in the well-known bibliographic databases Scopus, Web of Science and Google Scholar show an impressive growth.

First Notes About the Dynamic Geometry System GeoGebra and this Study
Nowadays the Dynamic Geometry System (DGS) GeoGebra has become a very successful piece of software, claiming over 100 million users all around the world (many more than any other DGS or CAS). It now addresses not only dynamic geometry but also includes algebraic capabilities. GeoGebra is a free piece of software that welcomes contributions and ideas from its users and is spread through a network of so-called GeoGebra Institutes. We give below a brief overview of DGS in general, as well as a summary of the main characteristics and capabilities of GeoGebra. Finally, a bibliographic study of the evolution of the papers mentioning GeoGebra is presented.

About the Authors
The first author has taught computational mathematics to students from the School of Education at the Universidad Complutense de Madrid for 36 years, within the frame of different subjects about the use of information and communication technologies (ICT) in mathematics teaching. In these subjects he has used different hardware and languages (in the past, mainly Logo, Derive and The Geometer's Sketchpad, and now, mainly, Scratch, Maple and GeoGebra). He has also taught computational mathematics to postgraduates at the School of Mathematics over these years. He was a beta tester of the DGS The Geometer's Sketchpad. The second author has been a teacher at the School of Librarians of the Universidad de Extremadura for 27 years. She is specialized in quantitative studies in the Social Sciences and Humanities.

The Pioneers
Cabri Géomètre (later renamed Cabri Geometry II [28] and now Cabri II Plus) and The Geometer's Sketchpad [29,34] were available in the early '90s. They included the main features of DGS: dynamism and mouse-based input, allowing users to experiment comfortably with plane geometry (Fig. 1).
• Geometry Expressions [9,30], which includes a small internal CAS that allows symbolic computations derived from the geometric constructions to be performed directly.

DGS and Symbolic Computation: Possibilities
Providing a DGS with the ability to perform symbolic computations opens new and exciting fields, such as:
• Automatic theorem proving in geometry (ATP) [22]. In a naive approach, the geometric conditions are translated into algebraic conditions, and it is checked whether the (algebraic) thesis condition follows from the (algebraic) hypothesis conditions. Let us see a trivial example. Proof: assign general coordinates to the vertices of the triangle, for instance A = (0,0), B = (b1,0), C = (c1,c2) (the reference system is properly chosen). The linear system consisting of the equations of these three lines is compatible, so they are concurrent (Fig. 2).
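This naive method is easy to replay in a general-purpose CAS. The SymPy sketch below uses the medians of the generic triangle as a concrete stand-in for the example's unspecified "three lines" (an assumption on our part) and checks that their equations admit a common solution, i.e., that the lines are concurrent.

```python
from sympy import symbols, solve, Rational

# Generic triangle as in the text: A = (0,0), B = (b1,0), C = (c1,c2).
b1, c1, c2, x, y = symbols("b1 c1 c2 x y")
A, B, C = (0, 0), (b1, 0), (c1, c2)

def midpoint(P, Q):
    half = Rational(1, 2)
    return ((P[0] + Q[0]) * half, (P[1] + Q[1]) * half)

def line_through(P, Q):
    # Equation of line PQ: (x - Px)(Qy - Py) - (y - Py)(Qx - Px) = 0
    return (x - P[0]) * (Q[1] - P[1]) - (y - P[1]) * (Q[0] - P[0])

medians = [
    line_through(A, midpoint(B, C)),
    line_through(B, midpoint(A, C)),
    line_through(C, midpoint(A, B)),
]

sol = solve(medians[:2], [x, y])      # intersection of two medians
print(sol)                            # {x: b1/3 + c1/3, y: c2/3}
print(medians[2].subs(sol).expand())  # 0: the third median passes through it
```

solve() returns the centroid, and the third equation vanishing after substitution is exactly the compatibility check described in the example. When circles or distances enter the hypotheses (as the next paragraph notes), the system becomes nonlinear and Gröbner bases take over; this second sketch shows the basic ideal-membership test on a deliberately simple cusp-curve example of our own, not a construction from the paper.

```python
from sympy import symbols, groebner

# Does the "thesis" polynomial vanish on the variety defined by the
# "hypotheses"? Toy example: the parametrization x = t^2, y = t^3
# forces the cusp equation y^2 - x^3 = 0.
t, x, y = symbols("t x y")
hypotheses = [x - t**2, y - t**3]
thesis = y**2 - x**3

G = groebner(hypotheses, t, x, y, order="lex")
print(G.contains(thesis))   # True: the thesis follows from the hypotheses
```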
The equation systems are not always linear (if circumferences or distances are involved, second-degree equations arise and the corresponding systems are algebraic but not linear). The best-known solving methods in such cases are Wu's pseudo-remainder method [4,32,33] and the Gröbner basis method [2,16] (both even allowing new theorems to be proved [23][24][25]).
• Automatic discovery of theorems in geometry (derived from the previous one, oriented to hypotheses completion) [17].
• Exact geometric loci finding [1,23].
• Applications in physics, for instance to linkages [14].
• The designers and developers of a DGS can incorporate a CAS into the DGS. That is the case of Geometry Expressions and GeoGebra (Fig. 3).
• The designers and developers of a DGS can facilitate communication between the DGS and an external CAS (another possibility of Geometry Expressions and GeoGebra).
• External designers and developers can build a connection between a DGS and a CAS using the output file of the DGS (2D case [18,20], 3D case [21]).

About GeoGebra
According to the section "Short History of GeoGebra" of [10]: "GeoGebra was created by Markus Hohenwarter in 2001/2002 as part of his master's thesis [11] in mathematics education and computer science at the University of Salzburg in Austria. Supported by a DOC scholarship from the Austrian Academy of Sciences he was able to continue the development of the software as part of his PhD project in mathematics education [12]. During that time, GeoGebra won several international awards, including the European and German educational software awards, and was translated by math instructors and teachers all over the world to more than 25 languages." GeoGebra has always been freely available, initially thanks to the support of the Austrian Ministry of Education and later thanks to the American NSF project "Standard Mapped Graduate Education and Mentoring". The graphic interface of GeoGebra is similar to those of other DGS (see Figs. 1 and 2). However, there is a difference with respect to other DGS such as The Geometer's Sketchpad: in The Geometer's Sketchpad, the constructible geometric objects (the selectable "tools") depend on the already selected geometric objects, while in GeoGebra the "tools" are chosen first and the input geometric objects are selected a posteriori.

Main Milestones in the Development of GeoGebra
GeoGebra's "basic" windows (the Graphical View and the Algebraic View) have a bidirectional connection (Fig. 4) [10]:
• Changes introduced with the mouse in the Graphical View induce the corresponding changes in the Algebraic View, and,
• Conversely, changes introduced through the keyboard in the Algebraic View induce the corresponding changes in the Graphical View.
The mathematical software GeoGebra has reached unprecedented success, claiming, as said above, over 100 million users. Below we analyse its impact on academic papers through a bibliographic study.

General Scopus Data
The search for GeoGebra in the database Scopus [27] in "Title-Abstract-Keywords" ("T-A-K") finds 832 references. The search for GeoGebra in "All Fields" finds 2264 references. They are distributed as shown in Table 1 and Fig. 7. The values in "T-A-K" are close to those of a monotonically increasing function. The values in "All Fields" correspond to a monotonically increasing function if we exclude the 2008 value.
Scopus Data by Author

According to Scopus, the top authors citing GeoGebra in "T-A-K" can be ranked. It has to be noticed that the first three authors in the corresponding lists are working on a remarkable "official" extension of GeoGebra (GeoGebra Discovery), which is able to find and formally prove theorems directly from geometric constructions (using algebraic ATP techniques) [15].

Scopus Data by Subject Area

The top subject areas where GeoGebra is cited in "T-A-K" in the Scopus database are shown in Fig. 8. (The underlying yearly counts from Table 1 are — "T-A-K": 0, 0, 0, 1, 3, 0, 5, 10, 22, 20, 46, 31, 46, 50, 76, 93, 124, 119, 159; "All Fields": 1, 0, 1, 1, 8, 3, 14, 18, 46, 47, 75, 85, 93, 116, 186, 224, 335, 416.) Due to the characteristics and purpose of GeoGebra, we believe that most "Social Sciences" papers correspond to educational papers. This is confirmed if we check the journals where the papers in this area have been published: the most recent publications in this area indexed in Scopus appear in educational journals.

Scopus Data by Country

It is surprising to us that the US occupies the 7th place, the UK the 18th, Germany the 24th and China the 26th. Meanwhile, the top 14 countries when looking in "All Fields" instead are shown in Fig. 9. The position changes of the US, China and Germany are remarkable and worth a deeper study. Can they be related, for instance, to cultural issues? The distribution by countries can also be visualized in the maps of Figs. 10 and 11.

Bibliographic Data from Web of Science (as of April 29th, 2022)

The search for GeoGebra in the database Web of Science [6] in "Title" finds 330 references. The search for GeoGebra in "Topic" finds 800 references. They are distributed as shown in Table 2 and Fig. 12. Both lists of values show an increasing general tendency, although with more oscillations than when using Scopus as the data source.

Bibliographic Data from Google Scholar

The general search for GeoGebra in the database Google Scholar [8] results in an impressive ∼73,800 references. The advanced search in "Title" finds 9820 references. They are distributed as shown in Table 3 and Fig. 13. The values for the general case correspond to a monotonically increasing function from 2006 onwards. The values for the search in "Title" also show an increasing tendency, with a slight maximum in 2018, and have more or less stabilized since 2017.

Conclusions

The available DGS are great tools for exploring geometry. The huge number of papers using GeoGebra, and their constant growth, confirms this fact and, moreover, the success of this particular piece of software. The three bibliographic sources used (Scopus, Web of Science and Google Scholar) provide data with similar tendencies (constant growth). In the three sources consulted, we perceive a slight decrease in the number of citations in 2020, coinciding with the pandemic. It is noticeable that very many papers are published in educational journals. We guess that this success is due to the good policy behind this software: • It is free, • Training has been provided by the GeoGebra Institutes, • It is multilingual, • Its development has been open to the contributions and suggestions of the user community. There are open questions: • What are the reasons for the changes in the positions of the US, the UK, Germany and China when ordering countries by publications indexed in Scopus mentioning GeoGebra in "T-A-K" or in "All Fields"? • What are the reasons for the growth of references in Google Scholar in the general search and the stabilization of references in the search in the title?
2,367.4
2023-05-29T00:00:00.000
[ "Mathematics", "Computer Science" ]
An Orthogonal Covalent Connector System for the Efficient Assembly of Enzyme Cascades on DNA Nanostructures

oligonucleotide binding tag (HOB). Since both linkers exhibit neither cross-reactivity nor non-specific binding, they allowed the orthogonal assembly of an enzyme cascade consisting of the stereoselective ketoreductase Gre2p and the cofactor-regenerating isocitrate dehydrogenase on DON. The cascade showed approximately 1.6-fold higher activity in a stereoselective cascade reaction than the corresponding free solubilized enzymes. The connector system presented here and the methods used to validate it represent important tools for the further development of DON-based multi-enzyme systems to investigate mechanistic effects of substrate channeling and compartmentalization relevant for exploitation in biosensing and catalysis.

Introduction

Scaffolded DNA origami nanostructures (DON) have emerged as powerful and versatile tools for diverse fields, ranging from nanotechnology and materials science to biochemistry and biomedicine. [1] Since these self-assembling structures can be used as frameworks for arranging proteins with nanometer precision, a variety of applications are emerging in sensing and biocatalysis, and as tools for studying biological processes. [2] The generation of enzyme-decorated DNA nanoarchitectures is of particular interest with respect to the mimicking of natural multi-enzyme cascades. With the conventional SNAP-tag, however, only moderate coupling yields are achieved, even when a large excess of 1000 equivalents per ligand and longer incubation times are applied. [5a] Despite its potential, only very few studies have focused on an improvement of the SNAP-tag for DNA-based applications. For instance, the Morii group has developed a site-specific method to optimize SNAP-tag binding by using a zinc finger protein as a modular adaptor. [11] Although achieving coupling rates of >85%, this method requires the modification of the POI and the DNA nanostructure with the adaptor and its respective DNA binding domain, respectively. Furthermore, the addition of the zinc finger protein to the SNAP-tag adds bulk to the linker, and these proteins are sometimes difficult to express in heterologous hosts. To overcome these limitations, we report here the direct optimization of the SNAP-tag domain to enable SNAP-tagged fusion enzymes that can efficiently bind to DNA nanostructures. Following our previous approach, [7] rational re-design of five amino acids in the SNAP domain led to an efficient linker for ligation with BG-modified oligonucleotides as well as DNA nanostructures. Furthermore, using a cascade based on a sensitive, stereoselective ketoreductase and a cofactor-regenerating enzyme, we show that the combination of the new SNAP-tag variant with the HOB-tag provides a powerful orthogonal linkage system to efficiently perform a two-step reduction cascade for the asymmetric synthesis of chiral alcohols on DNA nanostructures.

Design of SNAP-Tag Variants

The efficient application of the SNAP-tag as a connector for DNA nanostructures requires a high affinity toward the extremely dense negative charges on the DNA scaffold surface. Based on previous research devoted to both the characterization and the optimization of the SNAP-tag, we chose six well-investigated amino acids for mutation to increase affinity toward BG-modified DNA nanostructures (Figure 1A).
The mutations K125A, A127T, R128A, S151G, and S152D had previously been introduced in the course of engineering the SNAP-tag for cell biology applications in order to disrupt the protein-DNA interaction exhibited by native hAGT. [10] The residues S151 and S152 were found to interact with the phosphate backbone of double-stranded DNA, [12] while the KAAR motif (K125, A126, A127, R128) plays a crucial role in the nuclear retention of hAGT. [13] Based on our experience with the HOB-tag, namely that more efficient connectors for DNA nanostructures become accessible by modulating electrostatic interactions, [7] we hypothesized that the reversal mutations A125K and A128R, restoring positively charged amino acids, could not only restore overall DNA affinity but also increase coupling efficiency to the negatively charged DON surface. Moreover, we chose position 160 for mutation, which is located near the active site and was found to be crucial for affinity toward the BG ligand. [14] Mutations of the amino acid at position 160 modulate substrate specificity and, depending on the incorporated amino acid, can either increase or disrupt the binding capabilities. [15] In particular, the exchange of Gly160 with Trp led to a threefold increase in BG affinity, [9,16] presumably due to hydrophobic interactions with the substrate, making it an interesting spot for optimized coupling of the SNAP-tag to BG-modified DNA nanostructures. Although the roles of these amino acids in hAGT for DNA affinity have been extensively studied in the literature, that research was performed with small-molecule-linked substrates and short oligonucleotides. [10,16,17] These interaction partners have substantially different properties than DNA nanostructures in terms of negative charge density and steric accessibility. The effects of mutating these six amino acids had not yet been evaluated for the interaction with DNA nanostructures. Therefore, we chose to investigate the above-described amino acids regarding their influence on DNA affinity and their use as an optimized immobilization tag, hence generating four different variants, which were compared to the conventional SNAP-tag (Figure 1B).

Figure 1. Overview of SNAP-tag variants. A) Structure of the conventional SNAP-tag (PDB: 3KZZ) with highlighted amino acids that could contribute to increased binding to DNA nanostructures. B) SNAP-tag variants investigated in this study with their respective mutations. Note that 'SNAP_R' indicates 're-engineered'; the respective amino acids are identical to wildtype O6-alkylguanine-DNA alkyltransferase (AGT). The isoelectric point (pI) values of all variants were obtained by calculation using the Geneious software. Note that although the mutations cause little difference in the global charge distribution of the entire protein, the electrostatic map of these variants shows that the mutations cause a significant local accumulation of positive charges in the immediate vicinity of the entry channel (Figure S2, Supporting Information).

As expected, the variants exhibit an only slightly increased calculated isoelectric point (pI) as compared to the SNAP-tag, since no more than 6 out of 182 amino acids of the protein were changed. However, comparing the electrostatic map of these variants with that of the original SNAP-tag, the mutant variants bearing the mutations A125K, T127A, A128R, G151S, and D152S show a clear local accumulation of positive charges in the immediate vicinity of the entrance channel (Figure S2, Supporting Information).
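As a side note on the pI values quoted in Figure 1: a sequence-based estimate of the kind Geneious produces can be reproduced with Biopython's ProtParam module. The sketch below uses a short arbitrary peptide as a placeholder, since the actual 182-aa variant sequences are not reproduced in this excerpt.

```python
from Bio.SeqUtils.ProtParam import ProteinAnalysis

# Arbitrary placeholder peptide; substitute the actual SNAP-tag variant
# sequence to reproduce a Geneious-style calculated pI.
placeholder_variant = "MKLKRASGDSEWLLAHEGHRLGKPGLG"

pa = ProteinAnalysis(placeholder_variant)
print(f"calculated pI: {pa.isoelectric_point():.2f}")

# A global pI shift of a few tenths of a unit (only 6/182 residues changed)
# can still coincide with a strong *local* accumulation of positive charge
# near the BG entry channel, which is why electrostatic maps are inspected
# in addition to the pI.
```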
This local accumulation of positive charges suggests a favorable interaction with the negatively charged DNA and is consistent with previous studies in which differences in local charge distribution, with only small increases in total pI, resulted in increased affinity for DNA. [7] Furthermore, we also investigated two recently reported alternative tags with respect to their capability for site-specific coupling to DNA nanostructures. The AGT from Sulfolobus solfataricus (SsOGT_wt) [18] as well as an engineered variant, SsOGT_H5, [19] were described as tags for extremophilic organisms, as they possess a high thermostability. Since their specific binding to BG substrates is comparable to that of the conventional SNAP-tag, we wanted to test whether they could also be used for protein immobilization in DNA-based applications.

Coupling Efficacy of SNAP-Tag Variants with Oligonucleotides

The in total seven SNAP variants were genetically fused to the dimeric NADPH-regenerating enzyme ICDH from Bacillus subtilis. The fusion proteins were heterologously expressed in Escherichia coli and purified to homogeneity by Ni-NTA affinity chromatography (Figure S3, Supporting Information). For an initial assessment of DNA affinity, we investigated the coupling capabilities of ICDH-SNAP and its variants using a low ratio of 1.3 molar equivalents of 5′-BG-modified oligonucleotide per subunit of the dimeric ICDH. The use of a slight excess (1.3 equiv.) of oligonucleotide was chosen to ensure complete consumption of the SNAP protein even at high coupling rates, as established in our previous work. [7] The kinetic analysis revealed significantly altered coupling activity of all rationally designed SNAP variants toward the BG-modified oligonucleotide, as compared to the conventional ICDH-SNAP fusion protein (Figure 2A,B; see also Figure S4 for kinetic analysis of all variants; see Figure S5, Supporting Information, for stoichiometric analysis of conjugate formation). In contrast, the thermostable tags SsOGT_wt and SsOGT_H5 showed poor performance (Figure S6, Supporting Information). Therefore, they were considered unsuitable for the application envisaged here and were not used for further investigations. Of the rationally designed SNAP variants, ICDH-SNAP_R5 and -SNAP_R5W performed best and resulted in conjugate formation of up to 95%, with the yield of conjugate exceeding 90% after only 10 min of reaction time. To validate these results and verify the applicability of the modified SNAP-tag to other enzymes, the most promising variants, R5 and R5W, were fused to the monomeric NADPH-dependent enzyme Gre2p from Saccharomyces cerevisiae (Figure S3, Supporting Information), a ketoreductase that catalyzes the stereoselective reduction of prochiral ketone derivatives. [20] As shown in Figure 2C,D, almost complete conversion of the 5′-BG-modified oligonucleotide was observed for both Gre2p-SNAP_R5 and -SNAP_R5W, whereas Gre2p-SNAP reached a maximum of only ≈60% conjugate formation. These results thus very clearly confirmed the data obtained with the ICDH variants. The observed increased activity of the R5 and R5W mutants toward BG-modified oligonucleotides is consistent with previous studies, [15,17a] which revealed that these amino acids play a critical role in the substrate (BG) and/or DNA binding affinity of the SNAP-tag.
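Time courses behind statements such as ">90% after only 10 min" can be summarized with a simple saturation fit. The sketch below assumes a pseudo-first-order model y(t) = y_max·(1 − e^(−kt)), which is an illustrative assumption rather than the authors' analysis, and uses hypothetical data points shaped like the reported R5 behaviour; the raw kinetics are in the Supporting Information.

```python
import numpy as np
from scipy.optimize import curve_fit

def saturation(t, y_max, k):
    # Pseudo-first-order conjugate formation: y(t) = y_max * (1 - exp(-k*t))
    return y_max * (1.0 - np.exp(-k * t))

# Hypothetical time course resembling the reported SNAP_R5 behaviour
t = np.array([0.0, 2.0, 5.0, 10.0, 20.0, 40.0, 60.0])    # minutes
y = np.array([0.0, 0.42, 0.70, 0.91, 0.94, 0.95, 0.95])  # fraction conjugate

(y_max, k), _ = curve_fit(saturation, t, y, p0=(1.0, 0.1))
print(f"y_max = {y_max:.2f}, k = {k:.2f} 1/min, "
      f"time to 90% of plateau = {np.log(10)/k:.1f} min")
```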
Immobilization of SNAP-Enzyme Fusions to DNA Origami Nanostructures

Based on the promising results obtained with oligonucleotide conjugation, we then compared the immobilization of ICDH-SNAP_R5, -SNAP_R5W, and conventional ICDH-SNAP on the plane of a quasi-2D rectangular DON that contained three distinguishable BG-modified binding sites (see the schematic illustrations in Figure 3A,C; for details on the DON design, see Figure S7; for functional analysis of the BG-modified staples, see Figure S8, Supporting Information). To enable direct AFM analysis, a low excess of three molar equivalents of the respective fusion protein per binding site was applied. Binding of conventional ICDH-SNAP led to an average occupancy of only 23% (Figure 3A; see Figure S9, Supporting Information, for large-scale images), whereas ICDH-SNAP_R5 resulted in a much higher occupancy of 74% (Figure 3B). This is a tremendous improvement, as occupancies of ≈60% had previously only been achieved with a very large excess of 1000 molar equivalents of the conventional SNAP-tag. [5a] The superior binding properties of SNAP_R5 were also observed when only a small excess of 1.3 molar equivalents was used, resulting in occupancy densities of 37% versus only 7% for ICDH-SNAP_R5 and ICDH-SNAP, respectively (Figure S10A, Supporting Information). It is also important to note that an enzyme excess of five or more equivalents already limited AFM analysis to such an extent that immobilized proteins could no longer be clearly distinguished (Figure S10B, Supporting Information). A similar result was obtained with the Gre2p-SNAP_R5 fusion protein, which led to an occupancy density of 72% (Figure 3C), whereas coupling of conventional Gre2p-SNAP resulted in only 22% surface occupancy (Figure 3D). This clearly indicates that the SNAP_R5 mutant is readily applicable to other monomeric enzymes. Interestingly, a less pronounced increase in coupling efficiency was observed for ICDH-SNAP_R5W (58%, Figure S11, Supporting Information), indicating that the G160W mutation in this variant increases affinity toward the BG ligand but not toward the negatively charged DNA surface. Indeed, it has been reported that the tryptophan side chain is likely located outside of the BG binding pocket and has a stabilizing effect on the protein-BG complex through stacking interactions with small-molecule BG derivatives. [15] This stabilization could be disrupted by unfavorable interactions between the protein and the bulky DON surface. To demonstrate the potential of SNAP_R5 for constructing enzyme cascades on DON, the binding efficiency was first compared to that of a co-immobilized HOB-tag fusion enzyme, using the monomeric Gre2p fused to the HOB-tag.

Figure 3. Immobilization of SNAP-enzyme fusions on DON, incubated with A) 3 equiv. ICDH-SNAP, B) 3 equiv. ICDH-SNAP_R5, C) 3 equiv. Gre2p-SNAP, or D) 3 equiv. Gre2p-SNAP_R5, respectively, per available BG-ligand. The bar diagrams show the average occupancy (Ø) and distribution of DON with n = 0, 1, 2, or 3 proteins, as assessed by AFM analysis after 120 min incubation at 25 °C. Scale bars: 100 nm. Note that ICDH is a dimer. Since only a single dimeric ICDH molecule can bind per binding site due to the steric accessibility of the BG-ligands presented on the DON, the equivalents per ICDH dimer were calculated in these studies. Representative large-scale AFM images are presented in Figure S9, Supporting Information.

To enable AFM analysis as well as accurate determination of enzymatic activity on origami without the interference of unbound enzymes, we employed a bead-assisted purification method of
protein-decorated DON, [6] schematically depicted in Figure 4A. In brief, DONs bearing six distinguishable binding sites, three of which were equipped with BG- and three with CH-ligands, were additionally equipped with three cleavable biotin linkers (for details, see Figure S7, Supporting Information). These constructs were allowed to bind three molar equivalents of each enzyme per binding site (Gre2p-HOB for CH-ligands and ICDH-SNAP or -SNAP_R5 for BG-ligands) for 120 min, and the resulting DON-enzyme constructs were extracted and purified with streptavidin-coated magnetic beads. After cleavage with the reducing agent dithiothreitol (DTT), occupancy densities were determined by AFM analysis. Due to the asymmetric arrangement of the BG- and CH-binding sites on the DON, the respective enzyme can be precisely identified and the average distance between the enzymes can be determined (see Figures S7 and S12A, Supporting Information). Co-immobilization of ICDH-SNAP with Gre2p-HOB led to drastically decreased binding of the conventional SNAP-tag of only 7%, while the Gre2p-HOB fusion protein achieved 85% occupancy on the DON. These data translate to a ratio of 1:12.1 of ICDH-SNAP versus Gre2p-HOB (Figure 4B; for large-scale images, see Figure S12B, Supporting Information). In contrast, a remarkably high assembly efficiency of ICDH-SNAP_R5 of 78% was observed, which was similar to the Gre2p-HOB occupancy of 84% and corresponds to a ratio of 1:1.1 of ICDH-SNAP_R5 versus Gre2p-HOB (Figure 4C). The improved binding properties of the R5 variant were confirmed by electrophoretic analysis of the above coupling reactions (Figure S13, Supporting Information). Hence, these results clearly demonstrate the suitability of SNAP-tag variant R5 for the efficient orthogonal coupling of enzymes to DNA nanostructures.

Construction and Characterization of the Gre2p/ICDH Cascade

Having ensured the efficient construction of the enzyme cascade, we next investigated its functionality. To characterize its catalytic performance, we used the stereoselective reduction of the prochiral substrate 5-nitrononane-2,8-dione 1 (NDK). [20] Gre2p converts NDK 1 with an extraordinarily high stereoselectivity, yielding almost exclusively the (S)-anti-hydroxy ketone 2 (e.r. > 99:1, Figure 5A). The second reduction of the remaining carbonyl group, which would lead to the corresponding (S,S)-configured diol, is not catalyzed by Gre2p under the usual conditions. [20] Regeneration of the essential cofactor NADPH is achieved by means of co-immobilized ICDH through the oxidation of isocitric acid. Activities of the co-assembled enzyme cascades on DON, of non-immobilized (free) enzymes, and of negative controls (NC) were quantified by monitoring the formation of 2 using chiral HPLC (Figure 5B).

Figure 4. Assembly of an enzyme cascade based on Gre2p and ICDH on DNA nanostructures. A) Schematic illustration of the bead-assisted purification to yield pure protein-decorated DON for AFM analysis. B) Construction of the cascade with Gre2p-HOB and ICDH-SNAP or C) ICDH-SNAP_R5, respectively, on DNA nanostructures. 3 equiv. of each enzyme were applied per CH- and BG-ligand, and the resulting protein-DNA nanostructures were bead-purified. Bar diagrams show the average occupancy (Ø) and distribution of DON containing co-immobilized enzymes nHOB = 0, 1, 2, or 3 and nSNAP/R5 = 0, 1, 2, or 3, as assessed by AFM analysis. Scale bars: 100 nm. The molar equivalents refer to complete enzymes, as in Figure 3. Representative large-scale AFM images are presented in Figure S12B, Supporting Information. Note that due to the asymmetric design of binding sites on the DON, exact positions and distances of the enzymes can be determined (see Figure S12A, Supporting Information).
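For readers reproducing the AFM statistics in Figures 3 and 4: assuming the average occupancy Ø is the fraction of the three binding sites per DON that are filled, it follows directly from the counted distribution of DON carrying n = 0-3 proteins. A minimal sketch with hypothetical counts:

```python
# Hypothetical AFM counts of DON carrying n = 0..3 bound enzymes;
# replace with real counts to reproduce an occupancy value.
counts = {0: 9, 1: 28, 2: 71, 3: 92}

n_don = sum(counts.values())
occupied_sites = sum(n * c for n, c in counts.items())
occupancy = occupied_sites / (3 * n_don)   # three binding sites per DON

print(f"average occupancy: {occupancy:.0%} over {n_don} DON")
```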
Both controls with free enzymes (1 molar equivalent, 30 nM each), which contained either ICDH-SNAP (blue bars) or ICDH-SNAP_R5 (red bars) in addition to Gre2p-HOB freely present in solution, showed comparable activity. This result thus allowed for a direct comparison of the ICDH-SNAP and ICDH-SNAP_R5 cascades and showed that both ICDH variants have comparable catalytic activity. For assembly on DON, three molar equivalents of each enzyme were used per binding site, and the resulting DON-enzyme constructs were bead-purified to eliminate influences from remaining unbound enzymes. Since DTT, normally used for reductive cleavage of the constructs (see Figure 4A), interferes with HPLC measurements, the biocatalytic cascade reaction was performed directly with the DON coupled to the beads. To rule out the possibility that the activity of the enzyme cascade is affected by the presence of beads or DON, control experiments were performed, which showed that comparable activities resulted in the presence or absence of beads and DON (Figure S14, Supporting Information). Furthermore, negative controls were performed to exclude possible nonspecific adsorption of the enzymes to DON or beads. For this purpose, DON equipped either with no binding sites (NC1) or only with BG binding sites (NC2) were immobilized on the beads, incubated with the mixture of Gre2p-HOB and ICDH-SNAP_R5, and bead-purified. NC2, in which only ICDH-SNAP_R5 was bound on the beads, also served as a control confirming that, for a successful cascade reaction, both enzymes must indeed be present and co-immobilized to yield the product. As expected, no formation of the hydroxy ketone product 2 was observed in either control (Figure 5B), confirming that all unbound enzymes were successfully removed by bead purification. Directional assembly of Gre2p-HOB and ICDH on DON revealed remarkable differences in activity between the ICDH-SNAP- and ICDH-SNAP_R5-based cascades. When the conventional SNAP-tag was used in the cascade mounted on DON (blue), a much lower formation of product 2 was observed, which can be attributed to the poorer binding properties and the resulting removal of ICDH-SNAP during the washing steps of the bead-based purification. In contrast, the cascade with ICDH-SNAP_R5 resulted in the efficient formation of 2, confirming that this SNAP-tag variant allows for efficient assembly of the enzyme cascade on DON. Interestingly, the directional assembly of Gre2p-HOB and ICDH-SNAP_R5 on DON resulted in ≈60% higher overall activity compared to the enzymes in free solution. To corroborate this result, the enzyme amounts in the samples of free and immobilized Gre2p-HOB and ICDH-SNAP_R5 were additionally quantified by Western blot analysis (Figure S15, Supporting Information), and the results confirmed that the observed increase in activity was indeed due to the directional nanoscale assembly and not to different enzyme amounts. The increase in catalytic activity of enzyme cascades on DNA scaffolds observed here has previously been reported in several studies.
For instance, Hao Yan and coworkers observed an approximately twofold increase in activity for a DON-assembled glucose oxidase (GOx)/horseradish peroxidase (HRP) cascade, in which the enzymes were positioned at distances between 10 and 45 nm; with the help of modeling, this effect was attributed to distance-dependent substrate diffusion. [21] However, Hess and coworkers have shown that proximity alone does not contribute to activity enhancement in the GOx-HRP cascade, [22] which led them to emphasize that substrates in nature are channeled by confinement rather than proximity [23] and to propose design principles for compartmentalized enzyme cascade reactions. [24] We therefore hypothesize that the enhanced cascade activity observed in our study is likely due to effects such as diffusion constraints in the compartmentalized microenvironments generated by the DNA nanostructures on the microbeads. Further detailed studies as well as refined methodological approaches are needed to clarify the mechanistic origin of these phenomena and to exploit them for technical applications. [1d,4,23,25] It seems feasible that the efficient connector system presented here, in combination with robust controls, the bead-based purification, and quantification by AFM and Western blot analysis, points in the right direction to generate the quantitative data needed for this purpose.

Conclusion

In summary, by re-engineering five amino acids of the established SNAP-tag, we created an effective, genetically fusible connector for the site-selective immobilization of enzymes on DNA-based nanostructures. The resulting SNAP_R5 fusion proteins showed up to 11-fold increased coupling efficiency toward BG-modified oligonucleotides as well as DON as compared to the conventional SNAP-tag, leading to typical occupancy densities of 75%. By combining the new SNAP linker with the equally efficient HOB-tag, the orthogonal coupling of the sensitive, stereoselective ketoreductase Gre2p and the NADPH-regenerating ICDH on DON was enabled. Since both linkers exhibit neither cross-reactivity nor non-specific binding, they allowed the orthogonal assembly of an enzyme cascade that exhibited an approximately 1.6-fold higher activity than the corresponding biocatalytic reaction with free solubilized enzymes. Although this result is quantitatively consistent with some other cascades described previously, there is agreement that further detailed studies using robust methodological approaches and tools are needed to elucidate the mechanistic origin of the underlying effects and to exploit them for technical applications. [1d,4,23,25] We believe that the efficient connector system presented here and the methods used for its validation represent an important contribution to the further development of DON-based multi-enzyme systems.

Supporting Information

Supporting Information is available from the Wiley Online Library or from the author.
4,769.8
2021-11-25T00:00:00.000
[ "Biology", "Engineering" ]
Antecedents of E-Marketing of Agriculture Products in This Digital Era: An Empirical Study

Agriculture is the backbone of the Indian economy. The majority of the citizens of this country depend upon the agricultural supply chain for their livelihood. This study shows the role of the workforce in this digital era in the e-marketing of agriculture products. E-marketing platforms (i.e., search engine optimization, affiliate marketing, social media marketing, and e-mail marketing) help digital marketers to track and analyze the dynamic and complex buying behavior of consumers. Structural equation modelling is used to test the framework for the e-marketing of agriculture products. The developed model can enhance the capability of the workforce in this digital era to develop an effective e-marketing strategy for agriculture products.

INTRODUCTION

Agriculture plays a very significant role in strengthening the Indian economy. More than 50% of the people in India are employed in the agriculture sector, and the contribution of the agricultural sector to GDP is also increasing (Veeranjaneyulu, 2014). Agriculture also assists in providing raw materials to industries. Still, good-quality agricultural products are not able to reach consumers due to the underdevelopment of this sector.
The marketing of agricultural products is not the same as the traditional marketing of other products, in which the sole emphasis is on fulfilling demand and satisfying the expectations of consumers (Singla & Sagar, 2012). The e-marketing of agricultural products is related to the e-marketing of the basic need of the people, which is food, and access to food is also a human right. The e-marketing of agricultural products also involves various services like packaging, grading, transport, storage, advertising, and promotion of agricultural products. Therefore, the government must act accordingly so that people can get food. Agricultural products must travel a long distance through the supply chain, which results in decay and rotting before the products reach the marketplace (Wells et al., 2007), and the farmers get almost nothing for their investment. This is the major drawback of the agricultural sector in India. There is a long list of intermediaries in the whole process, which starts right from the production and ends at the consumption of agricultural products. The many intermediaries also increase the price of the products, and the farmers remain the lowest earners in the whole process. At least four intermediaries are involved in the process from the production of agricultural products to their consumption, and they do not add any value to the products (Feldmann & Hamm, 2015). The pricing strategy is not clear and open. At every level of intermediation, the price of the product increases without the knowledge of the producers. The main reasons behind the increase in the transaction cost of agricultural products are various taxes, fees, and licensing systems (Thompson & Scoones, 2009), which in total increase the price of the products, a burden that falls on the pocket of the common man. To curtail this, the Government of India has set up the MSP (Minimum Support Price), which helps the farmers earn at least a fixed price for their agricultural products. The Government of India has also amended 'The Agricultural Products Market Committee Act' to improve transparency in the selling of agricultural products. This Act mainly aims to exclude the mediators between producers and end-users and improve direct sales so that producers can get the best price for their products. Many states have adopted this act in full and some have adopted it partially (GKToday, 2014). Apart from this, producers are now looking to e-commerce portals for the sale of products generated from agriculture. To address all this, new and modern technology, i.e., e-marketing, has been introduced in the agricultural sector as well (Alavion et al., 2017). E-marketing is capturing the market at a very high speed. Digital marketers are trying their best to bring customers to digital portals. The online selling of products is decades old, but the online selling of agricultural products was introduced only a few years back. The online marketing of agricultural products helps in eliminating the drawbacks of the traditional market system, in which consumers had to go out of the comfort of their homes to buy fruits and vegetables from the local market. In the present time, particularly in urban areas, people do not want to go out because there is no time in their busy schedules; yet agriculture products are basic needs and are important for fulfilling people's daily requirements.
So, companies have found a solution to the problem of lack of time (Chaudhary & Suri, 2020). People can now order fruits, vegetables, and other agricultural products with just one click, and the products will reach their doorstep. Companies like Grofers, Big Basket, and Amazon are a few names that provide this facility to their consumers. Apart from this, the e-marketing of agricultural products also helps in reaching a large number of people in very little or no time, which assists in boosting the sales of agricultural products and ultimately helps to make the economy strong. This also helps in providing agricultural products to a large number of people, as demand in India is increasing along with the population (Chauhan, 2014). Digitalization has revolutionized consumer marketing. Digital marketing requires knowledge and practice. Traditional marketers need to enhance their capabilities and knowledge for digital marketing (Herhausen et al., 2020). Interactivity on social websites has a huge impact on brand experience and brand choice, which has a significant impact on customer purchasing intentions (Ye et al., 2019). Another advantage of the e-marketing of agricultural products is that consumers have to pay less for the products. In developing countries, e-businesses are still in a struggling phase and are lacking in sustainable e-marketing implementation (Sheikh et al., 2018). Therefore, the scope of this study is to find out the role of the digital workforce in the sustainable development of e-marketing strategies for agriculture products in developing countries like India. E-marketing is changing the traditional method of international marketing and will continue to change it (Sheth & Sharma, 2005). Symmetric information about online products is essential to increase the level of influence of digital platforms on customers. Product distortion should be reduced for effective digital marketing (Pei & Yan, 2019). Product categorization is required for customized digital marketing. Various sectors are introducing modern business models influenced by digital transformation. Consumers' online purchase behaviour must be understood for e-marketing to be effective. Digital marketing aspects differ depending on the products (Kiang et al., 2011). In comparison to other products, the e-marketing of agriculture products is a little more challenging. Understanding the response of consumers to agriculture product brand promotion is essential (Liao et al., 2020). In the digital age, a workforce with a working knowledge of digital technologies and techniques is needed (Siddoo et al., 2019). Consumers' online buying behaviour has been totally changed by digital and internet-of-things technologies. Consumers are concerned about security while transacting online (Fu et al., 2020). Maintaining customer satisfaction is the biggest challenge in online shopping. A higher level of customer satisfaction requires higher service quality. The privacy, security, and design of digital platforms are essential for superior customer service (Rita et al., 2019). A company has to develop superior services to retain its existing customers. Research is needed to find the impact of e-marketing on agriculture products and also to discuss the challenges faced by the digital workforce in implementing e-marketing strategies.
Agriculture products do not have the same opportunities as other products on digital platforms (Baourakis et al., 2002). This research tries to determine the opportunities for the digital marketing of agriculture products by establishing an effective digital workforce. Hence: RQ1: How do e-marketing platforms influence consumers, and what is the role of digital marketing in the e-marketing of agriculture products in support of the digital workforce? Smartphones, high internet speed, and social media platforms are changing the purchasing behaviour of consumers. Information technology and virtual communications are very crucial tools in the marketing of agriculture products (Alavion & Taghdisi, 2020). Information technology enhances the capability to keep information on agriculture products up to date. Digital marketing accelerates the growth of the e-marketing of agriculture products (Behera et al., 2015). The digital footprints of consumers visiting e-shops are used to understand their behaviour (Gerrikagoitia et al., 2015). To advance the literature, further study is needed to determine the impact of e-marketing platforms on the sales of agriculture products. RQ2: What framework should digital marketers follow for agriculture products? E-marketing has the benefits of market expansion and cost reduction. In e-marketing, the number of intermediaries decreases, which has a significant impact on cost reduction. This has a positive impact on the expansion of e-marketing (Shaltoni & West, 2010). E-marketing makes purchasing easier for consumers, who get the products at their doorstep (Arayesh, 2015). Agricultural information is critical to the development and improvement of Indian farmers (Zhang et al., 2016). Only limited literature is available on the e-marketing of agriculture products. A study is needed to determine the most successful e-marketing approach for agricultural products, which will facilitate the digital workforce. The sections of this study are organized as follows: Section 1 consists of the introductory part of the study. Section 2 consists of the theoretical background. Section 3 presents the hypotheses adopted for the research. Section 4 describes the research methodology. Section 5 presents the data analysis, Section 6 discusses the implications, and Section 7 concludes the study.

THEORETICAL BACKGROUND

As India is a land of agricultural products, agriculture is a main source of earnings and economic development. Therefore, for a developing country like India, it is very essential to embrace and enhance the development of the e-marketing of agricultural products, because it is the need of the hour (Rao et al., 2019). E-marketing will help in promoting the sales volume of agricultural products. A large market segment can be covered. The needs of the people will be fulfilled, as more and more items are needed to meet the demand of the ever-increasing population of the country (Alavion et al., 2017). The gap between the farmer and the consumers will be eliminated by the e-marketing of agricultural products. The e-marketing of agricultural products will also help in enhancing the supply chain of agriculture-related products. With the help of the expansion of channels in the agriculture sector, there will be a well-organized structure for selling agriculture products on a very large scale (Calzolai et al., 2012).
This will help in maintaining a symmetrical channel of information, providing information to both the producer and the consumer of agriculture products, and this will ultimately help in reducing losses. It will help in maximizing profits and minimizing the transaction links and transaction costs incurred in the whole process. The e-marketing channel for agriculture will also help in creating a platform for subsidiary industries like fisheries and horticulture. Overall, this will help in speeding up industrialization in the agriculture sector as well. Thus, the e-marketing of agriculture products helps in fulfilling the needs of the people as well as strengthening the economy (Suhartanto et al., 2019). The e-marketing of agriculture products is being emphasized to nurture the agriculture sector. Noticeable research in the field of the e-marketing of agriculture products has been done on measuring its performance. As the income of consumers increases, the need for the commercialization of agriculture products is also growing. The need is also increasing due to liberalized trade policies and urbanization (Feder & Umani, 1993). The demand for agriculture products is also increasing due to the use of high-end technologies and the organic farming of agriculture products. The e-marketing of agriculture products is also aiding in expanding the agriculture sector. Geographical factors have an impact on behavioural factors. In the e-commerce era, geographical space becomes a catalyst in the changing behaviour of farmers (Alavion & Taghdisi, 2021). The impact of e-commerce on agriculture products is very crucial and important. Agriculture product firms have started developing a tendency to use information technology, especially the internet, to sell their agro-products (Baourakis et al., 2002). The digitalization process has necessitated the development of skills and a digital workforce capable of effectively implementing a digital marketing campaign. A digital workforce is essential for the e-marketing of products and services (Vial et al., 2019). A study is needed to create a model for agriculture product e-marketing that will assist the digital workforce in developing an effective digital marketing strategy.

Affiliate Marketing

Affiliate marketing has presently emerged as an extremely popular technique for digital promotion in the fiercely competitive online marketing of products (Sheth & Sharma, 2005). Rapid technological advancement has provided a platform for the availability, marketing, and advertising of agriculture products that are readily accessible to consumers on the internet (Pentina & Hasty, 2009). Internet marketing has gained popularity due to its various benefits. The most important benefit is targeting a large market segment that spans the globe, whose consumers the marketer can target directly. Affiliate marketing is also cost-effective. It is an online sales technique that increases sales by affiliating a website with another digital platform (Mariussen et al., 2010). According to Spilker & Brettel (2010), marketers engage an external professional who helps in the promotion of the products online. It helps to increase online traffic. Affiliate marketing was first introduced by Amazon in 1996. It is mostly done on a commission basis (Truong & Simmons, 2010). The other websites are paid when a person clicks on the advertiser's advertisement, and the publisher gets paid for that.
This is called PPC, i.e., pay per click. It is one of the most common types of affiliate marketing. Affiliate marketing is also widely used for generating online traffic, which increases the sales of the products (Mariussen et al., 2010). It can be used through various means, like promoting the products on a payment basis; this helps in lead generation, which in turn increases the profits of the business. The conversion ratio of the leads generated can also be increased through affiliate marketing. This is highly used in the sales of agriculture products. In metropolitan cities, people prefer to purchase agricultural products through e-commerce websites (Alavion et al., 2017). H1: Affiliate marketing has a significant impact on the e-marketing of agricultural products.

Social Media Marketing

Social media is used to showcase products and services in the virtual world. Social media marketing is overtaking traditional marketing in the agriculture products sector. In India, many service providers like BSNL are providing various facilities to farmers to encourage them to use social media to increase their sales; for example, the Mahakrishi plan introduced by BSNL. Farmers are more inclined toward using social media platforms to increase the sales volume of their products, as these platforms help overcome the geographical boundaries of the country and are easily accessible. As consumers are spending more time on the internet, it becomes necessary for farmers to have a prominent and effective presence online (Akar & Topçu, 2011). They must learn and adopt the new methods that consumers use for getting information and should act accordingly (Goyal & Eilu, 2019). Creating interaction with the users of the product is the key element of the internet marketing of agriculture products (Baourakis et al., 2002). Social media also helps in creating a platform where people with the same interests interact with each other. It also helps to overcome the barrier of one-way communication, allowing consumers to get more involved in the purchase of products digitally (Mariussen et al., 2010). In the agriculture sector, the use of social media has a positive impact on brand or company awareness, on increasing sales, and on increasing interaction with consumers (Sheth & Sharma, 2005). According to Alavion et al. (2017), social media provides considerable scope for the buying and selling of agriculture products. H2: Social media marketing has a significant impact on the e-marketing of agricultural products.

Search Engine Optimization

Search engine optimization is a new technology that is used by digital marketers to perform in-depth research and establish a strong network marketing structure (Aswani et al., 2018). It is a medium through which users receive fruitful results for their online searches. It helps farmers to increase their income and efficiency. It plays a very crucial role in increasing the ranking of products related to agriculture, which in turn helps in improving the product and brand marketing of agriculture products (Ohe & Kurihara, 2013). Search engines help marketers to advertise the products at the right time to the right person. SEO, when used efficiently to define the outline of the products, provides accessibility to a large number of searchers.
Search engine optimization helps farmers to increase their revenue when the technology is used properly, as it helps in increasing traffic and boosting the ranking of the website (Aswani et al., 2018). A website that provides attractive and informative content attracts searchers and users, which is very important for increasing profits where agriculture products are concerned. Farmers remain underpaid for their products due to the limited use of Information and Communication Technology (ICT). The Government has launched the National Agricultural Market (NAM), which provides an e-marketing platform for agriculture products. This also aims at providing better returns on agriculture products and a transparent environment to increase revenue. The government has also introduced various e-marketing websites for the selling of agriculture products, like KisanMandi.com. This is an initiative taken by the Government of India to promote the e-marketing of agriculture products. H3: Search engine optimization has a significant impact on the e-marketing of agricultural products.

E-Mail Marketing

E-mail marketing is the process of sending bulk commercial e-mails to increase the sale of products. It helps in targeting a large market segment with less effort. It also helps in overcoming the barriers of the traditional pattern of marketing (Hartemo, 2016). A strong supply chain system becomes an additional working hand for farmers, helping them get the right value for their products (Bosona & Gebresenbet, 2013). Earlier, farmers were not able to sell their products, and the products decayed lying in the farms or warehouses. Due to the increase in the purchasing power of the people, the requirement for agricultural products has increased. This was made possible by the use of technologies for the processing and distribution of agricultural products (Ali & Kumar, 2011). E-mail marketing helps farmers to reach those consumers who need agricultural products, allowing farmers to earn revenue by selling their products. Nowadays, various websites provide databases and e-mail lists, which help farmers directly reach consumers in need, and in return, sales volume increases. E-mail marketing helps to identify the needs of the consumer, who can then be provided with personalized offers. H4: E-mail marketing has a significant impact on the e-marketing of agricultural products.

Construct Operationalization

The latent variables were identified through an extensive literature review. The latent variables for the e-marketing of agriculture products are affiliate marketing, social media marketing, search engine optimization, and e-mail marketing. Affiliate marketing is defined as the process of affiliating a website with another website to sell products (Iwashita et al., 2018). The latent variable affiliate marketing can be measured by the observed variables conversion rate, percentage of traffic generation, earnings per click, and return on ad clicks (Constantinides, 2002; Ballestar et al., 2018; Mariussen et al., 2010). Social media marketing is another latent variable that has a significant impact on e-marketing (Michaelidou et al., 2011). Social media marketing refers to the process of gaining online traffic through social media platforms (Balakrishnan et al., 2014). Consumers' positivity toward, or liking of, products on social media platforms can measure the effectiveness of social media (Erdoğmuş & Cicek, 2012).
Interaction, differentiation, and accessibility are the observed variables used to measure social media marketing (Liu et al., 2019; Iankova et al., 2019; Wang, 2017). Search engine optimization is one of the crucial tools for e-marketing. SEO has a significant impact on consumers and can influence their buying behaviour (Skiera et al., 2010). SEO is an extensively used technique for reaching websites (Jansen & Spink, 2006). Keywords, content, and quantity of traffic are used as observed variables to measure the latent variable SEO (Winer, 2009; Xiang & Pan, 2011; Abou Nabout et al., 2012). E-mail marketing is the oldest platform of e-marketing. E-mail marketing is the process of sending commercial messages to potential customers through e-mail (Ellis-Chadwick & Doherty, 2012). The observed variables personalization, target audience, and frequency are used to construct and measure the latent variable e-mail marketing (Ye et al., 2010; Hartemo, 2016; Poon & Swatman, 1999). The marketing mix is the set of controllable variables product, price, place, and promotion used by companies to influence the buying response of consumers (Wongleedee, 2015; Westerbeek & Shilbury, 1999; Constantinides, 2006; Pomering, 2017). The marketing mix variables were taken as constructs for the e-marketing of agriculture products.

Sampling Strategy and Data Collection

This research was based on an empirical survey. Primary data were collected using a structured questionnaire. A sample size of 400 respondents in the Delhi region was considered for this study. The respondents were selected randomly in Delhi city using a simple random sampling technique. Fifteen observed variables were identified from the review of the literature. These variables helped to measure the four antecedents of agricultural products e-marketing: affiliate marketing, social media marketing, search engine optimization, and e-mail marketing. A structured questionnaire was prepared based on the identified variables. Structural equation modelling was used to test the hypotheses.

Instrument Development

The questionnaire was divided into two parts. The first part contains the personal information of the respondents, whereas the second part covers the four antecedents, i.e., affiliate marketing, social media marketing, search engine optimization, and e-mail marketing. The questionnaire was based on the 15 identified observed variables. Each respondent was asked to provide ratings on a 5-point Likert scale, where 1 means strongly disagree and 5 means strongly agree. A pilot survey was conducted to measure the reliability of the variables. The Cronbach's alpha values of the variables were greater than 0.50; hence all the identified variables were considered reliable. To measure the antecedents, a scale was developed. Table 2 presents three demographic characteristics of the respondents. The data were collected from 400 respondents, of whom 47.25% were male and 52.75% were female. About 12.50% of the respondents were aged between 25-35 years, 33.75% between 35-45 years, and 37.25% between 45-55 years, whereas the remaining 16.50% were above 55 years. Based on income, 10% of respondents had incomes between 15000-35000 INR, 23% between 35000-55000 INR, 26.25% between 55000-75000 INR, 27.75% between 75000-95000 INR, and the remaining 13% had incomes above 95000 INR.
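The pilot reliability check is straightforward to reproduce. The sketch below implements the standard Cronbach's alpha formula in Python with NumPy on hypothetical Likert responses; the paper's own computation and data are, of course, its own.

```python
import numpy as np

def cronbach_alpha(items):
    """alpha = k/(k-1) * (1 - sum of item variances / variance of total score),
    for a (respondents x items) matrix of Likert ratings."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var_sum / total_var)

# Hypothetical pilot data: 30 respondents x 3 observed variables of one
# construct (e.g., e-mail marketing), rated 1-5 on a Likert scale.
rng = np.random.default_rng(7)
latent = rng.integers(1, 6, size=(30, 1))
pilot = np.clip(latent + rng.integers(-1, 2, size=(30, 3)), 1, 5)

print(f"Cronbach's alpha = {cronbach_alpha(pilot):.2f}")  # > 0.5 for this toy set
```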
DATA ANALYSIS

Structural equation modeling is a multivariate analysis technique. It is a combination of confirmatory factor analysis, regression analysis, and path analysis. It helps to determine the relationships between latent variables and observed variables. Structural equation modeling combines the measurement model, which is used to test the validity and reliability of the relations between the latent and observed variables, and the structural model, which is used to measure the path strengths and directions between the latent variables (McQuitty, 2004). The measurement model was developed by applying confirmatory factor analysis using Amos 22. Table 3 shows a satisfactory level of reliability and validity. Testing of validity and reliability is essential before developing a structural model. Figure 1 shows the final measurement model of the latent variables. The measurement model shows that the four latent variables affiliate marketing, social media marketing, search engine optimization, and e-mail marketing are measured by 14 observed variables. Table 3 shows that the Cronbach's alpha of each construct is greater than 0.7, which establishes the reliability of the constructs. The validity of the constructs was assessed using discriminant validity and convergent validity. Convergent validity represents the proportion of variance shared by the indicators of a construct; factor loadings are used to assess it. The regression weights are significant, showing that the observed variables are significant and represent the latent variables. All factor loadings of the observed variables are greater than 0.5, which shows that the observed variables can measure the latent variables. This establishes the convergent validity of the constructs. Discriminant validity determines how distinct a construct is from the others. There are two criteria for discriminant validity: first, the correlations between the constructs should not be very high; second, the variance extracted by an individual construct should be higher than the variance it shares with the other constructs.

Measurement Model

The model fit was tested using different fit indices: the goodness-of-fit index (GFI), comparative fit index (CFI), Tucker-Lewis index (TLI), normed fit index (NFI), and root mean square error of approximation (RMSEA). For a model to fit well, the normed chi-square (χ²/df) should be less than 3, the RMSEA should be less than 0.08, and the values of CFI, GFI, TLI, and NFI should be greater than 0.90. Table 4 shows that the value of chi-square is 2.245, the RMSEA is 0.047, and the GFI, CFI, TLI, and NFI are 0.98, 0.95, 0.91, and 0.96, respectively. Table 4 thus shows that the measurement model had a good fit, so we can proceed to the structural model.

Structural Model

Structural equation modeling was used to test the hypotheses of the conceptual model. Table 5 shows that the value of chi-square is 2.134 and the RMSEA is 0.049. The values of GFI, CFI, TLI, and NFI are 0.92, 0.94, 0.93, and 0.92, respectively. Since all the index values indicate good fit, we can move forward with further analysis. Table 6 presents the β values, critical ratios, standard errors, and the results of the hypothesis tests. The significance level for hypothesis testing is 0.05. The value of R² represents the coefficient of determination, which measures the strength of the model. The obtained value of R² is 0.72, which shows that the four constructs explain 72% of the variation in agricultural products e-marketing.
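The paper fits its models in Amos 22; for readers without Amos, an equivalent open-source workflow exists in Python. The sketch below assumes the semopy package and lavaan-style model syntax; all variable names and the CSV file are illustrative placeholders, not the paper's actual item codes or data.

```python
import pandas as pd
import semopy

# Illustrative specification: four exogenous latent antecedents measured by
# placeholder indicators, and a structural regression onto e-marketing (EM).
desc = """
AM  =~ am1 + am2 + am3
SMM =~ sm1 + sm2 + sm3
SEO =~ se1 + se2 + se3
EMM =~ em1 + em2 + em3
EM  =~ mx1 + mx2 + mx3
EM  ~ AM + SMM + SEO + EMM
"""

data = pd.read_csv("survey_responses.csv")  # hypothetical 400-row item file

model = semopy.Model(desc)
model.fit(data)
print(model.inspect())           # path estimates (beta), SEs, p-values
print(semopy.calc_stats(model))  # chi2, df, GFI, CFI, TLI, NFI, RMSEA, etc.
```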
Table 6 also presents the hypothesis testing. The value of β shows the relative importance of each construct for agricultural product e-marketing.

Figure 2. Measurement model

The most important factor of agricultural product e-marketing is social media marketing (β = 0.61, p < 0.05). Hence, H2, which states that social media marketing has a significant impact on the e-marketing of agricultural products, is supported. The second most important factor is search engine optimization (β = 0.43, p < 0.05), so H3, which states that search engine optimization has a significant impact on agricultural product e-marketing, is also supported. The third and fourth factors are e-mail marketing (β = 0.34, p < 0.05) and affiliate marketing (β = 0.23, p < 0.05); hence, H4 and H1 are also supported.

Theoretical Implications

Consumer tastes and preferences are dynamic. Consumers' buying behavior patterns are changing, and consumers increasingly prefer to purchase products through e-commerce websites (Kim & Ko, 2012). Farmers should therefore focus on digitalization to sell their agricultural products. In this research, a framework for the e-marketing of agricultural products has been developed (Feldmann & Hamm, 2015). The rural economy has a significant positive impact on the behavioural variables of agro-product consumers (Alavion & Taghdisi, 2021). Agricultural products benefit greatly from the use of digital marketplaces, and digital platforms enable customers to carry out comparative analysis of agricultural products (Anshari et al., 2019). This study further explored the potential of e-marketing platforms for the sales and promotion of agricultural products. The measurement model established the reliability and validity of the constructs (Baumgartner & Homburg, 1996). Farmers benefit from digital marketing since it expands their prospects and increases their income; digital marketing also gives marketers access to international markets for agricultural products (Bowen & Morris, 2019). The digitalization process is transforming traditional agricultural processes and creating new market opportunities for agricultural products (Rijswijk et al., 2019). According to the findings, digital marketing is critical for agricultural products. Four important factors were identified for the digital marketing of agricultural products: all four antecedents, namely affiliate marketing, social media marketing, search engine optimization, and e-mail marketing, were found to have a significant relationship with agricultural product e-marketing. The most important antecedent is social media marketing (β = 0.61, p < 0.05). Among all the digital platforms, social media marketing is the most effective at influencing consumers to purchase agricultural products, so agricultural product sellers would do well to use social media marketing for selling their products. The second most important antecedent is search engine optimization (β = 0.43, p < 0.05), followed by e-mail marketing (β = 0.34, p < 0.05) and affiliate marketing (β = 0.23, p < 0.05). The workforce demand has drastically changed as a result of the digital transition, and employees must be digitally trained to meet organizational goals (Siddoo et al., 2019).
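The β weights and R² reported above were estimated with SEM in Amos 22. As a rough, simplified analogue (not the authors' procedure), standardized multiple regression yields comparable beta weights and an R². The sketch below simulates hypothetical construct scores whose effect ordering mirrors the reported results; the data frame `df` and all its values are illustrative assumptions.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 300  # hypothetical sample size

# Hypothetical construct scores (e.g., averaged Likert items per construct)
df = pd.DataFrame({
    "affiliate": rng.normal(size=n),
    "social":    rng.normal(size=n),
    "seo":       rng.normal(size=n),
    "email":     rng.normal(size=n),
})
# Simulated outcome whose weights mirror the reported effect ordering
df["emarketing"] = (0.23 * df["affiliate"] + 0.61 * df["social"]
                    + 0.43 * df["seo"] + 0.34 * df["email"]
                    + rng.normal(scale=0.6, size=n))

# Standardize so the coefficients are comparable beta weights
z = (df - df.mean()) / df.std(ddof=0)
X = sm.add_constant(z[["affiliate", "social", "seo", "email"]])
model = sm.OLS(z["emarketing"], X).fit()

print(model.params.round(2))         # standardized betas
print(f"R^2 = {model.rsquared:.2f}")  # coefficient of determination
```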
This study proposes a framework for agricultural product e-marketing that will assist the digital workforce in developing an integrated and profitable e-marketing strategy for agricultural products.

Managerial Implications

Agricultural product e-marketing is a new area for research, and very little research has been conducted in this field. This research makes a significant contribution by identifying the antecedents of agricultural product e-marketing, which enables the digital workforce to design an effective digital marketing strategy. The relationships between agricultural product e-marketing and its antecedents have also been measured. The outcome of this study will be useful for the digital workforce in selling agricultural products through different digital platforms. It will also benefit consumers, since purchasing through digital platforms is hassle-free and delivers agricultural products to their doorstep. The government is focusing on and working towards the digitalization of farming. In the present digitalized world, e-marketing could become an effective tool for selling agricultural products: consumers can purchase agro-products from home and have them delivered to their doorstep. This study helps marketers evaluate the different e-marketing platforms for the sale of agricultural products. E-marketing platforms, i.e., social media marketing, search engine optimization, affiliate marketing, and e-mail marketing, generate big data consisting of information about consumers; these platforms generate billions of data points daily. This big data helps marketers and decision-makers analyse consumers' buying behaviour patterns. E-marketing of agricultural products is more convenient and continuously attracts consumers towards digital platforms. The output of this research helps the digital workforce make consumers' shopping more comfortable. In traditional marketing it is difficult to trace consumers, but in e-marketing marketers can trace them easily, so the study helps to monitor and evaluate the dynamic buying behaviour of consumers. The findings present several implications for achieving excellence in understanding consumer behaviour: the digital workforce can attain excellence in e-marketing by analysing consumer behaviour on e-marketing platforms. The role of the digital workforce is challenging, as they are responsible for increasing the sales volume of agricultural products through digital platforms.

CONCLUSION

The findings of this research demonstrate that digital marketing of agricultural products is a pressing requirement. Consumers in the digitalized world prefer not to go to a physical store; instead, they purchase agricultural products with a few clicks on a laptop or smartphone. In this study, we provide evidence of the relationships between social media marketing, search engine optimization, affiliate marketing, e-mail marketing, and the e-marketing of agricultural products. Our results show that these e-marketing platforms have a significant positive impact on agricultural product e-marketing. E-marketing is revolutionizing the marketing of agricultural products: it can track consumers and analyse their buying behaviour, and the digital workforce can use e-marketing platforms to influence consumers to purchase agricultural products. E-marketing of agricultural products is beneficial and convenient for both seller and buyer.
Digital marketing of agricultural products provides opportunities for young people, especially those living in rural areas, and there is huge potential for digital marketing in rural India. Digital marketing of agricultural products is also providing a new business model for e-commerce companies. The outcome of the research supports the developed model and hypotheses. The study helps in understanding the effects of e-marketing on consumers' buying behaviour. Considerable attention has been given to developing the research methodology, data collection, and data analysis, and the study contributes significantly to the accumulated knowledge on e-marketing in the sales and promotion of agricultural products. Digital marketing is a booming industry, and e-commerce companies are showing enthusiasm for it as an emerging tool for selling products more smoothly. The outcome of this study helps the digital workforce adopt and implement the e-marketing strategy for agricultural products suggested in this research. To summarise, the internet has accelerated the rate of growth and offered businesses unparalleled growth potential. With the advent of digital marketing, businesses can now cater to the requirements of a bigger client base in a shorter amount of time. Although there are numerous growth prospects that can significantly improve a company's earnings, it is critical that business organizations focus on meeting the requirements of their consumers rather than simply maximising profits.

Contribution to Theory

An extensive literature review has been done on e-marketing, but there is a lack of studies on the e-marketing of agricultural products. This study measures the impact of e-marketing on agricultural products, using structural equation modeling as a multivariate tool to measure the impact of e-marketing on the sales and promotion of agricultural products. The marketing mix has four important elements, i.e., product, price, place, and promotion, and these elements were used to measure the latent variable of agricultural product e-marketing. The outcome of the study enhances the capability of the digital workforce to design an effective e-marketing strategy.

Limitation and Future Scope of the Study

Like other research, this study has some limitations, which can be explored further and could form the future scope of research. E-marketing of agricultural products is still in its growth phase, and this study considers only four e-marketing platforms, i.e., social media marketing, affiliate marketing, search engine optimization, and e-mail marketing. Exploring additional e-marketing platforms for agricultural products, such as gamification marketing, could be the future scope of the study.
8,673
2022-06-15T00:00:00.000
[ "Agricultural and Food Sciences", "Economics", "Business" ]
A SIMPLE APPROACH TO TEACH NEWTON'S THIRD LAW

The results of previous research indicated that there were problems with students' mental models and conceptual understanding of the action-reaction law (Newton's third law, NTL). This research aimed to reveal the effect of a simple approach to teaching NTL. The research was conducted with first-year pre-service physics teachers at the Physics Education Department of Tadulako University. The research designs for three consecutive years were (1) a one-group, pre-test and post-test design, (2) a static group comparison (with a pre-test for the experimental group), and (3) a quasi-experiment. The approach used was an interactive demonstration consisting of five phases, i.e., eliciting an intuitive argument, demonstrating a continuous force: pulling, demonstrating a continuous force: pushing, demonstrating an impulsive force: collisions, and refining the concept with Elby's pair. Data were collected using a multiple-choice test developed in previous research. The results of the data analyses showed that the approach could improve students' understanding of the action-reaction law, supporting conceptual change by exhibiting N-gain in the moderate and high categories. The instructional design can be considered for implementation in high school learning, in lectures for pre-service physics teachers, and in basic physics lectures in general. © 2020 Science Education Study Program FMIPA UNNES Semarang

INTRODUCTION

A review of university fundamental physics textbooks, such as Tipler & Mosca (2007), Halliday et al. (2013), and Ling et al. (2016), and of high school physics books such as Kanginan (2013) and Handayani & Damari (2009), shows that no examples of NTL are provided in the context of impulsive force. The law is typically taught with reference to a continuously applied force (e.g., a student pulls on a fixed rope; a book rests on a table). Using such examples, the authors engage readers in considering the magnitude and orientation of the action-reaction force pair, and this consideration is used to explain NTL. The presentation of examples in textbooks is dominated by the case of continuous force, which can be thought to affect the understanding of students and physics teachers of NTL in impulsive force cases. This is supported by the research of Mansyur et al. (2010), which showed that 8 high school students, 13 pre-service physics teacher students, 7 physics teachers, and 4 master's program students (who were also physics teachers) were only familiar with examples of continuous forces and had difficulty solving problems for the impulsive force case. None of them had a correct mental model associated with NTL for the impulsive force case; they had contradictory arguments and inadequate lines of reasoning when explaining the action-reaction law for the case, and some of them did not agree that NTL can also be applied to it. Common examples of impulsive force interactions conceptually examine the collision of two objects where one has a much larger mass (e.g., a car crashes into a truck, an apple falls to the ground, etc.). When students are asked about the magnitude of the forces involved (e.g., "Which object experiences more force?" or "Which object exerts a stronger force on the other?"), they generally refer to mass, velocity, size, or a combination of the mass and velocity of the objects. Responses stating that "the faster object" or "the more massive object" exerts a greater force on the other are common.
These embedded misconceptions challenge the teacher or lecturer in laying down a correct conceptual framework for Newton's laws. According to a widely held constructivist view, this is associated with the idea that students enter physics classes with a set of concepts about how the physical world works that are often contradictory to canonical scientific understanding (Sharma et al., 2010), which poses a challenge for educators (Brown et al., 2018), since many of the concepts are controversial or counter-intuitive (Nadelson et al., 2018). At the practical level, students' understanding of NTL is often difficult to develop (Terry & Jones, 1986). Typically, examples of this concept are provided by reviewing contact forces closely related to students' daily experiences. Although this is beneficial in general, reaction forces can sometimes be taken for granted, and students can lose the opportunity to truly think about what is happening. For magnetic forces, however, determining the force applied at a distance requires a careful inspection of the forces involved and a more detailed analysis of the situation. Previous research highlights apparent failures of the validity of NTL related to moving charged particles (Kneubil, 2016). Other research found that junior secondary school, senior high school, and university students experience difficulty when distinguishing between interaction and balanced forces (Zhou et al., 2015). In Feldman's (2011) work, a simple demonstration of NTL is presented in the context of a magnet falling through a hollow conducting tube; the results are unambiguous and lead students to an irrefutable verification of NTL. Zhou et al. (2015) classified various situations related to NTL into two groups: a static group and a dynamic group. In the static group, bodies in contact are considered, while the dynamic group focuses on students' levels of understanding of cases in which bodies are moving. A popular issue related to NTL concerns students' misconceptions (Low & Wilson, 2017) and difficulties with comprehending coarse quantitative aspects of interaction forces, which are always equal in magnitude (Zhou et al., 2015). Regarding teaching, Savinainen et al. (2012) investigated the use of interaction diagrams in fostering students' understanding of NTL. Smith & Wittmann (2008) investigated ways of teaching NTL based on three tutorial materials taken from other researchers. Their study examined three tutorials designed to improve student understanding of NTL: Tutorials in Introductory Physics (TIP), Activity-Based Tutorials (ABT), and the Open Source Tutorials (OST). Each tutorial is designed with a certain purpose and agenda and is implemented using different methods to help students understand physics. Using the Force and Motion Conceptual Evaluation (FMCE) (Thornton & Sokoloff, 1998) in lectures, the authors found that students using the OST version of the tutorial performed better than those using the other two methods. The response to phenomena related to NTL is influenced by existing knowledge, and such a response is a facet of knowledge. A facet is closely related to a specific context and is less involved than the p-prim in terms of its underlying properties. A facet may apply several concepts related to ways of representing an individual's understanding of a situation; it can be a piece of generic knowledge, a specific context of reasoning, or a specific strategy (Galili & Hazan, 2000).
An example of a generic piece of knowledge is the expression "more means more". Other examples presented in previous studies on NTL draw on mental models (Smith & Wittmann, 2008; Bao et al., 2002). A situation involving an object (mass M) moving at a certain velocity and colliding with another object (mass m, m < M) can involve abstract primitive reasoning whereby the 'greater agent' has a 'greater effect' (Mansyur et al., 2014). When 'the agent' is mapped to 'mass' while 'effect' is mapped to 'force', a facet results: 'the massive object exerts the larger force during the collision'. This reflects incorrect mapping. When 'the agent' is mapped to 'mass' while 'effect' is mapped to 'the change in velocity', the following facet results: 'the massive object creates the greater change of velocity'. This is an example of correct mapping. From this example, it can be stated that reasoning can be mapped as either an incorrect or a correct facet. Primitive reasoning (e.g., 'the lower-mass car reacts more during a collision') was defined as raw intuition that can be refined into two forms (Elby, 2001), one of which may lead to an incorrect implication. In a previous study (Zollman, 1994), students attributed the greater velocity change to the lower-mass car. If force serves as the reference, this implies an inappropriate use of NTL; when the change in velocity (acceleration) is used as the reference, NTL is satisfied. As explained earlier, it was easy for the students and teachers to understand the law when the example involved a continuous force, but they had difficulty solving NTL problems associated with impulsive forces. Studies are therefore needed that can bridge the presentation habits of textbooks and learning practices. Based on the research findings and recommendations of Zollman (1994), Redish (1994), and Bao et al. (2002) related to cognitive science research for effective teaching, Mansyur et al. (2010) proposed a hypothetical approach for teaching NTL that involves the impulsive force. The proposed approach was tried out in an open lecture with an interactive demonstration lecture (IDL) attended by 13 master's program science students (physics majors) of Tadulako University and by junior and senior high school physics teachers in Palu City. The lecture was part of the development process of the approach. At the end of the lecture, the participants discussed the advantages and weaknesses of the approach and were asked to give suggestions for enhancing its structural quality. The approach has a simple structure with five phases and uses simple equipment, i.e., ropes, springs, and masses. From the discussion above, our research questions are: how does the approach support the teaching of NTL, and can the approach's design support conceptual understanding of NTL?

Population and Sample

The research was conducted on first-year students at the Physics Education Department, Tadulako University, over three years. The students had different backgrounds and were predominantly from high schools in Central Sulawesi. These students were heterogeneously distributed into three classes in each year. A description of the populations, samples, and sampling method is presented in Table 1. The samples were selected purposively from the classes of first-year students who were taking the Basic Physics I course. The lecturer in the selected classes was the first author.
The lecturer taught these classes based on an assignment and schedule from the department, so he did not require special permission to carry out the research. The lecturer was involved in the teaching of the experimental group only; in the control group, the teaching activities were handled by another lecturer. The instructional design for the control group followed a regular lecture, which we categorize as a conventional lecture. In addition to the experiments, we also conducted reflective teaching to enhance and mature the structural quality of the approach.

Experimental Design

In the first year, we conducted a pre-experiment on one group using a pre-test and post-test (an O X O design). The treatment involved IDL applying the instructional design described in the next section. We identified how the approach can support the improvement of students' conceptual understanding. The experimental design was applied to one class (Figure 1), and the learning effects were observed from the improvements in the students' levels of understanding. The pretest and posttest used the same instrument. The experimental design of the second year involved two groups, an experimental group and a control group. We applied the Static Group Comparison Design, with an added pretest for the experimental group; the purpose of this pretest was to obtain data for calculating N-gain. The design was limited, however, in that it did not involve testing the equivalence of the two groups before the experiment was carried out. Conclusions were drawn by comparing the performance of each group to determine the effect of the treatment on one group, namely the experimental group. The experimental design adopted is presented in Figure 2 (Static Group Comparison). The third year of this research applied a quasi-experimental (non-equivalent groups) design (Creswell & Creswell, 2017); the experimental design used is presented in Figure 3. In this experiment, the two groups are considered non-equivalent because group membership was not randomly assigned; the researchers took both groups as distributed by the department. Before applying the treatment, the two groups completed the pre-test; the treatment was then applied to the experimental group, while conventional learning, following the instructional guidelines, was applied to the control group.

Instructional Design

An approach design for a lecture must be prepared to construct students' conceptual understanding of NTL. We applied the design in a small-scale introductory course provided at Tadulako University over three years. The use of simple equipment in all phases is an advantage of the teaching: constraints related to the availability of equipment for effective teaching can be addressed by choosing simple equipment. A general description of the phases of the instruction for the experimental group is presented in the following; the instructional design for the control group used a conventional lecture.

Phase-1: Eliciting an Intuitive Argument. In this phase, students are asked to answer a question intuitively. A problem (e.g., the R-FCI problem) (Hestenes et al., 1992) is presented on an LCD projector.

Phase-2: Demonstrating a Continuous Force: Pulling. This phase was used to facilitate discussion of the continuous force.
The previous study showed that students are very familiar with continuous forces (Bao et al., 2002) and with identifying action-reaction force pairs.

Phase-3: Demonstrating a Continuous Force: Pushing. This phase was applied to show and discuss the continuous force, and students could identify force pairs.

Phase-4: Demonstrating an Impulsive Force: Collisions. In this phase, the lecturer facilitated the students in demonstrating a collision of two objects.

Phase-5: Refining the Concept with Elby's Pair. To refine the concepts from the previous phases, the lecturer introduced Elby's pair (Elby, 2001). The detailed steps for each phase are presented in the Appendix.

Data Collection and Instrument

Data collection was carried out through testing, with the same test applied for the pre-test and post-test. Data were collected over three academic years. The students' levels of conceptual achievement were measured using a test of 30 items on NTL (Mansyur et al., 2014). The test comprised multiple-choice items focused on five central force contexts: gravitation, electrostatics, magnetics, pushing, and crashing (impulsive force). The test items were designed using various representations (i.e., verbal, diagram/vectorial, and graphical), and the test was constructed through a development and validation process. A summary of the analysis of the test items and the test as a whole is presented in Mansyur et al. (2014), based on criteria (desired values) from Nieminen et al. (2010). The test has limitations related to its scope and construction: although the results of our item and whole-test analyses illustrate the appropriateness of the items and test for data collection, the test is limited by the scope of the examined concept. The results in Mansyur et al. (2014) found the overall statements of correct answers to the test items to be similar; for instance, a respondent referring to NTL, or to the general statement that "a force involving A acting on B is equal to a force involving B acting on A" for all test items, could have completed the test with mostly correct answers.

Data Analysis

Data from the experiments were analyzed quantitatively and descriptively using the pretest, posttest, and normalized gain (N-gain) according to Hake (1999). We also carried out a qualitative analysis of data on conceptual changes and an analysis of the advantages and weaknesses of the approach. During the research, the researchers applied several controls to ensure consistency and avoid bias: (i) both lecture versions used the same conceptual content; (ii) because the two groups were handled by different lecturers, the lecturers always coordinated and communicated about the lecture scenario; (iii) the duration for both classes was the same and followed the regular lecture schedule; and (iv) in both groups, students were asked to put away all materials that could distract attention (mobile phones, paper, etc.), both during the lectures and while taking the tests.

Result for the First Year

Our analysis of the pretest and posttest scores generated the average N-gain for the first-year pre-experimental design presented in Figure 4. Figure 4 shows an increase from the pre-test to the post-test with a high N-gain, illustrating that learning activities applying the instructional design described above can effectively support the learning of the studied concepts.
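The normalized gain used throughout these results follows Hake (1999): g = (post − pre) / (100 − pre) for scores on a 0-100 scale, with g < 0.3 low, 0.3 ≤ g < 0.7 moderate, and g ≥ 0.7 high. Below is a minimal Python sketch of this computation; the sample score arrays are hypothetical, not the study's data.

```python
import numpy as np

def n_gain(pre: np.ndarray, post: np.ndarray) -> float:
    """Average normalized gain per Hake (1999), from class-average
    pre- and post-test scores on a 0-100 scale."""
    pre_mean, post_mean = pre.mean(), post.mean()
    return (post_mean - pre_mean) / (100.0 - pre_mean)

def gain_category(g: float) -> str:
    """Hake's conventional interpretation bands."""
    if g >= 0.7:
        return "high"
    if g >= 0.3:
        return "moderate"
    return "low"

# Hypothetical class scores (percent correct on the 30-item NTL test)
pre = np.array([20.0, 33.3, 26.7, 40.0, 30.0])
post = np.array([76.7, 83.3, 70.0, 90.0, 80.0])

g = n_gain(pre, post)
print(f"average N-gain = {g:.2f} ({gain_category(g)})")
```

Applied to the yearly averages reported below (71.15%, 62.99%, 68.98%), this categorization yields the moderate-to-high gains the authors describe.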
Result for the Second Year

An analysis of the results of the tests conducted in the second year with the static group comparison design (with the additional pretest applied to the experimental group) is presented in Figure 5. Figure 5 shows a moderate improvement in performance (N-gain) for the experimental group. The difference in the mean posttest results of the two groups also shows that the experimental group (moderate category) is superior to the control group (low category); the posttest results illustrate the advantages of the applied instructional design relative to the conventional design.

Result for the Third Year

The results of applying the quasi-experimental design, with the above instructional design as the intervention for the experimental group and traditional methods for the control group, are presented in Figure 6. The pretests of both groups generated similar results, while striking differences were observed in the post-test; thus, the intervention effectively improved the students' levels of understanding. The striking differences in N-gain values qualitatively confirm the influence of the intervention on the experimental group. From the N-gain values observed from year to year for the experimental group, we observe slight fluctuations: average N-gain values were recorded as 71.15%, 62.99%, and 68.98%, indicating that moderate to high gains occurred. The N-gain data show that the interventions involving interactive demonstration learning and the five-phase design had a strong influence on the students' conceptual understanding. The learning structure and design focused on developing intuitive arguments (Phase-1: Eliciting an Intuitive Argument) by presenting a case that encouraged the students to participate in the lecture. This fits the view of Redish (1994) and Miller et al. (2013) that involving students' capital (resources), even when simplistic, helps them realize their potential. Students entering a class are not seen as empty vessels that are merely ready to be filled (DiSessa & Sherin, 1998); their intuitions form part of the content already in the vessel, which can be added to or arranged together with new content. In this case, the learning steps accommodated the intuitions of some students so that these could be integrated with new knowledge and become adequate knowledge useful for solving problems. The process of turning intuitive knowledge into conceptual knowledge, through construction from the intuition-activation stage to the refining phase, was successfully applied to improve learning outcomes. All the teaching phases covered (pulling, pushing, and impulsive force), with stages ranging from simple to more complex accompanied by refinement as the final phase, are key to the success of the learning design examined in this research. The dynamic involvement of the instructor through the above learning structure is an integral part of that success. This is consistent with the constructivist view suggesting that students are engaged through interactivity (Sharma et al., 2010). While learning, students could respond by asking questions or by presenting their opinions to the instructor directly. Some of the students helped demonstrate certain concepts while the instructor asked questions of the demonstrator and the other students; they could feel the force of 'resistance' (a reaction) upon pulling the rope, one end of which was tied to a window.
The findings of this research regarding the contributions of demonstrations to learning reinforce the finding that demonstration is a remarkably simple but strikingly effective approach requiring the use of specialized equipment (Feldman, 2011). Using the apparatus to study impulsive forces was very helpful for the students in observing the effects of the collision of two objects through changes in the lengths of the springs, which served as indicators of the force acting on both objects. The final phase, reviewing Elby's pair while addressing the case of impulsive forces, supported the refinement of the conceptualization of the action-reaction law. As examples of the intervention's effects on the students' conceptual understanding, we chose the test results for two problems related to impulsive forces (Problems 15 and 30; Mansyur et al., 2014), displayed in Figure 7:

Problem 15. A carpenter hits a nail (on a wooden bar) using a hammer. The diagram below describes the forces involved when the hammer hits the nail, which are...

Problem 30. Udin and Ical each throw a ball. Ical's ball is larger than Udin's, and Ical throws his ball faster than Udin. The balls knock into each other in the air. The diagram below illustrates the forces involved during the collision, which are...

What is interesting about the findings of this study is that although the demonstrations only focused on the forces involved when pushing an object or pulling on a rope, on the impulsive force imposed on an object, and on the collision of two objects, the students could extend the application of their knowledge to forces in other contexts: they could also solve problems involving electrostatic, magnetic, and gravitational forces. It can thus be concluded that students can effectively apply what is learned to new isomorphic problems; in other words, learned information was transferred from one context to another. This application of knowledge can be understood through Lobato's model (Lobato, 2003) of "actor-oriented transfer" (AOT), which defines transfer of learning as the "personal construction of similarities" between two contexts; the model focuses on how these "actors" (or learners) see two contexts as the same (Cui et al., 2006). Conceptual change does not just happen: support from the environment and learning systems is indispensable to the success achieved. A learning structure proceeding from a review of pulling forces to forces acting impulsively determines the conceptual change that occurs. Demonstrations of a simple case are followed by an examination of more complex cases, contributing to the average N-gain achieved in this research. The students did not merely observe the change in the lengths of the two springs when the lecturer demonstrated the pair of action-reaction forces acting on the rope; they were also invited to review several pairs of actions and reactions acting on the rope, from the pair of forces acting on a point on the wall to the pair of forces exerted on the demonstrator's hand. By asking probing questions and assisting while having the students watch what happened during the demonstration, the instructor helped the students develop their knowledge base. This confirms the finding of Kestin et al. (2020) that IDL can help students understand the underlying phenomena and concepts by asking them to predict the outcome and then discuss their predictions with each other. An initially inappropriate conception can thereby be converted into an appropriate one.
The results of this study show that a demonstration built around "refining raw intuitions" can improve students' understanding, as reflected by the N-gain (in the first year) and by learning outcomes superior to those of traditional learning (the second-year posttest comparison and the third-year N-gain). The striking differences in N-gain values observed for the two groups (in the third year) confirm the benefits of learning through the interactive demonstration over conventional lectures. The proposed instructional design can also mitigate the action-dependent facet whereby one object exerts a force while the other object is merely subjected to that force (Smith & Wittmann, 2008). Figure 8 shows the shift in the distribution of answers given from the pretest to the posttest. The figure shows that the percentage of students selecting the correct answer increased, with N-gain values of 58.82% and 48.57% for the two problems. Even though the proportion decreased, many students still misunderstood the case of the collision of two objects: for Problem 15, the proportion fell only from 35.14% to 24.32%. These students believed that an object of larger mass/size crashing into a resting object exerts a greater force than the force exerted in return by the resting object of smaller mass/size. A larger proportion (falling from 64.86% to 45.95% for Problem 30) held this view when an object that is larger and faster collides with an object of smaller size and speed. This illustrates that context features, including the mass/size and speed of the objects, affect the students' conceptions, a situation that cannot be completely overcome by the approach. This is in line with the findings that conceptual change associated with NTL should be viewed in conjunction with changes in the students' overall understanding of the notion of force (Terry & Jones, 1986) and with context features (Bao et al., 2002). The variance (qualitatively) in the students' answers on the pretest was markedly more pronounced than on the posttest; thus, the interactive demonstration successfully reduced the variance in the students' understanding of the collision of two objects. The learning process helped the students develop an accurate conceptual understanding of the action-reaction forces for the studied case. An overview of Elby's pair can be used to illustrate the implications of two paths of reasoning: the first path does not meet the conditions of NTL, while the second implies fulfillment of the law. This shows that even though there is still a weakness related to variations of context in the demonstration process, the learning design structure supports the achievement of the learning objectives in general.

CONCLUSION

We applied an approach with interactive demonstration learning in which students negotiated their understanding and their raw intuition. The phases of this approach helped students build their understanding by progressing from simple cases, of the kind generally exemplified in textbooks, to more complex NTL cases. The review of Elby's pair in the final phase is an integral and crucial part of this approach for refining the raw intuition that leads to an appropriate understanding of NTL. The approach improved the students' understanding of action-reaction forces, supporting conceptual change and exhibiting average normalized gains in the moderate to high categories. The design can be considered for implementation in high school, in introductory physics courses, and in physics teacher preparation programs.
INSTRUCTIONAL DESIGN FOR THE EXPERIMENTAL GROUP

Procedure

a. Phase-1: Eliciting the Intuitive Argument

In this stage, students are asked to answer a question intuitively. A problem (e.g., the R-FCI problem) is presented on an LCD projector. In the figure below, student "a" has a mass of 95 kg and student "b" has a mass of 77 kg. They sit in identical office chairs facing each other. Student "a" places his bare feet on the knees of student "b", as shown. Student "a" then suddenly pushes outward with his feet, causing both chairs to move. During the push and while the students are still touching one another, which student feels the greater force?

Figure A. An R-FCI Case (Hestenes et al., 1992)

Count the percentage of students providing each type of answer and ask the students to explain their choices. Do not discuss their arguments; continue to the next phase.

b. Phase-2: Demonstrating the Continuous Force: Pulling

The previous study showed that students are very familiar with continuous forces and with identifying action-reaction pairs. This phase is used to set up the later discussion of the impulsive force. Procedure:
1. Tie a rope to a wall.
2. Ask the students to predict the interaction forces between the rope and the wall by asking questions such as: "What would happen if we pulled on (applied force to) the rope? How about the wall?" The common answer (potentially due to high school experience): "The wall would exert a force (reaction force) onto the rope."
3. The lecturer may continue by asking: what is the magnitude of the force (if present)? The students may give alternative answers: it is the same, it is different, there is no force, etc. To accommodate the "no force" answer, the students are asked to state what they feel when pulling the rope. The lecturer, in this case, may use cognitive conflict. In asking questions, the lecturer must make sure that the students are aware of the presence of the force on the rope and on themselves.
4. When it is agreed that the forces are the same, the lecturer asks: "Why are they the same?" (a common answer is based on NTL). "Could you show that they are the same/different?"
5. Introduce the restoring-force concept of a spring, or Hooke's law. Show from the formula F = -kΔx that F ∝ Δx, so Δx directly represents a measure of F (Figure B). Measure the spring length (e.g., x0).

Figure B. Description of Changes in Springs

6. Take another piece of rope and tie it to both springs. Spring 1 (S1) represents the 'action' force placed on the wall and Spring 2 (S2) represents the 'reaction' on the rope (Figure C). Have the students demonstrate that pulling the wall (via S1) by pulling Rope A reflects an 'action' directed at the wall. Attention must focus on S1: when Rope A is pulled, ask the students to notice the change in the length of S1. What happens to the other spring (S2), or to the length of S2?
7. To have the students determine the magnitude of Δx, measure the final spring length (x1) and ask them to compare the approximate changes in length of both springs (Δx = x1 - x0).
8. When they find similarities in the changes in spring lengths, continue to the next phase. Introduce the terms 'action force' and 'reaction force'. (The students should understand the pairs of forces by using a diagram such as the one below.)

c. Phase-3: Demonstrating the Continuous Force: Pushing
1. Arrange two springs as shown in Figure D.
2. Negotiate the springs' status.
The first is a 'target' spring and the other is an 'effect' spring.
3. Push the plunger ring (slowly and continuously) (Figure E) and ask the students to look at the 'target' spring. State: "I am applying a force to the target spring (SA)." Hold the plunger ring to form an 'effect' spring (SB). Ask the students to notice the change in SA's length. What happens to the 'effect' spring (SB)? Ask the students to identify similarities or differences (when present) in the latter spring's length (Table A). Ask the students to consider the terms 'action' and 'reaction'. Remind them that ΔxA represents the action force (for the target) and ΔxB denotes the reaction force (for the effect).
7. Have the students relate their conclusions to the activities of Phase-2 and Phase-3.
8. Conclude the role of NTL for not only a continuous force but also an impulsive force.
9. Discuss the arguments raised in Phase-1.
10. Extend the discussion to other instances of the impulsive force (e.g., an apple falling to the ground, a hammer striking a nail, a bird crashing into a window, a large magnet exerting a force on a smaller magnet, etc.).

By this phase, enough has been done to alter the students' views in line with our and others' research findings. To determine why the action force is equal to the reaction force and how this can be explained, we may continue to Elby's pair.

e. Phase-5: Refining with Elby's Pair

Elby's pair is illustrated in Figure G.
1. Discuss the collision case of Phase-4, or introduce a case such as the one illustrated in Figure F.
2. Consider the raw intuition derived from Phase-1 and compare it to the raw intuition illustrated in Figure G.
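The spring comparison in Phase-2 (steps 5-7) rests on Hooke's law: with F = -kΔx and identical springs, equal extensions imply equal-magnitude action and reaction forces. The short Python sketch below illustrates that computation; the spring constant and the measured lengths are hypothetical illustration values, not measurements from the study.

```python
K = 25.0  # N/m, assumed identical spring constant for S1 and S2

def force_magnitude(x0_cm: float, x1_cm: float, k: float = K) -> float:
    """|F| = k * |dx|, with lengths measured in cm and converted to m."""
    dx_m = abs(x1_cm - x0_cm) / 100.0
    return k * dx_m

# Hypothetical measurements: rest and stretched length of each spring
f_action = force_magnitude(x0_cm=10.0, x1_cm=16.0)    # S1, 'action' on the wall
f_reaction = force_magnitude(x0_cm=10.0, x1_cm=16.0)  # S2, 'reaction' on the rope

print(f"action   = {f_action:.2f} N")
print(f"reaction = {f_reaction:.2f} N")  # equal extensions -> equal forces
```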
7,268.8
2020-03-31T00:00:00.000
[ "Physics", "Education" ]
Proposal of Mutation-Based Bees Algorithm (MBA) to Solve Traveling Salesman & Jobs Scheduling Problems

This paper presents an improved swarm-based algorithm based on the Bees Algorithm and the mutation operator. The Mutation-based Bees Algorithm (MBA) is very useful for solving some NP-complete problems. This paper contains the basic version of MBA, together with the solution of two NP-complete problems as examples and experiments for testing the suggested approach. These two problems are the Traveling Salesman Problem and the Job Scheduling Problem. The experimental results show that the suggested approach is very suitable for solving NP-complete problems and gives good results compared with the traditional Bees Algorithm.

Introduction

The bee colony optimization-based algorithm is a stochastic population metaheuristic that belongs to the class of swarm intelligence algorithms. In the last decade, many studies based on various bee colony behaviors have been developed to solve complex combinatorial or continuous optimization problems [14]. Bee colony optimization-based algorithms are inspired by the behavior of a honeybee colony, which exhibits many features that can be used as models for intelligent and collective behavior. These features include nectar exploration, mating during flight, food foraging, the waggle dance, and division of labor [18].

Bee colony-based optimization algorithms are mainly based on three different models: food foraging, nest site search, and marriage in the bee colony. Each model defines a given behavior for a specific task. The bee is a social, flying insect native to Europe, the Middle East, and the whole of Africa, and has been introduced by beekeepers to the rest of the world [15,18]. There are more than 20,000 known species that inhabit flowering regions and live in a social colony after choosing their nest, called a hive. There are between 60,000 and 80,000 living members in a hive. The bee is characterized by the production of a complex substance, honey, and by the construction of its nest using wax. Bees feed on nectar as the energy source in their life and use pollen as a protein source in the rearing of their broods. The nectar is collected in pollen baskets situated on their legs [18].

Generally, a bee colony contains one reproductive female called the queen, a few thousand males known as drones, and many thousands of sterile females called workers. After mating with several drones, the queen breeds many young bees called broods. Let us present the structural and functional differences between these four honeybee elements [18,15]:

• Queen: In a bee colony, there is a unique queen, the breeding female, with a life expectancy of between 3 and 5 years. It develops from special, very young larvae and eggs selected by the workers, from which the colony produces a new queen to become sexually mature, after killing the old one. It is an exclusive development: the future queen is raised in special queen cells with a rich-protein secretion. The main role of the queen is reproduction by egg laying. It mates with 7-20 drones in a reproductive operation called the mating flight, stores the sperm in her spermatheca, and then lays up to 2,000 eggs per day. The fertilized eggs become females (workers) and the unfertilized eggs become males (drones).
• Drones: Drones are the males, numbering between 300 and 3,000 in the hive. Drones develop when the queen lays unfertilized eggs, and they play the role of fertilizing a receptive queen, generally in the summer and exceptionally in the autumn. The drone has a life expectancy of 90 days and dies after a successful mating.

• Workers: Workers are female bees but are not reproductive. They live from 4 to 9 months in the cold season, when their number reaches up to 30,000; in summer, they live approximately 6 weeks, when their number reaches up to 80,000. The worker is responsible for defending the beehive using its barbed stinger and consequently dies after stinging. The worker's activities can be enumerated by day as follows: cell cleaning (days 1-2), nurse bee (days 3-11), wax production (days 12-17), guard honeybee (days 18-21), and foraging honeybee (days 22-42). The worker also ensures the habitual activities of the bee colony, such as honey sealing, pollen packing, fanning, water carrying, egg moving, queen attending, drone feeding, mortuary duty, and honeycomb building.

• Broods: The young bees are called broods. They are born following the laying of eggs by the queen in special honeycomb cells called the brood frames. Thereafter, the workers add royal jelly on the brood heads. A few female larvae are selected to be future queens; in this case, they are flooded with royal jelly. The unfertilized eggs give birth to the drones. The young larvae spin a cocoon and the older sisters cap the cell; this is the pupa stage. The broods then reach the development stage, in which they receive nectar and pollen from foragers, until leaving the beehive and spending their lives as foragers. The foraging behavior (nest site selection, food foraging) and the marriage behavior in a bee colony are the main activities in the life of a bee colony that attract researchers to design optimization algorithms.

On the other side, Evolutionary Algorithms (EA) are stochastic population metaheuristics that have been successfully applied to many real and complex problems (epistatic, multimodal, multiobjective, and highly constrained problems). This paper presents an improved tool to solve NP-complete problems, called the Mutation-based Bees Algorithm (MBA). MBA is an intelligent swarm-based algorithm that depends on the Bees Algorithm and the mutation operator. Intelligent swarm-based algorithms are described in Section 2, and the Bees Algorithm is detailed in Section 3. The proposed improved algorithm is presented in Section 4. Section 5 contains the experiments on solving the Traveling Salesman Problem and the Job Scheduling Problem using the proposed algorithm, and Section 6 consists of conclusions related to the suggested algorithm.
Intelligent Swarm-Based Algorithms

Swarm-based algorithms mimic nature's methods of driving a search towards the optimal solution. A key difference between swarm-based algorithms and direct search algorithms such as hill climbing and random walk is that swarm-based algorithms use a population of solutions in every iteration instead of a single solution. As a population of solutions is processed in each iteration, the outcome of each iteration is also a population of solutions. If a problem has a single optimum solution, the members of a swarm-based algorithm's population can be expected to converge to that optimum solution [13]. However, if an NP-hard problem has multiple optimal solutions, a swarm-based algorithm can be used to capture them in its final population. Swarm-based algorithms include the Ant Colony Optimization (ACO) algorithm [7], the Genetic Algorithm (GA) [10], the Bees Algorithm (BA) [12], and the Particle Swarm Optimization (PSO) algorithm [8]. Common to all population-based search methods is a strategy that generates variation in the solutions being sought. Some search methods use a greedy criterion to decide which generated solution to retain; such a criterion means accepting a new solution if and only if it increases the value of the objective function. A very successful non-greedy population-based algorithm is the ACO algorithm, which emulates the behavior of real ants. Ants are capable of finding the shortest path from a food source to their nest using a chemical substance called pheromone to guide their search. The pheromone is deposited on the ground as the ants move, and the probability that a passing stray ant will follow this trail depends on the quantity of pheromone laid [14].

The Genetic Algorithm is based on natural selection and genetic recombination. The algorithm works by choosing solutions from the current population and then applying genetic operators, such as mutation, crossover [10], controlled mutation, and conjugation [1], to create a new population. The algorithm efficiently exploits historical information to speculate on new search areas with improved performance [10].

The successful applications of ant systems to complex engineering and management problems are certainly encouraging. At the same time, these successes act as a great inspiration for exploring bees' behavior as a source of ideas and models for the development of various artificial systems [9]. Highly organized behavior enables colonies of insects to solve problems beyond the capability of individual members by functioning collectively and interacting primitively amongst members of the group. In honey bee colonies, this behavior allows honey bees to explore the environment in search of flower patches (food sources) and then to indicate the food source to the other bees of the colony when they return to the hive. Such a colony is characterized by self-organization, adaptation, and robustness [6]. The Particle Swarm Optimization (PSO) algorithm is an optimization procedure based on the social behavior of groups of organisms, for example the flocking of birds or the schooling of fish. Individual solutions in a population are viewed as "particles" that evolve or change their positions with time. Each particle modifies its position in the search space according to its own experience and that of a neighboring particle, remembering the best position visited by itself and its neighbors, thus combining local and global search methods [8].
The Bees Algorithm: Bees in Nature

A colony of honey bees can extend itself over long distances (more than 10 km) and in multiple directions simultaneously to exploit a large number of food sources [21,19]. A colony prospers by deploying its foragers to good fields. In principle, flower patches with plentiful amounts of nectar or pollen that can be collected with less effort should be visited by more bees, whereas patches with less nectar or pollen should receive fewer bees [3,5,12].

The foraging process begins in a colony with scout bees being sent to search for promising flower patches. Scout bees move randomly from one patch to another. During the harvesting season, a colony continues its exploration, keeping a percentage of the population as scout bees [19]. When they return to the hive, those scout bees that found a patch rated above a certain quality threshold (measured as a combination of some constituents, such as sugar content) deposit their nectar or pollen and go to the "dance floor" to perform a dance known as the "waggle dance" [21]. This mysterious dance is essential for colony communication and contains three pieces of information regarding a flower patch: the direction in which it will be found, its distance from the hive, and its quality rating (or fitness) [5,14,21]. This information helps the colony to send its bees to flower patches precisely, without using guides or maps; each individual's knowledge of the outside environment is gleaned solely from the waggle dance. The dance enables the colony to evaluate the relative merit of different patches according to both the quality of the food they provide and the amount of energy needed to harvest it [5]. After waggle dancing on the dance floor, the dancer (i.e., the scout bee) goes back to the flower patch with follower bees that were waiting inside the hive. More follower bees are sent to more promising patches, which allows the colony to gather food quickly and efficiently [12]. While harvesting from a patch, the bees monitor its food level; this is necessary for deciding upon the next waggle dance when they return to the hive [5]. If the patch is still good enough as a food source, it will be advertised in the waggle dance and more bees will be recruited to it.

The Bees Algorithm

As mentioned, the Bees Algorithm is a swarm-based algorithm inspired by the natural foraging behavior of honey bees to find the optimal solution [12,13]. Figure (1) [12,13] shows the pseudo code for the algorithm in its simplest form. The algorithm requires a number of parameters to be set, namely: the number of scout bees (n), the number of sites selected out of the n visited sites (m), the number of best sites out of the m selected sites (e), the number of bees recruited for the best e sites (nep), the number of bees recruited for the other (m-e) selected sites (nsp), the initial size of a patch (ngh), which includes a site and its neighborhood, and the stopping criterion. The algorithm starts with the n scout bees placed randomly in the search space. The fitness of the sites visited by the scout bees is evaluated in step 2. The bees having the highest fitness values are selected as "elite bees" in step 4.
The algorithm then performs searches around the neighborhoods of the elite bees and of the other selected bees in steps 5-7. The fitness values may alternatively be used to calculate the probability of selecting the bees. The algorithm assigns more bees to follow the elite bees than the other bees, in order to perform a more detailed search around the neighborhoods of the points visited by the elite bees, which represent more promising solutions. Differential recruitment within scouting is also an important operation of the Bees Algorithm; both scouting and differential recruitment are utilized in nature. In step 7, however, only the one bee with the highest fitness value is selected at each site to generate the next bee population, while there is no such restriction in nature; it is necessary here to reduce the number of points to be visited. In order to explore new potential solutions, the remaining bees in the population are randomly assigned around the search space in step 8. These steps are repeated until the stopping criterion is satisfied. The colony has two parts to its new population at the end of each iteration: the first part comprises the representatives from each selected patch, and the second part comprises the other scout bees assigned to perform random searches [12,13].

Proposed Approach: Mutation-Based Bees Algorithm

The choosing of elite and other sites for the neighborhood search is the main problem in the Bees Algorithm, because there is no perfect strategy for selecting the neighborhood sites (elite and other). The proposed approach mixes the Bees Algorithm with the mutation operator, firstly to solve this problem and secondly to increase the performance of the Bees Algorithm. The proposal uses the mutation operators of the Genetic Algorithm within the Bees Algorithm as a tool to determine the neighborhoods of the elite sites. Figure (2) shows the pseudo code of the Mutation-based Bees Algorithm. The proposed algorithm requires a number of parameters to be set, namely: the number of scout bees (n), the number of sites selected out of the n visited sites (m), the number of best sites out of the m selected sites (e), and the numbers of bees recruited for the selected sites.

Traveling Salesman Problem

The traveling salesman problem is a classical optimization problem. Optimization problems involve finding a maximum or minimum value of a mathematical function, usually subject to some sort of constraints expressed as mathematical functions [2,20]. The traveling salesman problem is easy to describe: a salesman must visit a series of cities, and each city should be visited only once. After the final city is visited, the salesman returns to the starting city. The distance between each pair of cities is known. What is the shortest possible tour the salesman can make? Several experiments were executed on the traveling salesman problem using 5, 10, 15, ..., and 30 nodes. Figure (3) illustrates the execution times of our experiments with different operators.
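To make the mechanics concrete, below is a minimal Python sketch of a Bees-Algorithm-style search for the TSP in which, as MBA proposes, a GA mutation operator generates the neighborhood of each selected site. This is an illustrative reconstruction, not the authors' code: the parameter values, the random city set, and the choice of swap mutation as the specific mutation operator are all assumptions.

```python
import math
import random

def tour_length(tour, cities):
    """Total closed-tour distance (the TSP fitness; lower is better)."""
    return sum(math.dist(cities[tour[i]], cities[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def swap_mutation(tour):
    """GA-style swap mutation used as the neighborhood operator (MBA's idea)."""
    a, b = random.sample(range(len(tour)), 2)
    neighbor = tour[:]
    neighbor[a], neighbor[b] = neighbor[b], neighbor[a]
    return neighbor

def mba_tsp(cities, n=20, m=8, e=3, nep=6, nsp=3, iterations=500):
    """n scouts, m selected sites, e elite sites, nep/nsp recruits per site."""
    base = list(range(len(cities)))
    sites = [random.sample(base, len(base)) for _ in range(n)]
    for _ in range(iterations):
        sites.sort(key=lambda t: tour_length(t, cities))
        new_sites = []
        for rank, site in enumerate(sites[:m]):
            recruits = nep if rank < e else nsp   # more bees for elite sites
            candidates = [swap_mutation(site) for _ in range(recruits)] + [site]
            # keep only the fittest bee from each site
            new_sites.append(min(candidates, key=lambda t: tour_length(t, cities)))
        # remaining bees scout randomly to explore new potential solutions
        new_sites += [random.sample(base, len(base)) for _ in range(n - m)]
        sites = new_sites
    return min(sites, key=lambda t: tour_length(t, cities))

random.seed(42)
cities = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(15)]
best = mba_tsp(cities)
print(f"best tour length: {tour_length(best, cities):.1f}")
```

Keeping the original site among the candidates makes each site's fitness non-worsening, mirroring step 7 of the pseudo code, where only the fittest bee from each site survives into the next population.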
Job Scheduling Problem

Scheduling, in the widest sense, is concerned with the allocation of scarce resources to tasks over time. Scheduling problems are central to production and manufacturing industries, but they also arise in a variety of other settings [4]. This paper focuses on shop scheduling problems, where jobs have to be processed on one or several machines such that some objective function is optimized. In case a job has to be processed on more than one machine, the task to be performed on a machine for completing the job is called an operation. All the machine-scheduling models considered in this paper assume that (1) the processing times of all jobs and operations are fixed and known beforehand, and (2) the processing of jobs and operations cannot be interrupted.

In job shop scheduling problems we are given a finite set O of operations that is partitioned into a set of subsets M = {M1, …, Mm}, where each Mi corresponds to the operations to be processed by machine i, and into a set of subsets J = {J1, …, Jn}, where each set Jj corresponds to the operations belonging to job j. Each operation is assigned a non-negative processing time, and preemption is not allowed. In job shop scheduling problems, precedence constraints exist among all operations of a job, and they induce a total ordering of the operations of each job [4]. Several experiments have been executed on the job shop scheduling problem using different cases. Figure (4) illustrates the execution times of our experiments with different operators.

(Fragment of the Figure (2) pseudo code spilled into the text: "…operator for choosing the elite and other sites for neighborhood search. 7. Assign bees to the selected sites and calculate their fitness. 8. Choose the fittest bee from each site. 9. Recruit remaining bees to search randomly and calculate their fitness. 10. End While.")

Conclusions

The Mutation-based Bees Algorithm is an improved swarm-based algorithm that builds on an important swarm-based algorithm (the Bees Algorithm) and the mutation operator. The com…
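As an appendix to the job-shop section above: the paper does not state how a candidate solution is decoded into a schedule, so the sketch below assumes a common encoding, a priority list containing each job index once per operation, decoded greedily under the stated assumptions (fixed processing times, no preemption, per-job operation order respected). The instance data are made up.

```python
def makespan(priority_list, jobs):
    """jobs[j] = [(machine, processing_time), ...] in required operation order.
    priority_list contains each job index once per operation; scanning it
    left to right fixes the dispatching order (permutation with repetition)."""
    next_op = [0] * len(jobs)      # next operation index of each job
    job_ready = [0] * len(jobs)    # completion time of each job's last operation
    mach_ready = {}                # completion time of each machine
    for j in priority_list:
        machine, p = jobs[j][next_op[j]]
        # Start only when both the job's previous operation and the machine
        # are free (precedence constraints, no preemption).
        start = max(job_ready[j], mach_ready.get(machine, 0))
        job_ready[j] = mach_ready[machine] = start + p
        next_op[j] += 1
    return max(job_ready)

# Tiny 2-job / 2-machine instance (processing times are made up).
jobs = [[(0, 3), (1, 2)],   # job 0: machine 0 for 3, then machine 1 for 2
        [(1, 4), (0, 1)]]   # job 1: machine 1 for 4, then machine 0 for 1
print(makespan([0, 1, 0, 1], jobs))  # -> 6
```

A metaheuristic such as the Mutation-based Bees Algorithm would then search over priority lists, using this makespan as the (inverse) fitness.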
4,034.8
2010-09-01T00:00:00.000
[ "Computer Science" ]
THE INFLUENCE OF THE SIZE OF HEMATITE PARTICLES ON THE PROPERTIES OF POLYETHYLENE/HEMATITE COMPOSITES

The incorporation of metal oxide particles into a polymer matrix in many cases leads to improved characteristics of the material: thermal stability, mechanical strength, light-absorbing or antibacterial properties, etc. This study aims to develop polyethylene/hematite composites prepared with hematite (α-Fe2O3) particles of different sizes. Such composites have not yet been thoroughly studied but have the potential to show improved properties in comparison to the pure polymer. Polyethylene is a material with a broad field of application, primarily as a packaging material for food and other products. Hematite is a non-toxic, thermally and chemically stable, low-cost metal oxide. It is also a demanding task to prepare composites with non-aggregated hematite particles finely dispersed in the polyethylene matrix. In this work, two types of hematite with well-defined shape and uniform size were used for the preparation of the composites: hematite HC1 with cubic particles of average size 2 μm, and HS2 with spherical particles of about 100 nm. The mass fraction of hematite in the composites was 0.25, 0.5 and 1%. The prepared polyethylene/hematite composites were characterized by thermogravimetric analysis (TGA), differential scanning calorimetry (DSC) and diffuse reflectance UV-Vis-NIR spectroscopy. The mechanical properties were also studied. The results show that such composites have improved properties in comparison to pure polyethylene, especially the composites with hematite HS2, owing to its smaller particle size and larger surface area. The obtained results reveal that such composites may be a promising material for a wide range of applications.

INTRODUCTION

The recent increase in the use of polymer materials demands new or improved material types and roles in many different fields. Polymer composites represent an outstanding choice for materials development because they unite the properties of the polymer matrix and the filler, which ultimately results in completely new or improved material types. Fillers are usually used to improve a polymer's properties, especially its mechanical properties, or to replace part of the polymer matrix with a less expensive substance. Furthermore, fillers are nowadays also used to develop functional properties of polymer composites: UV blocking, water absorption, flavour releasing, oxygen scavenging, antimicrobial activity, etc. The choice of additives must pay particular attention to health and safety, i.e., it demands the use of non-toxic components [1]. Hematite (α-Fe2O3) represents a good option for the preparation of polymer composites due to its non-toxicity, thermal and chemical stability, low cost and ageing resistance. Because of such suitable properties, polymer/hematite composites may possess improved thermal, mechanical and magnetic properties as well as catalytic characteristics [2]. The development of advanced functional polymer materials sometimes includes additives that contribute to improved UV blocking, which is very important for preventing a material's degradation and prolonging its lifetime [3]. To obtain a composite material with good and balanced properties, it is very important to use a filler with particles of uniform shape and size and to avoid their aggregation in the polymer matrix.
This study aims to develop polyethylene/hematite composites with hematite particles of different sizes and to analyze their influence on the overall composite properties. A further motivation for this research was that polyethylene/hematite composites have not yet been thoroughly studied, although their development may be a promising route to polymer materials with advanced applications.

Materials

For the preparation of the polyethylene/hematite composites, LDPE polymer granulate (Dow Chemical) was used. Hematite particles HC1 and HS2 were synthesized in the Laboratory for Synthesis of New Materials, Division of Materials Chemistry, Ruđer Bošković Institute. Uniform hematite particles were prepared according to a slightly modified synthesis procedure reported by Sugimoto et al. [4].

Preparation of the composites

Polyethylene/hematite composites, denoted LDPE/HC1 and LDPE/HS2, were prepared by mixing in a Brabender kneader at 180 °C for 3 minutes at 45 rpm. The hematite content of the samples was 0.25, 0.5 and 1%. The obtained composite materials were prepared for further characterization by pressing into foils and plates. The pressing was carried out with a Dake model 44-226 hydraulic press at a temperature of 190 °C.

UV-Vis-NIR spectroscopy. Diffuse reflectance UV-Vis-NIR spectra of the polymer composite films were recorded at 20 °C using a Shimadzu UV-3600 UV-Vis-NIR spectrophotometer with an integrating sphere. Barium sulfate was used as the reference material.

Thermogravimetric analysis (TGA). The thermal stability of the obtained polymer composites was determined with a TA Instruments Q500 thermogravimetric analyzer. Specimens of 10 mg were analyzed in a nitrogen stream at a heating rate of 10 °C/min over the temperature range 25 to 800 °C.

Differential scanning calorimetry (DSC). The thermal properties were determined on a Mettler Toledo DSC822e device. Samples were heated from 25 °C to 180 °C and then cooled down to −150 °C (heating/cooling rate 10 °C/min); two heating and cooling cycles were performed.

Determination of mechanical properties. Mechanical properties were determined on a Zwick 1445 universal testing machine. Samples were 100 mm long and 10 mm wide (~1 mm thick). The stretching speed was 50 mm/min.

UV/Vis spectroscopy

Ultraviolet (UV) radiation has harmful effects on polymer materials: it may significantly reduce the properties and lifetime of polymers. In order to prevent such degradation, different types of UV stabilizers must be used. Some of them include metal oxides such as titanium or zinc oxide. Hematite also has the potential to act as a UV stabilizer, i.e., a UV blocking agent, due to its UV absorption capability [5]. From the UV-Vis-NIR spectra of polyethylene/hematite, Figures 3 and 4, it can be seen that some differences are observed in comparison with pure polyethylene. The composites show absorption in the ultraviolet region and in the visible-light region. The particles of smaller size (HS2), due to their larger surface area, enhance the UV absorbance capacity of the composites. For that reason, the prepared polyethylene/hematite composites may be considered suitable materials for making different types of packaging with good UV blocking properties.

Thermogravimetric analysis

It is known that hematite may improve the thermal stability of some types of polymers, e.g., polystyrene [6], but there is a lack of information regarding the use of hematite in a polyethylene matrix.
For that reason, this study presents the results of the thermogravimetric analysis of polyethylene/hematite composites in Table 1. The results show that LDPE degrades in one step. Samples of LDPE/hematite composites also degrade in one step, but at higher temperatures, which demonstrates an improvement in their thermal stability in comparison to pure LDPE. LDPE composites with HS2 (the smaller hematite particles, average size about 100 nm) showed a significant increase of T95% and Tmax. T95%, the temperature at which 5 mass % of the sample is decomposed (initial decomposition temperature), is improved by up to 16 °C. The temperature of the maximum rate of decomposition, Tmax, for these composites is also improved (by up to 24 °C) compared to pure LDPE.

Differential scanning calorimetry (DSC)

The results of the DSC analysis, presented in Table 2, show that LDPE is a crystalline polymer with a glass transition at −129.75 °C. The addition of hematite to the LDPE matrix leads to a slight increase in the degree of crystallinity, which can be attributed to the action of the filler as a nucleating agent [7].

Determination of mechanical properties

Hematite may significantly improve the mechanical properties of some types of polymer matrix [8]. The results of the mechanical properties of LDPE and the composite samples, expressed as dimensionless tensile strength and dimensionless elongation at break, are presented in Figures 5 and 6. The mechanical measurements show an improvement in tensile strength for all composite samples in comparison to pure LDPE. The elongation at break is improved for the samples LDPE + 1% HC1 and LDPE + 0.25% HS2.

CONCLUSION

The results of the UV-Vis-NIR absorption measurements show that all prepared composites (LDPE/HC1 and LDPE/HS2) are suitable for making different packaging materials with good UV blocking properties. The LDPE + 0.25% HS2 composite can be considered the most thermally stable sample because it shows high temperature stability despite the low filler content. The results show that LDPE/hematite composites have improved UV blocking, thermal and mechanical properties in comparison to pure polyethylene, especially the composites with hematite HS2, owing to its smaller particle size and larger surface area. Such composites may be a promising material for a wide range of applications.
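A note to make the DSC crystallinity discussion above reproducible: the degree of crystallinity is conventionally obtained from the measured melting enthalpy. The relation below is the standard textbook formula, not quoted from this paper, and the reference enthalpy for fully crystalline polyethylene is a literature value (about 293 J/g):

```latex
X_c\,(\%) \;=\; \frac{\Delta H_m}{w \,\Delta H_m^{0}} \times 100
```

where ΔHm is the melting enthalpy measured for the composite, w is the mass fraction of LDPE in the sample, and ΔHm⁰ is the melting enthalpy of 100% crystalline polyethylene. Dividing by w corrects for the hematite filler, which does not melt.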
1,950.2
2021-01-01T00:00:00.000
[ "Materials Science" ]
IEEE 802.11-Based Wireless Sensor System for Vibration Measurement

Network-based wireless sensing has become an important area of research, and various new applications for remote sensing are expected to emerge. One of the promising applications is structural health monitoring of buildings or civil engineering structures, which often requires vibration measurement. For vibration measurement via a wireless network, time synchronization is indispensable. In this paper, we introduce a newly developed time-synchronized wireless sensor network system. The system employs the IEEE 802.11 standard-based TSF counter and sends the measured data together with the counter value. TSF-based synchronization ensures consistency of a common clock among different wireless nodes. We consider the effect of network scale on synchronization accuracy and evaluate it by taking beacon collisions into account. The scalability issue is also studied by numerical simulations. This paper also introduces a newly developed wireless sensing system, and its hardware and software specifications are described. Experiments were conducted in a reinforced concrete building to evaluate synchronization accuracy. The developed system was also applied to a vibration measurement of a 22-story steel-structured high-rise building. The experimental results showed that the system performed more than sufficiently.

Introduction

The rapid progress of wireless network technology and embedded sensor technology has been integrated into wireless sensor networks, and various prospective applications are expected to emerge. Among the many sensing network applications, a particularly promising one is structural health monitoring, which monitors the structural health of buildings and civil engineering structures [1]. Measuring objects such as bridges and buildings are usually huge, so installing very long signal cables entails high installation costs. Additionally, long cables leave wires vulnerable to corruption by ambient signal noise; wireless data transmission is therefore highly beneficial. Structural health monitoring often requires measuring vibration data such as acceleration and velocity. The measured data are analyzed by the modal analysis method to obtain the resonance frequency, damping ratio and spectral response [2].

For wireless vibration measurements, time synchronization is very important because vibration measurement for modal analysis requires simultaneous multipoint sensing data, which are often transmitted via multihop-relayed wireless devices. Due to the queuing process and the stochastic media access method, the data transmissions are randomly delayed. As a result, even if each sensor node acquires data and sends them at exactly the same instant, the arrival times of the data do not match. To avoid this, the received data need to be adjusted so as to maintain time consistency on a common time axis: when the data are used for modal analysis, a time difference may be misinterpreted as a phase shift. To maintain precise time consistency among wireless nodes, time synchronization is indispensable.

In this paper we propose a synchronization method for a wireless sensor network system which utilizes the IEEE 802.11-based timing synchronization function (TSF). The function is a mechanism for synchronizing the local timer counter of each wireless device, originally used for contention control among wireless nodes. By embedding the value of the TSF counter in a packet together with the measured data, the time skew problem can be solved on the receiver side.
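To illustrate the receiver-side idea just described, here is a minimal sketch: each measured sample travels with the sender's TSF counter value, and the host simply reorders samples by that stamp, so relay delays no longer matter. The tuple layout and field names are illustrative assumptions, not the system's actual packet format.

```python
def realign(packets):
    """packets: (tsf_us, node_id, sample) tuples arriving in arbitrary order
    because of queuing and multihop relay delays. Sorting on the embedded
    TSF value restores the common time axis on the receiver side."""
    return sorted(packets, key=lambda p: p[0])

# Node 2's packet took an extra hop and arrived late, but its TSF stamp
# still places it correctly between node 1's two samples.
arrivals = [(1000, 1, 0.12), (3000, 1, 0.10), (2000, 2, -0.05)]
for tsf_us, node, value in realign(arrivals):
    print(f"t={tsf_us} us  node={node}  sample={value}")
```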
In the following section, we briefly review the related works. Section 3 describes the adverse effects on vibration sensing caused by time delay in a wireless network and explains why time synchronization is needed. Section 4 describes the probability of timing synchronization beacon transmission; the section also presents stochastic analyses and simulation studies on the scalability of the sensor network. In Section 5, we describe a newly developed wireless sensing system and its hardware and software components. In Section 6, experimental evaluations of time synchronization accuracy and vibration measurement data in a high-rise building are presented.

Related Works

Time synchronization of a network is indispensable to manage transmission timing and to avoid wasteful collisions; therefore several technologies such as GPS and radio ranging have been used to provide global synchronization in networks. GPS-based synchronization offers very precise synchronization; however, it is not always available indoors [3]. In wired networks, the Network Time Protocol (NTP) has been developed and has kept the Internet's clocks ticking in phase [4].

RT-Link [5] uses media access control (MAC) based on slotted ALOHA and employs an independent AM carrier-current radio device for indoor time synchronization. However, carrier-current AM is only allowed on school campuses in the United States, so long as it complies with the normal FCC Part 15 rules [6]. Berkeley MAC (B-MAC) supports carrier sense multiple access (CSMA) with low-power listening, where each node periodically wakes up after a sample interval and checks the channel for activity for a short duration. The main concern of these methods is battery life, because time synchronization is one of the most important factors determining the radio-active period, which directly affects battery life. There have been many studies on time synchronization mainly aimed at conserving the battery energy of network nodes.

Meanwhile, IEEE 802.11 [7] standard devices have a timing synchronization function (TSF) by default. We propose to utilize this function for synchronization among nodes, to determine the sampling data interval and the time stamping of measured data. IEEE 802.11 is one of the de facto standards of the wireless local area network and is easily obtained with industrial-level reliability. Furthermore, the modulation employs Direct Sequence Spread Spectrum (DS-SS) and Orthogonal Frequency Division Multiplexing (OFDM), which offer robustness against fading and noise. Those are advantages over RT-Link and Berkeley Mote, which use noise-susceptible wireless modulation. Some studies pointed out the scalability issue of the IEEE 802.11 TSF [8], and new protocols such as SATSF [9] and MATSF [10] were proposed to achieve very accurate clock synchronization; however, they are still at the research level and not implemented in market-ready products. Even though the current 802.11 TSF has the scalability issue, its impact depends on the scale and the accuracy requirement of the application. Our targeted application uses fewer than 100 nodes, and the required accuracy is around 1 to 10 milliseconds. Additionally, thanks to the progress of wireless technology, synchronization error rates have decreased and accuracy has consequently improved compared to the results shown in [8]. We performed simulation-based analyses and verified that even the original 802.11 TSF is accurate enough for vibration measurement.
Time Delay on a Wireless Network

Data transmission via a wireless network requires a certain amount of time from the moment the transmitter sends the data to the moment the data arrive at the receiver. In building vibration measurement applications, sensors usually need to be placed at several points on different floors of a building, and a measurement station (host computer) is placed at a certain location in the building. In such cases, especially in a high-rise building, the radio waves of each sensor node do not reach the station directly. Thus, a multihop network path is required and the packets are transmitted in relays by the wireless sensor nodes.

Therefore, even if all the nodes intend to send packets at exactly the same time, the arrival times may differ. This causes a serious problem when the measured data are used for modal analysis, because a time delay may produce an unexpected spurious mode. We therefore need to maintain consistency of the received data on a common time axis.

To resolve this issue, we propose to send the vibration data together with a time stamp taken at the moment the data are measured. After the data are received at the host PC, they can be rearranged along the common time axis based on the time stamp. This procedure is valid provided that all the clocks of the nodes are matched. However, the accuracy of a quartz crystal oscillator is affected by many factors such as temperature and changes of current or voltage [11]. The frequency stability of oscillators used for PCs is mostly around 10^-4 (100 ppm). An imprecision of 100 ppm corresponds to a 1 msec error in 10 seconds; for a 100 Hz vibration (10 msec period), a 1 msec offset corresponds to a 36-degree phase error. Even though the resonant frequency of a building is low (typically less than 10 Hz), a 1 msec difference between the fastest clock and the slowest clock is more than a negligible amount. Therefore, periodic time synchronization is indispensable, and the accuracy of the synchronization is the matter of concern.

The TSF performs synchronization in a stochastic manner; that is, the synchronization contains random factors. As a result, the accuracy can only be evaluated by probabilistic analysis. Before describing the details of the analysis, we briefly describe the mechanism of the TSF. Suppose there are several nodes within radio range which can communicate with each other; they form an independent basic service set (IBSS). Each node ticks its own TSF counter (64 bits) every 1 microsecond. The nodes in the IBSS contend for the right to send a beacon, and one or more nodes become eligible to send one. In each beacon transmission period, several slots are prepared and each node is randomly assigned to a certain slot, as shown in Figure 1, where no. 1 or no. 2 denotes the number of a node. Each slot may hold none, one, or more nodes. At the beginning of each beacon transmission period, the nodes in the lowest-numbered slot send a beacon containing the TSF counter value. If there is only one node in the slot, the beacon packet is sent successfully (Figure 1(a)), and the other nodes adjust their TSF counters only if the received value of the TSF counter is larger than their own. If the received value is smaller, the nodes do not adjust their own counters.
Synchronization Issue on Scalability

On the other hand, if there are two or more nodes in the same slot, as shown in Figure 1(b), they start transmitting a beacon packet at the same time; thus the beacon packets collide and the transmission fails. If it fails, the nodes in the second-lowest slot transmit a beacon. In the Figure 1(b) case, node #1 sends a beacon without collision. These sequences are repeated as long as available slots remain. It may happen that there is no successful beacon transmission during the beacon period, as shown in Figure 1(d); in that case the nodes need to wait for the next beacon period (typically 100 milliseconds later).

In the Figure 1 case, we suppose 5 nodes and 7 slot spaces; however, one may easily imagine that the chance of a successful beacon transmission decreases as the number of nodes increases while the number of slots is limited. The success rate depends on the numbers of nodes and slots. Because the slot allocation is randomly determined, an analysis based on probability theory is required.

Huang and Lai [8] pointed out this issue and presented analyses and simulation results on the synchronization error. Their paper made a great contribution to the probabilistic analysis; however, the device specification used is obsolete. For example, the bit rate in their simulation is fixed at 1 Mbps, because that was the maximum speed at the time.

In order to match current technology and give a prospective view of 802.11-based time synchronization, we analyzed the scalability effect for IEEE 802.11a/b/g standard-based sensor stations. In the following, we describe the probability-theoretic formulas for the success rate of beacon transmission. We then show the results of the numerical analysis for various bit rates, modulations and numbers of wireless stations.

Probability of Successful Beacon Transmission. In the IEEE 802.11 standards, the number of slots is 2·aCWmin + 1, and each node is scheduled to transmit a beacon at the beginning of one of the slots, where aCWmin is the minimum contention window for the medium. The value of aCWmin is 31 in Direct Sequence Spread Spectrum (DSSS) and 15 in Orthogonal Frequency Division Multiplexing (OFDM).

Let the length of a beacon be L_b (bits), the transmission bit rate T_r (Mbps) and the length of a slot S_t (μsec). The integer number of slots occupied by a beacon, N_s, is obtained by the roundup (ceiling) function in (1):

N_s = ⌈ L_b / (T_r · S_t) ⌉.    (1)

Once a beacon transmission starts, the other nodes must stay quiet for the duration of N_s slots. If a beacon transmission fails, they resume counting down their backoff timers and contend for the remaining slots. If collisions occur m times in series and all available slots are consumed, that is, if the m failed transmissions, each occupying N_s slots, use up all 2·aCWmin + 1 slots (the condition in (2)), then the beacon transmission attempts for that period fall through. As stated above, beacon transmission is not deterministic; thus we need to analyze it in a stochastic manner.
First of all, let us reconfirm the definition of a successful beacon transmission. We define that a beacon is transmitted successfully if at least one node transmits a beacon without collision during a beacon transmission period. Suppose that the IBSS consists of n nodes and let W be twice the minimum contention window (W = 2·aCWmin). Let p(n, W) be the probability that at least one of the n nodes succeeds in a beacon transmission; then p(n, W) is given by the recursive formula shown in (3). The first term corresponds to the probability of the event that there is no beacon transmission in slot 0 but a successful beacon transmission in the window [1, W]. The second term corresponds to a successful beacon transmission in slot 0. The third term, q(n, W), is the probability that there is an unsuccessful transmission in slot 0 but at least one successful beacon transmission in the window [1, W]; the formula for q(n, W) is shown in (4), where C^n_k is the number of combinations defined in (5):

C^n_k = n! / (k! (n − k)!).    (5)

N_s in (1) is determined by the bit rate T_r, the length of a beacon L_b (bits) and the slot time S_t (μsec). Values of these parameters and their ranges are shown in Table 1. The slot time for 802.11g is 20 μsec when an 802.11b node is within radio range, while the slot time of a pure 802.11g network is 9 μsec. The length of a beacon depends on the size of the contained information; for example, the length of the service set ID is between 2 and 34 bytes, and the lengths of the supported rates and the extended supported rates are also variable. In the simulation, we assumed a beacon length of 110 bytes, which is the real beacon size of a prototype system.

Figures 2, 3 and 4 show the calculated probability of successful beacon transmission, p(n, W). The horizontal axis is the bit rate, and each plotted line shows the case of a different number of nodes. Figure 2 is the result for 802.11b, Figure 3 for 802.11g with a 20 μsec slot time, and Figure 4 for 802.11a/g with 9 μsec. As shown in the three graphs, the rate increases at higher bit rates. On the other hand, the rate decreases with a growing number of nodes. Another aspect of the results is that a short slot time deteriorates the beacon success rate p(n, W), which is observed by comparing Figures 3 and 4 (802.11g with the short slot time). This is because the number of occupied slots increases for a shorter slot time or a lower bit rate, and the chance of beacon transmission becomes smaller.

In the case of 802.11a/g with 9 μsec, p(n, W) drops to 0.26 for 100 nodes, which is the minimum rate among the results. Considering these results, we performed numerical simulations to evaluate the synchronization accuracy.

Simulations on Synchronization Accuracy. In TSF-based synchronization, a node which receives a beacon adjusts its TSF counter to the time stamp of the received beacon if the value of the time stamp is later than the node's own TSF counter (it is important to note that clocks only move forward, never backward). Since every node has an equal chance to send a beacon, the node whose clock is fastest seldom gets a chance to adjust its clock to the others. As a result, the offset of the TSF (the difference in TSF counters between the fastest node and the slowest node) becomes large as the number of nodes in the IBSS grows.

The offset of the TSF also strongly depends on the configuration of nodes in the segment. We therefore classified the configurations into two cases.
Case 1. All nodes exist within the range that radio waves can reach (Figure 5(a)); that is, each node can send a beacon directly to any other node.

Case 2. All nodes are arranged so that radio waves can reach the two adjacent nodes only (right and left neighbours); Figure 5(b) illustrates the allocation of nodes. End-to-end communication is possible only through multihop relays.

Numerical simulations were set up for the two configuration cases. In addition, the probability of beacon transmission analyzed in the previous subsection is taken into account in Case 1, but not in Case 2: because a beacon transmission is contended only with adjacent nodes in Case 2, the beacon transmission rate there is always almost 100%. We ran the simulations assuming that the TSF clock frequencies were uniformly distributed in the range of ±50 ppm (±5.0 × 10^-5) and that the beacon interval was 100 msec. Each simulation trial corresponded to an elapsed time of 180 seconds. In each trial, the TSF clock speed of each node was randomly selected, and the total number of trials was 1000, which was large enough for data convergence. During the simulation, we recorded the maximum offset, which corresponds to the difference between the fastest clock and the slowest clock at each beacon interval. (Notice that the slowest clock does not refer to the clock of one specific node.)

Figure 6 shows the results of the Case 1 simulation for pure 11g (or 802.11a) at 6 Mbps, which has the lowest transmission success rate. Figure 6(a) is the case of 5 nodes in total, with a success rate of 0.969; Figure 6(b) is the case of 20 nodes with a success rate of 0.924; and Figure 6(c) is the case of 100 nodes with a success rate of 0.262 (cf. p(n, W) in Figure 4). As shown in the figures, the accuracy of time synchronization gets worse as the number of nodes increases. In the 5-node case, 99.9% of the data stayed below 36 μsec; the corresponding values were 87 μsec in the 20-node case and 428 μsec in the 100-node case. The median value was 6 μsec in the 5-node case, 15 μsec in the 20-node case and 75 μsec in the 100-node case.

Figure 7 shows the results of Case 2: Figure 7(a) is the case of 5 nodes, Figure 7(b) that of 20 nodes, and Figure 7(c) the 100-node case. The median value was 14 μsec in the 5-node case, 92 μsec in the 20-node case and 475 μsec in the 100-node case. In the 5-node case, 99.9% of the data stayed below 104 μsec; the corresponding values were 303 μsec in the 20-node case and 1388 μsec in the 100-node case.

Comparing the results of Cases 1 and 2, the daisy-chain configuration of Case 2 is inferior in synchronization accuracy, even though its beacon success rate is almost 100% while that of Case 1 can fall below 30%.

In vibration measurement for a building or a civil structure, the wireless device allocation is likely to be a mixture of Cases 1 and 2. As illustrated in Figure 8, several wireless nodes are located on the same floor, and one of them relays packets to the nodes on a different floor. Because the radio power is strongly attenuated when passing through reinforced concrete, the relay nodes (Case 2) are located in the staircase area. The target synchronization accuracy is to stay within 1 msec. From the simulation results, this synchronization accuracy was maintained as long as the total number of nodes was less than 100.
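The recursion (3)–(4) can also be cross-checked numerically. Below is a minimal Monte Carlo sketch of the contention model described above: each node draws one of the 2·aCWmin + 1 slots uniformly, a beacon occupies N_s slots per (1), and a period succeeds if some slot holds exactly one node before the window is used up. The simulation is an illustrative reconstruction for sanity-checking trends, not the authors' code; its collision bookkeeping is a simplifying assumption, so exact figures will differ from the paper's.

```python
import math, random

def n_slots(beacon_bits, bitrate_mbps, slot_us):
    # Eq. (1): integer number of slots occupied by one beacon transmission.
    return math.ceil(beacon_bits / (bitrate_mbps * slot_us))

def beacon_success_prob(n, acwmin, ns, trials=20_000):
    """Monte Carlo estimate of p(n, W): probability that at least one of n
    nodes sends a collision-free beacon within the W + 1 = 2*aCWmin + 1
    slots of one beacon period."""
    w = 2 * acwmin
    hits = 0
    for _ in range(trials):
        slots = [random.randint(0, w) for _ in range(n)]
        consumed = 0                    # slots eaten by earlier collisions
        for s in sorted(set(slots)):
            if s + consumed > w:        # window exhausted: this period fails
                break
            if slots.count(s) == 1:     # exactly one node scheduled: success
                hits += 1
                break
            consumed += ns              # collision occupies ns slots
    return hits / trials

ns = n_slots(110 * 8, 6, 9)   # 110-byte beacon, 6 Mbps, 9 us slot (pure 11a/g)
for n in (5, 20, 100):
    print(n, ns, round(beacon_success_prob(n, 15, ns), 3))  # OFDM: aCWmin = 15
```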
Development of the Wireless Measurement System

We developed a wireless embedded measurement system applicable to vibration measurement. Figure 9 is a photo of the basic components of the vibration measurement system, which consists of a sensor node, a relay node and a host PC. The sensor node and relay node have identical architectures. The size of a node device is 120 × 70 × 50 (mm). The main components of the wireless node and their functions are briefly listed in Table 2. The A/D converter has a 16-channel single-ended input, 6 channels of which are supplied via the BNC jacks. The sampling rate of the A/D conversion depends on the measurement object; in the building measurement case it is at most 100 Hz, which allows oversampling with digital filters even though the resonance frequency of a building is less than 10 Hz. The power consumption of a node is typically 7.5 W.

In the synchronization accuracy experiment, the end-to-end communication packets were relayed via the relay node. Wired signal lines were connected to all nodes for reference purposes, and a rectangular-wave voltage was supplied on the lines to generate hardware interrupts. Every time the voltage of the square wave rose, an interrupt occurred at exactly the same moment in the three wireless nodes, and the TSF count was recorded in the hardware interrupt handler routine. We evaluated the synchronization accuracy by comparing the recorded TSF counters of the three nodes. The frequency of the square wave was set to 10 Hz; that is, the TSF counter value was recorded every 100 msec. One measurement continued for 15 minutes, so the total number of TSF record sets was 9,000. After the measurement, the maximum offset between the recorded sets was calculated. We performed 15 of these 15-minute measurements in total, so the total number of samples was 135,000.

Figure 12 shows the resulting maximum offset of the TSF count, which corresponds to the synchronization accuracy. As shown in the figure, the median value was 27 μsec. The result is close to the simulation result of Case 2 with 5 nodes. The simulation analysis did not take any processing time or bit error rate into account; the experimental results are therefore the more realistic ones for a real hardware implementation. Nevertheless, the result shows sufficiently accurate synchronization, well within the targeted 1 msec accuracy.

Vibration Measurement in a High-Rise Building. We also conducted experimental measurements using the developed wireless system. The vibration measurement took place in a steel-structured 22-story building. Figure 13 shows the allocation of the sensor node, relay nodes and host PC in the measurement room. A velocimeter (velocity sensor) on the 22nd floor is connected to the measurement node by a coaxial signal cable. The distance between the sensor node and the host PC was about 100 m, and the path includes areas partitioned by a steel door and some over-the-horizon corners. We therefore allocated two relay nodes to hook up the ad-hoc network.
The velocity data were also recorded with a conventional wired measurement. Figure 14 shows the measured data of the developed wireless system and Figure 15 shows the data of the conventional wired data logger. The data are the damped vibration wave obtained during the forced shaking test. Comparing these two results, the response near the peaks of the sinusoidal wave is slightly jagged in Figure 14 while it is smooth in Figure 15; this is due to the difference in the cut-off frequencies of the low-pass filters. Apart from that, the data plots match almost exactly. The resonant frequency obtained from both data sets was 0.425 (Hz) and the damping ratio was 0.672 (%). These two values obtained by the two methods (wired/wireless) matched to three significant figures.

Conclusion

For vibration measurement via a wireless network, time synchronization is indispensable. In this paper, we proposed a new time-synchronized wireless sensor network system which employs the IEEE 802.11 standard-based TSF counter. It ensured consistency of the common clock among different wireless nodes. The scale effect on accuracy as the number of nodes increases was evaluated by simulation studies, and the results outlined the network size that keeps the offset within 1 msec. We also described a newly developed wireless sensing system and showed experimental evaluations conducted in a reinforced concrete building. The system was also applied to the vibration measurement of a 22-story steel-structured high-rise building. The experimental results showed performance good enough for vibration measurement purposes.

Figure 5: Node configurations: (a) radio waves can reach all nodes; (b) radio waves can reach only the adjacent nodes. Figure 7: Histogram of the maximum offset value in Case 2. Figure 8: Measurement configuration in a building as a mixture of Cases 1 and 2. Figure 9: Basic components of the developed vibration measurement system. Figure 12: Maximum clock offset with three wireless nodes. Table 2: Hardware specifications of a wireless node.
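As a closing illustration of the modal quantities reported above (resonant frequency 0.425 Hz, damping ratio 0.672%): for a damped free-vibration record like the one obtained in the shaking test, both can be estimated from successive peaks via the logarithmic decrement. The sketch below uses a synthetic signal and simple peak picking; it illustrates the method, not the authors' analysis procedure.

```python
import numpy as np

def modal_parameters(signal, fs):
    """Estimate resonant frequency and damping ratio of a damped sinusoid
    from its positive peaks, using the logarithmic decrement method."""
    peaks = [i for i in range(1, len(signal) - 1)
             if signal[i - 1] < signal[i] > signal[i + 1] and signal[i] > 0]
    periods = np.diff(peaks) / fs
    freq = 1.0 / periods.mean()
    # Logarithmic decrement between consecutive peaks, averaged over the record.
    delta = np.mean(np.log(signal[peaks[:-1]] / signal[peaks[1:]]))
    zeta = delta / np.sqrt(4 * np.pi ** 2 + delta ** 2)
    return freq, zeta

fs = 100.0                                   # 100 Hz sampling, as in the system
t = np.arange(0, 60, 1 / fs)
f0, zeta0 = 0.425, 0.00672                   # values reported in the paper
x = np.exp(-2 * np.pi * f0 * zeta0 * t) * np.sin(2 * np.pi * f0 * t)
print(modal_parameters(x, fs))               # ~ (0.425, 0.0067)
```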
5,544
2010-03-25T00:00:00.000
[ "Engineering", "Computer Science" ]
Beta-blockers disrupt mitochondrial bioenergetics and increase radiotherapy efficacy independently of beta-adrenergic receptors in medulloblastoma

Summary

Background: Medulloblastoma is the most frequent brain malignancy of childhood. The current multimodal treatment comes at the expense of serious and often long-lasting side effects. Drug repurposing is a strategy to fast-track anti-cancer therapy with low toxicity. Here, we showed the ability of β-blockers to potentiate radiotherapy in medulloblastoma with a bad prognosis.

Methods: Medulloblastoma cell lines, patient-derived xenograft cells, 3D spheroids and an innovative cerebellar organotypic model were used to identify synergistic interactions between β-blockers and ionising radiation. Gene expression profiles of β-adrenergic receptors were analysed in medulloblastoma samples from 240 patients. Signaling pathways were explored by RT-qPCR, RNA interference, western blotting and RNA sequencing. Medulloblastoma cell bioenergetics were evaluated by measuring the oxygen consumption rate, the extracellular acidification rate and superoxide production.

Findings: Low concentrations of β-blockers significantly potentiated clinically relevant radiation protocols. Although patient biopsies showed detectable expression of β-adrenergic receptors, the ability of the repurposed drugs to potentiate ionising radiation did not result from inhibition of the canonical signaling pathway. We highlighted that the efficacy of the combinatorial treatment relied on a metabolic catastrophe that deprives medulloblastoma cells of their adaptive bioenergetic capacities. This led to an overproduction of superoxide radicals and ultimately to an increase in ionising radiation-mediated DNA damage.

Interpretation: These data provide evidence of the efficacy of β-blockers as potentiators of radiotherapy in medulloblastoma, which may help improve the treatment and quality of life of children with high-risk brain tumours.

Funding: This study was funded by institutional grants and charities.

Introduction

Medulloblastomas (MB) are embryonal tumours of the cerebellum and the most common malignant brain tumours of childhood. They have been classified into four main subtypes. WNT MB has the most favorable clinical prognosis but accounts for only 10% of cases. SHH MB and the other non-WNT/non-SHH subgroups (Group 3 and Group 4) are somewhat more aggressive and more frequently metastatic, with a poorer prognosis.1–3 The current multimodal treatment combines surgery, radiotherapy and chemotherapy.4,5 Overall, long-term survival is now achieved in 60–75% of patients, but it comes at the expense of serious and often long-lasting side effects that can reduce independence and significantly alter the quality of life of survivors.6,7 High-risk MB are treated with radiation therapy at a cumulative dose of 54 Gy for irradiation of the posterior cerebellar fossa and an additional 36 Gy for craniospinal irradiation. Although these doses are not always sufficient to control tumour progression, they cannot be increased, as both the acute toxicities and the cognitive and endocrinological sequelae would be too severe in the long term.7,8 Since these sequelae are even greater in young patients, radiotherapy is contraindicated in children under 3–5 years of age, depending on the country.9,10
Therefore, new treatment options for MB patients are needed to improve the response to radiotherapy, with the aim of increasing the therapeutic benefits of ionising radiation (IR) and/or reducing its doses and associated deleterious side effects while maintaining its efficacy.

Drug repurposing consists of using already-approved drugs for indications that differ from those for which the drugs were originally developed. Toxicity and pharmacokinetic profiles are well documented, so repurposed drugs can directly enter Phase II clinical trials. By reducing the time, expense and risks associated with the development process, drug repurposing is an attractive strategy in anticancer therapeutics.11–13 One promising pharmacological class to be repurposed is the β-adrenergic antagonists, or β-blockers. They are widely known for their regulatory properties in cardiovascular dysfunctions.14 To date, the use of propranolol for the treatment of severe hemangiomas of infancy represents one of the most successful examples of drug repurposing, with higher efficacy and fewer toxic side effects than the previous standard of care.15 Since then, our preclinical and clinical studies have shown that β-blockers can increase the efficacy of chemotherapy in drug-refractory cancers,16,17 including paediatric tumours such as neuroblastoma.18 β-blockers can impair fundamental biological processes underlying tumour progression, such as cell proliferation, migration, tumour angiogenesis and metastasis. β-blockers have also been shown to sustain the response of irradiated gastric adenocarcinoma, colon adenocarcinoma and non-small cell lung cancer (NSCLC) in vivo,19–23 and to improve survival outcomes in adult patients with intracranial meningiomas and NSCLC.24,25 These recent examples provide a strong rationale for combining β-blockers with radiotherapy in paediatric solid tumours, where

Research in context

Evidence before this study: In young patients with medulloblastoma, the current multimodal treatment allows 70% of children to survive up to 5 years but is often accompanied by serious and long-lasting side effects. Drug repositioning represents an attractive strategy to fast-track the development of new low-toxicity therapeutic options. Preclinical and clinical studies have shown that β-blockers can increase the efficacy of chemotherapy in drug-refractory cancers, including paediatric tumours such as neuroblastoma. However, far less is known about the ability of β-blockers to potentiate radiotherapy, which has never been studied in paediatric cancers. Moreover, the literature is divided regarding the mechanisms responsible for the anti-tumour properties of β-blockers. Here, we propose to analyse the combination of β-blockers with radiotherapy in models of high-risk medulloblastoma.

Added value of this study: We provide evidence that low concentrations of propranolol, carvedilol and nebivolol improve the efficacy of ionising radiation in medulloblastoma cell lines, patient-derived tumour cells and spheroid micromasses, including those poorly responsive to radiation. In response to the ever-increasing need to find alternatives to animal experimentation, we have developed an innovative organotypic cerebellum model that confirmed the benefits of the combinatorial treatment.
Although we showed that patient medulloblastoma biopsies exhibit detectable expression of β-adrenergic receptors, the efficacy of β-blockers in medulloblastoma cells does not result from the inhibition of the canonical targets but is instead driven by a rapid disruption of mitochondrial bioenergetics. This leads to a sustained accumulation of superoxide radicals that potentiates the DNA damage caused by ionising radiation.

Implications of all the available evidence: Given the few druggable molecular targets identified in high-risk medulloblastoma and the fact that the young age of patients limits treatment options, our work proposes an alternative approach in which drug repurposing could be quickly translated to the clinic to improve the efficacy of radiotherapy. In addition, as the dose of ionising radiation can be significantly reduced by adding β-blockers, this may help limit the long-term side effects of treatment and improve the quality of life of children with medulloblastoma. Lastly, our work highlights the interest of exploiting the ability of selected repositioned drugs to inhibit mitochondrial bioenergetics to design new therapeutic combinations with radiotherapy.

this type of combination has never been evaluated. Here, we provide evidence that β-blockers can improve the efficacy of IR in MB cell lines and PDX-derived cells by disrupting mitochondrial bioenergetics, independently of the β-adrenergic receptors.

Material and methods

A mtDsRed plasmid was transfected into each cell line using Lipofectamine 2000 (Invitrogen, ref. 11668019) following the manufacturer's protocol. Stable transfectants were obtained after geneticin selection (0.8 mg/mL, Gibco, ref. 10131035) and two cycles of fluorescence-activated cell sorting (FACS). To establish β-blocker-resistant cell lines, ONS-76 cells were exposed to increasing doses of propranolol (from 10 to 200 µM), carvedilol or nebivolol (from 2.5 to 20 µM) over 3 to 4 months. The resistant cell lines were named ONS-76 RP, ONS-76 RC and ONS-76 RN, respectively, and were maintained in the same culture conditions as the parental ONS-76 cells (i.e., ONS-76 WT). The murine SHH MB cell lines were obtained from spontaneous medulloblastomas arising in the Patched1+/- C57BL/6 mouse model (RRID: MGI:2159769), as previously described.26 All the cells were tested for the absence of mycoplasma contamination (MycoAlert™, ref. #LT07-418, Lonza) at least once a month.

Patient-derived xenograft culture

Patient-derived xenografts (PDXs) were generated from primary human MB samples and were maintained in the subscapular fat pad of Nude mice (RRID: MGI:5649750) as previously described.27 G3-PDX3, G3-PDX7 and SHH-PDX12 correspond to group 3 ICN-MB-PDX-3, group 3 ICN-MB-PDX-7 and SHH ICN-MB-PDX-12, respectively. For in vitro cultures, tumour cells were purified from the PDX using enzymatic dissociation followed by Percoll density-gradient separation and were cultured as previously described.26

Drugs and reagents

The β-blockers were resuspended in dimethyl sulfoxide (DMSO).

Irradiation of MB cells, spheroids and organotypic cultures

Exposure of the different culture models to IR was performed in the Radiotherapy Department of Pr. Cowen (Timone Hospital, AP-HM, France). A water-equivalent RW3 phantom with a chamber adaptation plate was used for therapy dosimetry. Cells, spheroids and organotypic cultures were exposed to doses ranging from 1.8 Gy to 10 Gy, using a Synergy MLCi Elekta® linear accelerator with a 6 MV beam and a flow rate of 400 MU/min.
The PDX cells were irradiated on the RadExp platform of the Curie Institute using the X-Rad 320 equipment (Precision X-ray irradiation).

Cell growth and survival assays

Cell viability assays were performed as previously described.18 Briefly, the human MB cells were seeded in flat-bottom 96-well microplates (2,000 cells/well for DAOY, ONS-76 and UW228-2; 9,000 cells/well for HD-MB03 and D283 Med; 12,000 cells/well for D341 Med) for 24 h. Cells were then exposed to β-blockers alone or in combination with IR for 72 h. Metabolic activity was detected by the addition of Alamar Blue and spectrofluorimetric analysis using a PHERAstar® FS multi-plate reader (BMG LABTECH; λex 540 nm / λem 590 nm). IC50 values were determined as previously described.28

For IncuCyte experiments (RRID:SCR_019874), SHH MB tumour cells were plated in 96-well plates (5,000 cells/well for murine SHH-MB and 7,500 cells/well for ICN-MB-PDX-12) pre-coated with poly-D-lysine (EMD Millipore, ref. A-003-E) and Matrigel (BD Biosciences, ref. 354234). The next day, tumour cells were treated with a range of concentrations of β-blockers or the control, as indicated in the figures. Propidium iodide (PI, Sigma-Aldrich; 0.3 mg/ml) was also added to the medium to evaluate cell death. The plates were then scanned for phase contrast and PI staining for 72–96 h, using the IncuCyte® live-cell analysis system with a 4X objective. Proliferation was measured using quantitative kinetic processing metrics from time-lapse image acquisition and shown as the percentage of culture confluence over time. For the PI staining, the percentage of PI-positive cells was divided by the percentage of cell confluence for each well, thus indicating the level of dead cells in each well.

For the CellTiter-Glo® Luminescent Cell Viability Assay, Group 3 MB tumour cells were cultured as neurospheres in round-bottom 96-well plates (5,000 cells/well). Tumour cells were then treated either (1) once with a range of concentrations of β-blockers or (2) daily with β-blockers and/or IR for five consecutive days. Cell viability was evaluated 72 h later using the CellTiter-Glo® Luminescent Cell Viability Assay according to the manufacturer's instructions (Promega Corporation, ref. G7570).

Spheroid growth assay

DsRed-expressing MB cells were plated in round-bottom 96-well microplates (1,200 cells/well for HD-MB03; 1,500 cells/well for UW228-2 and DAOY; 2,000 cells/well for ONS-76, D283 Med and D341 Med) in a culture medium containing 10% FBS and 20% methyl cellulose (Sigma-Aldrich, ref. M7027) for 72 h. Spheroids were then treated daily with β-blockers and/or IR for five days. Spheroid growth was quantified over time by acquisition of the DsRed fluorescence signal using the PHERAstar® FS multi-plate reader (λex 580 nm/λem 620 nm; "well scanning" 10 × 10). Images were captured with the JuLI™ Stage live imaging system (Nano-Entek).

Cerebellar organotypic model development and analysis

To establish organotypic cultures of cerebellar tissue, mouse cerebella were surgically harvested and sectioned into 250 µm thick slices using a vibrating-blade microtome (RRID:SCR_016495). A spheroid formed from DsRed-expressing MB cells was then grafted onto each cerebellum slice. These organotypic co-culture models were then placed on inserts with 0.4 µm pore-size membranes (Falcon®, ref. …) and 1% PS.
After daily exposure to IR and/or β-blockers for 5 consecutive days, tumour growth and invasion within the cerebellum slices were analysed over time using the JuLI™ Stage imaging system and the PHERAstar® FS multi-plate reader (λex 580 nm/λem 620 nm; fluorescence signal acquisition with a 15 × 15 matrix scanning mode).

Sample preparation and immunohistochemistry

Samples were fixed overnight at 4 °C with 4% formaldehyde and prepared for paraffin inclusion using an automated tissue processor ASP 300 (RRID:SCR_018916). Dehydration, clarification and infiltration steps were performed by successive absolute ethanol, Histolemon and paraffin baths. After FFPE embedding, samples were cut at 3 µm thickness with an HM340E microtome (Thermo Scientific). Hematoxylin Eosin Safran staining was performed using an automated H&E staining Dako CoverStainer. Ki-67 and γH2AX immunohistochemistry was carried out with a rabbit anti-Ki67 antibody (RRID: AB_443209) and a mouse anti-γH2AX antibody (Merck Millipore, ref. JBW301) on a Ventana Discovery XT (RRID:SCR_018643). After deparaffinisation, antigen retrieval was performed with citrate-based buffer pH 6.5 (RiboCC Solution, CC2, ref. 760-107). The primary antibodies were incubated for 20 min at 37 °C, then an OmniMap anti-Rabbit HRP Detection Kit (ref. 760-149) was used with DAB. Finally, counterstaining was done with hematoxylin, and the slides were cleaned, dehydrated and coverslipped with permanent mounting medium. The microscopic analysis of the tissues was carried out by the pathologists of the Neuropathology Department (Timone Hospital, AP-HM, France).

Measurement of superoxide production

MB cells were seeded in 96-well microplates (2,000 cells/well for ONS-76 and 9,000 cells/well for HD-MB03) for 24 h and exposed to IR and/or β-blockers for 6 h. 3D spheroids of MB cells were formed 3 days before treatment and exposed to IR and/or propranolol for 6 h. Superoxide anion production was assessed by adding 10% V/V of WST-1 reagent (Roche, ref. 11644807001) to the wells for 30 min at 37 °C. Absorbance was measured at 450 nm with a PHERAstar® FS multi-plate reader. To normalise superoxide production to the cell number in each condition, cells were fixed with 1% glutaraldehyde and stained with a solution of 1% (W/V) crystal violet in 20% methanol (Sigma-Aldrich). The dye was finally solubilised in DMSO to measure absorbance at 600 nm.

Colony formation assay

Ninety-six-well microplates were coated with 1% agarose for 24 h. Two hundred and fifty ONS-76 cells or 500 HD-MB03 cells per well were then plated in a 10% Matrigel®-containing medium (Corning, ref. 354234) for 24 h and exposed to β-blockers and/or IR. Photos of the colonies were captured with the JuLI™ Stage imaging system and quantified using ImageJ® software, 7 and 10 days after treatment initiation for ONS-76 and HD-MB03 cells, respectively.

Measurement of cell bioenergetics: maximal mitochondrial respiration was measured after injection of FCCP. OCR-linked ATP production was calculated as the difference between basal and maximal respiration values, while the glycolytic reserve was calculated as the difference between oligomycin-enhanced and glucose-mediated ECAR values. To normalise the data to cell number, cells were fixed with 1% glutaraldehyde, stained with crystal violet in 20% methanol (Sigma-Aldrich) and solubilised with DMSO to measure absorbance at 600 nm with a PHERAstar® FS multi-plate reader. A calibration range established for each cell type was finally used to convert the absorbance values into cell numbers.
RNA sequencing

The MB cells ONS-76 WT, ONS-76 RP, ONS-76 RC and ONS-76 RN were homogenised using Buffer RLT (Qiagen, ref. 79216), and DNA-free cell lysates were obtained using genomic DNA purification columns (Qiagen). Extraction of total RNA was performed using the RNeasy Plus Mini kit (Qiagen, ref. 74134), according to the protocol supplied by the manufacturer. RNA was quantified using a NanoVue™ Plus spectrophotometer (GE Healthcare Life Sciences). RNA-Seq libraries were generated from 600 ng of total RNA using the TruSeq Stranded mRNA Library Prep Kit and TruSeq RNA Single Indexes kits A and B (Illumina), as previously described.30 The final cDNA libraries were checked for quality and quantified using capillary electrophoresis. Libraries were then sequenced on an Illumina HiSeq 4000 sequencer (RRID:SCR_016386) as single-end 1 × 50 base reads. Image analysis and base calling were performed using RTA 2.7.3 and bcl2fastq 2.17.1.14. Reads were preprocessed using Cutadapt v1.10 31 in order to remove adapter, polyA and low-quality sequences (Phred quality score below 20); reads shorter than 40 bases were discarded from further analysis. Reads mapping to rRNA were also discarded (this mapping was performed using Bowtie v2.2.8 32). Reads were then mapped onto the hg38 assembly of the human genome using STAR v2.5.3a.33 Gene expression was quantified using htseq-count v0.6.1p1 34 and gene annotations from Ensembl release 99. Statistical analysis was performed using R 3.3.2 and the DESeq2 1.16.1 Bioconductor library.35

Read counts for ADRB1, ADRB2 and ADRB3 expression in primary MB samples from patients were produced by aligning paired-end RNA-seq reads (≈90 M reads/sample, Illumina HiSeq 2500; RRID:SCR_020123) to the HG19 genome using STAR-align.33 Read counts were produced using HTSeq-count. DESeq2 (R/Bioconductor) was used to normalise reads to library size, and variance-stabilised data (VSD) were generated using the vsd function. Statistical testing for differential expression across groups was performed using an ANOVA test.

Ethics

Tumour samples from individuals with a confirmed medulloblastoma diagnosis were used for RNA-seq analysis. These were provided as part of the UK CCLG-approved biological study BS-2007-04 and/or with approval from the Newcastle North Tyneside Research Ethics Committee (study reference 07/Q0905/71); informed, written consent was obtained from the parents of all patients younger than 16 years.

All animals for PDX were housed in the animal facility of the Institut Curie, in accordance with the recommendations of the European Community (2010/63/UE) for the care and use of laboratory animals. Experimental procedures were specifically approved by the ethics committee of the Curie Institute, CEEA-IC #118 (approval numbers 03130.02 and C91471108, and Authorisation APAFiS #26879-2020081315161665-v1 given by the National Authority), in compliance with international guidelines. Cerebellar explants were obtained from the animal facility of the Faculty of Pharmacy, in accordance with the recommendations of the European Community (approval number: E 13 055 20).

Role of funders

The study sponsors did not have any role in the study design, in the collection, analysis and interpretation of data, in the writing of the manuscript or in the decision to submit it for publication.
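To make the read-processing chain in the RNA sequencing subsection above easier to follow, here is a minimal orchestration sketch. All file and index names are hypothetical placeholders, the adapter sequence is a generic Illumina one, and only widely documented options of the cited tools are used; the authors' exact command lines may differ.

```python
import subprocess

def run(cmd):
    print(" ".join(cmd))
    subprocess.run(cmd, check=True)

# 1) Cutadapt: remove adapter/polyA, trim bases with Phred < 20,
#    drop reads shorter than 40 bases.
run(["cutadapt", "-a", "AGATCGGAAGAGC", "-q", "20", "-m", "40",
     "-o", "trimmed.fastq", "raw.fastq"])     # adapter is a placeholder

# 2) Bowtie2: discard reads mapping to rRNA, keep the unaligned remainder.
run(["bowtie2", "-x", "rRNA_index", "-U", "trimmed.fastq",
     "--un", "filtered.fastq", "-S", "/dev/null"])

# 3) STAR: map the remaining reads onto the hg38 assembly.
run(["STAR", "--genomeDir", "hg38_index", "--readFilesIn", "filtered.fastq",
     "--outSAMtype", "BAM", "SortedByCoordinate"])

# 4) htseq-count: per-gene read counts against Ensembl annotations,
#    ready for differential expression analysis with DESeq2 in R.
run(["htseq-count", "-f", "bam", "Aligned.sortedByCoord.out.bam",
     "ensembl_99.gtf"])
```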
β-blockers inhibit the proliferation and survival of MB cell lines and patient-derived tumour cells
To determine the anti-proliferative properties of three β-blockers with different selectivity profiles for adrenergic receptors (the non-selective β-blocker propranolol, the mixed α/β-blocker carvedilol and the β1-selective antagonist nebivolol), we first used a panel of six human MB cell lines characteristic of group 2 SHH (UW228-2 and DAOY) and non-WNT/non-SHH (HD-MB03, ONS-76, D283 Med and D341 Med) tumours. All tested β-blockers inhibited the proliferation of MB cells, irrespective of their group (Figures 1a-c), with IC50 values ranging from 60-120 µM for propranolol, 12-15 µM for carvedilol and 13-15 µM for nebivolol (Table 1). We further showed that the activity of β-blockers results from both the inhibition of cell growth and the induction of cell death in murine SHH-MB cells (Figure S1a-f). To evaluate the three β-blockers in more clinically relevant cellular models, we cultured primary cells from group 3 and SHH patient-derived xenograft MB tumours (G3-PDX7 and SHH-PDX12, respectively). We confirmed the dose-dependent efficacy of propranolol, carvedilol and nebivolol in inhibiting the survival of these PDX-derived cells (Figures 1d-f), as well as their ability to inhibit cell proliferation and induce cell death (Figure S1g-l).

β-blockers enhance IR-mediated inhibition of MB cell proliferation and clonogenicity
To study the combination of β-blockers and radiotherapy in MB cells, we first tested a single co-treatment of IR at 2, 5 or 10 Gy and low concentrations (IC20) of propranolol, carvedilol or nebivolol. Results showed that adding the β-blockers allowed a two-fold reduction in the dose of IR while maintaining the same activity in HD-MB03 cells (Figure 2a). For example, irradiation at 2 Gy combined with the IC20 of propranolol was as effective in reducing cell survival as irradiation alone at 5 Gy. IR potentiation by propranolol was also found in ONS-76 cells (Figure 2b) and in the three other tested MB cell lines (Figure S2a-c). Similar effects were observed with low concentrations of carvedilol or nebivolol combined with IR in the different MB cell lines (Figures 2a-b and Figure S2a-c). To better explore the potential of these combinations for MB cell radiosensitivity, we conducted clonogenic assays. HD-MB03 and ONS-76 cells were exposed to propranolol and/or IR at 1.8 Gy, the daily radiation dose most widely used in the clinic. As expected, IR reduced the number of colonies by 64 ± 5% and 60 ± 5% in HD-MB03 and ONS-76 cultures, respectively (Figures 2c-d). Our results also demonstrated that propranolol decreased the clonogenicity of MB cells in a dose-dependent manner and significantly enhanced the efficacy of IR (Figures 2c-d and Figure S2d). For instance, the clonogenic capacity of HD-MB03 and ONS-76 cells exposed to the combination of IR with the IC20 of propranolol was reduced by 86 ± 3% and 82 ± 3%, respectively (Figures 2c-d, p < 0.001 vs control). To confirm the interest of such a combination in 3D tumour micromasses, we developed tumour spheroid models from MB cells stably expressing DsRed. For five consecutive days, spheroids were exposed to daily low doses of β-blockers alone or in combination with IR; propranolol, carvedilol and nebivolol sustainably potentiated IR in HD-MB03 spheroids, as compared with IR alone (Figures 3a-d).
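Returning briefly to the clonogenic read-out quoted above, the standard clonogenic-assay arithmetic (plating efficiency and surviving fraction) is sketched below; the colony counts are invented, chosen only to echo the reported effect sizes.

```python
# Standard clonogenic-assay quantification; all counts are hypothetical.
def plating_efficiency(colonies, cells_seeded):
    return colonies / cells_seeded

def surviving_fraction(colonies, cells_seeded, pe_control):
    return plating_efficiency(colonies, cells_seeded) / pe_control

pe_ctrl = plating_efficiency(colonies=120, cells_seeded=250)  # untreated ONS-76
sf_ir = surviving_fraction(colonies=48, cells_seeded=250, pe_control=pe_ctrl)
sf_combo = surviving_fraction(colonies=17, cells_seeded=250, pe_control=pe_ctrl)

print(f"SF after IR alone: {sf_ir:.2f}")             # ~0.40 -> ~60% reduction
print(f"SF after IR + propranolol: {sf_combo:.2f}")  # ~0.14 -> ~86% reduction
```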
While IR no longer significantly impacted spheroid growth at day 21 (2,317 ± 60% growth in irradiated versus 2,206 ± 79% growth in control spheroids; p > 0.05), the co-treatment with propranolol, carvedilol and nebivolol decreased spheroid growth to 724 ± 5%, 335 ± 2% and 292 ± 5%, respectively (p < 0.001, Figures 3a-d). Similar results were obtained in ONS-76 spheroids (Figure S3a). In addition, β-blockers were able to restore IR efficacy in UW228-2 and D283 Med spheroids that were unresponsive or minimally responsive (Figure S3b-c), and they could further increase IR efficacy against the highly radio-sensitive D341 Med spheroids (Figure S3d). Finally, to evaluate the efficacy of daily co-treatment on primary MB cells, we established 3D neurospheres from the G3-PDX7 cells. Our results confirmed that low concentrations of propranolol, carvedilol or nebivolol strongly potentiated the effects of IR (Figure 3e). The use of a second group 3 PDX model (G3-PDX3) further validated the relevance of combining β-blockers with daily radiotherapy in MB (Figure S3e). Altogether, our results demonstrated that β-blockers can improve the efficacy of IR in in vitro MB models.

Fractionated IR is potentiated by daily low doses of β-blockers in cerebellar organotypic models
To evaluate the potential of the β-blocker and IR combination in more clinically relevant conditions, we developed an organotypic cerebellar model in which MB spheroids stably expressing DsRed were grafted into slices of healthy mouse cerebellum. These innovative cultures were exposed daily to IR (1.8 Gy) and/or very low concentrations of propranolol (IC10, i.e., 25 µM) for five consecutive days. After seven days, our data showed that the monotherapies reduced the growth of HD-MB03 tumour masses by 23 ± 5% and 27 ± 5% in organotypic models subjected to IR and propranolol, respectively (p < 0.001 and p < 0.05, respectively; Figures 4a-b). The combinatorial treatment resulted in a reduction of 38 ± 6% (p < 0.001 compared with control), significantly exceeding the efficacy of IR alone (p < 0.001, Figures 4a-b). The potentiating effect persisted over time, with the combination reducing tumour growth by 57 ± 6% after 14 days (p < 0.001 vs control and p < 0.05 vs IR; Figures 4a-b). The benefits of combining propranolol with IR were also confirmed in organotypic cerebellar models transplanted with ONS-76 tumour masses (Figure S4a-b). After 14 days, the organotypic cultures were fixed, sectioned and subjected to HES and Ki67 staining (Figure 4c). Microscopic analysis of these labelling patterns showed that the combinatorial treatment did not induce histological lesions in the non-tumour cerebellar tissue, including in the MB periphery. Furthermore, γH2AX staining of the organotypic models showed that the co-treatment with IR and propranolol did not induce DNA damage in the non-tumour tissue either (Figure 4c, Table 2, and positive control in Figure S4c). This suggests that the combination is effective in significantly reducing MB tumour mass without inducing additional damage to the cerebellum.

β-blocker efficacy and potentiation of IR are independent of β-adrenergic receptors in MB cells
The strong synergism between IR and β-blockers in MB stresses the need for a better understanding of the underlying mechanism(s).
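Before moving to the mechanism, a short note on quantification: the spheroid and organotypic growth values quoted above are percentages relative to the day-0 DsRed signal. A minimal sketch of that normalisation is shown below; the fluorescence readings are hypothetical.

```python
import numpy as np

# Express DsRed-based tumour growth as a percentage of the day-0 signal.
def growth_percent(signal, signal_day0):
    return 100.0 * signal / signal_day0

days = np.array([0, 7, 14, 21])
control = np.array([1.0, 6.1, 13.8, 22.1])             # integrated DsRed (a.u.)
ir_plus_propranolol = np.array([1.0, 2.9, 5.2, 7.2])   # same read-out, combo arm

for d, c, t in zip(days, control, ir_plus_propranolol):
    print(f"day {d}: control {growth_percent(c, control[0]):.0f}%, "
          f"combo {growth_percent(t, ir_plus_propranolol[0]):.0f}%")
```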
Since β-blockers antagonise the β-adrenergic receptors (β-ARs) in the cardiovascular system, we first evaluated the expression pattern of the β-AR genes ADRB1, ADRB2 and ADRB3 in MB tumours from a cohort of 240 patients (Figures 5a-c). While there are significant differences in the expression of β-AR isoforms across MB groups (each p < 0.001), the median expression of ADRB2 is the highest, followed by ADRB1, and WNT MB are the only samples that express high levels of ADRB3. Kaplan–Meier and Cox regression analyses revealed that high expression (>median) of ADRB1 and ADRB2 was associated with a good prognosis in a cohort of 222 patients (Figures 5d-f). We then quantified β-AR mRNA levels in the six human SHH and non-WNT/non-SHH group MB cell lines studied. Consistent with the results obtained with patient samples, ADRB2 and ADRB3 were the major and the minor isoforms across the panel of cell lines, respectively (Figure 5g). Interestingly, despite being as sensitive as the other cell lines to the β1-selective antagonist nebivolol (Figure 1c), neither the DAOY nor the D341 Med cell line expresses ADRB1 (Figure 5g). This suggests that the efficacy of β-blockers in MB cells may not rely on the canonical β-adrenergic pathway. To confirm this hypothesis, we silenced ADRB1 and ADRB2 in HD-MB03 and ONS-76 cells using RNAi technology (Figure S5a-b), β3-AR not being a target of any of the three β-blockers tested here. Our data showed that the efficacy of propranolol, carvedilol and nebivolol, alone or in combination with IR, was not impacted by β-AR silencing (dotted vs. solid lines, Figure 5h and Figure S5c-g). Moreover, β-AR siRNA did not improve the effects of IR alone in HD-MB03 cells, regardless of the dose used (Figure 5h and Figure S5c-d). In ONS-76 cells, β-AR silencing even significantly reduced the effects of IR (p < 0.05; Figure S5e-g). These results indicate that β-ARs are involved neither in β-blocker-induced cytotoxicity nor in the radio-sensitisation of MB cells, and they support the idea that β-blockers trigger an alternative signaling pathway to potentiate radiotherapy in MB cells.

Response to β-blockers is associated with inhibition of energy metabolism in MB cells
Our previous work in triple-negative breast cancer highlighted the ability of propranolol to affect energy metabolism pathways in tumour cells. 36 To determine whether β-blockers disrupt energy metabolism in MB cells, we characterised their bioenergetic profiles by measuring mitochondrial respiration via the oxygen consumption rate (OCR) and glycolytic activity via the extracellular acidification rate (ECAR). β-blocker treatment induced a significant drop in mitochondrial respiratory functions in ONS-76, HD-MB03, UW228-2 and DAOY cells, regardless of their initial bioenergetic status (Figure 6a and Figure S6a). Indeed, our data showed a decrease in both basal and maximal respiration after 24 h of treatment with propranolol, carvedilol and nebivolol, at concentrations ranging from IC20 to IC80 (Figures 6b-c). As a result, ATP production was strongly reduced in all four MB cell lines exposed to the three β-blockers, even at the lowest concentrations (Figure 6d and Figure S6b, d, f). In addition, we demonstrated that incubation with propranolol, carvedilol and nebivolol led to a decrease in the glycolytic reserve in ONS-76, HD-MB03, DAOY and UW228-2 cells (Figure 6e and Figure S6c, e, g).
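The ATP-production and glycolytic-reserve read-outs used here follow the definitions given in the Methods. A worked example with invented OCR/ECAR values is sketched below; the dictionary field names are assumptions for illustration.

```python
# Bioenergetic read-outs as defined in the Methods: ATP-linked OCR is the
# difference between basal and maximal (FCCP-uncoupled) respiration, and the
# glycolytic reserve is oligomycin-enhanced ECAR minus glucose-driven ECAR.
def atp_linked_ocr(basal_ocr, maximal_ocr):
    return maximal_ocr - basal_ocr

def glycolytic_reserve(ecar_oligomycin, ecar_glucose):
    return ecar_oligomycin - ecar_glucose

# Hypothetical per-well means (OCR in pmol O2/min, ECAR in mpH/min), already
# normalised to cell number via the crystal-violet calibration.
wells = {
    "control":     {"basal": 85.0, "maximal": 190.0, "ecar_glc": 22.0, "ecar_oligo": 41.0},
    "propranolol": {"basal": 60.0, "maximal": 95.0,  "ecar_glc": 20.0, "ecar_oligo": 24.0},
}

for name, w in wells.items():
    print(name,
          "ATP-linked OCR:", atp_linked_ocr(w["basal"], w["maximal"]),
          "glycolytic reserve:", glycolytic_reserve(w["ecar_oligo"], w["ecar_glc"]))
```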
Thus, treatment with β-blockers results in a metabolic catastrophe that deprives MB cells of their adaptive bioenergetic capacities. To better understand the importance of bioenergetics in the response to treatment, we generated β-blocker-resistant ONS-76 cells by exposing them to increasing concentrations of propranolol, carvedilol or nebivolol for 16 weeks (Figure S7a). The resulting cell lines, i.e., ONS-76 RP, ONS-76 RC and ONS-76 RN, were cross-resistant to all β-blockers (Figure S7b). By qRT-PCR, we showed that the expression of β-AR genes was not altered in these resistant cells (Figure S7c). RNA sequencing further indicated that the resistance could not be explained by downregulation of key genes of β-AR downstream signaling and its transcriptional targets (Figure S7d). Although four of the ten isoforms of adenylate cyclase are overexpressed in ONS-76 RP cells (ADCY1, 2, 5 and 8; p < 0.001), this pattern of overexpression was not found in ONS-76 RC and ONS-76 RN cells and therefore may not be the common factor behind the cross-resistance of the cell lines to β-blockers. Analysis of the energy metabolic activities in the three β-blocker-resistant cell lines showed that they had higher mitochondrial OCR and ATP production than the sensitive parental ONS-76 WT cells (p < 0.001, Figures 7a-b and Figure S7e), but no significant changes in glycolytic capacity (Figures 7c-d). However, as illustrated with ONS-76 RC cells exposed to the IC20 of propranolol, carvedilol or nebivolol, resistant cells were able to counteract both the β-blocker-mediated suppression of mitochondrial ATP production and that of the glycolytic reserve (Figures 7e-f). Lastly, we measured the effects of the combination of β-blockers and IR on the bioenergetics of MB cells. This analysis was performed at a short time point (6 h) and with low doses of propranolol (IC10 and IC20) to better examine the effects of the combinatorial treatment. Propranolol alone and IR alone (1.8 Gy) did not significantly affect the ATP production or glycolytic reserve of HD-MB03 cells under these conditions (Figures 7g-h). Nevertheless, IR combined with IC20 propranolol inhibited the two processes by 39 ± 5% and 63 ± 11%, respectively, as compared with IR monotherapy (p < 0.05, Figures 7g-h). This potentiation was also confirmed in ONS-76 cells (Figure S7f-g). Our results thus support the importance of bioenergetic disturbances in the response of MB cells to β-blockers and their combination with radiotherapy.

β-blockers enhance IR-induced oxidative stress and consequently increase DNA damage in MB cells
Reactive oxygen species (ROS) are the major effectors of IR, contributing substantially to radiation-induced DNA damage and cancer cell death. 37 Given the effects of co-treatment on mitochondrial energy metabolism, we first determined whether the combination therapy could disrupt the redox balance by assessing superoxide ion levels. Six hours post-irradiation, an expected increase in relative superoxide level of 34 ± 4% was observed in irradiated HD-MB03 cells compared to control cells (p < 0.001, Figure 8a). Propranolol, at 10 µM (IC5) and 25 µM (IC10), also increased the production of ROS, by 36 ± 12% and 31 ± 9%, respectively (p < 0.01, Figure 8a). The combination of IR with these low doses of propranolol led to an additional upregulation of ROS levels, up to 64 ± 4% and 59 ± 5%, respectively (Figure 8a, p < 0.001).
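The superoxide percentages above are increases relative to untreated controls, after the WST-1 signal has been normalised to cell number. A small sketch with made-up readings illustrates the arithmetic.

```python
# Relative superoxide increase over control, using the per-cell WST-1 signal
# (A450 divided by the crystal-violet-derived cell number). Values invented.
def relative_increase(signal_treated, signal_control):
    return 100.0 * (signal_treated - signal_control) / signal_control

per_cell_ctrl = 0.41 / 5200    # untreated
per_cell_ir = 0.54 / 5100      # IR alone
per_cell_combo = 0.63 / 4900   # IR + IC20 propranolol

print(f"IR alone: +{relative_increase(per_cell_ir, per_cell_ctrl):.0f}%")
print(f"IR + propranolol: +{relative_increase(per_cell_combo, per_cell_ctrl):.0f}%")
```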
These results were confirmed with carvedilol at 5 µM (IC10) and 7.5 µM (IC20) in HD-MB03 cells (Figure S8a), as well as in MB 3D spheroids (Figure S8b, c). A significant overproduction of superoxide ions, of 96 ± 6% (p < 0.001) and 78 ± 7% (p < 0.01), was also found in ONS-76 cells exposed to IR combined with the IC10 of propranolol or the IC10 of carvedilol (Figure 8b). However, in the β-blocker-resistant ONS-76 RC cells, neither propranolol nor carvedilol was able to amplify the effects of IR on superoxide production (Figure 8b). Potentiation of IR efficacy by the two β-blockers was also significantly reduced in these cells (Figure S8b), supporting a tight link between ROS production and the response of MB cells to the combinatorial treatment. As the overproduction of ROS may contribute to an increase in cyclooxygenase 2 (COX-2) expression, which has been associated with the acquisition of secondary radioresistance by tumour cells, 38 we verified that such a feedback loop was not triggered in MB cells. By analysing the relative COX-2 expression level in HD-MB03 cells 24 h after treatment, we showed that it was reduced from 1.7 ± 0.3 in 1.8 Gy-irradiated cells to 0.8 ± 0.1 and 0.7 ± 0.1 in cells subjected to IR combined with IC5 and IC10 propranolol, respectively (p < 0.05, Figure 8c). The inhibition of the IR-mediated increase in COX-2 expression by the combinatorial treatment was confirmed in ONS-76 cells (Figure S8c). Lastly, we evaluated the phosphorylation level of H2AX, an early response to DNA double-strand breaks that here may be caused by ROS exposure. In HD-MB03 cells, 4 h after treatment, IR triggered the expected accumulation of γH2AX, as did low concentrations of propranolol (Figure 8d). Our results also showed a significant increase in the relative γH2AX level, from 2.7 ± 0.4 in irradiated cells to 8.6 ± 0.9 and 6.3 ± 0.7 in cells exposed to the co-treatment with IC5 and IC10 propranolol, respectively (p < 0.001, Figure 8d). By scavenging superoxide ions (Figure 8a), Mito-TEMPO (MT) counteracted the increase in γH2AX level induced by the combinatorial treatments, which dropped to 2.0 ± 0.4 and 2.6 ± 0.2, respectively (p < 0.01 and p < 0.05 vs co-treatments, respectively, Figure 8d). Likewise, scavenging of free radicals by Troxerutin (TROX; Figure 8a) led to a significant reduction in IR-mediated γH2AX accumulation (p < 0.05 vs co-treatments, Figure 8d). Taken together, our results suggest that β-blockers can specifically modulate mitochondrial bioenergetics and ROS production in MB cells, thus priming them for IR-induced oxidative stress and DNA damage. Our results therefore show that the efficacy of the combination of IR with β-blockers is, at least in part, based on a strong inhibition of MB cell bioenergetics, linked to the triggering of an endogenous oxidative stress.

Discussion
In recent years, many advances have been made in the management of children with MB. Nevertheless, a real concern remains the long-term sequelae due to early exposure to toxic treatments. 7,10 Drug repurposing appears to be a major tool for rapidly finding effective and well-tolerated therapeutic approaches in oncology. 11,13 It may be an especially valuable strategy for managing rare cancers such as paediatric tumours. Cardiovascular regulators, anti-helminthic drugs and non-steroidal anti-inflammatory drugs have recently been shown to reduce MB tumour cell progression in vitro and in vivo.
39 Here, we evaluated in MB models propranolol, carvedilol and nebivolol, which are lipophilic β-blockers that can cross the blood–brain barrier and enter the cerebrospinal fluid and intracranial tissue. 40 Our results showed that the three β-blockers potentiate the efficacy of IR in a panel of MB cells, PDX cells and spheroid micromasses, including those poorly responsive to radiation. These results are consistent with the recent study by Chaudhary et al., which described a propranolol-mediated sensitisation to IR in non-small cell lung cancer cells in vitro. 20 Enhanced effectiveness of IR at reducing the growth of gastric adenocarcinoma in vivo when combined with propranolol was also shown recently. 23 Retrospective clinical studies have also shown that the combination of β-blockers and radiotherapy did not result in increased toxicities in patients with lung cancer 20,41,42 and brain tumours such as meningioma. 24 The interest of combining radiotherapy with β-blockers is further supported by the fact that β-blockers are widely regarded as good brain protectors that can be used, for instance, after head trauma, including in children. 43,44 In response to the ever-increasing need to find alternatives to animal experimentation, we developed an innovative organotypic cerebellum model in which MB tumour progression was analysed over time. Such ex vivo tissue cultures are described as highly relevant models for studying the evolution of pathologies and testing their response to different therapeutic strategies, including in MB. 45 We further showed that the dose of IR can be significantly reduced while maintaining treatment efficacy in MB cells by adding β-blockers. As the severity of cognitive damage in patients correlates with radiation doses, 10,46 this suggests that combining β-blockers with IR may help limit treatment side effects. One of the advantages of repurposing β-blockers as anti-cancer agents is that they can be translated to the clinic without the need for extensive preclinical studies, including in vivo experiments. For instance, propranolol was first used in a clinical setting in combination with metronomic chemotherapy in patients before it was later confirmed to be active in vivo in mouse models. 17,47,48 In addition, an ongoing clinical trial (NCT04682158) exploring the combination of propranolol with chemo-radiation is based on in vitro experiments and retrospective clinico-epidemiological experience in patients who received β-blockers for non-cancer purposes in combination with radiotherapy. 49 Another potential example concerns multiple myeloma, for which clinical trials have been completed (NCT02420223) or recently initiated (NCT02420223) without myeloma-specific in vivo data, but again based on in vitro and clinico-epidemiological experiments. 50 The results of the present article can thus provide a strong basis for initiating an early-phase clinical trial. The literature is divided regarding the mechanisms responsible for the anti-tumour properties of β-blockers. Inhibition of the β-adrenergic signaling pathway has been suggested to be involved in propranolol activity in pancreatic cancer cells. 51 Studies in angiosarcoma cells provide a good illustration of the conflicting hypotheses. Amaya et al. proposed the involvement of the β-adrenergic pathway in the mechanism of action of propranolol, 52,53 whereas a recent study by Overman et al. argued the opposite and showed a key role for the SOX18 protein in the response to the β-blocker.
52,53 In the paediatric tumours neuroblastoma and hemangioma, the results agree that β-ARs are not responsible for the anti-tumour efficacy of β-blockers, showing that the R-enantiomer of propranolol, which has very low affinity for β-ARs, is as effective as the S-enantiomer, which has high affinity for the receptors. 53-55 Although patient MB biopsies showed detectable β-AR expression, we demonstrated herein that their silencing did not alter the efficacy of propranolol, carvedilol and nebivolol in MB cells, suggesting that the efficacy of β-blockers in MB cells may not result from inhibition of their canonical targets. We and others have reported that propranolol-exposed cancer cells were sensitised to the metabolic stress induced by metformin, rapamycin, 2-deoxy-D-glucose or dichloroacetate. 36,56-58 Here, we showed that the activity of the β-blockers in MB cells was driven by a rapid disruption of mitochondrial bioenergetics, which led to a sustained accumulation of ROS. This is consistent with the alteration of the mitochondrial fusion/fission balance that we previously observed in neuroblastoma cells treated with propranolol. 18 The significance of cancer cell energy metabolism in the response to β-blockers is further strengthened by the fact that a lack of impact on the mitochondrial and glycolytic pathways results in resistance of MB cells to these repurposed drugs. The efficacy of radiation therapy relies on its ability to cause DNA breaks and subsequently trigger cell death. The DNA damage mainly results from the generation of ROS, such as superoxide and hydroxyl radicals, during H2O radiolysis. 37 In the present study, we showed that β-blockers potentiate IR-mediated DNA damage in MB cells by increasing superoxide accumulation. Our results are consistent with the fact that pharmacologic depletion of glutathione, which belongs to the cellular antioxidant system, results in significant radiosensitisation of cancer stem cells. 59 Recently, Gd-doped titania nanoparticles that target mitochondria to enhance ROS accumulation were also shown to sensitise breast cancer cells to radiotherapy-induced apoptosis in vitro and in vivo. 60 Increasing ROS levels in MB tumour cells during radiotherapy may thus significantly enhance the efficiency and decrease the dosage of radiation. COX-2 overexpression has been associated with resistance to IR in prostate, lung and oral squamous cancer cells. 61-63 Conversely, COX-2 inhibitors can synergise with IR in inducing apoptosis, 63,64 including in MB stem-like cells. 65,66 COX-2 inhibition has been suggested as a potential strategy in MB to decrease the production of prostaglandin E2 (PGE2) and ultimately promote tumour cell death. 67 Here, we showed that propranolol prevented the IR-mediated increase in COX-2 expression, but the involvement of the PGE2 pathway in improving the response of MB cells to the combinatorial therapy remains to be better characterised. To conclude, our work highlights the interest of channeling the ability of β-blockers to inhibit mitochondrial bioenergetics to design new therapeutic combinations with radiotherapy that lower the dose while maintaining anti-tumour activity. Given the few druggable molecular targets identified in non-WNT MB and the fact that the young age of patients limits treatment options, our work proposes an alternative approach in which drug repurposing could be quickly translated to the clinic to improve the efficacy of radiotherapy and/or decrease its toxicity.
Data sharing statement
Our RNA-seq information and raw data from ONS-76 cells are publicly available in the GEO database (GEO accession number: GSE191165). The other data and materials are available from the corresponding authors upon reasonable request.

Declaration of interests
The authors have declared no conflict of interest.

Acknowledgements
We thank the animal facility staff for mouse colony management, Hua Yu for mouse model monitoring and Sophie Heinrich from the RadExp platform for assistance with the irradiation experiments. We also thank the ICEP platform of IPC/CRCM for their help with the immunohistochemistry experiments. This work was supported by research funding from charities (RESOP, La Marie-Do, AROU, La Compagnie Après la Pluie, Société Française des Cancers de l'Enfant, Courir pour la Vie - Courir pour Curie, Ligue Contre le Cancer - Corse Sud) and institutions (Cancer Research UK, Canceropôle PACA, Institut National du Cancer and Région Sud). Sequencing was performed by the GenomEast platform, a member of the 'France Génomique' consortium (ANR-10-INBS-0009).

Supplementary materials
Supplementary material associated with this article can be found in the online version at doi:10.1016/j.ebiom.2022.104149.
Application of bacteriophage φPaP11-13 attenuates rat Cutibacterium acnes infection lesions by promoting keratinocyte apoptosis via inhibiting the PI3K/Akt pathway

ABSTRACT Acne vulgaris caused by antibiotic-resistant Cutibacterium acnes (C. acnes) infection is difficult to treat conventionally. Phages have been suggested as a potential solution, but research on the mechanism of phage treatment is inadequate. This research investigates the underlying molecular mechanisms by which phage φPaP11-13 attenuates C. acnes-induced inflammation in rat models. We found that rats infected with C. acnes had greater average ear thickness, greater enrichment of inflammatory cells as shown by hematoxylin-eosin (HE) staining, and fewer TUNEL (TdT-mediated dUTP nick-end labeling)-positive keratinocytes as visualized by IF staining. Moreover, an increase of IGF-1 and IGF-1 receptor (IGF-1r) was detected using immunohistochemical (IHC) staining, Western blot (WB), and quantitative real-time PCR (qRT-PCR) after infection with C. acnes, and this increase was reduced after the application of phage φPaP11-13. By applying an IGF-1 antibody, it was demonstrated that the severity of C. acnes-induced inflammation was related to the expression of IGF-1. Through WB and qRT-PCR, activation of the PI3K/Akt pathway and a down-regulation of the BAD-mediated apoptosis pathway were discovered after C. acnes infection. Subsequently, it was shown that this activation of the PI3K/Akt pathway against the BAD-mediated apoptosis pathway was alleviated after applying phage φPaP11-13. Furthermore, applying an IGF-1r inhibitor, a Pan-PI3K inhibitor, and an Akt inhibitor reversed the changes in BAD induced by C. acnes and phage φPaP11-13. This study demonstrates that one of the critical mechanisms underlying the attenuation of acne vulgaris by phage φPaP11-13 is lysing C. acnes and regulating keratinocyte apoptosis via the PI3K/Akt signaling pathway.

IMPORTANCE Cutibacterium acnes infection-induced acne vulgaris may have severe physical and psychological consequences. However, the overuse of antibiotics fosters drug resistance, bringing challenges to treating Cutibacterium acnes. Bacteriophages are currently proven effective against multidrug-resistant (MDR) Cutibacterium acnes, but there is a significant lack of understanding of phage therapy. This study demonstrated a novel way of curing acne vulgaris by using phages to promote the death of excessive keratinocytes in acne lesions through the lysis of Cutibacterium acnes. However, the regulation of this cell cycle has not been proven to be directly mediated by phages. The hinted ternary "phage-bacteria-host" relationship inspires strong interest for future phage therapy studies.

of the sebaceous gland (4), changes in the differentiation of keratinocytes (5), infection by Cutibacterium acnes (C. acnes), and the resultant immune cascades (6). Although acne is commonly seen and non-fatal, inappropriate treatment can lead to severe psychological and social problems (7), including but not limited to anxiety, depression (8), and even higher suicidal tendencies (9). Cutibacterium acnes, also known as Propionibacterium acnes (10), is considered one of the major contributors to the onset of acne vulgaris (11). Antibiotic-resistant C.
acnes infection may result in failure of acne treatment, disruption of the skin microbiota, and widespread dissemination of antibiotic-resistant strains (12). To reduce antibiotic resistance, monotherapy with topical antibiotics is no longer recommended (13). Instead, according to the American Academy of Dermatology guidelines, topical retinoids and benzoyl peroxide are recommended in combination with antibiotics (14). However, with the widespread application of antibiotics for treating acne vulgaris, C. acnes resistance is becoming a more serious problem. Antibiotic-resistant C. acnes strains have been discovered worldwide, primarily resistant to erythromycin and tetracycline (15). It was also demonstrated that C. acnes develops a more pronounced resistance to tetracyclines (16). Furthermore, in 2014, tetracycline-, clindamycin-, and erythromycin-resistant C. acnes strains were isolated from acne patients who had not received antibiotic therapies (17). In addition, the long-term use of tetracycline for acne treatment has been shown to cause specific side effects, which reduce the effectiveness of therapy by affecting patient adherence (18, 19). Non-antibiotic antimicrobial dressings have been developed to reduce wound infection (20). However, with increasing antibiotic resistance and multiple adverse effects, the demand for alternative therapies for acne vulgaris is becoming more pressing.

Phages are viruses that naturally control bacterial populations by infecting and lysing bacteria with high specificity for a single species or strain of bacteria (21). Phage therapy, as an alternative to antibiotic therapies, is attracting renewed interest from researchers. Previous studies have shown that phages are highly efficient in treating antibiotic-resistant bacterial infections (22-25). It has been reported that bacteriophages may be more effective than antibiotics in treating acne because of their ability to amplify (21). In addition to alleviating inflammation, phages have been reported to modulate the gut metabolome by knocking down their targeted bacteria, providing more potential ways to treat infectious diseases (26). Phages have been applied in treating Pseudomonas aeruginosa infections in burn wounds and urinary tract infections in patients undergoing transurethral prostatic resection, and phage treatment was non-inferior to antibiotic therapy (27, 28). C. acnes phage was first identified by Brzin in 1964 (29). In vitro experiments demonstrated the ability of C. acnes phage to control the growth of C. acnes by lysis (30). Animal experiments and clinical trials have been performed to demonstrate the therapeutic effects of C. acnes phage in mouse models and acne patients (31, 32), revealing the significant potential of phages in treating antibiotic-resistant bacteria. However, the molecular mechanisms of phage treatment for antibiotic-resistant C. acnes infections are inadequately studied, which greatly hampers the understanding of phage therapy.

Our study aimed to verify the efficacy of phage therapy and to explore the specific molecular mechanisms of treating acne vulgaris with phage, using rat models infected with the antibiotic-resistant C. acnes strain Pacne11-13 and treated with phage φPaP11-13.

Bacteria and phage preparation
The clinical Cutibacterium acnes strain Pacne11-13 was isolated from clinical samples of acne vulgaris patients and stored in the clinical laboratory of Xinqiao Hospital, Chongqing, China (33). The C.
acnes strain was isolated, and identification was conducted following the methods recommended by the Clinical and Laboratory Standards Institute (CLSI) (34). C. acnes Pacne11-13 was resuscitated and cultured in brain-heart infusion (BHI) broth (pH 7.4, Oxoid, UK) at 37°C under anaerobic conditions. Colony-forming units (CFU) were measured through the plate count method (35). Bacteriophage φPaP11-13 was isolated from sewage collected at the medical wastewater treatment center of Xinqiao Hospital and has been reported previously (33). Phage φPaP11-13 against C. acnes was purified and identified with the double-layered agar method following the previous report (36). Bacteriophage titers (pfu/mL, plaque-forming units) were measured through the spot test described in the former study (36). Transmission electron microscopy (TEM, Hitachi HT7700, Japan) was used to observe and photograph phage morphology.

Spectrophotometric assay
The optical density of the bacterial culture was measured at a wavelength of 600 nm (OD600) with a spectrophotometer (721N, Sunny Hengping Instrument, China) to monitor the growth of C. acnes. C. acnes Pacne11-13 was cultured in BHI medium (pH 7.4, Oxoid, UK) at 37°C under anaerobic conditions to the exponential phase (bacterial culture time = 30 hours, OD600 = 0.6890). Subsequently, with a sample size of 5 per condition, flasks containing C. acnes in 100 mL of medium received either 10 µL of phage (1 × 10⁸ pfu/mL) or 10 µL of phosphate-buffered saline (PBS), were co-cultured for 6 hours, and were measured with the spectrophotometer.

Animal experiments
Approval for the animal experiments was obtained from the Medical Ethics Committee of the Second Affiliated Hospital (Xinqiao Hospital) of Army Medical University, PLA (approval number: AMUWEC2021580), and the animal experiments were conducted in accordance with the approved protocol. Adult male Sprague-Dawley rats were purchased from the Experimental Animal Center at the Army Medical University. The acne rat models were established by injecting C. acnes (1.5 × 10⁸ CFU) into the ventral middle part of the rats' right ears (37). Once the acne models were established, with the sample size calculated following a previous study (38), the acne rats were equally divided into the C. acnes model group (C. acnes, n = 5), the antibiotics administration group (C. acnes + Antibiotics, n = 5), and the phage administration group (C. acnes + Phage, n = 5) through a randomized block design. The antibiotic administration group, the phage administration group, and the C. acnes model group were locally injected with antibiotics (doxycycline and minocycline, 1:1, 0.40 mg in total), phage (3 × 10¹¹ pfu), and the same volume of normal saline, respectively, 6 hours after the models were established. Through a randomized block design, the control group (Control, n = 5) was set up by injecting the same volume of normal saline, whereas the phage control group (Phage, n = 5) was set up by locally injecting phage (3 × 10¹¹ pfu) to confirm that the phage itself had no additional effects. Exogenous IGF-1 protein (ab198570, Abcam) and IGF-1 antibody (ab63926, Abcam) were injected topically into the ear lesions of the acne inflammation rat models to investigate the effect of IGF-1 on C. acnes-induced inflammation. All rat tissue samples were taken immediately after euthanasia of the rats.
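The plate-count and spot-test titres described above reduce to the same dilution arithmetic; a short sketch follows, with invented colony and plaque counts.

```python
# Titre calculations for the plate count (CFU) and spot test (pfu); the counts
# and dilutions below are made up for illustration.
def titre_per_ml(count, dilution, volume_ml):
    """count / (dilution factor x volume plated), for CFU/mL or pfu/mL."""
    return count / (dilution * volume_ml)

# e.g. 68 colonies on the 10^-6 plate after spreading 0.1 mL of culture:
print(titre_per_ml(count=68, dilution=1e-6, volume_ml=0.1))    # 6.8e8 CFU/mL

# e.g. 30 plaques in a 10 uL spot of the 10^-8 phage dilution:
print(titre_per_ml(count=30, dilution=1e-8, volume_ml=0.01))   # 3.0e11 pfu/mL
```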
Hematoxylin-eosin (HE) staining
Tissue sections from rats' right ears were fixed with 4% formalin, embedded in paraffin, and sliced perpendicularly to the long axis. The 4-μm-thick slides were stained with hematoxylin-eosin (39), and the images were captured under an optical microscope (CX-21, OLYMPUS) for histopathological analysis. Ear thickness data were measured by visualizing HE-stained sections with pre-set graduated scales. Inflammatory cells were manually identified and counted using ImageJ software (ImageJ 1.53k, National Institutes of Health, USA) in fields of 200 × 200 pixels randomly captured in each slide at 100 × magnification.

Bacterial load quantification
Full-thickness ear tissue specimens of about 5 mm × 5 mm from rats' right ears were weighed and homogenized in PBS. The supernatant was then collected and serially diluted in a series of concentration gradients from 10⁻¹ to 10⁻⁷. Then, 0.01 mL of each dilution was evenly spread on BHI plates and cultured at 37°C under anaerobic conditions for 24 hours. The experiment was repeated three times for each sample. The bacterial load was expressed in CFU/g (CFU, colony-forming units).

Immunohistochemical (IHC) staining
Immunohistochemistry was conducted on paraffin sections fixed in paraformaldehyde. A primary anti-IGF-1 antibody (1:100, ab9572, Abcam), a primary anti-IGF-1r antibody (1:75, ab39675, Abcam), and a secondary goat anti-rabbit antibody labeled with horseradish peroxidase (HRP, 1:200, AS-1107, Aspen) were used for immunohistochemistry. Briefly, paraffin sections were dewaxed and rehydrated. Antigen retrieval was performed, and the slides were cooled naturally, washed three times with PBS, and incubated in a 3% hydrogen peroxide solution for 10 minutes. The samples were then incubated with 5% BSA after natural drying, subsequently with the primary antibody at 4°C overnight, and with the HRP-labeled secondary antibody for 50 minutes at 37°C, followed by reaction with freshly prepared DAB solution and counterstaining with hematoxylin. For scoring, three fields were randomly captured in each slide at 200 × magnification and scored using ImageJ software (ImageJ 1.53k, National Institutes of Health, USA). The staining intensity scores of different areas in each field were categorized as 0 (negative), 1 (low-positive), 2 (positive), and 3 (high-positive) using the IHC Profiler plugin embedded in ImageJ software (40). The final IHC score of each field was obtained by summing the staining intensity scores of the four categories, each weighted by the percentage of the corresponding area.
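The final IHC score described above is a percentage-weighted sum of the four intensity classes produced by the IHC Profiler plugin. A minimal sketch, with invented pixel percentages, is given below.

```python
# Weighted IHC score: each pixel zone gets an intensity class (0 = negative,
# 1 = low-positive, 2 = positive, 3 = high-positive); the field score is the
# sum of the classes weighted by their pixel percentages. Percentages invented.
def ihc_score(percent_by_class):
    """percent_by_class maps intensity class (0-3) to % of pixels (sums to 100)."""
    return sum(cls * pct for cls, pct in percent_by_class.items()) / 100.0

field = {0: 35.0, 1: 30.0, 2: 25.0, 3: 10.0}
print(ihc_score(field))   # 0*0.35 + 1*0.30 + 2*0.25 + 3*0.10 = 1.10
```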
Immunofluorescence (IF) staining
IF staining was conducted on paraformaldehyde-fixed paraffin sections. A primary cytokeratin 10 antibody (1:200, 18343-1-AP, Proteintech Group) and a secondary CY3-labeled goat anti-rabbit antibody (1:50, AS-1109, Aspen) were used for IF staining. TdT-mediated dUTP nick-end labeling (TUNEL) staining was performed with a TUNEL staining kit (In Situ Cell Death Detection Kit, Roche). In brief, paraffin sections were dewaxed, rehydrated, and washed three times with PBS. Antigen retrieval was performed by heating the slides in EDTA buffer in a microwave oven until boiling. After washing three times with PBS, the samples were incubated with 3% hydrogen peroxide for 10 minutes. After being washed three times in PBS and dried, the samples were permeabilized with 0.5% Triton X-100 in PBS for 10 minutes and incubated with 5% BSA for 20 minutes. Subsequently, the samples were incubated with the primary antibody overnight at 4°C and washed three times with PBS. The samples were then incubated with the secondary CY3-labeled antibody for 50 minutes at 37°C and washed three times with PBS. The samples were incubated in TdT and dUTP mixed at a ratio of 1:9 for 60 minutes at 37°C, followed by washing three times with PBS. The samples were counterstained with DAPI for 5 minutes and observed under a fluorescence microscope (BX63, OLYMPUS). The percentage of TUNEL-positive cells was evaluated using ImageJ software (ImageJ 1.53k, National Institutes of Health, USA) at 100 × magnification.

Quantitative real-time PCR (qRT-PCR)
Total RNA was extracted from rats' ear tissues using TRIzol reagent (Invitrogen) following the standard process (41). Reverse transcription was performed using the PrimeScript RT reagent Kit with gDNA Eraser (TaKaRa) following the manufacturer's instructions. Quantitative real-time PCR was performed using a qRT-PCR reagent kit (SYBR Premix Ex Taq, TaKaRa) on a StepOne Real-Time PCR System (Life Technologies). GAPDH was used as the endogenous normalization control, and relative expression was calculated by the 2⁻ΔΔCt method. Primer sequences for qRT-PCR were designed with the Primer-BLAST online tool, according to the design principles proposed in previous studies (42, 43). Original gene sequences were obtained from the NCBI gene database (44). The primers used for qRT-PCR were purchased from GeneCreate (Wuhan, China). Primer sequences are shown in Table 1.

Statistical analysis
Data are expressed as mean ± SD from at least three independent replicates. Statistical analyses were conducted using SPSS software (SPSS 24.0, IBM), and a two-tailed Student's t-test was adopted to analyze differences between two groups. Differences were considered statistically significant at P < 0.05.
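As a worked example of the 2⁻ΔΔCt calculation used for the qRT-PCR data (with GAPDH as the endogenous control), the sketch below uses hypothetical Ct values.

```python
# 2^-ddCt relative expression: dCt = Ct(gene) - Ct(GAPDH) within each sample,
# ddCt = dCt(sample) - dCt(reference sample). All Ct values are hypothetical.
def fold_change(ct_gene, ct_gapdh, ct_gene_ref, ct_gapdh_ref):
    dct_sample = ct_gene - ct_gapdh
    dct_ref = ct_gene_ref - ct_gapdh_ref
    return 2.0 ** -(dct_sample - dct_ref)

# e.g. IGF-1 in a C. acnes-infected ear vs. a control ear:
print(fold_change(ct_gene=24.1, ct_gapdh=17.8,
                  ct_gene_ref=26.0, ct_gapdh_ref=17.9))  # ~3.5-fold up
```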
The OD600 value of C. acnes treated with phage φPaP11-13 for 6 hours was significantly lower than that of C. acnes treated with PBS (Fig. 1), indicating that phage φPaP11-13 was capable of lysing C. acnes effectively. IHC staining was conducted to evaluate the expression level and location of IGF-1 and IGF-1r in rats' ear tissues. IHC staining showed a pronounced up-regulation of IGF-1 and IGF-1r in the C. acnes group compared with the control group, as well as a remarkable down-regulation of IGF-1 and IGF-1r in the C. acnes + Phage group. Notably, IGF-1 and IGF-1r were primarily expressed in similar areas near the skin's epidermis, indicating a close relationship between the expression of IGF-1 and that of IGF-1r (Fig. 3A). IHC staining was analyzed using ImageJ software to examine the relative expression of IGF-1 and IGF-1r. IGF-1 and IGF-1r were upregulated in the C. acnes group compared to the control group and significantly down-regulated in the C. acnes + Phage group. IGF-1 and IGF-1r expression was significantly more down-regulated in the C. acnes + Phage group than in the C. acnes + Antibiotics group (Fig. 3B and C). WB analysis was applied to detect the protein levels of IGF-1 and IGF-1r. Similar to the results of IHC staining, the WB analysis showed that the overexpression of IGF-1 and IGF-1r protein induced by C. acnes was significantly more reduced in the C. acnes + Phage group than in the C. acnes + Antibiotics group (Fig. 3D through F). qRT-PCR was performed to detect the relative mRNA expression of IGF-1 in rats' ear tissues. qRT-PCR showed that the relative expression of IGF-1 was significantly lower in the C. acnes + Phage group than in the C. acnes + Antibiotics group (Fig. 3G), in accordance with the results of the WB analysis.

There was a significant positive correlation between the concentration of IGF-1 protein and the severity of inflammation caused by C. acnes
Exogenously added IGF-1 antibody and IGF-1 protein were applied to established rat inflammatory models infected with C. acnes to explore the effect of IGF-1 on the severity of inflammation caused by C. acnes. HE staining showed that, after infection with C. acnes, the application of IGF-1 protein significantly increased the average thickness of rats' ears compared to the application of IGF-1 antibody, while the number of inflammatory cells exhibited no noticeable difference (Fig. 4A through C).

As a result of the downregulation of the PI3K/Akt pathway, BAD proteins and the downstream mitochondrial apoptotic pathway were upregulated in the C. acnes + Phage group compared with the C. acnes group
WB analysis was performed to detect the expression levels of PI3K/Akt-relevant proteins (Fig. 5A). The expression level of PI3K was significantly more elevated in the C. acnes group than in the control group. Subsequently, after the application of phage in the C. acnes + Phage group, the activation of PI3K was significantly down-regulated compared to that in the C. acnes group, an effect substantially more pronounced than that in the C. acnes + Antibiotics group (Fig. 5B). As a classical protein downstream of PI3K, Akt expression was correspondingly upregulated in the C. acnes group compared to the control group. However, no statistically significant change in Akt expression was shown in the C. acnes + Phage group compared to the C. acnes group, although a declining trend was observed (Fig. 5C). Nonetheless, the expression level of P-Akt (Thr308) showed an expression trend similar to that of the PI3K protein, with no significant change in the expression level of P-Akt (Ser473) (Fig. 5D and E), indicating that the activation of Akt induced by C. acnes in this signaling pathway was primarily regulated via the Thr308 phosphorylation site instead of Ser473 in the Akt protein. These results suggested that the PI3K/Akt signaling pathway was activated in C. acnes-infected tissues, and this activation was inhibited in the C. acnes + Phage group. Moreover, phage exhibited higher activity against C. acnes than antibiotics; therefore, the activation of the PI3K/Akt signaling pathway induced by C. acnes was lower in the group treated with phage φPaP11-13 than in the group treated with antibiotics.

As an apoptosis-related protein regulated by the PI3K/Akt signaling pathway, BAD protein expression was significantly down-regulated in the C.
acnes group compared with the control group. Subsequently, the BAD protein expression level showed some up-regulation trend in the C. acnes + Phage and the C. acnes + Antibiotics groups, but there was no statistical difference (Fig. 5F). qRT-PCR was conducted to evaluate the relative mRNA expression of BAD, Bcl-2, caspase-9, and caspase-3. Consistent with the results of the WB analysis, the relative mRNA expression of BAD was significantly decreased in the C. acnes group compared with the control group. After phage application in the C. acnes + Phage group, the relative mRNA expression of BAD was significantly upregulated, and it was significantly higher than that in the C. acnes + Antibiotics group (Fig. 5G). At the transcriptional and translational levels, respectively, the results of qRT-PCR and WB analysis of BAD expression together indicated that phage φPaP11-13 suppressed the C. acnes-induced decline in BAD protein expression by lysing C. acnes.

The relative mRNA expression of Bcl-2 was statistically significantly but slightly downregulated in the C. acnes group and upregulated in the C. acnes + Phage group and the C. acnes + Antibiotics group (Fig. 5H), indicating that the expression of Bcl-2 may be upregulated at the transcriptional level by negative feedback after a decrease in activity induced by binding with BAD protein, which was upregulated in the C. acnes + Phage group. The relative mRNA expression of caspase-9 and caspase-3 showed similar trends: both were significantly down-regulated in the C. acnes group compared to the control group and significantly upregulated in the C. acnes + Phage group and the C. acnes + Antibiotics group compared to the C. acnes group, with no statistical difference in expression between the C. acnes + Phage and the C. acnes + Antibiotics groups (Fig. 5I and J). The relative mRNA expression results for caspase-9 and caspase-3 suggested that phage φPaP11-13 can inhibit the C. acnes-induced down-regulation of apoptosis by lysing C. acnes. The relative expression of NF-κB, MAPK, MEK, Phospho-MEK, ERK, and Phospho-ERK exhibited no apparent trends (Fig. 5K through Q), which indicated that these signal factors were not remarkably regulated during C. acnes infection.

IGF-1r inhibitor, Pan-PI3K inhibitor, and Akt inhibitor attenuated IGF-1-induced hypo-expression of BAD through the PI3K/Akt signaling pathway
WB analysis was performed to detect the expression levels of PI3K/Akt-relevant proteins after applying the IGF-1r inhibitor, Pan-PI3K inhibitor, and Akt inhibitor (Fig. 6A). In contrast to the remarkable upregulation of Akt expression after infection with C. acnes shown in Fig. 5C, the upregulation of Akt in the C. acnes group was attenuated after applying the Pan-PI3K inhibitor (Fig. 6E). More apparently, the significant up-regulation of P-Akt (Thr308) shown in Fig. 5C was weakened after the application of the Pan-PI3K inhibitor, with no statistically significant upregulation of P-Akt (Thr308) in the C. acnes group (Fig. 6F), suggesting that PI3K is upstream of Akt. No significant change occurred in the expression of PI3K after applying the Akt inhibitor (Fig. 6D), further suggesting that Akt is downstream of PI3K in the signaling pathway. Neither the Pan-PI3K inhibitor nor the Akt inhibitor reversed the expression trends of IGF-1 and IGF-1r (Fig. 6B and C). The application of the IGF-1r inhibitor inhibited the expression of P-Akt (Thr308) in the C.
acnes group, in comparison with the up-regulation of P-Akt (Thr308) in the C. acnes group without the IGF-1r inhibitor shown in Fig. 5D (Fig. 6F), indicating that IGF-1/IGF-1r protein is closely related to the activation of the PI3K/Akt signaling pathway. The IGF-1r inhibitor, Pan-PI3K inhibitor, and Akt inhibitor all reversed the down-regulation of BAD induced by C. acnes (Fig. 6G), suggesting that BAD is downstream of the IGF-1/IGF-1r protein and the PI3K/Akt signaling pathway.

DISCUSSION
Our research demonstrated that bacteriophage φPaP11-13 attenuates C. acnes infection in rat acne models and upregulates keratinocyte apoptosis via the PI3K/Akt signaling pathway by lysing C. acnes. Animal experiments proved that phage application had a similar or even stronger effect than antibiotics in reducing inflammation and suppressing the C. acnes-induced inhibition of keratinocyte apoptosis, revealing the prospective value of phage as an alternative to antibiotics.

The pathogenesis of acne involves the overgrowth of C. acnes in skin lesions and the resultant inflammatory immune cascades, as well as changes in skin sebum and the sebaceous gland (5). To date, the global burden of acne remains high (45). Conventional treatment for acne includes topical retinoids, benzoyl peroxide, topical antibiotics, and oral antibiotics (14). With the increasing problem of antibiotic resistance, anti-bacterial therapy for acne needs to be optimized (46). The ability of bacteriophages to eliminate bacterial infection was first put forward in the 1910s (47). After a century of research, phage is now considered a potential alternative to antibiotics (48). However, the precise mechanism of phage treatment for acne vulgaris remains unclear. To explore the underlying molecular mechanisms by which phage attenuates C. acnes-induced inflammation, we established rat inflammatory models and conducted subsequent experiments.

C. acnes phages were previously described as having similarly sized heads of 50 nm diameter and 150-nm-long, flexible tails, and were shown to have a broad ability to kill clinical isolates of C. acnes (49). In vitro experiments demonstrated that C. acnes phage could lyse C. acnes (30). Likewise, we observed the phage's lysing ability against C. acnes through the spectrophotometric assay. Moreover, previous animal experiments showed that C. acnes phage could reduce the size of acne inflammatory lesions on mice's backs (31), as well as the bacterial load and inflammation in acne lesions (50). Correspondingly, we observed a reduction in the thickness of and inflammatory cells in rats' ears after applying phage φPaP11-13 to acne lesions in ears infected by C. acnes. Our study and previous studies together demonstrated that phages can reduce inflammation by lysing bacteria and reducing the bacterial load of acne lesions.

Topical growth factors were reported to have a profound association with skin healing (51). Previous studies demonstrated that IGF-1 is elevated in acne patients (52, 53), indicating that IGF-1 and its downstream signaling pathways are associated with the severity of acne. Consistent with previous studies, we observed an increase of IGF-1 and IGF-1r in rat acne models and a strong correlation between the concentration of IGF-1 and the severity of inflammation, and phage was able to reduce the C. acnes-induced over-expression of IGF-1 by lysing C. acnes.
As mentioned before, the etiology of acne vulgaris is also closely related to hyperkeratosis (4), which is caused by the hypo-apoptosis of keratinocytes. A previous study proved that the proliferation of keratinocytes is enhanced by IGF-1 via the activation of IGF-1r in the development of acne vulgaris (54). Correspondingly, our experiments showed that keratinocyte apoptosis was significantly down-regulated by C. acnes and that phage φPaP11-13 was capable of restoring the hypo-apoptosis induced by C. acnes, suggesting that the application of phage can influence keratinocyte apoptosis by lysing C. acnes and thereby attenuate C. acnes-induced inflammation. However, the specific mechanisms had not been reported before. As reported previously, dysregulation of the PI3K/Akt pathway is relevant to acne vulgaris (55), which is consistent with our observation that the expression of PI3K and Akt was significantly up-regulated after infection with C. acnes. We also found a pronounced reduction in the expression of BAD and its downstream apoptosis-related proteins, caspase-9 and caspase-3, after infection with C. acnes. Furthermore, after infection with C. acnes, the application of the Pan-PI3K inhibitor inhibited the up-regulation of Akt and P-Akt (Thr308), whereas the Akt inhibitor had no significant impact on PI3K, indicating that PI3K is upstream of Akt. Neither the Pan-PI3K inhibitor nor the Akt inhibitor reversed the upregulation of IGF-1, but the application of the IGF-1r inhibitor significantly inhibited the activation of P-Akt (Thr308), revealing that IGF-1 is upstream of the PI3K/Akt pathway. The IGF-1r inhibitor, Pan-PI3K inhibitor, and Akt inhibitor all reversed the down-regulation of BAD induced by C. acnes, suggesting that BAD is downstream of the IGF-1 and PI3K/Akt signaling pathway. In summary, it was demonstrated that C. acnes reduces keratinocyte apoptosis by down-regulating BAD protein and downstream apoptosis-relevant proteins through the IGF-1/PI3K/Akt signaling pathway, whereas phage φPaP11-13 inhibits the C. acnes-induced over-activation of the IGF-1/PI3K/Akt signaling pathway and the hypo-apoptosis of keratinocytes by lysing C. acnes.

In conclusion, our study investigated the therapeutic effect of phage application in rat inflammatory models infected with C. acnes and its underlying molecular mechanisms of alleviating inflammation. With in vitro and animal experiments, we demonstrated that the infected lesions of C. acnes in a rat inflammatory model were significantly attenuated by phage and that phage had a modulatory effect on keratinocyte apoptosis, lysing C. acnes to reduce the over-proliferation of keratinocytes in rats' ear tissues, revealing the great potential of phage as an alternative therapy to antibiotics. However, phage therapy also has potential risks that cannot be ignored and require further research. For example, the optimal therapeutic dose, administration time, treatment frequency, duration, evaluation of efficacy, and other aspects close to clinical practice have not been well studied (56). In addition, phage capsid proteins may elicit unknown immune responses as antigens (57), posing a risk to the therapy. It has also been discovered that bacteria can develop resistance to phages, as they have already done to antibiotics (58, 59), through bacterial evolution (60). The aforementioned discovery shows that although our experiments illustrate that phage φPaP11-13 has some effect in treating C.
acnes infection, it cannot be applied to actual clinical therapy for the time being. There is still room for further research on phage treatment of C. acnes infection, which is also our next research direction.

The C. acnes + Phage group presented more apoptosis of keratinocytes and less inflammation in acne lesions than the C. acnes group
TUNEL staining was performed to detect cell apoptosis in rats' ear tissues. The percentage of TUNEL-positive cells in tissues in the C. acnes group was significantly lower than that in the control group. However, the percentage of TUNEL-positive cells was significantly higher in the C. acnes + Phage group than in the C. acnes group (Fig. 2A and B). It is worth noting that, compared with the control group, CK10 expression was slightly elevated in the C. acnes group, and the location of TUNEL-positive cells in the C. acnes + Phage group largely overlapped with the CK10-positive region, suggesting that the increased apoptosis in the C. acnes + Phage group primarily occurred in keratinocytes. Besides, phage significantly reduced the bacterial load in tissues infected with C. acnes and produced a more pronounced effect in reducing the bacterial load than antibiotics (Fig. 2C). HE staining was conducted to evaluate the severity of inflammation in rats' ear tissues. HE staining showed a significant increase in average ear thickness and a significant local enrichment of inflammatory cells in the C. acnes group compared with the control group, both of which indicated the development of inflammation after infection; however, both the increased average ear thickness and the inflammatory cell count induced by C. acnes were significantly reduced after the application of phage in the C. acnes + Phage group (Fig. 2D through F).

FIG 2: Representative IF staining (TUNEL, green; CK10, red; DAPI, blue) of rats' ear tissues in the control, C. acnes, and C. acnes + Phage groups at 200 × magnification; percentage of TUNEL-positive cells; bacterial counts of ear tissues across groups; representative HE staining of ear tissues; average ear thickness; inflammatory cell counts. Ns, no significance; *P < 0.05; **P < 0.01.
FIG 3: Representative IHC staining of IGF-1 and IGF-1r in the control, C. acnes, C. acnes + Phage, C. acnes + Antibiotics, and Phage groups at 400 × magnification; IHC scores of IGF-1 and IGF-1r; Western blot bands and quantification of IGF-1 and IGF-1r (GAPDH as internal control); relative IGF-1 mRNA expression by qRT-PCR. Ns, no significance; *P < 0.05; **P < 0.01.
FIG 4: Representative HE staining of ear tissues in the control, C. acnes, C. acnes + IGF-1 antibody, and C. acnes + IGF-1 protein groups at 100 × magnification; average ear thickness; inflammatory cell counts. Ns, no significance; *P < 0.05; **P < 0.01.
acnes +IGF-1 protein group at 100 × magnification.(B) Measurements of the average thickness of rats' ears in the control group, the C. acnes group, the C. acnes +IGF-1 antibody group, and the C. acnes +IGF-1 protein group.(C) Determination of inflammatory cell count of tissues of rats' ears in the control group, the C. acnes group, the C. acnes +IGF-1 antibody group, and the C. acnes +IGF-1 protein group.Ns, no significance; * P < 0.05; ** P < 0.01. FIG 5 ( FIG 5 (A) The protein bands of PI3K, Akt, Phospho-Akt (Thr308), Phospho-Akt (Ser473), BAD, and GAPDH in the control group, the C. acnes group, the C. acnes +Phage group, the C. acnes +Antibiotics group and the phage group showed by the Western blot method.(B)-(F) The expression levels of PI3K, Akt, Phospho-Akt (Thr308), Phospho-Akt (Ser473), and BAD proteins measured by the WB method.GAPDH served as the internal control.(G)-(H) Relative expression of the mRNA of BAD, Bcl-2, caspase-9, and caspase-3 in the control group, the C. acnes group, the C. acnes +Phage group, the C. acnes +Antibiotics group, and the phage group detected by the qRT-PCR method.GAPDH served as the internal control.(K) The protein bands of NF-κB, MAPK, MEK, Phospho-MEK, ERK, Phospho-ERK, and GAPDH in the control group, the C. acnes group, the C. acnes +Phage group, the C. acnes +Antibiotics group, and the phage group showed by the Western blot method.(L)-(Q) The expression levels of NF-κB, MAPK, MEK, Phospho-MEK, ERK, and Phospho-ERK proteins measured by the WB method.GAPDH served as the internal control.Ns, no significance; * P < 0.05; ** P < 0.01. TABLE 1 The primers for qRT-PCR (Continued)tissues from rats' ears in the control group, the C. acnes group, the C. acnes +antibiotics group, and the C. acnes +Phage group.(D) Representative images of HE staining of rats' ear tissues from rats' ears in the control group, the C. acnes group, the C. acnes +Phage group, and the phage group.(E) Measurements of the average thickness of rats' ears in HE staining images.(F) Determination of the inflammatory cell count of tissues of rats' ears in HE staining images.Ns, no significance; * P < 0.05; ** P < 0.01. ), indicating that the activation of Akt induced by C. acnes in this signaling pathway was primarily regulated via the Thr308 phosphorylation site instead of Ser473 in Akt protein.These results suggested that the PI3K/Akt signalingFIG 2
EAP Students' Perceptions of Extensive Listening Compared to other language skills, listening is a language skill that is often ignored and forgotten in English for Academic Purposes (EAP) classes. Thus, there should be more room for teaching listening in EAP classes. Extensive listening (EL) could be one alternative that English teachers can use to give more room to teaching listening. This descriptive study investigated 19 EAP students' perceptions of an EL program. The research data showed that most students have positive perceptions of EL. EL provides a fun but meaningful activity for students. Most of the students agree that EL can improve their listening fluency and vocabulary and expose them to various English accents. In addition, they state that EL helps them become more confident in talking to other people in English, and that they want to do EL in the future even if nobody asks them to. Therefore, EL is a promising program to be implemented in EAP classes. Introduction In oral communication, understanding what our interlocutor is trying to say to us, or vice versa, is difficult. Misunderstanding often arises, which prevents the message of the communication from being conveyed successfully. This indicates that mastering language skills is important to support the process of communication and to prevent misunderstanding. If our listening skill is weak, responding to our interlocutor becomes a challenge. According to Rost (2011), listening is important in second language acquisition because processing language in real time happens through listening. However, becoming fluent in listening is not easy. That means listening should be trained, and language teachers should help students develop their listening fluency. Although many teachers believe that developing all four language skills is essential for students, the four skills are not always taught equally in the classroom. Compared to the other language skills, listening is often neglected by language teachers in second or foreign language teaching contexts. Nunan (1997) describes listening as the "Cinderella skill" of second language learning because listening is often neglected or forgotten by many language teachers. Speaking and the other language skills are considered more important than listening. Thus, there is little room for teaching listening in language classrooms. Meanwhile, Nunan (1997) states that listening is the foundation of speaking. In other words, if speakers are weak in listening, they might not be able to respond to their interlocutors, or they might misinterpret the message being conveyed. Spear-Swerling (2016) asserts that listening comprehension has a big impact on students' success in formal schooling. This means that being able to listen is important, and it is necessary to give more room to listening in language teaching and learning, so that communication can proceed well and the message can be successfully conveyed and understood by the people involved. Based on my informal discussions with my students, some of them mentioned that they rarely learned listening in their previous English classes. When there was an attempt to teach listening in their classroom, the recordings were sometimes too easy or too difficult for them.
They felt that there was no learning process when the recordings were too easy, and they would not be motivated to learn listening when the recordings were too difficult. As a result, they considered that learning listening was not meaningful, and they could not enjoy the process of learning English. Renandya and Farrell (2011) also pointed out this experience in their article, when Jing Erl (a pseudonym) stated that she could not understand the recording being played by her teacher because it was too fast and difficult for her. This indicates that recordings often do not match learners' language proficiency levels, which can demotivate them from learning listening. Thus, they find it difficult to understand others in real-world communication. As a result, many language learners will think that learning English is difficult, and their listening fluency does not improve as expected. To overcome the problem discussed above, it is necessary to teach listening in a fun but meaningful way. Extensive listening (EL) is believed to be one promising alternative that language teachers can employ in their classrooms to give more room for learning and teaching listening in that way. According to Ivone and Renandya (2019, p. 237), EL is "a language teaching and learning approach that encourages language learners to be exposed to a large amount of easily comprehensible and enjoyable materials presented in the target language over an extended period". Extensive Reading Central (2019) adds on their website that when someone is doing an extensive listening activity, they listen to a lot of comprehensible texts smoothly. In addition, no tasks follow EL. Through EL, listeners get both linguistic inputs (such as grammar and vocabulary) and non-linguistic inputs (such as knowledge or information) from the texts they have listened to. Furthermore, Chang (2012) points out that EL can promote autonomy in listening. This is beneficial for language learners, as they can be involved in an extensive listening program both inside the classroom (while the class is running) and outside it (after the class has finished, for example at home). Therefore, extensive listening is a promising program, one that means listening need no longer be a "Cinderella skill". In addition, EL is a promising way to help language students learn listening, because listening is not an easy process and should be trained. Being able to hear someone's talk does not mean merely registering the sound of the talk; listening should go beyond that. A listener should have good listening fluency, in which the listener can comprehend what his/her interlocutor is saying quickly, accurately, and without spending much effort (Rost, 2011). In other words, a language learner must develop his/her listening fluency. Brown (2000, p. 29) emphasized that: "Communicative goals are best achieved by giving attention to language use and not just usage, to fluency not just accuracy, to authentic language and contexts, and to the students' eventual need to apply classroom learning to unrehearsed contexts in the real world". Given the necessity of developing listening fluency, English language teachers should give more room to listening in English teaching and learning. Renandya and Farrell (2011, p. 56) state that "listening is best learnt through listening". According to Harmer (2003), students should be given exposure to English if they want to learn it.
Renandya and Jacobs (2016) present research results showing that students' vocabulary, their ability to comprehend both spoken and written communication, and their general language proficiency increase when they are given a lot of language exposure. In other words, input is considered important in second language learning. In the context of developing students' listening fluency, the input can come from face-to-face communication, cassette recordings, television, or radio. Nowadays, teachers and students can take advantage of technological developments to access listening sources through the internet. This means that students can access them anytime and anywhere. Moreover, developing students' listening fluency can be extended beyond the classroom walls. However, listening is difficult. Renandya and Farrell (2011) mention several reasons which make listening difficult, namely speech speed, speech variety, the blurriness of word boundaries, and the fact that listening must be processed in real time. Moreover, each student might have a different listening proficiency level. If a teacher plays a recording in the classroom, for example, some students might consider the recording too easy, while some might think it too difficult. In other words, the recording played in the classroom will not always suit the students' language proficiency levels. Wulanjani (2019) asserts that students might feel worried when joining a listening class. In response to this, extensive listening (EL) appears to be a good alternative for facilitating students to develop their listening fluency. When students are doing extensive listening, they listen to listening input for pleasure, but it remains meaningful (Renandya & Farrell, 2011). According to Renandya and Jacobs (2016), extensive listening encourages students to listen to a lot of listening materials that are motivating and linguistically matched to students' abilities, focusing on meaning rather than form. Moreover, Waring (2008) emphasizes that the main purpose of extensive listening is to improve listening fluency. Renandya and Jacobs (2016) state that extensive listening can help students increase their speaking speed, build their oral vocabulary, and develop their speaking, reading, and confidence in using the language. Additionally, Takaesu (2013) argues that extensive listening can improve students' listening fluency and encourage them to become autonomous listeners through authentic materials which can be accessed easily in the real world. Thus, it is justifiable to say again that extensive listening seems promising for facilitating students to develop their listening fluency, and this is confirmed by some research results. Some studies have revealed positive results from the implementation of extensive listening in language classes. A study conducted by Lee and Cha (2017) showed that extensive listening using a listening log can increase students' confidence when they have to listen to their interlocutors in communication. Moreover, Chang and Millett (2014) found that extensive listening helps students develop their listening fluency. Takaesu's study (2013) also revealed that extensive listening is advantageous for students.
After conducting a survey of 468 university freshmen, Takaesu (2013) found that extensive listening can increase students' listening skills and make students accustomed to various English accents after doing extensive listening with TED Talks. More recently, a study conducted by Chang, Millett, and Renandya (2018) revealed that supported extensive listening practice helps learners comprehend texts at faster speech rates. They employed three modes of intervention, namely listening only (LO), reading only (RO), and reading while listening plus listening only (RLL). The students who practiced extensive listening by reading while listening plus listening only (RLL) developed better listening fluency than the students who practiced by listening only (LO) or reading only (RO). In line with the studies conducted outside Indonesia, several studies conducted in the Indonesian context revealed that extensive listening is beneficial in the language classroom (Mahmudah, 2015; Fauzanna, 2017; Saputra & Fatimah, 2018; Setyowati & Kuswahono, 2018). Those studies indicate that extensive listening can be a promising activity for developing students' listening skills and fluency. Considering that extensive listening has been shown to help students develop their listening skills, the present study examines students' perceptions of EL as one instrument for evaluating a program in an EAP class. Method This descriptive study investigates the participants' perceptions of the extensive listening (EL) activities they had done and the usefulness of EL for their listening fluency development. The participants of this study were 19 students in an Academic Listening-Speaking class, where English was taught as a second language at an intermediate level. They took the class to prepare themselves before studying in an international undergraduate study program, and they had to achieve a certain TOEFL score in order to be accepted into the study program of the university. Table 1 presents the demographic data of the participants. They came to Indonesia to study at a private institution in Indonesia; therefore, EL was a new way of learning listening for them. To facilitate the students in developing their listening fluency, they were required to do EL activities for 25 weeks by listening to one TED Talks video per week outside the class. TED Talks videos were chosen because the videos cover various topics presented by speakers from around the world. Besides, the participants were all teenagers, who would probably enjoy listening to TED Talks videos on various topics about teenagers and general life. In addition, TED Talks videos represent real-life communication in which English is spoken by human beings, not animated or adjusted. This means that the students were exposed to authentic listening materials with various types of English accents. According to a study conducted by Anggraeni and Indriani (2018), using TED Talks or TED-ED to teach listening is beneficial because TED-ED offers many authentic videos on various topics presented by speakers from around the world. In addition, it provides illustrations which might help students understand the talk better, and it can develop students' critical thinking. In the program, the students could freely choose the topic of the video by themselves. After they listened to a video, they had to summarize the content and share it with their friends orally in the following week.
This activity was done to monitor whether the students were doing EL or not. In addition, the teacher asked one or two students to share the message of the video they had listened to in front of the class. The objective of this research was to investigate the students' perceptions of the extensive listening program they had joined. Regarding ethical considerations, ethical approval to conduct this study was provided by the Mochtar Riady Institute for Nanotechnology Ethics Committee (No. 017/MRIN-EC/ECL//X/2018). To gather the data, the researcher employed a simple survey questionnaire consisting of 12 closed-ended questions using a Likert scale (1 = strongly disagree, 2 = disagree, 3 = agree, 4 = strongly agree) and 2 open-ended questions. The questions were compiled based on theories about extensive listening stating that EL can be a fun and meaningful activity to develop students' listening fluency (Ivone & Renandya, 2019; Renandya & Farrell, 2011; Renandya & Jacobs, 2016; Waring, 2008). In addition, the questions in the questionnaire were also composed based on the results of previous studies revealing the perceived benefits of extensive listening for students, such as facilitating students to become more autonomous and confident in learning listening and familiarizing them with various English accents (Anggraeni & Indriani, 2018; Chang & Millett, 2014; Chang et al., 2018; Lee & Cha, 2017; Takaesu, 2013). The questionnaire was distributed to all students in the class (21 students), but only 19 questionnaires were returned to the researcher. A descriptive statistical analysis was employed to analyze the data of this study. The data are presented in the next section. Findings and Discussion The data of this study are presented in two parts. The first part discusses the students' general perceptions of the EL activities they did. The second part elaborates on the students' perceptions of the benefits they gained after doing EL. Students' General Perceptions of EL Activities As explained above, the students were asked to listen to a TED Talks video once a week for 25 weeks. The data about the students' general perceptions of the EL activities they had done are presented in Table 2. From Table 2, it is revealed that most of the students have favorable perceptions of the implementation of EL. Most students like and are happy about doing EL activities and believe that extensive listening is fun, as confirmed by questions 1, 2, and 3. These findings indicate that the students can really find pleasure when listening to recorded English input, as Renandya and Farrell (2011) have stated. In addition, it motivates them to find the joy of learning English, especially listening. This is supported by the finding that several students stated they will continue to do EL activities for English practice even though they are not required to, although not all of them want to do EL in the future. This means that EL trains them to be autonomous English learners. Table 2 also shows that TED Talks could be one good resource for language teachers and learners who want to engage in extensive listening activities. Only one student believed that TED Talks is not a good resource for EL.
The open-ended questions revealed that TED Talks could be a good resource not only for improving listening skills but also for improving other aspects of life, as stated by four students below: "The reason is because of the fun, new information, and other new information that we might get from it. Another reason is because the English level in TED Talks is an international standard English that it will be very good for us to keep in track with a normal English speaker. From the time, I would suggest watching TED Talks 5 to 10 minutes per day is more than enough. As people get bored easily, this is why it would be better if we do it every day rather than start with a lot and ended up with nothing." From the data, it is revealed that TED Talks videos offer many benefits for the students. The videos certainly give students a chance to develop their listening skills. Furthermore, the speakers in TED Talks open students' views of the world, so that they can learn about many new things. In addition, the speakers of TED Talks inspire the students to become good public speakers. This implies that EL also contributes to the students' speaking skills, and it is suitable to be implemented in a speaking-listening class. These results are also in line with the findings of a study conducted by Anggraeni and Indriani (2018) that using TED Talks/TED-ED in listening class is beneficial for students. However, one interesting point to consider is the frequency of practicing EL. Seven students thought that doing EL once a week was not enough. This indicates that the current extensive listening program should be improved in terms of how often the students do EL: they should do EL more than once per week. To conclude, the students generally had positive perceptions of EL, they suggested that anyone engaging in EL do it more than once a week, and TED Talks videos could be good resources for EL practice. The Benefits of EL for the Students This section presents the benefits that the students got from doing EL for 25 weeks. Table 3 below presents the students' perceived benefits of doing extensive listening activities. Table 3 shows that the majority of the students agree or even strongly agree that they gained some advantages from doing extensive listening activities. This asserts that the students believe EL is a fun and meaningful activity for developing English listening skills. More than 95% of the students stated that EL helps them realize that learning listening is meaningful. Furthermore, all of the students believed that EL can develop their English vocabulary and listening fluency, as shown by their agreement that they could understand what other people are saying better after they did EL activities for 25 weeks. In addition, the students were exposed to various English accents through EL activities. This implies that EL can potentially raise students' awareness of world Englishes. As a result, EL not only makes them more confident when they use English to communicate with other people but also promotes understanding between interlocutors, because they appreciate others' varieties of English. The data taken from the open-ended questions, which pointed mostly to the variety of accents and to vocabulary development, confirmed those findings.
Based on what Student 12 stated, it is revealed that EL promotes students' autonomy: they can practice listening independently and also reflect on their own practice. Thus, it is again confirmed that when students are doing EL activities, they listen to the listening input for pleasure, yet it is still meaningful (Renandya & Farrell, 2011). Furthermore, it confirms the results of previous research investigating EL (Chang & Millett, 2014; Chang et al., 2018; Lee & Cha, 2017; Mahmudah, 2015; Fauzanna, 2017; Saputra & Fatimah, 2018; Setyowati & Kuswahono, 2018; Takaesu, 2013) that extensive listening offers many benefits for students' listening development. The findings of this study imply that the students generally felt satisfied with the implementation of the extensive listening program. In addition, many students were aware of their need to improve their listening fluency outside the classroom through extensive listening activities. Furthermore, extensive listening serves as a fun yet meaningful activity, which makes it worth doing. As several students stated that doing EL once per week was not enough, the current EL program needs improvement; a language teacher could encourage the students to do EL more often without putting too much burden on them. Conclusion and Suggestions While listening could still be a "Cinderella skill" in English as a second or foreign language classrooms, extensive listening provides room for teaching and learning listening. Extensive listening provides fun yet meaningful activities for students. In addition, extensive listening helps students improve their listening fluency, increases their vocabulary, and exposes them to various English accents. Students also become more confident in speaking with others in English as their English, especially their listening skill, develops. There are many resources for practicing EL. TED Talks can be used as one promising resource for extensive listening because it provides authentic input spoken by inspiring speakers who deliver interesting topics which can increase students' general knowledge. The students believe that listening to TED Talks videos can make their lives better. Thus, using TED Talks for EL develops not only students' listening skills but also their life skills. All in all, extensive listening offers many benefits for language learners, and it creates a fun yet meaningful space for the teaching and learning of listening skills. This study revealed that the students generally had positive perceptions of the extensive listening program. However, the results of this study cannot be used to generalize the usefulness of implementing EL in EAP classes, although they confirm both the theories of EL and previous studies' findings. Therefore, further empirical studies investigating the influence of EL on students' listening fluency more convincingly and involving more participants are needed. Employing an experimental study to determine to what extent EL develops students' listening fluency could be worthwhile. In addition, future researchers can investigate how EL can potentially raise students' awareness of different types of English (world Englishes).
Competing in Customer Driven Markets: A Holistic Approach In order to become market-driven, companies need to identify the right market signals, build sensing capabilities, define demand-shaping products, and successfully translate the demand signal to create an effective response. In turn, firms expect to develop and maintain a competitive advantage for a longer period of time, the so-called sustainable competitive advantage. The discussion of competition and of gaining sustainable competitive advantage has evolved over a long history. In the discipline of marketing, however, the discussion of competition has mostly developed around Michael Porter's three generic approaches to developing competitive position, even though recent marketing textbooks have briefly discussed some marketing implications beyond the company perspective. It is worth noting that supply chain books mainly discuss operations beyond the firm, though their concern with competition is marginal. This book, titled Competing in Customer Driven Markets: A Holistic Approach, discusses a different perspective of competition, challenging the traditional perspective that is mostly limited to the firm level. The new perspective of competition discussed in this book has a broader outlook, extending the scope of competition to the supply network level. BOOK REVIEW This book argues that firm-level competition is not sufficient and is outdated in the current business world, as no firm is self-sufficient and self-contained; hence competition should be approached more holistically. In fact, the entire supply chain/network should compete as a whole with the supply chains/networks of competitors in the respective industry. With this argument, the conventional thinking of marketing, and in particular of achieving sustainable competitive advantage at the firm level, is strongly challenged; the scope of competition has moved far beyond the capacity of a single firm. It means that, no matter how superior, how established, and how capable it is, an individual firm on its own is not strong enough to identify, create, deliver, and communicate to the respective target market a customer value that stands out from that of competitors.
This book is different from other supply chain management books and competition-related books because, on the one hand, it brings a supply chain management approach to competition and, on the other, it discusses the use of supply chain management strategies as sources of competitive advantage, supported by relevant empirical evidence. Even though some scholarly books have discussed the idea that firm-level strategies should support supply network strategies, they have not specifically discussed how supply network strategies should be used for competition, addressed first at the firm level and then at the supply network level. This book discusses such a perspective of competition with the support of empirical evidence from one of the globally important industries, the international clothing industry, in an Asian context. This is a unique book in many respects. Firstly, it expands the conventional perspective of competition from the firm level to the supply chain level; secondly, it consolidates the knowledge of supply chain management with competition-related knowledge in marketing; thirdly, it discusses empirical evidence from the Sri Lankan context. Further, its simple but comprehensive approach adds value, making the book reader-friendly for its target audience, namely undergraduate and postgraduate students and practitioners from the corporate sector. It is worth noting that this book is intended as an expansion of traditional texts on competition, not a substitute. The book consists of four chapters. The first chapter highlights the need for competition beyond firm capacity, with insights from the current business world. It also discusses why a supply chain/network outlook is needed for a new perspective of competition. This is followed by a discussion of the supply chain/network approach to the holistic perspective of competition, in which three new levels of competition are discussed: competition among companies in the same echelon of a single supply chain; competition among companies in different echelons of a single supply chain; and competition between supply chains. The third level gives the holistic approach to competition, which is in high demand today. The second chapter matches the supply network perspective of competition with the business context. It first provides insights into the nature of competition in the current business context, followed by a discussion of Michael Porter's five forces model of competition in a given industry context. Here, the international clothing industry, with evidence from the Sri Lankan clothing industry, is taken as the empirical base for the five forces model. The chapter then introduces new types of competition that take a holistic approach, namely static competition, dynamic competition, and foresight competition. The chapter then explores how competition, and consequently competitive approaches and structures, should change according to the nature of the markets. Four types of markets are considered: monopoly, monopolistic competition, oligopoly, and pure competition. This discussion is novel, as the nature of competition is discussed not only in pure competition but also in the other markets, where it is conventionally believed that there is no competition.
After the first two chapters introduce the background to the novel holistic approach to competition and the need for a supply chain perspective on competition, the third chapter provides an overview of supply chain management. This chapter is important for readers in the marketing discipline, among whom the concept of supply chain management is not strongly developed. In this chapter, the concept of supply chain management is first defined and its scope is then outlined. This is followed by a discussion of the supply chain as a management philosophy, as a set of activities to implement a management philosophy, as a set of management processes, and as a business strategy. The prerequisites for supply chain management are then described. In the fourth chapter, the supply chain management strategies that can be used as competitive strategies are discussed in detail. These include supply chain process integration, the importance of focusing on core competencies, customer relationship management, strategic supplier relationships, information sharing, utilising 3PL/4PL (third-party/fourth-party logistics) providers, designing and maintaining a suitable supply chain/network structure, co-operation with competitors, and postponement. The four chapters of the book are methodically arranged and tightly connected to each other. In particular, the holistic approach to competition introduced in the first chapter is clearly and logically justified, and the relevant strategies are comprehensively discussed with the required empirical evidence in the remaining chapters. The summary at the end of each chapter gives a clear and comprehensive picture of the phenomenon being discussed. The author communicates the main points by using real-world examples, anecdotes, and cases that captivate interest and attention. For anyone interested in understanding how sustainable competitive advantages should be developed with a holistic approach, Competing in Customer Driven Markets: A Holistic Approach is a book that can be recommended.
Tripole-mode and quadrupole-mode solitons in (1 + 1)-dimensional nonlinear media with a spatial exponential-decay nonlocality The approximate analytical expressions of tripole-mode and quadrupole-mode solitons in (1 + 1)-dimensional nematic liquid crystals are obtained by applying the variational approach. It is found that the soliton powers for the two types of solitons are not equal for the same parameters, which is much different from their counterparts in the Snyder-Mitchell model (an ideal and typical strongly nonlocal nonlinear model). The numerical simulations show that for the strongly nonlocal case, by expanding the response function to the second order, the approximate soliton solutions are in good agreement with the numerical results. Furthermore, by expanding the response function to higher orders, the accuracy and the validity range of the approximate soliton solutions increase. If the response function is expanded to the tenth order, the approximate solutions are still valid for the general nonlocal case. solitary wave in a nematic liquid crystal 33, based on the previous work on the NLS equation performed by Kath et al. 34. The work in refs 33 and 34 does more than find the steady solitary wave; it also finds the evolution to this solitary wave from an initial condition 33,34. Malomed presented a general review of these variational methods in nonlinear fiber optics and related fields 35. Recently, Aleksić et al. analytically investigated the fundamental solitons based on the variational approach in (2 + 1)-dimensional NLCs 36. In particular, MacNeil et al. obtained exact solutions of the nematicon equations in (1 + 1) and (2 + 1) dimensions for fixed parameter values, and they also obtained approximate solutions based on the variational approach method 37. Furthermore, Panayotaros and Marchant addressed mathematically the existence of a solitary wave solution of the nematic equations 38. It has been proven that in media with an exponential-decay nonlocal response, soliton bound states are stable if the solitons contain fewer than five poles 21. Namely, the fundamental, dipole-mode, tripole-mode and quadrupole-mode solitons can all propagate stably in such media. The fundamental and dipole-mode solitons in nonlinear media with an exponential-decay nonlocal response have been investigated analytically by various mathematical methods, such as the classical Lie-group method 39, the perturbative analysis method 40, and the variational approach method 36,37,41,42. But so far, no analytical expressions of tripole-mode and quadrupole-mode solitons in NLCs have been given. Presenting an analytical solution is helpful for getting a good understanding of the dynamics of nonlocal solitons. In this paper, based on the variational approach, we study the tripole-mode and quadrupole-mode solitons in nonlinear media with an exponential-decay nonlocal response. The approximate expressions of such solitons are obtained and their characteristics are investigated in detail. The approximate results are confirmed by the numerical ones, which are obtained using an iterative numerical technique based directly on the NNLSE. Since a surface soliton in nonlocal nonlinear media can be regarded as half of a bulk soliton with an antisymmetric amplitude distribution 43,44, the results on quadrupole-mode solitons here may also be helpful for the investigation of surface dipole nonlocal solitons. Results Variational method for NNLSE and tripole-mode soliton solutions.
First of all, let us briefly recall the derivation of the dimensionless NNLSE for (1 + 1)-dimensional NLCs based on refs 1, 31 and 45. Considering only one transversal dimension 45, an external quasistatic electric field E_LF is applied in the transversal direction to control the initial tilt angle of the NLCs. The evolution of an optical beam Q(X, Z) in the paraxial approximation and the optically induced reorientation angle perturbation Ψ(X, Z) of the liquid crystal molecules can be described by the coupled nematicon equations of refs 1 and 31, where k_0 and k are, respectively, the wave numbers in vacuum and in the NLCs; Δε_HF and Δε_LF are, respectively, the anisotropy of the liquid crystal at the optical frequency and at the frequency of the quasistatic electric field; ε_0 is the vacuum permittivity; and K is the relevant elastic constant, taken equal for splay, bend, and twist. Introducing the normalization x = X/W_0, z = Z/Z_R, q = Q/Q_0, and ψ = Ψ/Ψ_0, where W_0 is the full width at half maximum of the amplitude of the optical beam, one can obtain the dimensionless equations (3) and (4), where σ denotes the degree of nonlocality. In Eq. (4), the term ∂²ψ/∂z² has been dropped, since the coefficient multiplying ψ_zz in the non-dimensional director equation is small 1,3. Based on Eq. (4), combined with the Fourier transformation and the convolution theorem, one can find

$$\psi(x) = \int R(x - x')\, I(x')\, \mathrm{d}x',$$

where I = |q|² is the intensity of the optical beam, and the normalized nonlocal response function R takes the exponential-decay form

$$R(x) = \frac{1}{2\sqrt{\sigma}}\, \exp\!\left(-\frac{|x|}{\sqrt{\sigma}}\right).$$

Several kinds of solitons in (1 + 1)-dimensional nonlinear media with a spatial exponential-decay nonlocality have been investigated in the past years 46,47. In experiments 4,9-11, the typical values of the parameters are W_0 = 10 μm, K = 10⁻¹¹ N, Δε_HF = 0.64ε_0, Δε_LF = 15ε_0, and E_LF = 10⁴ V/m. For visible wavelengths, ε_0 is 8.85 × 10⁻¹² in MKS units, and the diffraction length is about 1 mm. With the above parameters, the degree of nonlocality is about 12, i.e. σ ≈ 12², which belongs to the sub-strongly nonlocal case. If √σ → w_m and ψ → Δn, the dimensionless NNLSE, which governs beam propagation in (1 + 1)-dimensional nonlinear media with an exponential-decay nonlocal response, can be rewritten phenomenologically as 17,31

$$\mathrm{i}\frac{\partial q}{\partial z} + \frac{1}{2}\frac{\partial^2 q}{\partial x^2} + \Delta n\, q = 0, \qquad \Delta n(x) = \int R(x - x')\, |q(x')|^2\, \mathrm{d}x', \tag{8}$$

$$R(x) = \frac{1}{2 w_m}\, \exp\!\left(-\frac{|x|}{w_m}\right), \tag{9}$$

where x and z denote, respectively, the normalized transversal and longitudinal coordinates; q denotes the complex amplitude of the optical beam; Δn denotes the nonlinear perturbation of the refractive index of the nonlocal medium; and w_m denotes the characteristic length of the nonlocal material response. If w_m → 0, Eq. (8) reduces to the well-known nonlinear Schrödinger equation of local nonlinear media; if w_m ~ w_R (where w_R is the second-order-moment width of the multipole solitons), it represents the general nonlocal case; and if w_m → ∞, it represents the strongly nonlocal case 48,49. In experiments, the magnitude of w_m can be controlled by changing the pretilt angle of the NLCs via a bias voltage 1,3,17. Based on the above derivation, Eq. (8), together with Eq. (9), constitutes the NNLSE with an exponential-decay nonlocal response 31.
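For readers who wish to reproduce the nonlocal refractive-index profile numerically, the convolution in Eq. (9) maps directly onto an FFT. The sketch below is our own illustration, not the authors' code; the grid size, widths, and function names are arbitrary choices, and a periodic grid wide enough that both the beam and the kernel vanish at the edges is assumed:

```python
import numpy as np

def nonlocal_index_change(q, x, w_m):
    """Evaluate Dn = R * |q|^2 (cf. Eq. (8)) by FFT convolution on a periodic
    grid, with R(x) = exp(-|x|/w_m)/(2*w_m) normalized to unit integral."""
    dx = x[1] - x[0]
    R = np.exp(-np.abs(x) / w_m) / (2.0 * w_m)
    I = np.abs(q) ** 2                        # beam intensity |q|^2
    # ifftshift puts the kernel peak at index 0, as circular convolution expects
    Dn = dx * np.real(np.fft.ifft(np.fft.fft(np.fft.ifftshift(R)) * np.fft.fft(I)))
    return Dn

# quick check: a tripole-like H2 profile in a strongly nonlocal medium
x = np.linspace(-40.0, 40.0, 2048, endpoint=False)
q = (4.0 * x**2 - 2.0) * np.exp(-x**2 / 2.0)   # H2(x) times a Gaussian, w = 1
Dn = nonlocal_index_change(q, x, w_m=10.0)
print(Dn.max())   # a broad, smooth index well, much wider than the beam
```

Because R is normalized to unit integral, Δn reduces to |q|² as w_m → 0, recovering the local Kerr limit mentioned above.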
This equation can be restated in variational form; from the Lagrangian density the reduced variational problem δ∫⟨L⟩ dz = 0 is obtained, where ⟨L⟩ denotes the averaged Lagrangian. For strongly nonlocal media (especially for the Snyder-Mitchell model), the higher-order Gaussian solitons (such as Hermite-Gaussian solitons, Laguerre-Gaussian solitons, Ince-Gaussian solitons, etc.) are exact soliton solutions 12,15,50,51. For (1 + 1)-dimensional nonlinear media with a spatial exponential-decay nonlocality, we consider that solitons develop from Hermite-Gaussian beams. In our previous research, we proved that the first-order Hermite-Gaussian function can be used to describe the dipole solitons in such nonlocal media 52. Hence, we take the second- and third-order Hermite-Gaussian functions as the trial functions of tripole and quadrupole solitons, respectively. The trial solution of tripole-mode solitons takes the form

$$q(x, z) = a\, H_2\!\left(\frac{x}{w}\right) \exp\!\left(-\frac{x^2}{2 w^2}\right) \exp[\mathrm{i}\theta(z)], \tag{15}$$

where a is the amplitude, θ(z) is the phase, w is the width of the Gaussian factor, and H_2 is the second-order Hermite polynomial. Because of the complexity of the intensity distribution, the second-order-moment beam width is adopted to describe the width of multipole solitons,

$$w_R = 2\left(\frac{\int x^2 |q|^2\, \mathrm{d}x}{\int |q|^2\, \mathrm{d}x}\right)^{1/2}. \tag{16}$$

Thus the width of a tripole-mode soliton is $w_R = \sqrt{10}\, w$. For convenience in the following discussion, we introduce a nonlocal parameter α to define the degree of the material nonlocality, i.e., α = w_m/w_R. The larger the nonlocal parameter, the stronger the degree of nonlocality. In principle, based on Eqs (11), (13), (15) and (16), one can obtain the expression of ⟨L⟩. However, the integrals in the averaged Lagrangian based on this trial function cannot be calculated explicitly, owing to the inability to find closed-form integrals. Fortunately, for the strongly nonlocal case, we can proceed by expanding the response function. If it is expanded to the second order, one gets

$$R(x) \approx \frac{1}{2 w_m}\left(1 - \frac{|x|}{w_m} + \frac{x^2}{2 w_m^2}\right). \tag{18}$$

Substituting Eqs (13), (16) and (18) into Eq. (15), the expression of ⟨L⟩ is obtained, and the corresponding Euler-Lagrange equations yield Eqs (20) and (21). As we know, for the soliton case θ(z) = βz, i.e. θ′(z) = β, where β is the propagation constant. Combining Eqs (20) and (21) gives the soliton parameters, Eqs (22)-(24). It is evident that Eqs (22)-(24) are valid only when the nonlocality is strong enough relative to the soliton width; otherwise a becomes an imaginary number, β < 0, and P < 0, which is impossible in physics. Figure 1 shows the propagation constant of tripole-mode solitons versus the soliton power. In Fig. 1(a), the degree of nonlocality is 7, which belongs to the strongly, or at least sub-strongly, nonlocal case. It is found that the approximate result is in good agreement with the numerical one, which is obtained directly from Eqs (8) and (9) using the iterative numerical technique 53. When w_m is fixed at 10, the approximate result is also in good agreement with the numerical ones, as shown in Fig. 1(b). Figure 1(b) also shows that the approximate result is invalid when β < 3.68, as the variational solution (22)-(24) breaks down, as discussed after Eq. (24). The reason for the invalidity is that the response function is only expanded to second order [see Eq. (18)], which leads to the inaccuracy. By expanding the response function to higher orders, one can improve the accuracy of the approximate solutions, which will be discussed in the following section.
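As a quick numerical sanity check of the width relation quoted above, one can evaluate the second-order moment of the H₂ trial profile directly; the snippet below is illustrative (our own, not from the paper) and assumes the width definition w_R = 2⟨x²⟩^{1/2} of Eq. (16):

```python
import numpy as np

# Check that the H2 trial function gives w_R = sqrt(10) * w for the
# second-order moment width w_R = 2 * sqrt(<x^2>) assumed above.
x = np.linspace(-30.0, 30.0, 200001)
w = 1.3                                                # arbitrary test width
q = (4.0 * (x / w)**2 - 2.0) * np.exp(-x**2 / (2.0 * w**2))
I = q**2
w_R = 2.0 * np.sqrt(np.trapz(x**2 * I, x) / np.trapz(I, x))
print(w_R / w, np.sqrt(10.0))                          # both ~= 3.16228
```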
In addition, one can find from Fig. 1 that the slope of the power versus propagation constant is positive, which implies that the soliton propagation is stable. It is also found that if w_m takes a larger value, the valid region of β becomes larger. The accuracy of the approximate results depends only on the degree of nonlocality: the stronger the degree of nonlocality, the more accurate the approximate results. As a result, when the degree of nonlocality is fixed at 7, the analytical result is always accurate, independent of the soliton width [see Fig. 2(a)]. Nevertheless, if w_m is fixed, the degree of nonlocality decreases with increasing soliton width, so the validity of the approximate results declines continuously [see Fig. 2(b)]. The variational approximation shows bistability in Fig. 2(b), while the numerical solution does not. This is an important point, as it shows that the variational approximation can predict behaviour which is not actually present; caution should therefore be exercised with variational approximations. Figure 3 shows the profiles of the tripole-mode soliton for different soliton powers. It is found that the approximate solutions agree well with the numerical solutions for β = 30 and 20. When β decreases to 7, the approximate solution becomes a little inaccurate, and when β decreases to 4, it becomes even worse. The degrees of nonlocality are 3.19, 2.78, 1.93, and 1.59 for β = 30, 20, 7, and 4, respectively. It is found that when the nonlocality degree is 1.59, i.e. β = 4, an obvious deviation appears between the variational solution and the numerical one. The reason is that the Taylor series truncation (18) starts to break down for such low w_m. So we can conclude that if the response function is expanded to the second order, the approximate solutions are valid only for the strongly nonlocal case. Quadrupole-mode soliton solutions. For the quadrupole-mode solitons in (1 + 1)-dimensional NLCs, we take the third-order Hermite-Gaussian ansatz

$$q(x, z) = a\, H_3\!\left(\frac{x}{w}\right) \exp\!\left(-\frac{x^2}{2 w^2}\right) \exp[\mathrm{i}\theta(z)].$$

Comparing with the SMM, which is valid for the strongly nonlocal case, it is found that the soliton powers are different. The soliton powers for tripole-mode and quadrupole-mode solitons in (1 + 1)-dimensional NLCs are not equal; with the same parameters, the power is larger for quadrupole-mode solitons than for tripole-mode solitons. By contrast, the powers of the multipole solitons and the higher-order solitons are all the same in the SMM 12-16. Furthermore, in the strongly nonlocal limit the variational results approach those of the SMM; as β decreases, the nonlocality degree decreases, and as a result the approximate result gradually becomes invalid. In particular, when β < 7.17, approximate results no longer exist [see Fig. 4(b)]. Figure 4 shows that the slope of the power versus propagation constant is positive, similar to the case of tripole solitons, which implies stable propagation of the solitons. Figure 6 presents the profiles of the quadrupole-mode soliton for different soliton powers and propagation constants. It is found that the approximate solutions agree well with the numerical solutions for β = 50 and 30. When β decreases to 12, the approximate solution becomes slightly inaccurate, and when β decreases to 8, it becomes even worse. We also calculate the nonlocal parameter α and find that it equals 3.02, 2.53, 1.85, and 1.61 for β = 50, 30, 12, and 8, respectively. More accurate approximate solutions. In the above sections, the response function was expanded only to the second order, so as the degree of nonlocality decreases, the approximate results gradually become invalid.
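Before turning to the higher-order expansions, it is instructive to see how quickly the truncated Taylor series of the exponential-decay response degrades as α = w_m/w_R decreases. The following sketch is our own illustration (the chosen α values are arbitrary), comparing truncated and exact kernels at the characteristic soliton scale:

```python
import numpy as np
from math import factorial

def R_exact(x, w_m):
    return np.exp(-abs(x) / w_m) / (2.0 * w_m)

def R_taylor(x, w_m, order):
    """R(x) with exp(-|x|/w_m) replaced by its Taylor series up to `order`."""
    t = abs(x) / w_m
    return sum((-t)**n / factorial(n) for n in range(order + 1)) / (2.0 * w_m)

# relative truncation error at the soliton scale |x| = w_R (i.e. x = 1 in
# units of w_R), for a strongly nonlocal and a weaker, general nonlocal case
for alpha in (3.0, 1.2):                    # alpha = w_m / w_R
    for order in (2, 4, 10):
        err = abs(R_taylor(1.0, alpha, order) / R_exact(1.0, alpha) - 1.0)
        print(f"alpha = {alpha}, order = {order}: rel. error = {err:.1e}")
```

At α ≈ 3 the second-order truncation is already at the percent level, while at α ≈ 1.2 it misses by tens of percent, consistent with the breakdown seen in Figs 2 and 3.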
In order to get a more accurate approximate solution, one can expand the response function to higher orders. The higher the order to which the response function is expanded, the more accurate the approximate solutions, although the calculations become more complicated. As an example, we expand R(x) to the fourth order, i.e.,

$$R(x) \approx \frac{1}{2 w_m}\left(1 - \frac{|x|}{w_m} + \frac{x^2}{2 w_m^2} - \frac{|x|^3}{6 w_m^3} + \frac{x^4}{24 w_m^4}\right).$$

In the previous section, it was found that the approximate tripole-mode solution is inaccurate when β = 4, and is not even obtained when β < 3.68. To show the improvement of the approximate solutions obtained by expanding the response function to the fourth order, Fig. 7 shows the comparison between the approximate results and the numerical ones. It is found that the approximate solutions are in good agreement with the numerical ones when β = 4 and even β = 2. For the case of β = 2, the degree of nonlocality is about 1.23, which already belongs to the general nonlocal case. When β = 1, the approximate solution is slightly inaccurate, and when β = 0.8, it becomes even worse. The degrees of nonlocality for β = 1 and β = 0.8 are, respectively, 0.91 and 0.76. Furthermore, R(x) can be expanded to the tenth order; it should be noted that the resulting Eqs (35)-(37) are valid only when A + B > 0. Figure 8 shows the profiles of the tripole-mode soliton obtained by expanding the response function to the tenth order. When β = 0.5, 0.35 and 0.12, the corresponding degrees of nonlocality are, respectively, 0.753, 0.657 and 0.417, which all belong to the case of general nonlocality. Therefore, the approximate solutions can be improved by expanding the response function to higher orders. Although it has been proven that in a nonlinear medium with an exponential-decay nonlocal response the soliton bound states are stable if the solitons contain fewer than five poles 21, in order to confirm the validity of our results we take the tripole-mode soliton as an example and simulate its propagation based on Eqs (8) and (9) directly, using the split-step Fourier method 54. As expected, for the strongly nonlocal case, the approximate results obtained by expanding the response function to the second order are accurate, and the solitons can propagate stably over a long distance [see Fig. 9(b)]. With decreasing nonlocality degree, the approximate results gradually become inaccurate, and irregular oscillations occur during propagation [see Fig. 9(a)]. However, if the nonlocal response function is expanded to the tenth order, the accuracy and the validity range of the approximate solutions increase. The approximate soliton can still maintain a relatively stable propagation [see Fig. 9(d)], even though it cannot be obtained at all if the response function is only expanded to the second order. In Fig. 9(c), because the degree of nonlocality is weaker than in Fig. 9(d), the irregular oscillations appear more prominently. The oscillations in Fig. 9(c,d) are simply typical behaviour for NLS-type equations: for such equations, an initial condition near a solitary wave will evolve to the solitary wave, with the amplitude and width displaying decaying oscillations. All these oscillations show the degree of accuracy of the variational solutions. Therefore, we can conclude that by expanding the response function to higher orders, the accuracy of the approximate soliton solutions is improved.
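A propagation test of the kind shown in Figs 9 and 10 can be reproduced with a compact split-step Fourier integrator. The sketch below is an illustrative implementation of Eqs (8) and (9), not the authors' code; the symmetric splitting, grid, and step sizes are our own assumptions:

```python
import numpy as np

def ssf_propagate(q0, x, w_m, dz, steps):
    """Symmetric split-step Fourier integration of
    i q_z + (1/2) q_xx + Dn q = 0,  Dn = R * |q|^2  (Eqs (8) and (9))."""
    dx = x[1] - x[0]
    k = 2.0 * np.pi * np.fft.fftfreq(x.size, d=dx)
    half_step = np.exp(-0.25j * k**2 * dz)          # half a linear (diffraction) step
    R_hat = np.fft.fft(np.fft.ifftshift(np.exp(-np.abs(x) / w_m) / (2.0 * w_m)))
    q = q0.astype(complex)
    for _ in range(steps):
        q = np.fft.ifft(half_step * np.fft.fft(q))
        Dn = dx * np.real(np.fft.ifft(R_hat * np.fft.fft(np.abs(q)**2)))
        q *= np.exp(1j * Dn * dz)                   # full nonlinear step
        q = np.fft.ifft(half_step * np.fft.fft(q))
    return q

# launch a tripole-like profile and inspect how well it holds its shape
x = np.linspace(-80.0, 80.0, 4096, endpoint=False)
q0 = (4.0 * x**2 - 2.0) * np.exp(-x**2 / 2.0)       # amplitude a = 1 for brevity
q = ssf_propagate(q0, x, w_m=10.0, dz=0.005, steps=2000)
print(np.trapz(np.abs(q)**2, x) / np.trapz(np.abs(q0)**2, x))  # power conserved ~ 1
```

Adding a small random perturbation to q0 mimics the 1% white input noise used for Figs 9-11.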
Corresponding to the cases of Fig. 9(a,b), Fig. 10 illustrates the propagation of the tripole-mode solitons obtained by expanding the response function to the tenth order, which shows stable propagation of the solitons. In particular, when β = 4, the approximate soliton obtained by expanding the response function to the tenth order is still valid [see Fig. 10(a)], whereas it is already invalid when the response function is expanded only to the second order. As another example, Fig. 11 illustrates the stable propagation of a quadrupole-mode soliton, which confirms the validity of the approximate variational quadrupole-mode soliton solutions. Finally, for completeness, we have also obtained the solutions of quadrupole-mode solitons when the response function is expanded to the tenth order. Figure 9. Propagation of tripole-mode solitons in the presence of 1% white input noise. The profiles of Fig. 3(a,d) are employed as the input shapes of the solitons in (a,b), and the profiles of Fig. 8(a,c) are employed as the input shapes of the solitons in (c,d). Discussion By applying the variational approach, we obtain the approximate analytical expressions of tripole-mode and quadrupole-mode solitons in nonlinear media with an exponential-decay nonlocal response. It is found that, with the same parameters, the soliton power of the quadrupole-mode solitons is larger than that of the tripole-mode solitons, which is much different from the SMM (in the SMM, the soliton powers of the different multipoles are the same 12,15). Numerical simulations are carried out to illustrate the accuracy of the approximate solutions. The results show that the accuracy of the approximate solutions is related only to the degree of nonlocality. For the strongly nonlocal case, if the response function is expanded to the second order, the approximate soliton solutions are in good agreement with the numerical ones. As the degree of nonlocality decreases, the approximate solutions gradually become invalid. Furthermore, by expanding the response function to higher orders, one can improve the accuracy of the approximate solutions: the higher the order of the expansion, the more accurate the approximate solutions. If the response function is expanded to the tenth order, the approximate solutions are still valid for the general nonlocal case. Since a surface soliton in nonlocal nonlinear media can be regarded as half of a bulk soliton with an antisymmetric amplitude distribution 43,44, the results on quadrupole-mode solitons here may also be helpful for the investigation of surface dipole nonlocal solitons.
Probing vortices in 4He nanodroplets We present static and dynamical properties of linear vortices in 4He droplets obtained from Density Functional calculations. By comparing the adsorption properties of different atomic impurities embedded in pure droplets and in droplets where a quantized vortex has been created, we suggest that Ca atoms should be the dopant of choice to detect vortices by means of spectroscopic experiments. The unique environment realized in liquid 4He clusters has opened up in recent years new opportunities for atomic/molecular spectroscopy to probe superfluid phenomena on the atomic scale [1,2]. Helium droplets represent ideal nano-scale cryostats for a variety of fundamental experiments on liquid 4He, including the study of quantized vortices [3]. Vortices, while energetically unfavorable [4], can potentially be stabilized by atomic or molecular impurities [5]. During the free-jet expansion experiments described in Refs. [1,3], it is plausible that quantized vortices may be created in some metastable state, long-lived enough to be detected. However, the question of whether 4He droplets can sustain vortices is still not resolved, and all the high-resolution spectra of embedded molecules can apparently be explained without invoking their presence. Yet, it is expected that in the near future they could be created by some extension of the present experimental techniques. This calls for identifying signatures that might reveal vortical states in helium droplets. A possible experiment to detect their presence has been described by Close et al. [3]. They have suggested that alkali atoms, which normally reside in a 'dimple' on the surface of 4He clusters [6,7,8], may be drawn, when a vortex is present, inside the cluster along the vortex core. Spectroscopic experiments on the dopant atoms could thus provide evidence of their existence, since the line broadenings and shifts would be different in the two cases. We show in the following that alkali atoms are actually not suited for such an experiment, but rather alkaline earth (Ca) atoms may serve as probes to detect vortices. Density Functional (DF) methods [9] have become increasingly popular in recent years as a useful computational tool to study the properties of classical and quantum inhomogeneous fluids, especially for large systems, for which they provide a good compromise between accuracy and computational cost. In particular, a quite accurate description of the properties of inhomogeneous liquid 4He at zero temperature has been obtained within a DF approach by using the energy functional proposed in Ref. [10] and later improved in Ref. [11]. This latter DF, which has been successfully used over recent years to study a variety of 4He systems such as clusters and films, is the one we use in the present work. The minimization of the energy functional with respect to density variations, subject to the constraint of a given number of 4He atoms N, leads to the equilibrium particle density profile ρ(r), thus allowing one to study the static properties of the 4He system. When dynamical properties are studied (as described in the following), we use the Time-Dependent DF (TDDF) method developed in Ref. [12], which allows one to obtain both the 4He particle density ρ(r, t) and the velocity field v(r, t). Briefly, in the static (dynamic) case, one has to solve a stationary (time-dependent) nonlinear Schrödinger-like equation for an 'order parameter' Ψ(r) (Ψ(r, t)), where the Hamiltonian operator is given by

$$H = -\frac{\hbar^2}{2m}\nabla^2 + U[\rho, \mathbf{v}].$$

The effective potential U is defined as the variational derivative of the energy functional, and its explicit expression is given in Ref. [12]. From the knowledge of Ψ ≡ φe^{iΘ} one can get the density ρ(r, t) = φ² and the fluid velocity field v(r, t) = (ħ/m)∇Θ.
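The extraction of ρ and v from the order parameter is simple to implement on a grid. The following sketch is our own illustration, not the code of Ref. [12], and the toy vortex profile is schematic; it uses the identity v = (ħ/m) Im(Ψ*∇Ψ)/|Ψ|², which avoids unwrapping the phase Θ:

```python
import numpy as np

HBAR = 1.0545718e-34    # J s
M4 = 6.6464767e-27      # kg, mass of a 4He atom

def density_and_velocity(psi, dx):
    """From a complex order parameter Psi = phi * exp(i*Theta) on a uniform
    3D grid, return rho = phi^2 and v = (hbar/m) grad(Theta), computed as
    (hbar/m) Im(conj(Psi) grad(Psi)) / rho to avoid phase unwrapping."""
    rho = np.abs(psi)**2
    grads = np.gradient(psi, dx)            # [d(psi)/dx, d(psi)/dy, d(psi)/dz]
    v = [HBAR / M4 * np.imag(np.conj(psi) * g) / np.maximum(rho, 1e-300)
         for g in grads]
    return rho, v

# toy example: a singly quantized vortex line along z; the circulation of v
# around the core should come out as h/m
n, L = 64, 40e-10                           # 64^3 grid in a 40 Angstrom box
ax = np.linspace(-L / 2, L / 2, n)
X, Y, Z = np.meshgrid(ax, ax, ax, indexing="ij")
psi = np.hypot(X, Y) * np.exp(1j * np.arctan2(Y, X))   # schematic core profile
rho, v = density_and_velocity(psi, ax[1] - ax[0])
```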
The effective potential U is defined as the variational derivative of the energy functional, and its explicit expression is given in Ref. [12]. From the knowledge of Ψ ≡ φe^{iΘ} one obtains the density ρ(r, t) = φ² and the fluid velocity field v(r, t) = (ℏ/m)∇Θ. To model the interaction of liquid 4He with foreign impurities we use suitable He-impurity pair interaction potentials, which are described later on.

We work in 3D Cartesian coordinates, and adopt the following procedure to generate a vortex in the cluster in the most unbiased way. We consider a cluster in a rotating frame of reference with constant angular velocity ω_z around the z-axis [13]. The Hamiltonian density H then acquires an additional term −ω_z L̂_z, L̂_z being the angular momentum component along the z-axis. We minimize Ψ for this constrained Hamiltonian, imposing Ψ to be orthogonal, during the minimization, to the order parameter Ψ_0 = √(ρ_eq(r)) describing the minimum-energy state of the vortex-free cluster; we have applied the method to a N = 300 droplet. To generate a vortex line, ω_z must be larger than a critical value (unknown in advance) Ω_c = ΔE/(Nℏ) [14], where ΔE is the energy cost to create a vortex (which in the present case is about 70 K, see Table I, and hence Ω_c ∼ 3 × 10¹⁰ s⁻¹), but not so large that one generates a vortex array [13].

The particle density corresponding to this vortical configuration is shown in Fig. 1(a). We have calculated the circulation of the velocity field along a path enclosing the vortex core, and have found exactly the value h/m appropriate for a singly quantized vortex, with L_z = Nℏ. Note that since the vortex is quantized, the vortical state is an eigenstate of the angular momentum along the rotation axis, L̂_z. This means that our density profile is the same as the one that would be obtained using the Feynman-Onsager ansatz, i.e. by adding to the energy functional an extra centrifugal term associated with an order parameter of the form √ρ e^{iΦ} (Φ being the azimuthal angle), and finding the density profile by solving an equation in the real quantity ρ(r). This is the procedure used in the DF calculations of Ref. [5] to generate quantized vortex structures in helium drops, and also in Bose-Einstein condensates of trapped gases [14]. Here, instead, we have not assumed a priori a quantized value for the total angular momentum, but rather have generated a fully quantized vortex state starting from a pure cluster.

To use atomic impurities as probes of the presence of vortices in 4He drops, ideally one would like an atom that is barely stable on the surface of a pure drop, and becomes solvated in its interior in the presence of a vortex. The question of solvation vs. surface location for an impurity atom in liquid 4He can be addressed in an approximate way within the model of Ref. [15] where, based on calculations of the energetics of impurities interacting with liquid 4He, a simple criterion is proposed to decide whether surface or solvated states are favored. An adimensional parameter is defined in terms of the impurity-He potential well depth ε and the minimum position r_m, λ ≡ ρ ε r_m/(2^{1/6} σ), where ρ and σ are the bulk liquid density and surface tension of 4He, respectively. The criterion reads λ > 1.9 for the existence of solvated states [15]. One thus needs an impurity with λ ∼ 2, and such that its most stable state is on the drop surface. Alkali atoms are known to have their stable state on the surface of liquid 4He [6,7], and they lie in the low-λ regime (λ ∼ 0.6-0.9) [15].
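The λ criterion is easy to evaluate numerically. In the sketch below, the bulk density and surface tension of liquid 4He are standard literature values, while the well depths ε and minima r_m quoted for Na, Rb and Ca are rough illustrative numbers, not the potentials of Refs. [16,20]; they are chosen only to land in the λ regimes quoted in the text.

```python
# Solvation parameter lambda = rho * eps * r_m / (2**(1/6) * sigma), Ref. [15].
RHO0 = 0.0218   # bulk 4He number density, angstrom^-3
SIGMA = 0.272   # 4He surface tension, K * angstrom^-2

def solvation_parameter(eps_K, rm_A):
    """eps_K: He-impurity well depth (K); rm_A: position of the minimum (angstrom)."""
    return RHO0 * eps_K * rm_A / (2 ** (1 / 6) * SIGMA)

# Illustrative well parameters only (not the ab initio values used in the paper):
for name, eps, rm in [("Na", 1.7, 6.4), ("Rb", 1.5, 7.0), ("Ca", 5.7, 5.4)]:
    lam = solvation_parameter(eps, rm)
    print(f"{name}: lambda = {lam:.2f} ->", "solvated" if lam > 1.9 else "surface")
```

With these inputs the script prints λ ≈ 0.78 and 0.75 for Na and Rb (surface states) and λ ≈ 2.20 for Ca, just above the solvation threshold, which is precisely the borderline behaviour exploited in the text.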
Accordingly, for alkalis a surface state should always be preferred, even in the presence of a vortex line. We have verified this point for Na and Rb, taken as representative of light and heavy alkalis, respectively. The alkali-He interaction is of the form proposed by Patil [16]. We have compared the stable 'dimple' states of Na and Rb atoms on the surface of the N = 300 cluster hosting a vortex line with those of the same impurity trapped in the vortex core, exactly at the cluster center. We have found that the latter are energetically unfavored with respect to surface states, see Table I. It is worth noting that in the case of Na our results compare well with the Path Integral Monte Carlo calculations of Ref. [17], where a binding energy of about 7 K is found for this cluster. We also note that, unlike impurities strongly bound to 4He clusters [5], which have their stable state inside the cluster and for which there exists a critical cluster size below which the droplet+dopant+vortex complex is stable, the alkalis cannot stabilize the vortex, whatever the droplet size [5].

There are other dopants, however, for which there is clear evidence of a surface state on liquid 4He, namely alkaline-earth atoms. Absorption spectra of alkaline-earth atoms (Ca, Ba and Sr) attached to 4He clusters clearly support an outside location of Ca and Sr [18], and probably also of Ba [19]. To describe the He-impurity interaction we employ an accurate ab initio He-Ca pair potential [20], previously used to study 4He_N+Ca droplets up to N = 75 by Diffusion Monte Carlo techniques [21]. For this potential λ ∼ 2.2, which would indicate a solvated stable state. However, for those cases where λ is close to the solvation threshold, consideration of the shape of the potential energy surface, as well as the well depth and equilibrium internuclear distance, seems warranted [22]. The stable state of a Ca atom in a vortex-free 4He_300 cluster is shown in Fig. 1(b). Note that, in qualitative agreement with the experimental evidence, the 'dimple' is much more pronounced than in the case of alkalis [12], reflecting the stronger He-atom interaction. In the presence of a vortex, however, the stable state is at the center of the cluster, as depicted in Fig. 1(c). The surface state in this case is unstable: as the minimization proceeds, a Ca atom initially placed on the surface near the vortex core is gradually drawn towards it and then sucked inside, eventually reaching the stable state at the center of the drop. The value of the angular momentum for the converged 4He configuration is again L_z = Nℏ.

The response of the impurity atom to the different 4He environments shown in Fig. 1 might be determined with spectroscopic measurements, allowing one to detect the presence of vortices: the observed linewidths and shifts of the excitation/emission spectra in the two cases shown in Fig. 1 should be very different, reflecting the 'bubble' environment in one case [Fig. 1(c)] and a more open environment in the other [Fig. 1(b)]. Moreover, in the case of Fig. 1(b), bound-unbound transitions should be observed with a significant probability, implying a strong asymmetry in the observed spectra. This picture assumes that the vortex is long-lived enough to allow a Ca atom, picked up randomly by the cluster, to diffuse close to the top of the vortex core and then be drawn inside.
We have no direct proof of the stability of the cluster+vortex complex on experimental time scales. However, we have indications that the cluster+vortex+dopant complex should be stable at least on the nanosecond time scale. This conclusion comes from very long simulations, using the TDDF method [12], of the dynamics of the cluster+vortex+impurity complex. During these simulations, the impurity atoms were allowed to oscillate inside the 'bubble' in the cluster center [see Fig. 1(c)], and the vortex line was always found to be stable, without showing any tendency to shrink, bend or migrate towards the surface of the cluster.

The solvated state of Ca in a vortex-free cluster is a stationary configuration, only a few K in energy above the stable 'dimple' state (see Table I), but it is unstable against any displacement of the atom off the cluster center. This is a consequence of the borderline value of λ for this impurity, and implies that in a real experiment a fraction of Ca atoms might be trapped inside the clusters for fairly long times, even in the absence of vortices. For these atoms, the spectroscopic signals would be similar to those coming from Ca atoms trapped in the vortex core, making it difficult to discriminate between the two cases. A line-shape calculation [6], using as input the 4He density profiles around the impurity, might help to distinguish between solvated states of Ca with and without a vortex. Since the extension of the method of Ref. [6], which is applied there to the simpler case of the monoelectronic alkali atoms, is rather involved for two-electron systems, we have not carried out such a calculation. We instead suggest additional measurements which may help to discriminate between the states shown in Fig. 1.

It appears from our calculations that the energy of the impurity-cluster system is rather insensitive to the location of Ca along the vortex core, once the atom is embedded in it. Consequently, vibrational modes of the impurity along the vortex line are expected to be soft. We have confirmed this by TDDF calculations, applying to the Ca atom a small initial momentum in a given direction: radially, towards the surface of the cluster, for the 'dimple' state of Fig. 1(b), and along or perpendicular to the vortex line in the case shown in Fig. 1(c). We then let the impurity evolve in time, allowing the 4He environment to dynamically follow the atom's motion while the impurity oscillates around its equilibrium position. In practice, this is done by numerically solving, by means of a discrete Verlet algorithm as commonly used in Molecular Dynamics calculations, Newton's equation of motion for the Ca atom under the force due to the surrounding 4He liquid,

m_Ca d²r_Ca/dt² = −∇_{r_Ca} ∫ dr ρ(r, t) V_{He-Ca}(|r − r_Ca|).

In this expression V_{He-Ca} is the pair potential describing the He-impurity interaction, and the density ρ(r, t) is updated at each time step according to the TDDF scheme for 4He [12]. From the positions of the Ca atom as a function of time, relative to the center of mass of the Ca-He droplet system, the different frequencies characterizing the impurity dynamics can be found by a Fourier analysis of the calculated time series. We report in Fig. 2 the calculated vibrational spectra. The intensities are in arbitrary units, normalized so that the highest peak in each spectrum has unit height [23].
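A stripped-down version of this procedure is sketched below: a velocity-Verlet integrator for the impurity followed by a Fourier analysis of the trajectory. The `force` callable is a stand-in for the TDDF force above (which in reality requires evolving ρ(r, t) self-consistently at every step), so the sketch mirrors only the bookkeeping, not the physics.

```python
import numpy as np

def verlet_spectrum(force, m, x0, v0, dt, nsteps):
    """Integrate m*x'' = force(x, t) with velocity-Verlet, then return the
    normalized power spectrum of each coordinate of the trajectory.
    `force(x, t) -> ndarray` is an assumed placeholder for the He-induced force."""
    x, v = np.asarray(x0, float), np.asarray(v0, float)
    traj = np.empty((nsteps, x.size))
    f = force(x, 0.0)
    for i in range(nsteps):
        traj[i] = x
        x = x + v * dt + 0.5 * (f / m) * dt ** 2
        f_new = force(x, (i + 1) * dt)
        v = v + 0.5 * (f + f_new) / m * dt
        f = f_new
    # Power spectrum of each coordinate relative to its mean, with the
    # highest peak in each spectrum normalized to unit height (as in Fig. 2).
    spec = np.abs(np.fft.rfft(traj - traj.mean(axis=0), axis=0)) ** 2
    spec /= spec.max(axis=0)
    freqs = np.fft.rfftfreq(nsteps, dt)
    return freqs, spec
```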
It appears that the oscillation of Ca along the vortex core is indeed characterized by a single low-frequency mode, in contrast with the more fragmented spectrum for vibrations perpendicular to the vortex core. The presence of the soft mode is a signature of solvation of a Ca atom inside the vortex. Indeed, such a mode should be severely damped, or even absent, for a solvated Ca atom in a vortex-free cluster, since that configuration is unstable. We also show for comparison the vibration spectrum of a Ca atom in the 'dimple' state on the surface of a cluster without a vortex. The peak just below 1 K is due to the 'dipolar' vibration of the impurity inside the semispherical (spherical) cavity in which it is trapped in the 'dimple' ('bubble') state. Additional peaks appear in the 'dimple' spectra because of the coupling of the Ca motion with the surface modes of the 4He nanodroplet, which have similar frequencies (for instance, the lowest-energy, l = 2 quadrupolar mode of a pure 4He_300 droplet occurs at ∼0.6 K [12]). All these modes lie in the microwave frequency regime, and there are experimental ideas for measuring the corresponding vibrational frequencies [24]. It is worth seeing how these modes are coupled, which is particularly apparent when the impurity is displaced perpendicular to the vortex core (dashed line in Fig. 2). To trigger this oscillation, we have given the impurity a kinetic energy of about 2 K, three times as much as in the other two cases. One may see that the spectrum displays one peak corresponding to the 'dipolar' mode discussed previously, and also softer modes with characteristics similar to those found in the other two cases. This coupling is possible because the time evolution is adiabatic. Finally, we would like to emphasize that the dynamical behavior of He-impurity systems depends sensitively on the details of the He-atom pair interaction. This calls for improving the available interaction potentials, to strengthen the scenario described here or to help find other atomic/molecular impurities which may serve as probes of the presence of vortices in 4He droplets.

Table I. In the left part of the Table, energies are referred to the total energy of the pure, vortex-free cluster (−1384.5 K), whereas in the right part they are referred to that of the cluster+vortex configuration (−1313.4 K). The configurations marked with an asterisk are unstable stationary configurations.
Enriched environment exposure during development positively impacts the structure and function of the visual cortex in mice

Optimal conditions of development have been of interest for decades, since genetics alone cannot fully explain how an individual matures. In the present study, we used optical brain imaging to investigate whether a relatively simple enrichment can positively influence the development of the visual cortex of mice. The enrichment paradigm was composed of larger cages housing multiple mice that contained several toys, hiding places, nesting material and a spinning wheel that were moved or replaced at regular intervals. We compared C57BL/6N adult mice (> P60) that had been raised either in an enriched environment (EE; n = 16) or a standard (ST; n = 12) environment from 1 week before birth to adulthood, encompassing all cortical developmental stages. Here, we report significant beneficial changes in the structure and function of the visual cortex following environmental enrichment throughout the lifespan. More specifically, retinotopic mapping through intrinsic signal optical imaging revealed that the size of the primary visual cortex was greater in mice reared in an EE compared to controls. In addition, the visual field coverage of EE mice was wider. Finally, the organization of the cortical representation of the visual field (as determined by cortical magnification) versus its eccentricity also differed between the two groups. We did not observe any significant differences between females and males within each group. Taken together, these data demonstrate specific benefits of an EE throughout development on the visual cortex, which suggests adaptation to their environmental realities.

… visual cortex compared to laboratory rats^28. In addition, Bartoletti and colleagues (2004) have shown that the effects of dark rearing on the rat visual cortex are prevented by an EE, which allows for proper consolidation of visual cortical connections^29. Moreover, environmental impoverishment, such as reduced sensory-motor stimulation during development, profoundly affects the development of visual acuity and of visual evoked potential latency compared to standard-reared mice^30. Cang and colleagues also demonstrated that monocular visual deprivation during development induces ocular dominance plasticity^31. These studies have highlighted the exceptional level of plasticity during development^32. All developmental stages have identified critical periods relating to an EE, namely prenatal (maternal experience during gestation), early postnatal (pre-weaning) and late postnatal (post-weaning) (reviewed in^32). Although it is possible to reopen windows of plasticity in the adult mouse, as shown by periods of exercise prior to recordings assessing ocular dominance^33, these effects are not as drastic as during development. Previous studies in humans and animal models have emphasized that an EE accelerates aspects of development, but also increases the length of the window of plasticity, affecting the pace of brain development (reviewed in^34). Moreover, data suggest that heritability has a spatiotemporal component, with phylogenetically older areas developing first and being progressively less affected by genetics as adulthood is reached (for instance, the primary sensory cortex compared to the association cortex in humans^35).
Genetic predispositions of brain development have been studied in monozygotic twin pairs. Analyses have often focused on total brain or region-specific volumes. However, measures of volume are influenced by both cortical thickness and surface area, which are genetically uncorrelated^36. Recent findings show that the prenatal/perinatal period is a sensitive period for cortical surface area development^37. Nonetheless, more work needs to be done to reach a better understanding of brain development since, overall, surface area is often overlooked, and functional delimitations even more so (reviewed in^34).

An effective and non-invasive method of evaluating cortical visual functions is intrinsic signal optical imaging (ISOI)^38. Retinotopic mapping is readily obtained through temporally encoded maps of hemodynamic responses to identify the primary visual cortex (V1) and the extrastriate cortex^39-41. The constant developments in data acquisition and analysis from ISOI^42,43 and other imaging techniques (functional magnetic resonance imaging^44 and calcium imaging^45) have allowed a more thorough examination of the recordings and characterization of the cortical visual system. However, given the low signal-to-noise ratio, challenges remain in precisely delimiting the areas activated through ISOI. The addition of three texture analysis techniques to our pipeline (the entropy, standard deviation and range of the signals) renders clearer border definitions between activated and non-activated areas, and therefore more robust delimitations. In the present study, we used optical brain imaging to determine the effects of an EE throughout development on the structure and function of the visual cortex of mice. Our results provide evidence that an EE has a positive impact on the functional organization of the primary visual cortex (V1).

Materials and methods

Animals. Nine C57BL/6N pregnant mice were obtained from Charles River (Saint-Constant, Qc, Canada) 1 week before the due date. Mice were housed in a controlled environment with a 12 h light/dark cycle with food and water ad libitum. All procedures were carried out in agreement with the guidelines of the Canadian Council for the Protection of Animals, and the experimental protocol was approved by the Ethics Committee of the Université de Montréal. All methods were carried out in accordance with ARRIVE guidelines. Females with litters pertaining to the enriched environment (EE) group were placed in cages accordingly as soon as they were received (two females per cage; 6 females in total), whereas females with litters pertaining to the standard (ST) group were placed individually in smaller cages (3 females in total). ST mice (n = 12) and mice that were exposed to an EE from birth (n = 16) were compared during adulthood (P69-P114). The EE consisted of group housing in larger cages (dimensions: 50 × 38 × 20 cm) containing several toys of different materials (plastic, cardboard, wood), hiding places, nesting material and a spinning wheel; the positions of these items were changed within the cage at regular intervals (on Mondays), and the items themselves were replaced once a week (on Thursdays). Following weaning, ST mice were individually housed in standard cages (dimensions: 30 × 19 × 12.5 cm) with only nesting material (Supplementary Fig. S1). Mice remained in their respective environments until the day of data acquisition.

Surgical procedures.
Adult animals were first weighed (see Supplementary Table S1 for mouse weights) and sedated with intraperitoneal chlorprothixene (5 mg/kg) to allow administration of a lower dose of the anesthetic. Mice were then anesthetized with intraperitoneal urethane (1 g/kg, in saline) 30 min later. The subsequent surgical procedures were as previously described by Oliveira Ferreira Souza and colleagues^46. Briefly, atropine (0.05 mg/kg) was administered subcutaneously to reduce tracheal secretion and to counteract the parasympathomimetic effects of the anesthesia. Injectable lidocaine (0.2%) was used at incision sites, whereas lidocaine gel was used at all pressure points. To improve the animal's condition under prolonged anesthesia, a tracheotomy was performed^47. Animals were then placed on a stereotaxic apparatus, and a constant flow of oxygen was directed in front of the tracheal tube. Viscous artificial tears were frequently applied to avoid corneal dehydration. The scalp and connective tissue were removed to expose the occipital portion of the skull. The mouse cortex was imaged through the skull. A 10 mm wide metal imaging chamber was glued over the skull. Low-melting-point agarose (1% in saline) was used to fill the chamber, which was then sealed with a glass cover slip. Cardiac activity (by electrocardiogram with subdermal electrodes) and core body temperature (maintained around 37 °C using a heating pad feedback-controlled by a rectal thermoprobe) were monitored throughout the experiment. At the end of the experiments, animals were killed by an overdose of urethane.

… placed 10 cm from the bottom of the projection. The luminance of the screen ranged from 0.29 cd/m² (black) to 50 cd/m² (white). Stimuli were generated by the Vpixx software (version 3.20, Vpixx Technologies, Saint-Bruno, QC, Canada). Continuous periodic stimulation to generate retinotopic maps consisted of a vertical or horizontal 20°-wide bar spanning the full length of the screen in the orientation of propagation (corrected spherically for projection on a planar screen), drifting over a gray background in four directions (0, 90, 180 and 270°) at 0.15 Hz for 800 s^40,45. The bar contained a black-and-white checkerboard pattern flickering at 6 Hz between black and white 25° squares to better stimulate the visual system. The stimulations were presented monocularly (screen at a 60° angle from the mouse's midline). The order of stimulations was randomized.

Data acquisition and processing. Images were captured using a CMOS camera (Photonfocus A1312, Switzerland) coupled to a macro lens (Nikon, AF Micro Nikkor, 60 mm). Images were sampled at 7.5 Hz (camera exposure time of 33 ms, with all frames averaged to 7.5 Hz) at a resolution of 1312 × 1082 pixels (spatial resolution of 5.5 µm × 5.5 µm/pixel). Data acquisition was controlled by a Brain Imager 3001 system through the LDAQ software (Optical Imaging Ltd., Rehovot, Israel). Anatomical references were made under illumination at a 545 ± 20 nm wavelength. Intrinsic signal recordings were performed under illumination at a 630 ± 30 nm wavelength, at a focus of approximately 100 µm below the cortical surface. Data analysis was performed through custom scripts in MATLAB (version R2017b, The MathWorks, Inc., Natick, Massachusetts, United States). Images were coregistered to remove movement artifacts using only rigid transformations. A global signal regression was then performed to remove any light fluctuations.
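As a rough illustration of this last preprocessing step, the sketch below regresses the frame-averaged global signal out of every pixel. This is one common implementation of global signal regression, not necessarily the exact MATLAB procedure used in the study.

```python
import numpy as np

def global_signal_regression(frames):
    """frames: (T, H, W) imaging stack. Remove global light-level fluctuations
    by regressing the mean frame signal out of every pixel time course
    (a standard reading of the step described above; assumed, not verbatim)."""
    T = frames.shape[0]
    X = frames.reshape(T, -1).astype(float)
    g = X.mean(axis=1)                         # global signal, one value per frame
    g = (g - g.mean()) / (g.std() + 1e-12)     # standardize the regressor
    beta = (g @ (X - X.mean(axis=0))) / T      # per-pixel regression slope
    return (X - np.outer(g, beta)).reshape(frames.shape)
```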
For each direction recorded, we performed a Fourier transform at the frequency of the visual stimulation to extract the phase and amplitude components of the signal^40. Relative retinotopic maps in opposite directions were corrected for the hemodynamic delay to produce retinotopic maps in each orientation (azimuth and elevation) relative to neural activity. Maps of the sine of the difference between the azimuth and elevation retinotopic gradients (the visual field sign) were then generated to identify cortical visual areas^44,48. We established borders at reversals of the visual field sign at peripheral representations^41. Following this step, three members of the laboratory independently delineated each map (vertical or horizontal retinotopy) through the analysis of a combination of the different maps previously generated (phase, amplitude and hemodynamic delay) and texture analysis techniques (entropy, standard deviation and range), without knowing to which group each mouse pertained. Each pixel was then evaluated: at least two raters had to have selected a given pixel for it to be considered within the activated area, and the overlap of this considered area had to be over 70% of the total area delimited. Once the consensus delimitations were made, we measured the level of overlap between the vertical and the horizontal map for each cortical hemisphere of each mouse. Only data from mice with an overlap greater than 70% were analyzed further, to allow for good segmentation of V1 and the extrastriate areas. Moreover, data from only one hemisphere per mouse were kept (the acquisition that showed the greater overlap, which correlated with a greater signal-to-noise ratio). A total of 41 mice were imaged in this study; of these, 16 out of 21 mice met the criteria for the EE group (76%), and 12 out of 20 mice for the ST group (60%). Subsequent analyses were based on the housing environment of the mice (ST or EE); we also made comparisons based on gender within each group. We calculated the cortical area activated (total, V1 and the lateral extrastriate areas, namely the anterolateral area, the laterointermediate area, the lateral anterior area, the lateromedial area and the rostrolateral area). We then determined the following in V1: the amplitude of the signal, the visual field coverage, the range of vision in azimuth and elevation as previously assessed in our laboratory^46, the scatter index as determined by Cang and colleagues^49, and the cortical magnification factor (as well as the cortical magnification factor versus eccentricity and the cortical area versus eccentricity) using a methodology similar to that of Garrett and colleagues^42, as explained throughout the Results section.
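To make the mapping pipeline concrete, here is a minimal NumPy sketch of the visual-field-sign computation and of the two-of-three consensus rule described above. It is an illustration under stated assumptions: the paper's analysis used custom MATLAB scripts, the gradient and angle conventions here are generic ones from the cited field-sign method, and the exact reading of the 70% overlap criterion is ours.

```python
import numpy as np

def visual_field_sign(azimuth, elevation):
    """Field-sign map from azimuth/elevation retinotopic maps (2D arrays of
    preferred visual-field position per pixel): the sine of the angle between
    the two retinotopic gradients, as in the cited method (refs. 44, 48)."""
    day, dax = np.gradient(azimuth)
    dey, dex = np.gradient(elevation)
    return np.sin(np.arctan2(day, dax) - np.arctan2(dey, dex))

def consensus_mask(rater_masks, min_votes=2, min_overlap=0.70):
    """Two-of-three consensus over independent delineations (boolean masks),
    kept only if it covers at least 70% of the union of the delineations
    (hypothetical helper; one plausible reading of the criterion)."""
    masks = np.asarray(rater_masks, dtype=bool)
    consensus = masks.sum(axis=0) >= min_votes
    union = masks.any(axis=0)
    if union.sum() == 0 or consensus.sum() / union.sum() < min_overlap:
        return None
    return consensus
```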
Statistics. To determine whether data were normally distributed within groups, the Kolmogorov-Smirnov test was performed. When data were normally distributed, comparisons were made using unequal-variances t-tests (Welch's t-test); otherwise, Wilcoxon rank-sum tests were performed. Our main assumption being that mice reared in an EE had a developmental advantage, since brain size and weight are increased with an EE, we considered one-tailed comparisons. However, for amplitude and for gender comparisons, we performed two-tailed comparisons, as we found no indication in the literature for the direction of change. P-values less than 0.05 were considered significant (corrections for multiple testing were made when appropriate using the Bonferroni correction). The JASP statistical package (version 0.14.1.0, JASP Team, Amsterdam, The Netherlands) and Microsoft Excel software (version 16.0, Microsoft Inc., Redmond, WA, USA) were used as complementary statistical tools.

Results

Size of visual cortical areas. To assess potential organizational and functional changes in the visual cortex, we obtained retinotopic maps with optical imaging of intrinsic signals of adult mice that developed in either an enriched environment (EE) or standard (ST) conditions^40,45. We hypothesized that the primary visual area of the cortex would be mainly affected, as it is more directly susceptible to changes due to the environment. To determine this, we first delineated V1 and the lateral extrastriate areas, namely the anterolateral area, the laterointermediate area, the lateral anterior area, the lateromedial area and the rostrolateral area, as these were the most consistently activated in our cohorts^39,42. The visual cortical area was larger and more readily identifiable in mice reared in an EE throughout development into adulthood compared to age-matched standard control animals (Fig. 1A-F). Our population of interest included 12 ST mice and 16 EE mice, since these animals met our inclusion criteria (see "Materials and methods"). This represented 60% of the ST mice imaged, and 76% of the EE mice imaged. The average size of our delimitations was 3.38 ± 0.54 mm² for ST mice (median of 3.34 mm²) compared to 3.86 ± 0.39 mm² for EE mice (median of 4.01 mm²; p = 0.0074; Fig. 1G). The differences were mainly explained by a significant increase in the size of V1, with an average area of 2.36 mm² … (Table 1). In addition, we observed no particular clustering of data due to litters (Supplementary Fig. S2 and Table S2). No statistics could be performed to establish whether there was a litter effect, given the small number of animals per group. To evaluate whether the difference in visual topography relates to visual function, we analyzed different parameters within V1 (amplitude of the response, visual field coverage, scatter index, cortical magnification factor and eccentricity).

Signal amplitude. In order to assess potential differences in signal response, we evaluated the ΔR/R (change in reflectance over baseline reflectance) in our mouse populations. We first established that there were no differences due to the order of presentation of the visual stimuli. Figure 2 shows all averaged signal responses recorded per direction of stimulation per animal. Descriptive statistics indicated that ST mice had more variability in their response ranges, as their interquartile range was larger (…). We noted an overall trend towards higher amplitude levels for ST mice, especially in elevation (Fig. 2A,B). This trend became more apparent when we compared signal responses by gender, with EE male mice having the lowest values, close to significance compared to EE females (p = 0.0301, with p < 0.0250 required for significance due to multiple testing; Fig. 2C). There were no differences between ST males and females (p = 0.4579).

Visual field coverage. As already suspected from the retinotopic maps, the visual field coverage in V1 of mice that had grown in an EE was vaster than that of ST mice: the former could see an area of 227,293 ± 64,502 mm², compared to 182,864 ± 55,703 mm² for the latter (p = 0.0311; Fig. 3A). Moreover, the range of phases in both orientations was also greater in EE mice (Fig. 3B,C): these mice could see 528.39° …
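The group comparisons reported in this and the following subsections follow the test-selection procedure given in the Statistics section; a minimal Python sketch (scipy is assumed, and the one-tailed options and Bonferroni corrections applied in the paper are omitted):

```python
import numpy as np
from scipy import stats

def compare_groups(a, b, alpha=0.05):
    """Kolmogorov-Smirnov normality check per group, then Welch's t-test if
    both groups pass, otherwise a Wilcoxon rank-sum test (our sketch of the
    procedure described in Methods, not the JASP/Excel pipeline itself)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    def looks_normal(x):
        z = (x - x.mean()) / x.std(ddof=1)   # standardize before the K-S test
        return stats.kstest(z, "norm").pvalue > alpha
    if looks_normal(a) and looks_normal(b):
        return "welch", stats.ttest_ind(a, b, equal_var=False).pvalue
    return "ranksum", stats.ranksums(a, b).pvalue
```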
Scatter index. Cang and colleagues^49 and previous studies from our laboratory have compared the scatter index of different mouse populations^46,50. This index allows exploring the finer organization of the progression of the visual field through the retinotopic map in V1. A low scatter index indicates a higher quality of the retinotopy. A tighter cortical organization can be due to cortical refinement and is somewhat detectable through optical imaging. Here, we evaluated the local standard deviation from neighboring pixels (sliding windows) along each axis. EE mice had a scatter index similar to (even tending to be slightly higher than) that of ST mice in azimuth (Fig. 4A) …

Cortical magnification factor. We established the cortical magnification factor (CMF) of each animal by measuring the average distance in visual field position between each pixel and its immediately surrounding pixels (8 pixels). We expected to see an increase in the average distance covered per set of pixels in the EE mouse population, since their visual field coverage was wider. However, we hypothesized that these differences could be masked by a more refined central vision. Taking these aspects into consideration, we also averaged the minimum and maximum distances, and the range of distances (Fig. 5A-D). As such, the average distance covered by adjacent pixels was indeed larger in mice reared in an EE compared to controls (1.31 ± 0.41 mm/° versus 0.94 ± 0.35 mm/°, p = 0.080; Fig. 5A). We found no significant differences between the averaged minimum distances (0.52 ± 0.34 mm/° for ST mice versus 0.77 ± 0.40 mm/° for EE mice, p = 0.0452; Fig. 5B) … (Fig. 5D), although we noted an increase in both the averaged minimum and maximum distances for the population of mice that lived in an EE.

Eccentricity. To further explore the effects of environmental enrichment on cortical magnification, we assessed the area allocated to different eccentricities of the visual field along V1. In order to do so, we compared the area dedicated per range of eccentricity (of 10°) within a comparable area of V1 (Fig. 6A). Interestingly, when performing a repeated-measures ANOVA (Levene's test for equality of variance was first passed), we found an effect by eccentricity (p < 0.001) and by group (p < 0.001). Post hoc analyses revealed a particularly significant difference at 20° (p = 0.0009 from a two-tailed t-test with a threshold level of p < 0.0083), with 0.54 ± 0.08 mm² for ST mice compared to 0.40 ± 0.05 mm² for EE mice. We executed a two-tailed comparison, since we expected a similar distribution of the area per eccentricity between the two groups. In addition, we determined the CMF per eccentricity (Fig. 6B), performing the same analyses as with the area per eccentricity. We again found an effect by eccentricity (p < 0.001) and by group (p = 0.011). Taken together, these data strongly suggest that enrichment during development has profound effects on cortical organization and function, geared towards a better integration of the surroundings; more specifically, an increased size of V1, a greater visual field and a refined visual cortex organization.

Gender. Except for the tendency reported above within the EE population regarding amplitude, no differences or trends based on gender were observed in any of the other parameters (Figs. 1, 2, 3, 4, 5).
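For concreteness, the sketch below gives one plausible reading of the scatter-index and neighborhood-distance computations used in the last three subsections. The window size and normalizations are our assumptions; in particular, the paper reports the CMF in mm/°, whereas this sketch returns visual-field distances per pixel step, the reciprocal quantity up to the pixel size.

```python
import numpy as np

def scatter_index(phase_map, win=3):
    """Mean local standard deviation of a retinotopic phase map over
    win x win sliding windows (assumed window size)."""
    h, w = phase_map.shape
    r = win // 2
    local_sd = [phase_map[i - r:i + r + 1, j - r:j + r + 1].std()
                for i in range(r, h - r) for j in range(r, w - r)]
    return float(np.mean(local_sd))

def neighbour_distances(azim, elev):
    """Visual-field distance from each pixel to its 8 neighbours; returns the
    map-averaged mean, minimum, maximum and range (cf. Fig. 5A-D)."""
    h, w = azim.shape
    means, mins, maxs = [], [], []
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            d = [np.hypot(azim[i, j] - azim[i + di, j + dj],
                          elev[i, j] - elev[i + di, j + dj])
                 for di in (-1, 0, 1) for dj in (-1, 0, 1) if (di, dj) != (0, 0)]
            means.append(np.mean(d)); mins.append(min(d)); maxs.append(max(d))
    mins, maxs = np.array(mins), np.array(maxs)
    return (float(np.mean(means)), float(mins.mean()),
            float(maxs.mean()), float((maxs - mins).mean()))
```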
Discussion

In this study, we report that an enriched environment (EE) during various critical periods of development, from the prenatal period to adulthood, has an impact on the structure and function of the visual cortex. Specifically, mice reared in an EE developed a larger visual cortical area compared to standard (ST) mice. This effect was mainly due to differences in the size of V1. To understand how this difference translates functionally, we evaluated parameters of vision in V1, of which the visual field coverage, the cortical magnification factor and the eccentricity topography were significantly affected.

In order to assess the differential effects of the two environments, we first delineated V1 and the lateral extrastriate cortex through retinotopic mapping obtained by intrinsic signal optical imaging (ISOI). There were striking increases in the dimensions of V1 in the population of mice reared in an EE. Further analyses demonstrated that these expansions were not based on gender, as both females and males benefited from their EE. We had initially hypothesized that an EE would produce a wider range of effects, with subjects falling somewhere on a spectrum from low to high responders to novelty and diversification, depending on how much individual mice actively explore and engage with the environment they are in, even when they have genetically identical backgrounds^51,52. Indeed, various studies, although not all, have indicated that there are higher levels of variability within EE mice, depending on what is measured^52,53. Contrary to our expectations, the levels of variability appeared similar among mice from either an ST or an EE, with the EE mouse population tending towards narrower interquartile ranges.

There are a large number of studies with differing enrichment paradigms that emphasize particular elements (the relevance of the context of the animal's environment is reviewed in^53). Here, we established a level of enrichment that was minimal (few variables) and easily maintained, and that showed measurable outcomes, to better understand enrichment. For illustrative purposes, we focus on the study by Freund and colleagues^51. They followed a cohort of 40 inbred female C57BL/6N mice, from as many litters as possible, that was placed in a rather complex yet static environment at 4 weeks old and remained there for 3 months. They noted that individuality increased with age. Our results do not show this trend; however, our EE mouse population was much smaller (16 subjects). In addition, our small variability could be the result of all mice being born from only three sets of litters (two pregnant females per cage, three times), rendering our cohort more uniform (reviewed in^54). Furthermore, novelty was introduced twice a week in our cages.

Developmental studies in both animal models and humans have demonstrated that the pace of development varies according to different factors (reviewed in^34). Critical periods are flexible: comfortable and stimulating environments are permissive of longer plasticity windows. Gopnik^55 argues that there are explore-exploit tensions that allow the transition from childhood to adulthood, where 'explore' is a learning phase and 'exploit' a phase for skilled action. To avoid missing completed developmental stages, we made our recordings in young adulthood.
We also placed our pregnant mice into their respective environments 1 week prior to giving birth, since the perinatal/prenatal period appears to be a sensitive period for cortical surface area development^37. From our results, we can therefore cautiously speculate that our EE identifies a possible threshold for beneficial impacts. More specifically, our EE paradigm offers enough stimulation to have a positive influence on brain structure and function, as determined by the parameters studied. This does not rule out the possibility that greater enrichment could trigger more complex and graded effects. However, this raises the question of what the appropriate levels of enrichment are. Calhoun's studies from the 1960s and 1970s, in which he created mouse or rat utopias, clearly indicate that there are ceilings to enrichment paradigms^56. Too little causes deficiencies and too much causes excesses, both ends of the spectrum inducing anxiety. In line with these discoveries, male Ts65Dn mice, a partial trisomy of chromosome 16 and a model of Down syndrome, did not profit from environmental enrichment; on the contrary, they had decreased learning capacities compared to mice living in a less enriched environment^57. A previous study from the same group had found that females benefited more than males from an EE with regard to spatial memory assessed by the Morris water maze^58. Female mice appear more susceptible to stress than males^54, furthering the notion that optimal living conditions allow for greater adaptability. However, more studies are needed to elucidate whether or not there is a gender advantage^59. We did not observe one with the parameters we measured.

The visual system of our EE mouse population was solicited differently from that of our ST mouse population, and the organization of the visual map was therefore functionally affected. Quite interestingly, environmental enrichment during development stimulated the visual system to detect wider horizons. This was observed in both axes, although predominantly in azimuth. One could argue that our enrichment was more prominent in that plane. Interestingly, a recent study has demonstrated that humans have radial asymmetries of the visual field, where the visual cortical area dedicated to the azimuth predominates^60. The latter topography correlates with greater visual task performance in the horizontal axis (better acuity). In addition, attentional redistribution across the visual field has been shown in humans following training. For instance, the regular practice of sign language causes resources to focus on the inferior visual field, as assessed by a visual search task^61: signers exhibit improved attention in the lower visual field.

An in-depth analysis of the organization of V1 showed that the two mouse populations have different topographies within V1. Indeed, larger V1 areas were dedicated to smaller eccentricities in the visual cortex of ST mice. From 40° of eccentricity onwards, there was a switch, beyond which bigger areas of V1 were devoted in mice from an EE compared to ST mice. The widest gap between the two groups was particularly evident at 20° of eccentricity, with ST mice exhibiting a marked increase in comparison to EE mice. ST mice also showed a pronounced change in the allocation of area across the ranges of eccentricities from closest to furthest, whereas the allocation was more constant in EE mice. Moreover, there was an effect on the overall cortical magnification factor (CMF), but also per range of eccentricity.
Within comparable functional areas of V1, as determined by the maximal eccentricity interval, ST mice exhibited larger CMFs at all eccentricities. However, mice reared in an EE had a wider field of vision and a larger surface of V1, and ultimately a wider average distance covered by sets of neighboring pixels. These data suggest that the discrepancies could originate from a lack of stimulation within the ST mouse population and an increased focus on what is right in front of them, while the visual cortex of the EE mouse population had to accommodate a diversified reality.

Although it was suspected that only V1 would show drastic effects (as it is the region most affected by the environment within the visual cortical hierarchy^62), we noted that differences between the two populations were apparent throughout the delimited areas, based on the quality of the retinotopic maps. Perhaps this is due to an important circuitry linking V1 to the extrastriate areas, present in mice and more prominent than in other, higher mammalian species^63. Although the size differences were only significant for V1, we argue that this is probably because V1 is the largest area, making differences more readily observable there. More work into the subtleties of these differences will shed interesting light on the effects within the mouse visual cortex organization. Given the recent exponential growth of research dedicated to the mouse visual system, it is a great model for continuing to explore the benefits of environmental enrichment, especially since enrichment in general is not modality specific (reviewed in^53).

In conclusion, our study provides clear, measurable effects of an EE on the structure and function of the developing visual cortex, as determined by ISOI. Functional delimitations of surface area have seldom been performed in animal or human studies alike; hence these data add to our understanding of how flexible these processes are. Further discerning between all the variables will allow better implementation of changes throughout development, certain elements potentially being more relevant at specific ages. Not only the nature but also the amount of enrichment needs to be carefully addressed^33.

Data availability. The datasets generated and/or analysed during the current study are available from the corresponding author on reasonable request.
Undecidable First-Order Theories of Affine Geometries

Tarski initiated a logic-based approach to formal geometry that studies first-order structures with a ternary betweenness relation β and a quaternary equidistance relation ≡. Tarski established, inter alia, that the first-order (FO) theory of (R², β, ≡) is decidable. Aiello and van Benthem (2002) conjectured that the FO-theory of expansions of (R², β) with unary predicates is decidable. We refute this conjecture by showing that for all n > 1, the FO-theory of the class of expansions of (R^n, β) with just one unary predicate is not even arithmetical. We also define a natural and comprehensive class C of geometric structures (T, β), and show that for each structure (T, β) in C, the FO-theory of the class of expansions of (T, β) with a single unary predicate is undecidable. We then consider classes of expansions of structures (T, β) with a restricted unary predicate, for example a finite predicate, and establish a variety of related undecidability results. In addition to decidability questions, we briefly study the expressivities of universal MSO and weak universal MSO over expansions of (R^n, β). While the logics are incomparable in general, over expansions of (R^n, β) formulae of weak universal MSO translate into equivalent formulae of universal MSO.

Introduction

Decidability of theories of (classes of) structures is a central topic in various fields of computer science and mathematics, with different motivations and objectives depending on the field in question. In this article we investigate formal theories of geometry in the framework introduced by Tarski [21,22]. The logic-based framework was originally presented in a series of lectures given in Warsaw in the 1920s. The system is based on first-order structures with two predicates: a ternary betweenness relation β and a quaternary equidistance relation ≡. Within this system, β(u, v, w) is interpreted to mean that the point v is between the points u and w, while xy ≡ uv means that the distance from x to y is equal to the distance from u to v. The betweenness relation β can be considered to simulate the action of a ruler, while the equidistance relation ≡ simulates the action of a compass. See [22] for information about the history and development of Tarski's geometry.

Tarski established in [21] that the first-order theory of (R², β, ≡) is decidable. In [1], Aiello and van Benthem pose the question: "What is the complete monadic Π¹₁ theory of the affine real plane?" By the affine real plane, the authors refer to the structure (R², β). The monadic Π¹₁-theory of (R², β) is of course essentially the same as the first-order theory of the class of expansions (R², β, (P_i)_{i∈N}) of the affine real plane (R², β) by unary predicates P_i ⊆ R². Aiello and van Benthem conjecture that the theory is decidable. Expansions of (R², β) with unary predicates are especially relevant in investigations related to the geometric structure (R², β), since in this context unary predicates correspond to regions of the plane R².
In this article we study structures of the type (T, β), where T ⊆ R^n and β is the canonical Euclidean betweenness predicate restricted to T; see Section 2.3 for the formal definition. Let E_(T,β) denote the class of expansions (T, β, (P_i)_{i∈N}) of (T, β) with unary predicates. We identify a significant collection of canonical structures (T, β) with an undecidable first-order theory of E_(T,β). Informally, if there exists a flat two-dimensional region R ⊆ R^n, no matter how small, such that T ∩ R is in a certain sense sufficiently dense with respect to R, then the first-order theory of the class E_(T,β) is undecidable. If the related density conditions are satisfied, we say that T extends linearly in 2D; see Section 2.3 for the formal definition. We prove that for any T ⊆ R^n, if T extends linearly in 2D, then the FO-theory of E_(T,β) is Σ⁰₁-hard. In addition, we establish that for all n ≥ 2, the first-order theory of E_(R^n,β) is Π¹₁-hard, and therefore not even arithmetical. We thereby refute the conjecture of Aiello and van Benthem from [1]. The results are ultimately based on tiling arguments. The result establishing Π¹₁-hardness relies on the recurrent tiling problem of Harel [14], once again demonstrating the usefulness of Harel's methods.

Our results establish undecidability for a wide range of monadic expansion classes of natural geometric structures (T, β). In addition to (R², β), such structures include, for example, the rational plane (Q², β), the real unit cube ([0, 1]³, β), and the plane of algebraic reals (A², β), to name a few.

In addition to investigating monadic expansion classes of the type E_(T,β), we also study classes of expansions with restricted unary predicates. Let n be a positive integer and let T ⊆ R^n. Let F_(T,β) denote the class of structures (T, β, (P_i)_{i∈N}), where the sets P_i are finite subsets of T. We establish that if T extends linearly in 2D, then the first-order theory of F_(T,β) is undecidable. An alternative reading of this result is that the weak universal monadic second-order theory of (T, β) is undecidable. We obtain a Π⁰₁-hardness result by an argument based on the periodic torus tiling problem of Gurevich and Koryakov [12]. The torus tiling argument can easily be adapted to deal with various kinds of natural classes of expansions of geometric structures (T, β) with restricted unary predicates. These include classes with unary predicates denoting, for example, polygons, finite unions of closed rectangles, and real algebraic sets (see [8] for the definition).

Our results could turn out to be useful in investigations concerning logical aspects of spatial databases. There is a canonical correspondence between (R², β) and (R, 0, 1, ·, +, <), see [13]. See the survey [17] for further details on logical aspects of spatial databases.

The betweenness predicate is also studied in spatial logic [3]. Recent years have witnessed a significant increase in research on spatially motivated logics. Several interesting systems with varying motivations have been investigated; see for example the articles [1,4,5,15,16,18,20,23,24]. See also the surveys [2] and [6] in the Handbook of Spatial Logics [3], and the Ph.D.
thesis [11]. Several of the above articles investigate fragments of first-order theories by way of modal logics for affine, projective, and metric geometries. Our results contribute to the understanding of spatially motivated first-order languages, and hence they can be useful in the search for decidable (modal) spatial logics.

In addition to studying issues of decidability, we briefly compare the expressivities of universal monadic second-order logic ∀MSO and weak universal monadic second-order logic ∀WMSO. It is straightforward to observe that in general, the expressivities of ∀MSO and ∀WMSO are incomparable in a rather strong sense: ∀MSO ≰ WMSO and ∀WMSO ≰ MSO. Here MSO and WMSO denote monadic second-order logic and weak monadic second-order logic, respectively. The result ∀WMSO ≰ MSO follows from already existing results (see [10] for example), and the result ∀MSO ≰ WMSO is more or less trivial to prove. While ∀MSO and ∀WMSO are incomparable in general, the situation changes when we consider expansions (R^n, β, (R_i)_{i∈I}) of the structure (R^n, β), i.e., structures embedded in the geometric structure (R^n, β). Here (R_i)_{i∈I} is an arbitrary vocabulary and I an arbitrary related index set. We show that over such structures, sentences of ∀WMSO translate into equivalent sentences of ∀MSO. The proof is based on the Heine-Borel theorem.

The structure of the current article is as follows. In Section 2 we define the central notions needed in the later sections. In Section 3 we compare the expressivities of ∀MSO and ∀WMSO. In Section 4 we show undecidability of the first-order theory of the class of monadic expansions of any geometric structure (T, β) such that T extends linearly in 2D. In addition, we show that for n ≥ 2, the first-order theory of monadic expansions of (R^n, β) is not on any level of the arithmetical hierarchy. In Section 5 we modify the approach of Section 4 and show undecidability of the FO-theory of the class of expansions by finite unary predicates of any geometric structure (T, β) such that T extends linearly in 2D.

Interpretations

Let σ and τ be relational vocabularies. Let A be a nonempty class of σ-structures and C a nonempty class of τ-structures. Assume that there exists a surjective map F from C onto A and a first-order τ-formula ϕ_Dom(x) in one free variable, x, such that for each structure B ∈ C, there is a bijection f from the domain of F(B) to the set { b ∈ Dom(B) | B |= ϕ_Dom(b) }. Assume, furthermore, that for each relation symbol R ∈ σ, there is a first-order τ-formula ϕ_R(x_1, ..., x_Ar(R)) such that

F(B) |= R(a_1, ..., a_Ar(R))  iff  B |= ϕ_R(f(a_1), ..., f(a_Ar(R)))

for every tuple (a_1, ..., a_Ar(R)) ∈ (Dom(F(B)))^Ar(R). Here Ar(R) is the arity of R. We then say that the class A is uniformly first-order interpretable in C. If A is a singleton class {A}, we say that A is uniformly first-order interpretable in C.

Assume that a class of σ-structures A is uniformly first-order interpretable in a class C of τ-structures. Let P be a set of unary relation symbols such that P ∩ (σ ∪ τ) = ∅. Define a map I from the set of first-order (σ ∪ P)-formulae to the set of first-order (τ ∪ P)-formulae as follows: I replaces each atomic formula R(x_1, ..., x_k) with R ∈ σ by the formula ϕ_R(x_1, ..., x_k), where ϕ_R(x_1, ..., x_k) is the first-order formula for R witnessing the fact that A is uniformly first-order interpretable in C; it leaves atomic formulae over P and equalities untouched, commutes with the connectives, and relativizes all quantifiers to ϕ_Dom. We call the map I the P-expansion of a uniform interpretation of A in C. When A and C are known from the context, we may call I simply a P-interpretation. In the case where P is empty, the map I is a uniform interpretation of A in C.
Lemma 2.1. Let σ and τ be finite relational vocabularies. Let A be a class of σ-structures and C a class of τ-structures. Assume that A is uniformly first-order interpretable in C. Let P be a set of unary relation symbols such that P ∩ (σ ∪ τ) = ∅. Let I denote a related P-interpretation. Let ϕ be a first-order (σ ∪ P)-sentence. The following conditions are equivalent.

1. There exists an expansion A* of a structure A ∈ A to the vocabulary σ ∪ P such that A* |= ϕ.
2. There exists an expansion B* of a structure B ∈ C to the vocabulary τ ∪ P such that B* |= I(ϕ).

Proof. Straightforward.

Logics and structures

Monadic second-order logic, MSO, extends first-order logic with quantification of relation symbols ranging over subsets of the domain of a model. In universal (existential) monadic second-order logic, ∀MSO (∃MSO), the quantification of monadic relations is restricted to universal (existential) prenex quantification at the beginning of formulae. The logics ∀MSO and ∃MSO are also known as monadic Π¹₁ and monadic Σ¹₁. Weak monadic second-order logic, WMSO, is a semantic variant of monadic second-order logic in which the quantified relation symbols range over finite subsets of the domain of a model. The weak variants ∀WMSO and ∃WMSO of ∀MSO and ∃MSO are defined in the obvious way.

Let L be any fragment of second-order logic. The L-theory of a structure M of a vocabulary τ is the set of τ-sentences ϕ of L such that M |= ϕ.

Define two binary relations H, V ⊆ N² × N² as follows: H = { ((i, j), (i + 1, j)) | i, j ∈ N } and V = { ((i, j), (i, j + 1)) | i, j ∈ N }. We let G denote the structure (N², H, V), and call it the grid. The relations H and V are called the horizontal and vertical successor relations of G, respectively. A supergrid is a structure of the vocabulary {H, V} that has G as a substructure. We denote the class of supergrids by G.

Let (G, R) be the expansion of G, where R = { ((0, i), (0, j)) ∈ N² × N² | i < j }. We denote the structure (G, R) by R, and call it the recurrence grid.

Let m and n be positive integers. Define two binary relations H_{m,n}, V_{m,n} ⊆ (m × n)² as follows: H_{m,n} = { ((i, j), ((i + 1) mod m, j)) | (i, j) ∈ m × n } and V_{m,n} = { ((i, j), (i, (j + 1) mod n)) | (i, j) ∈ m × n }. (Note that we define m = {0, ..., m − 1}, and analogously for n.) We call the structure (m × n, H_{m,n}, V_{m,n}) the m × n torus and denote it by T_{m,n}. A torus is essentially a finite grid whose east border wraps back to the west border and whose north border wraps back to the south border.

Below we study geometric betweenness structures of the type (T, β_T), where T ⊆ R^n and β_T = β ∩ T³. Here β_T is the restriction of the betweenness predicate β of R^n to the set T. To simplify notation, we usually refer to these structures by (T, β).

Let T ⊆ R^n and let β be the corresponding betweenness relation. We say that L ⊆ T is a line in T if the following conditions hold.

1. There exist points s, t ∈ L such that s ≠ t.
2. For all s, t, u ∈ L, the points s, t, u are collinear.

Let T ⊆ R^n and let L_1 and L_2 be lines in T. We say that L_1 and L_2 intersect if L_1 ∩ L_2 ≠ ∅. An m-dimensional flat in R^n is a set of the form S = { u + r_1 v_1 + ... + r_m v_m | r_1, ..., r_m ∈ R }, where u ∈ R^n is a point and v_1, ..., v_m ∈ R^n are vectors; none of the vectors v_i is allowed to be the zero vector.

A set U ⊆ R^n is a linearly regular m-dimensional flat, where 0 ≤ m ≤ n, if the following conditions hold.

1. There exists an m-dimensional flat S such that U ⊆ S.
2. There does not exist any (m − 1)-dimensional flat S such that U ⊆ S.
3. U is linearly complete, i.e., if L is a line in U and L′ ⊇ L the corresponding line in R^n, and if r ∈ L′ is a point and ε ∈ R⁺ a positive real number, then there exists a point s ∈ L such that d(s, r) < ε. Here d is the canonical metric of R^n.
4.
U is linearly closed, i.e., if L_1 and L_2 are lines in U whose corresponding lines intersect in R^n, then the lines L_1 and L_2 themselves intersect; in other words, there exists a point u ∈ L_1 ∩ L_2.

A set T ⊆ R^n extends linearly in mD, where m ≤ n, if there exists a linearly regular m-dimensional flat S, a positive real number ε ∈ R⁺ and a point x ∈ S ∩ T such that { u ∈ S | d(x, u) < ε } ⊆ T. It is easy to show that, for example, Q² extends linearly in 2D.

Tilings

A function t : 4 → N is called a tile type. Define the set TILES := { P_t | t is a tile type } of unary relation symbols. The unary relation symbols in the set TILES are called tiles. The numbers t(i) of a tile P_t are the colours of P_t. The number t(0) is the top colour, t(1) the right colour, t(2) the bottom colour, and t(3) the left colour of P_t.

Let T be a finite nonempty set of tiles. We say that a structure A = (A, V, H), where V, H ⊆ A², is T-tilable if there exists an expansion of A to the vocabulary {H, V} ∪ { P_t | P_t ∈ T } such that the following conditions hold.

1. Each point of A belongs to the extension of exactly one symbol P_t in T.
2. If uHv for some points u, v ∈ A, then the right colour of the tile P_t such that P_t(u) is the same as the left colour of the tile P_t′ such that P_t′(v).
3. If uVv for some points u, v ∈ A, then the top colour of the tile P_t such that P_t(u) is the same as the bottom colour of the tile P_t′ such that P_t′(v).

Let t ∈ T. We say that the grid G is t-recurrently T-tilable if there exists an expansion of G to the vocabulary {H, V} ∪ { P_t | P_t ∈ T } such that the above conditions 1-3 hold and, additionally, there exist infinitely many points (0, i) ∈ N² such that P_t((0, i)). Intuitively this means that the tile P_t occurs infinitely many times in the leftmost column of the grid G.

Let F be the set of finite, nonempty sets T ⊆ TILES, and let H := { (t, T) | T ∈ F and t ∈ T }. Define the following languages:

T := { T ∈ F | the grid G is T-tilable },
R := { (t, T) ∈ H | the grid G is t-recurrently T-tilable },
S := { T ∈ F | some torus T_{m,n} is T-tilable }.

The tiling problem is the membership problem of the set T with the input set F. The recurrent tiling problem is the membership problem of the set R with the input set H. The periodic tiling problem is the membership problem of S with the input set F.

Theorem 2.2. [7] The tiling problem is Π⁰₁-complete.

Theorem 2.3. [14] The recurrent tiling problem is Σ¹₁-complete.

Theorem 2.4. [12] The periodic tiling problem is Σ⁰₁-complete.

Lemma 2.5. There is a computable function associating each input T to the (periodic) tiling problem with a first-order sentence ϕ_T of the vocabulary τ := {H, V} ∪ T such that for all structures A of the vocabulary {H, V}, the structure A is T-tilable iff there exists an expansion A* of A to the vocabulary τ such that A* |= ϕ_T.

Proof. Straightforward.

Lemma 2.6. There is a computable function associating each input (t, T) of the recurrent tiling problem with a first-order sentence ϕ_(t,T) of the vocabulary τ := {H, V, R} ∪ T such that the grid G is t-recurrently T-tilable iff there exists an expansion R* of the recurrence grid R to the vocabulary τ such that R* |= ϕ_(t,T).

Proof. Straightforward.

It is easy to see that the grid G is T-tilable iff there exists a supergrid G′ that is T-tilable.
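Conditions 1-3, instantiated on the torus T_{m,n} with its wrap-around successor relations H_{m,n} and V_{m,n}, amount to a finite check. A small Python sketch (the data layout and names are our own, not from the paper):

```python
def is_torus_tiling(tiling, tiles, m, n):
    """Check conditions 1-3 of a T-tiling on the m x n torus T_{m,n}.
    tiling[(i, j)] names the tile type placed at cell (i, j);
    tiles[name] is the 4-tuple (top, right, bottom, left), as in t: 4 -> N."""
    for i in range(m):
        for j in range(n):
            if (i, j) not in tiling or tiling[(i, j)] not in tiles:
                return False                  # condition 1: exactly one tile per cell
            top, right, bottom, left = tiles[tiling[(i, j)]]
            # condition 2: right colour matches the left colour of the
            # horizontal successor, which wraps east to west (H_{m,n})
            if right != tiles[tiling[((i + 1) % m, j)]][3]:
                return False
            # condition 3: top colour matches the bottom colour of the
            # vertical successor, which wraps north to south (V_{m,n})
            if top != tiles[tiling[(i, (j + 1) % n)]][2]:
                return False
    return True

# A single tile with equal opposite colours tiles every torus periodically:
tiles = {"a": (0, 1, 0, 1)}
tiling = {(i, j): "a" for i in range(4) for j in range(3)}
assert is_torus_tiling(tiling, tiles, 4, 3)
```

Enumerating all tilings of all tori T_{m,n} with such a checker makes the set S recursively enumerable, which is the easy half of Theorem 2.4.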
3 Expressivity of universal MSO and weak universal MSO over affine real structures (R^n, β)

In this section we investigate the expressive powers of ∀WMSO and ∀MSO. While it is rather easy to conclude that the two logics are incomparable in a rather strong sense (see Proposition 3.1), when attention is limited to structures (R^n, β, (R_i)_{i∈I}) that expand the affine real structure (R^n, β), sentences of ∀WMSO translate into equivalent sentences of ∀MSO.

Let L and L′ be fragments of second-order logic. We write L ≤ L′ if, for every vocabulary σ, any class of σ-structures definable by a σ-sentence of L is also definable by a σ-sentence of L′. Let τ be a vocabulary such that β ∉ τ. The class of all expansions of (R^n, β) to the vocabulary {β} ∪ τ is called the class of affine real τ-structures. Such structures can be regarded as τ-structures embedded in the geometric structure (R^n, β). We say that L ≤ L′ over (R^n, β) if, for every vocabulary τ s.t. β ∉ τ, any subclass definable w.r.t. the class C of all affine real τ-structures by a sentence of L is also definable w.r.t. C by a sentence of L′.

We sketch a canonical proof of the following very simple observation. The result ∀WMSO ≰ MSO follows from already existing results (see [10] for example), and the result ∀MSO ≰ WMSO is easy to prove.

Proposition 3.1. ∀WMSO ≰ MSO and ∀MSO ≰ WMSO.

Proof Sketch. It is easy to observe that ∀WMSO ≰ MSO: consider the sentence ∀X∃y ¬Xy. This ∀WMSO sentence is true in a model iff the domain of the model is infinite. A straightforward monadic second-order Ehrenfeucht-Fraïssé game argument can be used to establish that infinity is not expressible by any MSO sentence.

To show that ∀MSO ≰ WMSO, consider the structures (R, <) and (Q, <). The structures can be separated by a sentence of ∀MSO stating that every subset bounded from above has a least upper bound. To see that the two structures cannot be separated by any sentence of WMSO, consider the variant of the MSO Ehrenfeucht-Fraïssé game where the players choose finite sets in addition to domain elements. It is easy to establish that this game characterizes the expressivity of WMSO. To see that the duplicator has a winning strategy in a game of any finite length played on the structures (R, <) and (Q, <), we devise an extension of the folklore winning strategy in the corresponding first-order game. Firstly, the duplicator can obviously always pick an element whose order configuration corresponds exactly to that of the element picked by the spoiler. Furthermore, even if the spoiler picks a finite set, it is easy to see that the duplicator can pick his set so that each of its elements respects the order configuration of the set picked by the spoiler.

Theorem 3.2 (Heine-Borel). A set S ⊆ R^n is closed and bounded iff every open cover of S has a finite subcover.

Theorem 3.3. Let C be the class of expansions (R^n, β, P) of (R^n, β) with a unary predicate P, and let F ⊆ C be the subclass of C where P is finite. The class F is first-order definable with respect to C.

Proof. We shall first establish that a set T ⊆ R^n is finite iff it is closed, bounded, and consists of isolated points of T. Recall that an isolated point u of a set U ⊆ R^n is a point such that there exists some open ball B such that B ∩ U = {u}.
Assume T ⊆ R^n is finite. Since T is finite, we can find a minimum distance between points in the set T. Therefore it is clear that each point t ∈ T belongs to some open ball B such that B ∩ T = {t}, and hence T consists of isolated points. Similarly, since T is finite, each point b in the complement of T has some minimum distance to the points of T, and therefore b belongs to some open ball B ⊆ R^n \ T. Hence the set T is the complement of the union of open balls B such that B ⊆ R^n \ T, and therefore T is closed. Finally, since T is finite, we can find a maximum distance between the points in T, and therefore T is bounded.

Assume then that T ⊆ R^n is closed, bounded, and consists of isolated points of T. Since T consists of isolated points, it has an open cover C ⊆ Pow(R^n) such that each set in C contains exactly one point t ∈ T. The set C is an open cover of T, and by the Heine-Borel theorem, there exists a finite subcover D ⊆ C of the set T. Since D is finite and each set in D contains exactly one point of T, the set T must also be finite.

We then conclude the proof by establishing that there exists a first-order formula ϕ(P) stating that the unary predicate P is closed, bounded, and consists of isolated points. We will first define a formula parallel(x, y, t, k) stating that the lines defined by x, y and t, k are parallel in (R^n, β). We define

parallel(x, y, t, k) := x ≠ y ∧ t ≠ k ∧ ((collinear(x, y, t) ∧ collinear(x, y, k)) ∨ ¬∃z(collinear(x, y, z) ∧ collinear(t, k, z))).

We will then define first-order {β}-formulae basis_k(x_0, ..., x_k) and flat_k(x_0, ..., x_k, z) using simultaneous recursion. The first formula states that the vectors corresponding to the pairs (x_0, x_i), 1 ≤ i ≤ k, form a basis of a k-dimensional flat. The second formula states that the points z are exactly the points in the span of the basis defined by the vectors (x_0, x_i), the origin being x_0. First define basis_0(x_0) := x_0 = x_0 and flat_0(x_0, z) := x_0 = z. Then define basis_k and flat_k recursively in the following way:

basis_k(x_0, ..., x_k) := basis_{k−1}(x_0, ..., x_{k−1}) ∧ ¬flat_{k−1}(x_0, ..., x_{k−1}, x_k),
flat_k(x_0, ..., x_k, z) := basis_k(x_0, ..., x_k) ∧ ∃y(flat_{k−1}(x_0, ..., x_{k−1}, y) ∧ (z = y ∨ parallel(x_0, x_k, y, z))).

We then define a first-order {β, P}-formula sepr(x, P) asserting that x belongs to an open ball B such that each point in B \ {x} belongs to the complement of P. The idea is to state that there exist n + 1 points x_0, ..., x_n that form an n-dimensional triangle around x, and every point contained in the triangle (with x being a possible exception) belongs to the complement of P. Every open ball in R^n is contained in some n-dimensional triangle in R^n and vice versa. We will recursively define first-order formulae opentriangle_k(x_0, ..., x_k, z) stating that z is properly inside a k-dimensional triangle defined by x_0, ..., x_k. First define opentriangle_1(x_0, x_1, z) := β*(x_0, z, x_1), and then define

opentriangle_k(x_0, ..., x_k, z) := basis_k(x_0, ..., x_k) ∧ ∃y(opentriangle_{k−1}(x_0, ..., x_{k−1}, y) ∧ β*(y, z, x_k)).

We are now ready to define sepr(x, P). Let

sepr(x, P) := ∃x_0 ... ∃x_n (opentriangle_n(x_0, ..., x_n, x) ∧ ∀y((opentriangle_n(x_0, ..., x_n, y) ∧ y ≠ x) → ¬Py)).

Now, the sentence ϕ_1 := ∀x(¬Px → sepr(x, P)) states that each point in the complement of P is contained in an open ball B ⊆ R^n \ P. The sentence therefore states that the complement of P is a union of open balls. Since the set of unions of open balls is exactly the set of open sets, the sentence states that P is closed.

The sentence ϕ_2 := ∀x(Px → sepr(x, P)) clearly states that P consists of isolated points.
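Before completing the proof, note as an aside that the geometric primitives these formulas are built from can be prototyped numerically. The sketch below is not the paper's definitions: the floating-point tolerance and the direction-only parallel test are simplifying assumptions introduced here for illustration.

```python
# Numerical sketch of the betweenness primitives: beta, strict beta*,
# collinearity, and a simplified parallelism test for points of R^n.

import numpy as np

EPS = 1e-9

def beta(a, b, c):
    """beta(a, b, c): b lies on the closed segment from a to c.
    b is on the segment iff |a-b| + |b-c| equals |a-c|."""
    a, b, c = map(np.asarray, (a, b, c))
    return abs(np.linalg.norm(a - b) + np.linalg.norm(b - c)
               - np.linalg.norm(a - c)) < EPS

def beta_star(a, b, c):
    """Strict betweenness: b strictly between a and c."""
    return beta(a, b, c) and not np.allclose(a, b) and not np.allclose(b, c)

def collinear(a, b, c):
    """Three points are collinear iff one lies between the other two."""
    return beta(a, b, c) or beta(b, a, c) or beta(a, c, b)

def parallel(x, y, t, k):
    """Direction-only test: the segments x->y and t->k span a rank-1
    direction space (a simplification of the first-order formula above)."""
    x, y, t, k = map(np.asarray, (x, y, t, k))
    return np.linalg.matrix_rank(np.stack([y - x, k - t])) < 2

# Quick checks in R^2:
assert beta((0, 0), (1, 1), (2, 2))
assert parallel((0, 0), (1, 0), (0, 1), (5, 1))
```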
Finally, in order to state that P is bounded, we define a formula asserting that there exist points x_0, ..., x_n that form an n-dimensional triangle around P:

ϕ_3 := ∃x_0 ... ∃x_n (basis_n(x_0, ..., x_n) ∧ ∀y(Py → opentriangle_n(x_0, ..., x_n, y))).

The conjunction ϕ_1 ∧ ϕ_2 ∧ ϕ_3 states that P is finite.

Corollary 3.4. Limit attention to expansions of (R^n, β). Sentences of ∀WMSO translate into equivalent sentences of ∀MSO, and sentences of WMSO into equivalent sentences of MSO.

4 Undecidable theories of geometric structures with an affine betweenness relation

In this section we prove that the universal monadic second-order theory of any geometric structure (T, β) that extends linearly in 2D is undecidable. In addition, we show that the universal monadic second-order theories of the structures (R^n, β) with n ≥ 2 are highly undecidable. In fact, we show that the theories of structures extending linearly in 2D are Σ⁰₁-hard, while the theories of the structures (R^n, β) with n ≥ 2 are Π¹₁-hard, and therefore not even arithmetical. We establish the results by a reduction from the (recurrent) tiling problem to the problem of deciding whether a particular {β}-sentence of monadic Σ¹₁ is satisfied by (T, β) (respectively, (R^n, β)). The argument is based on interpreting supergrids in corresponding {β}-structures.

Lines and sequences

Let T ⊆ R^n. Let L be a line in T. Any nonempty subset Q of L is called a sequence in T. Let E ⊆ T and s, t ∈ T. If s ≠ t and if u ∈ E for all points u ∈ T such that β*(s, u, t), we say that the points s and t are linearly E-connected (in (T, β)). If there exists a point v ∈ T \ E such that β*(s, v, t), we say that s and t are linearly disconnected with respect to E (in (T, β)).

Definition 4.1. Let Q be a sequence in T ⊆ R^n. Suppose that for each s, t ∈ Q such that s ≠ t, there exists a point u ∈ T \ {s} such that

1. β(s, u, t), and
2. ∀r ∈ T (β*(s, r, u) → r ∈ T \ Q), i.e., the points s and u are linearly (T \ Q)-connected.

Then we call Q a discretely spaced sequence in T.

Definition 4.2. Let Q be a discretely spaced sequence in T ⊆ R^n. Assume that there exists a point s ∈ Q such that for each point u ∈ Q, there exists a point v ∈ Q \ {u} such that β(s, u, v). Then we call the sequence Q a discretely infinite sequence in T. The point s is called a base point of Q.

Definition 4.3. Let Q be a sequence in T ⊆ R^n. Let s ∈ Q be a point such that there do not exist points u, v ∈ Q \ {s} such that β(u, s, v). Then we call Q a sequence in T with a zero. The point s is a zero-point of Q. Notice that Q may have up to two zero-points.

It is easy to see that a discretely infinite sequence has at most one zero-point.

Definition 4.4. Let Q be a discretely infinite sequence in T ⊆ R^n with a zero. Assume that for each r ∈ T such that there exist points s, u ∈ Q \ {r} with β(s, r, u), there also exist points s′, u′ ∈ Q \ {r} such that

1. β(s′, r, u′), and
2. ∀w ∈ T ((β*(s′, w, u′) ∧ w ≠ r) → w ∈ T \ Q).

Then we call Q an ω-like sequence in T (cf. Lemma 4.7).
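Definitions 4.1-4.4 state properties that are plainly first-order over the vocabulary {β, P}; Lemma 4.5 below records this. As an illustration, the first two layers of the definition might be rendered as follows (a hedged sketch; collinear and β* abbreviate the {β}-formulas used earlier, and the names seq and dspaced are introduced here for illustration only):

```latex
% One possible rendering of "P is a sequence" and "P is discretely spaced".
\begin{align*}
\mathrm{seq}(P) \;:=\;& \exists x\, Px \;\land\;
  \forall x \forall y \forall z\,\bigl((Px \land Py \land Pz)
    \rightarrow \mathrm{collinear}(x,y,z)\bigr),\\[2pt]
\mathrm{dspaced}(P) \;:=\;& \forall s \forall t\,\Bigl((Ps \land Pt \land s \neq t)
    \rightarrow \exists u\,\bigl(u \neq s \land \beta(s,u,t) \;\land\\
  &\qquad\qquad \forall r\,(\beta^{*}(s,r,u) \rightarrow \neg Pr)\bigr)\Bigr).
\end{align*}
```

The remaining conjuncts, corresponding to Definitions 4.2-4.4, are obtained in the same mechanical way.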
Lemma 4.5. Let P be a unary relation symbol. There is a first-order sentence ϕ_ω(P) of the vocabulary {β, P} such that for every T ⊆ R^n and for every expansion (T, β, P) of (T, β), we have (T, β, P) |= ϕ_ω(P) if and only if the interpretation of P is an ω-like sequence in T.

Proof. The formula sequence(P) states that P is a sequence. By inspection of Definition 4.1, it is easy to see that there is a first-order formula ψ such that the conjunction sequence(P) ∧ ψ states that P is a discretely spaced sequence. Continuing this trend, it is straightforward to observe that Definitions 4.2, 4.3, and 4.4 specify first-order properties, and therefore there exists a first-order formula ϕ_ω(P) stating that P is an ω-like sequence.

Definition 4.6. Let P be a sequence in T ⊆ R^n and s, t ∈ P. The points s, t are called adjacent with respect to P if the points are linearly (T \ P)-connected. Let E ⊆ P × P be the set of pairs (u, v) such that

1. u and v are adjacent with respect to P, and
2. β(z, u, v) for some zero-point z of P.

We call E the successor relation of P.

We let succ denote the successor relation of N, i.e., succ := { (i, i + 1) | i ∈ N }.

Lemma 4.7. Let P be an ω-like sequence in T ⊆ R^n and E the successor relation of P. There is an embedding from (N, succ) into (P, E) such that 0 ∈ N maps to the zero-point of P. If T = R^n, then (N, succ) is isomorphic to (P, E).

Proof. We denote by i_0 the unique zero-point of P. Since P is a discretely infinite sequence, it has a base point. Clearly i_0 has to be the only base point of P. It is straightforward to establish that since P is an ω-like sequence with the base point i_0, there exists a sequence (a_i)_{i∈N} of points a_i ∈ P such that i_0 = a_0 and a_{i+1} is the unique E-successor of a_i for all i ∈ N. Define the function h : N → P such that h(i) = a_i for all i ∈ N. It is easy to see that h is an embedding of (N, succ) into (P, E).

Assume then that T = R^n. We shall show that the function h : N → P is a surjection. Let d denote the canonical metric of R, and let d_L be the restriction of the canonical metric of R^n to the line L in R^n such that P ⊆ L. Let g : R → L be the isometry from (R, d) to (L, d_L) such that g(0) = i_0 = h(0) and such that for all r ∈ ran(h), we have β(i_0, g(1), r) or β(i_0, r, g(1)). Let (L, ≤_L) be the structure where, for all u, v ∈ L, we have u ≤_L v iff g⁻¹(u) ≤ g⁻¹(v).

If ran(h) is not bounded from above w.r.t. ≤_L, then h must be a surjection. Therefore assume that ran(h) is bounded from above. By the Dedekind completeness of the reals, there exists a least upper bound s ∈ L of ran(h) w.r.t. ≤_L. Notice that since h is an embedding of (N, succ) into (P, E), we have s ∉ ran(h). Due to the definition of E, it is sufficient to show that { t ∈ P | s ≤_L t } = ∅ in order to conclude that h maps onto P.

Assume that the least upper bound s belongs to the set P. Since P is a discretely spaced sequence, there is a point u ∈ R^n \ {s} such that β(s, u, i_0) and ∀r ∈ R^n (β*(s, r, u) → r ∈ R^n \ P). Now u <_L s and the points u and s are linearly (R^n \ P)-connected, implying that s cannot be the least upper bound of ran(h). This is a contradiction. Therefore s ∉ P.

Assume, ad absurdum, that there exists a point t ∈ P such that β(i_0, s, t). Now, since P is an ω-like sequence, there exist points u′, v′ ∈ P \ {s} such that β(u′, s, v′) and ∀r ∈ R^n ((β*(u′, r, v′) ∧ r ≠ s) → r ∈ R^n \ P). We have β(s, u′, i_0) or β(s, v′, i_0). Assume, by symmetry, that β(s, u′, i_0). Now u′ <_L s, and the points u′ and s are linearly (R^n \ P)-connected. Hence, since s ∉ ran(h), we conclude that s is not the least upper bound of ran(h). This is a contradiction.

Geometric structures (T, β) with an undecidable monadic Π¹₁-theory

Let Q be an ω-like sequence in T ⊆ R^n and let q_0 be the unique zero-point of Q.
Assume there exists a point q_e ∈ T \ Q such that β(q_0, q, q_e) holds for all q ∈ Q. We call Q ∪ {q_e} an ω-like sequence with an endpoint in T. The point q_e is the endpoint of Q ∪ {q_e}. Notice that the endpoint q_e is the only point x in Q ∪ {q_e} such that the following conditions hold.

1. There do not exist points s, t ∈ Q ∪ {q_e} such that β*(s, x, t).
2. For every point y ∈ (Q ∪ {q_e}) \ {x}, there exists a point z ∈ Q ∪ {q_e} such that β*(x, z, y).

Definition 4.8. Let P and Q be ω-like sequences with an endpoint in T ⊆ R^n. Let p_e and q_e be the endpoints of P and Q, respectively. Assume that the following conditions hold.

1. There exists a point z ∈ P ∩ Q such that z is the zero-point of both P \ {p_e} and Q \ {q_e}.
2. There exist lines L_P and L_Q in T such that L_P ≠ L_Q, P ⊆ L_P, and Q ⊆ L_Q.
3. For each point p ∈ P \ {p_e} and q ∈ Q \ {q_e}, the unique lines L_p and L_q in T such that p, q_e ∈ L_p and q, p_e ∈ L_q intersect.

We call the structure (T, β, P, Q) a Cartesian frame.

Lemma 4.9. Let T ⊆ R^n, n ≥ 2, and let C be the class of all expansions (T, β, P, Q) of (T, β) by unary relations P and Q. The class of Cartesian frames with the domain T is definable with respect to C by a first-order sentence.

Proof. Straightforward by virtue of Lemma 4.5.

Lemma 4.10. Let T ⊆ R^n, n ≥ 2. Let C be the class of Cartesian frames with the domain T, and assume that C is nonempty. Let G be the class of supergrids and G the grid. There is a class A ⊆ G that is uniformly first-order interpretable in the class C, and furthermore, G ∈ A.

Proof. Let C = (T, β, P, Q) be a Cartesian frame. Let p_e ∈ P and q_e ∈ Q be the endpoints of P and Q, respectively. We shall interpret a supergrid G_C in the Cartesian frame C. The domain of the interpretation of G_C in C will be the set D_C of points where the lines that connect the points of P \ {p_e} to q_e and the lines that connect the points of Q \ {q_e} to p_e intersect. (Figure 3 illustrates how the grid is interpreted in a Cartesian frame.) First let us define the following formula, which states in C that x is the endpoint of P:

end_P(P, Q, x) := Px ∧ ¬Qx ∧ ¬∃y∃z(Py ∧ Pz ∧ β*(y, x, z)).

In the following, we let atomic expressions of the type x = p_e and β*(x, y, q_e) abbreviate the corresponding first-order formulae ∃z(end_P(P, Q, z) ∧ x = z) and ∃z(end_Q(Q, P, z) ∧ β*(x, y, z)) of the vocabulary {β, P, Q} of C. We define

ϕ_Dom(u) := ∃x∃y(Px ∧ ¬(x = p_e) ∧ Qy ∧ ¬(y = q_e) ∧ collinear(x, q_e, u) ∧ collinear(y, p_e, u)),

and we let ϕ_H(u, v) be a first-order {β, P, Q}-formula stating that u and v lie on one common line connecting a point of Q \ {q_e} to p_e, and that the points x, x′ ∈ P \ {p_e} whose connecting lines to q_e contain u and v, respectively, are successors with respect to the successor relation of P. We write H_{D_C} for the relation that ϕ_H defines on D_C, and we define ϕ_V and V_{D_C} analogously. By Lemma 4.7, it is easy to see that there exists an injection f from the domain of the grid G = (G, H, V) to D_C such that the following three conditions hold for all u, v ∈ G:

1. f((0, 0)) is the common zero-point z of Definition 4.8;
2. (u, v) ∈ H iff (f(u), f(v)) ∈ H_{D_C};
3. (u, v) ∈ V iff (f(u), f(v)) ∈ V_{D_C}.

Hence there is a supergrid G_C = (G_C, H, V) such that there exists an isomorphism f′ from G_C to (D_C, H_{D_C}, V_{D_C}) for which the last two conditions above hold.

Let A := { G_C ∈ G | C is a Cartesian frame with the domain T }. Clearly G ∈ A, and furthermore, A is uniformly first-order interpretable in the class of Cartesian frames with the domain T.

Lemma 4.11. Let n ≥ 2 be an integer. The recurrence grid R is uniformly first-order interpretable in the class of Cartesian frames with the domain R^n.

Proof. Straightforward by Lemma 4.7 and the proof of Lemma 4.10.
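The construction behind Lemma 4.10 and Figure 3 can be replayed numerically. In the sketch below the axis sequences, endpoints, and coordinates are illustrative assumptions (the paper's frames are abstract): each interpreted grid point is the intersection of a line through p_i and q_e with a line through q_j and p_e.

```python
# Illustrative sketch of the interpretation in Lemma 4.10 in R^2.

import numpy as np

def line_intersection(a, b, c, d):
    """Intersection of the line through a, b with the line through c, d."""
    a, b, c, d = map(np.asarray, (a, b, c, d))
    # Solve a + s*(b - a) = c + t*(d - c) for the parameters s, t.
    A = np.column_stack([b - a, c - d])
    s, _ = np.linalg.solve(A, c - a)
    return a + s * (b - a)

# Axis sequences: P on the x-axis, Q on the y-axis; p[0] = q[0] is the
# shared zero-point (condition 1 of Definition 4.8), and the endpoints
# p_e and q_e lie beyond the sequences (positions are assumptions).
p = [np.array([i, 0.0]) for i in range(4)]
q = [np.array([0.0, j]) for j in range(4)]
p_e = np.array([10.0, 0.0])
q_e = np.array([0.0, 10.0])

# Grid point (i, j): intersection of line(p_i, q_e) with line(q_j, p_e).
grid = {(i, j): line_intersection(p[i], q_e, q[j], p_e)
        for i in range(len(p)) for j in range(len(q))}
print(grid[(1, 1)])  # the interpreted point for grid coordinates (1, 1)
```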
Theorem 4.12. Let T ⊆ R^n be a set and let β be the corresponding betweenness relation. Assume that T extends linearly in 2D. The monadic Π¹₁-theory of (T, β) is Σ⁰₁-hard.

Proof. Since T extends linearly in 2D, we have n ≥ 2. Let σ = {H, V} be the vocabulary of supergrids, and let τ = {β, X, Y} be the vocabulary of Cartesian frames. By Lemma 4.9, there exists a first-order τ-sentence that defines the class of Cartesian frames with the domain T with respect to the class of all expansions of (T, β) to the vocabulary τ. Let ϕ_Cf denote such a sentence. By Lemma 2.5, there is a computable function that associates each input S to the tiling problem with a first-order (σ ∪ S)-sentence ϕ_S such that a structure A of the vocabulary σ is S-tilable if and only if there is an expansion A* of the structure A to the vocabulary σ ∪ S such that A* |= ϕ_S.

Since T extends linearly in 2D, the class of Cartesian frames with the domain T is nonempty. By Lemma 4.10, there is a class of supergrids A such that G ∈ A and A is uniformly first-order interpretable in the class of Cartesian frames with the domain T. Therefore there exists a uniform interpretation I of A in the class of Cartesian frames with the domain T. Let S be a finite nonempty set of tiles. Note that S is by definition a set of proposition symbols P_t, where t is a tile type. Let I′ be the S-expansion of the uniform interpretation I of A in the class of Cartesian frames with the domain T. Define

ψ_S := ∃X∃Y(∃P_t)_{P_t ∈ S} (ϕ_Cf ∧ I′(ϕ_S)).

We will prove that for each input S to the tiling problem, we have (T, β) |= ψ_S if and only if the grid G is S-tilable. Thereby we establish that there exists a computable reduction from the complement of the tiling problem to the membership problem of the monadic Π¹₁-theory of (T, β), which by Theorem 2.2 yields the claimed Σ⁰₁-hardness.

5 Geometric structures (T, β) with an undecidable weak monadic Π¹₁-theory

In this section we prove that the weak universal monadic second-order theory of any structure (T, β) such that T extends linearly in 2D is undecidable. In fact, we show that any such theory is Π⁰₁-hard. We establish this by a reduction from the periodic tiling problem to the problem of deciding the truth of {β}-sentences of weak monadic Σ¹₁ in (T, β). The argument is based on interpreting tori in (T, β). Most notions used in this section are inherited either directly or with minor modification from Section 4.

Let Q be a subset of T ⊆ R^n. We say that Q is a finite sequence in T if Q is a finite nonempty set and the points in Q are all collinear.

Definition 5.1. Let T ⊆ R^n and let β be the corresponding betweenness relation. Let P and Q be finite sequences in T such that the following conditions hold.

1. P ∩ Q = {a_0}, where a_0 is a zero-point of both P and Q.
2. P and Q are non-singleton sequences.
3. There exist lines L_P and L_Q in T such that L_P ≠ L_Q, P ⊆ L_P, and Q ⊆ L_Q.

We call the structure (T, β, P, Q) a finite Cartesian frame with the domain T. The unique intersection point of P and Q is called the origo of the frame. If |P| = m + 1 and |Q| = n + 1, we call (T, β, P, Q) an m × n Cartesian frame with the domain T.

Lemma 5.2. Let T ⊆ R^n, n ≥ 2. Let C be the class of all expansions (T, β, P, Q) of (T, β) by finite unary relations P and Q. The class of finite Cartesian frames with the domain T is definable with respect to C by a first-order sentence.

Proof. Straightforward.

Lemma 5.3. Let T ⊆ R^n, n ≥ 2. Assume that T extends linearly in 2D. The class of tori is uniformly first-order interpretable in the class of finite Cartesian frames with the domain T.

Proof. Let C = (T, β, P, Q) be a finite Cartesian frame. We denote by p_e ∈ P and q_e ∈ Q the endpoints of P and Q, respectively. Clearly p_e and q_e are definable by a first-order formula with one free variable.

Define ϕ_Dom^fin(u) := ϕ_Dom(u). Also define variants ϕ_H^fin(u, v) and ϕ_V^fin(u, v) of the {β, P, Q}-formulas ϕ_H(u, v) and ϕ_V(u, v) defined in Lemma 4.10: the variants additionally connect the last point of each row (column) back to its first point, so that the east border of the interpreted finite grid wraps back to the west border and the north border back to the south border. Let F_C denote the set of points satisfying ϕ_Dom^fin, and let H_{F_C} and V_{F_C} denote the relations that ϕ_H^fin and ϕ_V^fin define on F_C.

It is straightforward to check that if C is an m × n Cartesian frame, then there exists a bijection f from the domain of the m × n torus T_{m,n} = (T_{m,n}, H_{m,n}, V_{m,n}) to F_C such that the following conditions hold for all u, v ∈ T_{m,n}:

1. (u, v) ∈ H_{m,n} iff (f(u), f(v)) ∈ H_{F_C}, and
2. (u, v) ∈ V_{m,n} iff (f(u), f(v)) ∈ V_{F_C}.

Notice that since T extends linearly in 2D, there exist finite Cartesian frames of all sizes in the class of finite Cartesian frames with the domain T. Hence the class of finite tori is uniformly first-order interpretable in the class of finite Cartesian frames with the domain T.

Theorem 5.4. Let T ⊆ R^n and let β be the corresponding betweenness relation. Assume that T extends linearly in 2D. The weak monadic Π¹₁-theory of (T, β) is Π⁰₁-hard.

Proof. Since T extends linearly in 2D, we have n ≥ 2. Let σ = {H, V} be the vocabulary of tori, and let τ = {β, X, Y} be the vocabulary of finite Cartesian frames. Let C = { (T, β, X, Y) | X and Y are finite sets, X, Y ⊆ T }. By Lemma 5.2, there exists a first-order τ-sentence that defines the class of finite Cartesian frames with the domain T with respect to the class C. Let ϕ_fcf denote such a sentence. By Lemma 2.5, every input S to the periodic tiling problem can be effectively associated with a first-order (σ ∪ S)-sentence ϕ_S such that for all tori B, the torus B is S-tilable iff there is an expansion B* of B to the vocabulary σ ∪ S such that B* |= ϕ_S.
By Lemma 5.3, the class of tori is uniformly first-order interpretable in the class of finite Cartesian frames with the domain T. Let S be a finite nonempty set of tiles and let J be the S-expansion of the uniform interpretation of the class of tori in the class of finite Cartesian frames with the domain T. Let φ_S denote the following weak monadic Σ¹₁-sentence:

φ_S := ∃X∃Y(∃P_t)_{P_t ∈ S} (ϕ_fcf ∧ J(ϕ_S)).

We will show that (T, β) |= φ_S if and only if there exists an S-tilable torus D.

First assume that there is an S-tilable torus D. Then, by Lemma 2.5, there is an expansion D* of D to the vocabulary σ ∪ S such that D* |= ϕ_S. Since the class of tori is J-interpretable in the class of finite Cartesian frames with the domain T and D* |= ϕ_S, it follows by Lemma 2.1 that there is a finite Cartesian frame C with the domain T and an expansion C* of C to the vocabulary τ ∪ S such that C* |= J(ϕ_S). Therefore C |= (∃P_t)_{P_t ∈ S} J(ϕ_S). Since there exists a finite Cartesian frame with the domain T that satisfies (∃P_t)_{P_t ∈ S} J(ϕ_S), we can conclude that (T, β) |= ∃X∃Y(∃P_t)_{P_t ∈ S}(ϕ_fcf ∧ J(ϕ_S)).

If, on the other hand, it holds that (T, β) |= ∃X∃Y(∃P_t)_{P_t ∈ S}(ϕ_fcf ∧ J(ϕ_S)), it follows that there is a finite Cartesian frame C with the domain T such that C |= (∃P_t)_{P_t ∈ S} J(ϕ_S). Therefore there exists an expansion C* of C to the vocabulary τ ∪ S such that C* |= J(ϕ_S). Since the class of tori is uniformly J-interpretable in the class of finite Cartesian frames with the domain T and C* |= J(ϕ_S), there is by Lemma 2.1 an expansion D* of a torus D to the vocabulary σ ∪ S such that D* |= ϕ_S. Now by Lemma 2.5, D is S-tilable. Hence there is a torus which is S-tilable.

We have now shown that for any finite set of tiles S, there is a torus which is S-tilable if and only if (T, β) |= φ_S. Hence we have reduced the periodic tiling problem to the problem of deciding the truth of {β}-sentences of weak monadic Σ¹₁ in (T, β). From the Σ⁰₁-completeness of the periodic tiling problem (Theorem 2.4), we conclude that the weak monadic Σ¹₁-theory of the structure (T, β) is Σ⁰₁-hard. Therefore the membership problem of the weak monadic Π¹₁-theory of the structure (T, β) is Π⁰₁-hard.

Corollary 5.5. Let T ⊆ R^n be a set such that T extends linearly in 2D. Let C be the class of expansions (T, β, (P_i)_{i∈N}) of (T, β) with finite unary predicates. The first-order theory of C is undecidable.

Conclusions

We have studied first-order theories of geometric structures (T, β), T ⊆ R^n, expanded with (finite) unary predicates. We have established that for n ≥ 2, the first-order theory of the class of all expansions of (R^n, β) with arbitrary unary predicates is highly undecidable (Π¹₁-hard). This refutes a conjecture from the article [1] of Aiello and van Benthem. In addition, we have established the following for any geometric structure (T, β) that extends linearly in 2D.

1. The first-order theory of the class of expansions of (T, β) with arbitrary unary predicates is Σ⁰₁-hard.
2. The first-order theory of the class of expansions of (T, β) with finite unary predicates is Π⁰₁-hard.
Geometric structures that extend linearly in 2D include, for example, the rational plane (Q², β) and the real unit rectangle ([0, 1]², β). The techniques used in the proofs can easily be modified to yield undecidability of the first-order theories of a significant variety of natural restricted expansion classes of the affine real plane (R², β), such as those with unary predicates denoting polygons, finite unions of closed rectangles, or real algebraic sets. Such classes could be interesting from the point of view of applications.

In addition to studying issues of decidability, we briefly compared the expressivities of universal monadic second-order logic and weak universal monadic second-order logic. While the two are incomparable in general, we established that over any class of expansions of (R^n, β), this is no longer the case. We showed that finiteness of a unary predicate is definable by a first-order sentence, and hence obtained translations from ∀WMSO into ∀MSO and from WMSO into MSO.

Our original objective in studying weak monadic second-order logic over (R^n, β) was to identify decidable logics of space with distinguished regions. Due to the ubiquitous undecidability established above, this objective remains open.
Some translation peculiarities of economic texts (on the basis of economic text translation from English into Ukrainian)

This paper deals with some translation peculiarities of economic texts and analyses the ways of rendering economic terms, metaphors and phraseological units, acronyms, and names of organizations in economic texts. It describes the translation transformations used in the process of rendering the above-mentioned lexical units.

INTRODUCTION

Taking into account the modern changes in the world and the current political situation, economic texts are becoming more and more popular among both specialists and ordinary citizens. Interest is constantly growing in the following spheres: finance, credit, industry, and currency. In order to solve global financial and economic problems, many economists combine their efforts, creating anti-crisis programs and conventions. The topicality of this article is based on the increasing interest in the special theory of economic translation, because it is still not researched properly and constantly causes difficulties. The practical value of this article lies in the possibility of using its materials and conclusions both in the process of translating economic texts and in preparing lectures for the special course "Economic Texts Translation". The aim of the article is to reveal some peculiarities of translating economic texts from English into Ukrainian. When we speak about economic texts, we mean publications (texts) on economic issues. Therefore, our research is based on the material of publicist texts from prominent economic publishers. In order to reach the aim mentioned above, it is necessary to fulfil the following tasks:

- to find out and analyze the language peculiarities of economic texts in the process of translating from English into Ukrainian;
- to research the most frequent translation transformations and approaches used in overcoming translation difficulties in economic texts.

Having analyzed economic texts, we identified the following lexical peculiarities: 1) the research materials are full of economic vocabulary, the biggest part of which is terms; 2) there are many metaphors and phraseological units, which prevail in English economic texts more than in Ukrainian ones; 3) there is wide usage of acronyms and titles. On the whole, the above-mentioned aspects of economic text translation are worthy of special attention in the process of translation, so more attention will be paid to them. First of all, there are some language peculiarities that have much influence on the result of translation, so we want to point out the role of terms and special economic vocabulary [1].

TERMS AND SPECIAL ECONOMIC VOCABULARY: TRANSLATION AND PROBLEMS IN RENDERING COMMON VOCABULARY IN ECONOMIC TEXTS

Some economic terms in English-Ukrainian translation have dictionary equivalents, for example: industry - промисловість, manufacturing output - об'єми промислового виробництва, capability - продуктивність (ukr). Other economic terms strongly need context to be translated, for example: Some securities on bank books are starting to recover in value. - Деякі цінні папери на банківських депозитах починають відновлюватись у ціні (ukr) [2]. This sentence consists of terms and words from the special economic vocabulary: security - безпека, охорона, застава, поручитель, а в множині - цінні папери; to recover - повертати собі, видужувати, надолужувати, стягувати судовим порядком (ukr).
Obviously, to choose the right equivalent we need at least the context of the sentence and knowledge of the economic field the term belongs to. Our research also reveals that in the Ukrainian language there are many terms created as a result of calque (literal translation), for example: liquid funds - ліквідні фонди, reserve currency - резервна валюта, hedge funds - хеджеві фонди (ukr). Taking into account that this tendency is still current, we can consider calque (literal translation) another way of translating terms and words from the special vocabulary. The usage of transliteration and transcription is also widespread in the translation of economic texts, for instance: export - экспорт, capital - капітал, industrial - індустріальний. Transcription is more frequent in rendering titles or proper names: Kaoru Yosano - Каору Ёсано, Wall Street Journal - Уолл-стрит джорнал (ukr). In general, as shown in the examples, knowledge of the differences in lexical meanings, which depend on the economic field the word belongs to, is vital.

TRANSLATION OF METAPHORS AND PHRASEOLOGICAL UNITS IN ECONOMIC TEXTS

The problems in translating phraseological units in economic texts stem from the fact that no dictionary can predict all the usage possibilities of a unit in context [3]. For example, it is possible to translate dog and pony show as «зробити промову», «показати номер», while in an economic text it usually means «презентація товару, на якій використовують багато візуальних ефектів» (ukr). That is why such phraseological units are translated with the help of the description transformation or the addition (extension) transformation. Also, while translating phraseological units, we have to preserve stylistic and genre homogeneity with the phraseological unit of the source language [4]. In one of the analyzed sentences, in order to preserve stylistic and genre homogeneity, the metaphor to jump into the market was rendered not as «вскочити» (which has a more negative connotation and is more informal); instead, the transformation of addition (extension) was used.

The difficulties in translating metaphors and phraseological units are the following.

1. If the text is translated by economists who know the language and the subject (they understand the topic they translate) but are not experienced in translating metaphors and/or phraseological units, the result will be word-for-word translation, for example: loan shark - 1) "кредитна акула", a version so widespread that it has already entered the dictionary; 2) лихвар (юридична або фізична особа, що видає кредити під проценти, що перевищують встановлений законом максимум) (ukr).
2. If the text is translated by a licensed translator, the rendering of the metaphor or phraseological unit will be done perfectly, but the main sense of the sentence could vanish.

It is also worth mentioning that metaphors and phraseological units are more frequent in English publicistic writing than in Ukrainian.

TRANSLATION OF ACRONYMS AND NAMES OF ORGANIZATIONS

Economic texts are characterized by acronyms, most of which are used in economic texts and documents, for example: IMF - International Monetary Fund - Міжнародний валютний фонд (МВФ), Gross Domestic Product - валовий внутрішній продукт (ВВП) (ukr). As a rule, the acronyms mentioned above are widespread in the economic field, so they can be found in economics-oriented dictionaries (the usage of which is strongly recommended for translating special texts).
Furthermore, today there are many new acronyms that are not yet recorded in dictionaries. Sometimes we deal with newly coined ones, and sometimes these acronyms are created in order to save time and space. In this case, the translator should first decide which word each letter of the acronym refers to, then translate the whole word combination, and then shorten it (create his or her own acronym in the target language), for example: It's only recently that computer hardware and software of the type needed to run enterprise resource planning (ERP) have become powerful enough to extend beyond the boundaries of a single firm. - Лише з недавнього часу комп'ютерне забезпечення та програми, необхідні для управління плануванням ресурсів підприємства (ПРП), стали досить потужними, щоб поширюватися за межі фірми (ukr) [5]. There are some acronyms that are usually transliterated. As a rule, these are the names of companies and systems created a long time ago, for instance: SWIFT - Society for Worldwide Interbank Financial Telecommunications - Міжнародна міжбанківська система передачі інформації та здійснення платежів (СВІФТ) (ukr).

CONCLUSION

In the process of translating acronyms, it is recommended to consult search engines and dictionaries in order to establish the actual meaning of the acronym. To sum up, it is very important not only to know translation transformations and be experienced in their usage, but also to be informed about the subject of translation (i.e., to have knowledge of economic processes and notions).
Interhemispheric structure and variability of the 5-day planetary wave from meteor radar wind measurements

A study of the quasi-5-day wave (5DW) was performed using meteor radars at conjugate latitudes in the Northern and Southern hemispheres. These radars are located at Esrange, Sweden (68° N) and Juliusruh, Germany (55° N) in the Northern Hemisphere, and at Tierra del Fuego, Argentina (54° S) and Rothera Station, Antarctica (68° S) in the Southern Hemisphere. The analysis was performed using data collected during simultaneous measurements by the four radars from June 2010 to December 2012 at altitudes from 84 to 96 km. The 5DW was found to exhibit significant short-term, seasonal, and interannual variability at all sites. Typical events had planetary wave periods that ranged between 4 and 7 days, durations of only a few cycles, and infrequent, strongly peaked variances and covariances. Winds exhibited rotary structures that varied strongly among sites and between events, and maximum amplitudes up to ∼ 20 m s⁻¹. Mean horizontal velocity covariances tended to be largely negative at all sites throughout the interval studied.

Using TIMED/SABER temperature measurements, Pancheva et al. (2010) reported that the 5DW amplitude increased with altitude and maximized at midlatitudes near equinoxes with a latitudinally symmetric structure. Their analysis suggested that mean amplitudes are larger in the Northern Hemisphere, but with significant interannual variability. Riggin et al. (2006) addressed the global structure of the 5DW in May 2003 and found larger amplitudes in the Northern Hemisphere, with a maximum at lower altitudes than in the Southern Hemisphere. Wu et al. (1994) used UARS/HRDI wind measurements between May 1992 and June 1994 to infer a 5DW amplitude that was larger in the zonal component than in the meridional component, with both exhibiting a Rossby (1, 1) structure at low and middle latitudes. They also noted that 5DW amplitudes were larger in the summer hemisphere when the response was anti-symmetric about the equator. Similarly, Hirooka (2000) analyzed the November 5DW in geopotential height using the Improved Stratosphere and Mesosphere Sounder (ISAMS) onboard UARS and reported larger amplitudes in the Southern Hemisphere than in the Northern Hemisphere.

MLT radar wind measurements have also been used by various authors for 5DW and 6.5-day wave studies. Kovalam et al. (1999) demonstrated that the 6.5-day wave is westward propagating with zonal wavenumber 1 and has an enhanced amplitude in April and September, using MF radars at the equatorial sites of Pontianak (0°, 109° E) and Christmas Island (2° N, 157° W). The 6.5-day wave at Pontianak and Christmas Island was compared with that observed at Yamagawa (32° N, 131° E) by Isoda et al. (2002), and all yielded similar enhancements from mid-April to mid-May. Kishore et al. (2004) reported an enhancement of the 6.5-day wave from September to October, as well as from April to May, especially in the zonal component, employing MF radar winds at Tirunelveli (9° N, 78° E). Lima et al. (2005) observed the 6.5-day wave using a meteor radar at Cachoeira Paulista (23° S, 45° W) and found significant interannual variability in the zonal component, with maximum amplitudes occurring from winter to spring.
Jiang et al. (2008) employed radar winds at six low- and mid-latitude sites and concluded that enhancement of the 6.5-day wave from April to May is a global-scale phenomenon, maximizing at subequatorial latitudes in the Northern Hemisphere. Furthermore, Kishore et al. (2006) employed stratospheric temperature measurements by a Rayleigh lidar at Gadanki (14° N, 79° E) and confirmed a stronger 6.5-day wave at lower altitudes than in the MLT. The 5DW observed at Bear Lake Observatory (42° N, 111° W) by Day et al. (2012) using a meteor radar exhibited enhancements in winter and late summer. In contrast, Merzlyakov et al. (2013) found a strong response of the 5DW in autumn using a meteor radar at Obninsk (55° N, 37° E).

Most ground-based measurements to date have occurred in the Northern Hemisphere; hence, assessments of interhemispheric variability of the 5DW have been limited. One study by Day and Mitchell (2010), employing meteor radar winds at Esrange (68° N, 21° E), Sweden, and Rothera Station (68° S, 68° W), Antarctica, found similar wave activity in each hemisphere, with enhanced amplitudes in winter and late summer in each case. Here, we further explore interhemispheric variability of the 5DW at middle and high latitudes determined from meteor radar winds at four nearly conjugate Northern and Southern Hemisphere sites. Section 2 describes our measurements and analysis methodology. Section 3 describes representative results. Our discussion and summary are presented in Sects. 4 and 5.

Analysis methodology

Hourly zonal and meridional winds at the four radars were determined in the same manner as employed by Fritts et al. (2012) for 3 km altitude intervals centered from 84 to 96 km. Hourly wind estimates were judged useful if there were at least five meteor echoes with radial velocities less than 150 m s⁻¹ and zenith angles between 10 and 70°. While Fritts et al. (2012) restricted data to zenith angles between 15 and 50°, we accepted more echoes by widening the zenith angle range in the current analysis, yielding increases of 100 % for Esrange, 110 % for Juliusruh, 60 % for Tierra del Fuego (TdF), and 90 % for Rothera, compared to Fritts et al. (2012). However, we have confirmed that the results of our 5DW analysis do not exhibit large differences between the two ranges of zenith angles. Mean echo counts per hour at 90 km were 29 for Esrange, 86 for Juliusruh, 163 for TdF, and 52 for Rothera. Mean standard deviations of hourly mean winds were 8.6, 6.9, 7.5, and 8.6 m s⁻¹, respectively. 5DW fits were determined using a band-pass filter with 3 dB points at periods of 4.375 and 7.0 days. We have confirmed that the structure of the resulting 5DW in the horizontal wind fields was identical with and without considering vertical winds in the determination of hourly winds.
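The band-pass fit just described can be sketched as follows; the exact filter design is not specified above, so a low-order Butterworth filter and synthetic input data are assumptions made here for illustration.

```python
# Sketch of a band-pass 5DW fit with 3 dB points at periods of
# 4.375 and 7.0 days, applied to hourly winds.

import numpy as np
from scipy.signal import butter, filtfilt

FS = 24.0  # samples per day (hourly data)

def bandpass_5dw(u, lo_period=4.375, hi_period=7.0, order=2):
    """Band-pass an hourly wind series u (m/s) to the 4.375-7.0-day band."""
    nyq = FS / 2.0
    # Convert periods (days) to frequencies (cycles/day), then normalize.
    low, high = 1.0 / hi_period / nyq, 1.0 / lo_period / nyq
    b, a = butter(order, [low, high], btype="band")
    return filtfilt(b, a, u)  # zero-phase filtering

# Example: a synthetic 60-day record with a 5-day wave, a semidiurnal
# tide, and noise; the filter isolates the 5DW component.
t = np.arange(0, 60, 1 / FS)  # time in days
u = (15 * np.sin(2 * np.pi * t / 5.0)
     + 20 * np.sin(2 * np.pi * t / 0.5)
     + np.random.randn(t.size))
u5 = bandpass_5dw(u)
```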
Figure 1 shows S-transform amplitudes in a period range between 4 and 7 days from hourly mean zonal and meridional winds at 90 km for the four sites during 2012. For the zonal wind, the S-transforms exhibit unique features at each site. Oscillations at periods > 5 days had amplitudes > 10 m s⁻¹ in late January, early March and April, and early and late September at Esrange. Additionally, oscillations at periods < 5.5 days were enhanced in July and August. At Juliusruh, amplitude enhancements of > 10 m s⁻¹ occurred at periods < 5 days from late January to early February, early August, and mid-October, and at periods centered near 5.5 days in April. From late September to mid-December, oscillations at periods > 6 days also had large amplitudes of > 10 m s⁻¹. At TdF, amplitudes were enhanced to > 10 m s⁻¹ at periods > 5 days in January, March, August, and from late October to mid-November, and at periods < 6.5 days during late July. Amplitudes of > 10 m s⁻¹ were also observed at periods > 6 days in late February, < 6.5 days in July, and > 4.5 days from late September to mid-October at Rothera.

Primary periods of the meridional wind oscillations were < 6.5 days in mid-January, the full range from 4 to 7 days in April, and < 5.5 days in mid-July, August to early September, mid-November, and mid-December at the two sites in the Northern Hemisphere. Amplitude enhancements at periods from 4 to 7 days were observed in early February only at Esrange. In the Southern Hemisphere, large amplitudes occurred at periods from 4 to 7 days from October to November. Additional amplitude enhancements were observed only at Rothera in summer. We note, in particular, that the S-transforms of the meridional winds exhibit significant similarities within the same hemisphere, but differences between the hemispheres, suggesting that anti-symmetric modes with respect to the equator appear to be major contributors to the large-scale wind fields.

Variances and horizontal momentum fluxes

Figure 2 shows contours of mean 5DW horizontal velocity variances, <u²> + <v²> (where brackets denote a 5-day average), at the four sites. At all sites, somewhat larger variances were observed from late June to August in all 3 years, but with different features in each year. In 2010, variances from late June to August were larger at higher latitudes than at lower latitudes in both hemispheres. Variances at Esrange were 100 m² s⁻² or larger during this interval, and maxima were smaller at Juliusruh. At Rothera, on the other hand, a maximum enhancement for the same interval occurred in late July (> 200 m² s⁻² at altitudes between 87 and 93 km). As in the Northern Hemisphere, these were accompanied by weaker maxima at TdF in late July. 5DW variance enhancements from late July to mid-August in 2011 were larger at midlatitudes than at high latitudes in both hemispheres. Maxima at midlatitudes occurred earlier than at high latitudes (late July to early August at Juliusruh and mid-August at Esrange in the Northern Hemisphere, and mid- to late July at TdF and late July to early August at Rothera in the Southern Hemisphere). In 2012, variances were enhanced at all sites and the enhancements lasted into September. Maxima occurred in late July at TdF, early August at Esrange, late August at Juliusruh, and early September at Rothera. Therefore, enhancements of 5DW variances during the Northern Hemisphere summer can be regarded as symmetric with respect to the equator; however, the latitudinal structure within a hemisphere differed from year to year.

Variances > 100 m² s⁻² were observed in November 2010 and in January and April 2012 only in the Northern Hemisphere, and in November 2012 only in the Southern Hemisphere. Due to missing data in February 2011 at TdF and in February 2012 at Juliusruh, it is impossible to estimate latitudinal symmetry during these intervals.
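The 5-day-averaged variances and momentum fluxes just discussed amount to running second moments of the band-passed winds. The following sketch (the centered running-window implementation is an assumption) computes them from band-passed series u5 and v5 such as those produced above.

```python
# Sketch of 5-day-averaged horizontal velocity variance <u'^2> + <v'^2>
# and momentum flux (covariance) <u'v'> from band-passed winds.

import numpy as np

def running_mean(x, window):
    """Centered running mean over `window` samples (same length as x)."""
    kernel = np.ones(window) / window
    return np.convolve(x, kernel, mode="same")

def fivedw_variance_and_flux(u5, v5, samples_per_day=24):
    w = 5 * samples_per_day  # 5-day averaging window
    var = running_mean(u5**2, w) + running_mean(v5**2, w)  # <u'^2> + <v'^2>
    flux = running_mean(u5 * v5, w)                        # <u'v'>
    return var, flux
```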
Interannual variability

Hourly 5DW zonal and meridional winds from 84 to 96 km at the four sites are displayed in Fig. 4 for July and August of 2010 to 2012 to illustrate the interannual variability of the wave. In 2010, the 5DW exhibited larger amplitudes at higher latitudes beginning in mid- to late July, somewhat smaller amplitudes at lower latitudes beginning in early August, and a successive amplitude maximum at Esrange extending to late August. Phase relations between the two components suggested approximate quadrature, with the meridional component leading the zonal component where both amplitudes were large, e.g., in late July at TdF and Rothera, early August at Juliusruh, and mid-August at Esrange.

5DW structure and variability in 2011 were quite different from 2010, tending to be more variable in altitude and time at all sites, perhaps suggesting a superposition of different modes. Strong earlier responses occurred at higher altitudes in the meridional component at Esrange and in the zonal component at TdF, again with the meridional phase leading the zonal phase at both sites. Seen at both TdF and Rothera was a progression of the largest amplitudes from higher to lower altitudes with time, but beginning ∼ 10 days earlier at TdF and exhibiting a shorter period in the meridional component than in the zonal component (again suggesting a mode superposition). During stronger responses in early August at Esrange and in late July at Juliusruh, relative phases showed the meridional component leading (lagging) the zonal component at Esrange (Juliusruh).

The 5DW structure for 2012 also exhibited very significant variability from 2010 and 2011. This included (1) large amplitude disparities between components in late June and early July at Rothera and mid- to late July at TdF and Rothera, (2) a sustained response from mid-July through August at Juliusruh, but having different periods in the two components, and (3) a similar long response at Esrange, which, however, appeared to have the same period in both components.

Figure 5 shows hodographs at 90 km during August for the 3 years. In 2010, the 5DW had a clockwise circulation at the beginning of the month at all sites, but the clockwise circulation persisted to the end of the month at Esrange, changed to counterclockwise at the end of the month at TdF, and alternated for the remainder of the month at Juliusruh and Rothera. In 2011, the circulation changed over short time intervals at all sites, suggesting changes of the wave period in both components. The 5DW in 2012 was dominated by a clockwise circulation at Esrange, Juliusruh, and Rothera, with some interruption by a counterclockwise circulation, suggesting similar wave structure at these three sites.
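Hodographs like those in Fig. 5 are obtained by tracing the tip of the band-passed horizontal wind vector; the sketch below (plotting details are assumptions) shows one way to draw them, with the sense of rotation read from the start and end markers.

```python
# Sketch of a 5DW hodograph: band-passed (u5, v5) traced over one month.

import matplotlib.pyplot as plt

def plot_hodograph(u5, v5, label="90 km"):
    fig, ax = plt.subplots()
    ax.plot(u5, v5, lw=0.8)
    ax.plot(u5[0], v5[0], "go", label="start")  # mark the starting point
    ax.plot(u5[-1], v5[-1], "ro", label="end")
    ax.set_xlabel("zonal wind (m/s)")
    ax.set_ylabel("meridional wind (m/s)")
    ax.set_aspect("equal")
    ax.legend(title=label)
    return fig
```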
Seasonal comparison

Because variances in Fig. 2 revealed maxima in April 2012 in the Northern Hemisphere and in November 2012 in the Southern Hemisphere, the time evolution of the 5DW in the zonal and meridional winds is shown in Fig. 6 from March to May 2012 for the Northern Hemisphere and from October to December 2012 for the Southern Hemisphere, and hodographs at 90 km for April and November 2012 at the four sites are shown in Fig. 7. In the Northern Hemisphere, enhancements of the 5DW in the two components, which occurred from late March, were slightly earlier at Esrange than at Juliusruh, with a larger amplitude in the meridional component, but lasted longer at lower (higher) altitudes at Esrange (Juliusruh). Hodographs reveal a clockwise circulation throughout the month at both Esrange and Juliusruh. However, the phase relation at Juliusruh was nearly in phase at the beginning of the month and anti-phase at the end of the month, again suggesting a superposition of modes.

In the Southern Hemisphere, the 5DW in the two components was enhanced from late October to early November, slightly earlier at TdF than at Rothera, with larger amplitudes at higher altitudes at both sites. Hodographs reveal an anti-phase relation in early November at TdF and a clockwise circulation with a quadrature relation, with the zonal component leading the meridional component, at Rothera. To summarize, meridional amplitudes were larger than zonal amplitudes in both hemispheres, with amplitudes increasing with altitude. The amplitude enhancement was also earlier at the higher latitude in the Northern Hemisphere but later at the higher latitude in the Southern Hemisphere.

Murphy et al. (2007) studied seasonal variations of the 4-8-day wave using MF radar wind measurements at Davis Station (69° S), Antarctica from 1997 to 2005 and found wave enhancements from July to August. Williams and Avery (1992) reported an enhancement of the 5DW in July 1984 using a mesosphere-stratosphere-troposphere (MST) radar at Poker Flat (65° N), Alaska. These are consistent with our results of enhancements in the Northern Hemisphere summer. Furthermore, spectral characteristics of MF radar wind measurements at Tromsø (70° N) in the spring of 1997 by Hall et al. (1998) showed the existence of the 5DW in the zonal component in February and March. Our S-transform results for 2012 in Fig. 1 exhibit amplitude enhancements at periods > 4.5 days in March at Esrange, but spring enhancements of the 5DW in 2011 occurred in February (not shown), suggesting possible interannual variability of the enhancements. Murphy et al. (2007) showed that the maximum amplitude in a month can be three times larger than the monthly median, suggesting large amplitude variations in time. Our study exhibited an interannual variation in amplitude in Northern Hemisphere summer at all sites, and intervals having enhanced amplitudes were often less than a month. A previous study of the 5DW at Esrange (1999 to 2008) and Rothera (2005 to 2008) by Day and Mitchell (2010) also showed significant interannual variation, e.g., several years without Northern Hemisphere summer enhancements. Lawrence and Jarvis (2003) also reported years with both strong and weak responses of the 5DW during the austral winter between 1997 and 1999 using Imaging Doppler Interferometer (IDI) wind measurements at Halley (76° S).

Discussion

A wavelet analysis of radar wind measurements at low to mid-latitudes by Jiang et al. (2008) exhibited nearly simultaneous amplitude enhancements at latitudes between Adelaide (35° S) and Platteville (40° N) from April to May 2003. Riggin et al. (2006) reported that S-transform amplitudes for a ∼ 5-day period in May at Saskatoon (52° N) were < 10 m s⁻¹ for the meridional component at altitudes between 88 and 92 km and were even smaller at Tromsø. By comparison, our S-transform amplitudes in May 2012 were < 5 m s⁻¹ for the meridional component at Esrange and even smaller at Juliusruh in May, but larger at Juliusruh in June. These differences are additional evidence of the large interannual variations of the 5DW.
If the 5DW can be amplified by baroclinic instability (Meyer and Forbes, 1997), interannual variability of the 5DW may be associated with interannual variability of the vertical shear, because baroclinic instability is associated with a strong vertical shear in the zonal wind. The vertical shear is related to the 11-year solar cycle (Fritz and Angell, 1976) and even has a multi-decadal variability (Aiyyer and Thorncroft, 2011). It is well known that the vertical shear is also influenced by the quasi-biennial oscillation (QBO) (Baldwin et al., 2001), and Kishore et al. (2004) reported a dependency of 5DW amplitudes on the QBO. Additionally, Walsh and Syktus (2003) showed differences in the vertical shear between El Niño and La Niña years. The interval we studied here, mid-2010 to 2012, spanned a weakening of La Niña from 2010 to 2011 and a transition to a weak El Niño in 2012. Energy for an amplification of the 5DW by the instability can be transferred from the mean zonal flow (Meyer and Forbes, 1997), and interannual variability and long-period oscillations of the zonal wind have been reported in various studies. Among these, Iimura et al. (2011) showed trends, and 28-, 66-, and 132-month oscillations, of the zonal wind over the Antarctic and Arctic, with larger amplitudes of these oscillations over the Arctic than the Antarctic at most altitudes. Because a mean zonal wind observed from a ground site can be influenced by the zonal mean wind and stationary planetary waves, the trends and oscillations of the zonal wind reported by Iimura et al. (2011) may also be associated with stationary planetary waves. If the 6.5-day wave is generated by the nonlinear interaction between the s = 2 7-day wave and the s = 1 stationary planetary wave (Pogoreltsev, 2002), interannual variability of the stationary planetary wave may also result in interannual variability of the 5DW.

The 5DW in geopotential height was simulated in the General Circulation Model (GCM) developed at Kyushu University by Miyoshi (1999). The GCM predicted maximum amplitudes at ∼ 40 to 50° in latitude and ∼ 100 km in altitude, and these results were supported by a global 5DW analysis using UARS/ISAMS temperature measurements (Hirooka, 2000). Based on their results, 5DW (1, 1) amplitudes should be larger at Juliusruh and TdF than at Esrange and Rothera. However, our results often exhibited larger amplitudes at high latitudes (Esrange and Rothera) than at midlatitudes (Juliusruh and TdF). Furthermore, time-latitude cross sections of 5DW amplitudes in geopotential height from the GCM of Miyoshi (1999) and in temperature from TIMED/SABER measurements by Pancheva et al. (2010) showed latitudinal symmetry with respect to the equator during enhancements. Our results, on the other hand, often exhibited amplitude enhancements in only one hemisphere, suggesting a major contribution of anti-symmetric modes. From limited ground-based measurements, it is impossible to extract only the Rossby normal mode (1, 1) 5DW, and hence it is very plausible that our results contain influences from other modes.

Pancheva et al. (2010) reported a climatology of the global distribution of the eastward-propagating zonal wavenumber 1 (E1) 5.5-day wave in the temperature field from TIMED/SABER measurements from 2002 to 2007. Due to the latitudinal coverage of the SABER measurements, they reported the E1 5.5-day wave only equatorward of 50°; however, their results suggest that the E1 5.5-day wave also exists at higher latitudes.
The E1 5DW was also found in the meridional wind as a winter phenomenon using meteor radar wind measurements at the South Pole during 1995 by Palo et al. (1998). A simulation using the extended Canadian Middle Atmosphere Model (CMAM) by McLandress et al. (2006) predicted an E1 quasi-4-day wave during austral winter between 63 and 85° S, as observed by Lawrence et al. (1995), Prata (1984), and Stanford (1979, 1982), although Hough function theory predicts that the quasi-4-day wave corresponds to (2, 1) (Hirota and Hirooka, 1984; Salby, 1984; Talaat et al., 2002). Discrepancies between our results and the previous 5DW studies from ground-based radar measurements introduced above may also reflect longitudinal variability caused by a mixture of these different wave modes.

Riggin et al. (2006) investigated the climatological global structure of the 5DW using TIMED/SABER satellite and ground-based radar observations in May and concluded that the 5DW originates in the winter hemisphere and propagates to the summer hemisphere to create a symmetrical structure with respect to the equator. Our results show enhancements at the four sites from July to August, and the 5DW would propagate from the Southern Hemisphere to the Northern Hemisphere according to the cross-equatorial propagation theory. However, the momentum fluxes in June and July in Fig. 3 do not exhibit consistent northward transport of eastward momentum, likely because the momentum fluxes reflect differing mean-flow (or possibly wave-wave) interactions in the different winter and summer environments.

Summary

Meteor radar wind measurements at Esrange and Juliusruh (∼ 68 and 55° N) and at Tierra del Fuego and Rothera (∼ 54 and 68° S) were employed for a study of the interhemispheric and interannual variability of the quasi-5-day wave (5DW) at altitudes from 84 to 96 km, spanning simultaneous operations from June 2010 to December 2012. Enhancements of 5DW variances were observed after the Northern Hemisphere summer solstice, from June to August, during short intervals of less than a month. The enhancements were larger at high latitudes in 2010 and at midlatitudes in 2011, and were somewhat similar at all sites in 2012, based on variances of horizontal winds. Enhancements of the 5DW variances also occurred in February 2011 only at high latitudes, in January and April 2012 only in the Northern Hemisphere, and in November 2012 only in the Southern Hemisphere. Clear and consistent correlations were not found between variances and horizontal momentum fluxes. Large positive (> 50 m² s⁻²) momentum fluxes were found during July and August in 2010 at Rothera and in 2011 at Esrange, but negative (< −50 m² s⁻²) momentum fluxes were found in 2012 at TdF. Negative momentum fluxes were also found in November 2010 at Esrange and in November 2012 at TdF, while positive momentum fluxes were found in February 2011 at Rothera and in April 2012 at Juliusruh. As above, we suggest that these inconsistent momentum fluxes are more likely indicative of differing interaction conditions in the two hemispheres. Our results also indicate large (short- and long-period, latitudinal, and interhemispheric) variations of the phase relations between the zonal and meridional components of the 5DW, or a possible superposition of the 5DW and other modes. Strong variability of the 5DW may also be indicative of strong interactions with other planetary waves and/or the zonal mean flow, given its small zonal phase speed.
The Gulf Coast: A New American Underbelly of Tropical Diseases and Poverty

The recent finding that dengue fever has emerged in Houston, Texas (the first major United States city in modern times with autochthonous dengue) adds to previous evidence indicating that the Gulf Coast of the Southern US is under increasing threat from diseases previously thought to affect only developing countries.

Extreme poverty and a warm, tropical climate are the two most potent forces promoting the endemicity of neglected tropical diseases in Africa, Asia, and Latin America. Now, these same forces are also widely prevalent in the five states of the US Gulf Coast: Texas, Louisiana, Mississippi, Alabama, and Florida (Figure 1). Poverty is rampant: ten million Gulf Coast residents currently live below the US poverty line, with Mississippi topping the list of all states in the percentage of people living in poverty (22%) [1]. Texas alone has almost five million poor people [1]. Of particular concern is the level of extreme poverty, defined as less than one-half of the federal poverty level, in the region, especially among minorities. One in ten black children living in Louisiana and Mississippi lives in such near-developing-nation conditions [2]. Superimposed on this pervasive extreme poverty are frequent and periodic exposures to climate and environmental hazards, including hurricanes, floods, droughts, and oil spills [3,4], which in some cases further exacerbate financial hardships in the region. Thus, the Gulf Coast is currently considered America's most vulnerable and impoverished region [4,5].

One of us (PJH) previously noted in 2011 how neglected tropical diseases could emerge in this mixing bowl of poverty and hardship on the Gulf (Table 1) [6]. At that time, the key factors linking poverty with disease on the Gulf Coast included housing with inadequate or absent plumbing, air conditioning, and/or window screens, and it was predicted that the region faced imminent threats from dengue fever and other vector-borne tropical infections [6]. Now, a new retrospective study of almost 4,000 serum samples has revealed that Houston, Texas, suffered a seasonal outbreak of dengue fever caused by dengue virus type 2 (DENV-2) from May until September of 2003, with transmission (by Aedes mosquitoes) also occurring in the two subsequent years [7]. No information beyond this period is available, so it remains possible that dengue emerged prior to 2003 and might still be causing seasonal epidemics. Moreover, it was also reported that in 2004-2005 an outbreak of DENV-2 dengue fever occurred in Cameron County, more than 300 miles to the south on the Texas Gulf Coast [8,9]. Additional news reports indicate that dengue returned to Cameron and Hidalgo Counties late in 2013. In both the Houston and South Texas outbreaks, the poorest communities were most affected [7-9].
In light of the locally acquired cases of dengue fever caused by DENV-1 in Florida in 2009-2010 [10], an added concern is whether viral immune enhancement resulting from the presence of two different dengue serotypes on the Gulf (previous exposure to one serotype followed by infection with a different serotype) could place populations living there at future risk for dengue's most serious complications: severe dengue and dengue shock syndrome. Beyond dengue, Texas previously suffered regular summer outbreaks of St. Louis encephalitis [11] and has recorded the largest number of cases of West Nile virus (WNV) infection (transmitted by Culex mosquitoes) of any state, with periodic spikes in case numbers occurring at three-year intervals [12]. Possibly unique to WNV strains in Texas [13] is the observation that chronic persistent infection and prolonged immunoglobulin M (IgM) seropositivity are common and are associated with several major clinical sequelae [14], including depression [15] and chronic kidney disease associated with viruria [16]. The US Gulf Coast is also considered vulnerable to the introduction of chikungunya fever, an alphavirus infection transmitted by Aedes mosquitoes that clinically resembles dengue, with the possibility of year-round transmission in the warm Gulf climate [17]. Still another mosquito-transmitted viral infection, Venezuelan equine encephalitis (VEE), spread rapidly from Guatemala into the Gulf coastal regions of Mexico and South Texas during the late 1960s and early 1970s, resulting in the deaths of 1,500 horses and several hundred human illnesses on the US side [18]. The VEE virus continues to circulate actively in areas of Mexico bordering the US [18].

Important neglected bacterial infections also stand out. Both murine and epidemic typhus have emerged among the homeless in Houston [19]. Vibrio vulnificus is a gram-negative bacterium of estuarine and coastal habitats of the northern Gulf of Mexico, where it has become an important opportunistic pathogen that can cause serious wound infections and primary septicemia among individuals who come into contact with seawater or contaminated seafood [20].

Among the parasitic infections now considered widespread on the Gulf Coast, trichomoniasis was shown to be the leading sexually transmitted infection and an important cofactor in the HIV/AIDS epidemic in New Orleans, Louisiana [6,21]. Human autochthonous Chagas disease transmission has been confirmed in Texas and Louisiana [6,22,23]. Canine Chagas disease has also been found in these states. A recent economic analysis reveals that Chagas disease incurs almost $900 million in costs in the US [24], although the share of these costs attributable to the Gulf region has not been specified. Similarly, toxocariasis, a soil-transmitted helminthic zoonosis, disproportionately occurs in the South, affects as many as one in five non-Hispanic blacks, and is linked to low education levels and cognitive delays [25], but its prevalence on the Gulf is not known.

To date, the major social determinants of the neglected tropical diseases are poverty and race or ethnicity. The actual biomedical underpinnings of these connections are poorly understood, although, with respect to poverty, poor housing may in some cases increase exposure to medically relevant vectors, while lack of sanitation and access to clean water in impoverished areas, as well as lack of access to health care, would further promote disease.
These diseases also disproportionately occur among non-Hispanic blacks and Hispanics, but this relationship may be based mostly on links to poverty. Still another observation is the association between some of these neglected tropical diseases and maternal and child health. An estimated 40,000 pregnant North American women are Trypanosoma cruzi seropositive and at risk of transmitting the parasite to their babies [26]. Thus, there is an urgent need to measure the frequency of congenital Chagas disease and to evaluate the need for screening and treatment. Dengue in pregnancy is also increasingly recognized for its associations with increased risks of postpartum hemorrhage and preterm birth [27].

Some of the urgent needs in addressing the neglected tropical diseases on the Gulf have been summarized previously and include specific recommendations for greatly expanded disease surveillance and studies to determine exactly how these diseases are transmitted [6,28,29]. Currently, such studies are not being actively pursued across the Gulf region for any major neglected tropical disease. Mosquito control programs are often well organized, but different control strategies for vector-borne diseases need to be seriously investigated in order to reduce vector populations and host exposure [17]. For many neglected tropical diseases, diagnostic tests are cumbersome or not widely available. There is a severe lack of physician awareness about how to manage and treat neglected tropical diseases, and an equally urgent need to develop new or better drugs and vaccines.

The stakes are high. The Gulf Coast remains vitally important to the American economy because of its key role in petrochemicals [3] and shipping [4]. Today, Houston and New Orleans represent two of the largest American ports [4], with expectations that these ports will continue to expand significantly with the imminent widening of the Panama Canal. Enhanced measures to detect, treat, and prevent neglected tropical diseases are important steps toward promoting the health of populations living on the Gulf and ensuring the region's economic vitality.
Observation of frequency-uncorrelated photon pairs generated by counter-propagating spontaneous parametric down-conversion

We report the generation of frequency-uncorrelated photon pairs from counter-propagating spontaneous parametric down-conversion in a periodically-poled KTP waveguide. The joint spectral intensity of the photon pairs is characterized by measuring the corresponding stimulated process, namely the difference frequency generation process. The experimental result shows a clearly uncorrelated joint spectrum, where the backward-propagating photon has a narrow bandwidth of 7.46 GHz and the forward-propagating one has a bandwidth of 0.23 THz like the pump light. The heralded single-photon purity estimated through Schmidt decomposition is as high as 0.996, showing a perspective for ultra-pure and narrow-band single-photon generation. This unique feature results from the backward-wave quasi-phase-matching condition and does not impose strict limitations on the material or working wavelength, thus facilitating its application in photonic quantum technologies.

Spontaneous parametric down-conversion (SPDC) in nonlinear crystals has been a successful technique to generate photon pairs, which constitute a core resource for photonic quantum technologies 1. However, due to the energy-conservation condition, the photon pairs are usually correlated or entangled in frequency, and consequently the single-photon state is mixed unless the frequency information is read out or eliminated 2. This feature may bring contamination in many applications involving multiple SPDC sources or pure single photons 3. A straightforward way to eliminate the frequency correlation is spectral filtering with a filter of much narrower bandwidth than the single photons 2; however, this reduces the source brightness. One possible solution is shaping the joint spectrum to produce frequency-uncorrelated photon pairs by adjusting parameters such as the crystal length, crystal material and dispersion, phase-matching frequencies, and pump bandwidth 4. Although several such photon-pair sources have been realized 5-10, this method relies on modulating the dispersion relationship between the pump and the down-converted photons, i.e., the group velocity matching (GVM) condition, and thus limits the choice of working wavelengths and materials. Another method to produce frequency-uncorrelated photon pairs is to utilize the counter-propagating quasi-phase-matching (QPM) SPDC process 11-14, where the frequency correlation is eliminated by the narrow-band backward-wave-type phase-matching spectral function 15-18; hence this method can be applied to a large range of nonlinear materials and wavelengths. Due to the counter propagation of the signal and idler photons, an ultra-short poling period on the order of sub-µm is required to satisfy the phase-matching condition 19, a requirement that can be relaxed with the fifth-order QPM. A narrow-band counter-propagating photonic polarization-entanglement source based on the third-order QPM was realized in our lab 22.

In this paper, we demonstrate an observation of frequency-uncorrelated photon pairs using the third-order QPM counter-propagating SPDC process in a periodically-poled KTP (PPKTP) waveguide. We measure the joint spectral intensity (JSI) by employing the corresponding stimulated process, namely the difference frequency generation (DFG) process 23.
This method has been demonstrated to be a rapid and efficient way to characterize the JSI 24-27. The high-precision JSI result exhibits a heralded single-photon purity of 0.996, estimated by Schmidt decomposition. The bandwidth of the backward-propagating photon is as narrow as 7.46 GHz, while the forward-propagating photon has a bandwidth of 0.23 THz, similar to the pump light. This unique feature shows promise for frequency-multiplexed heralded single-photon generation 28 as well as other applications in photonic technologies.

Theory of frequency correlation in counter-propagating SPDC

The photon-pair state generated from SPDC can be written as 4

|ψ⟩ = |vac⟩ + A ∬ dω_s dω_i f(ω_s, ω_i) a_s†(ω_s) a_i†(ω_i) |vac⟩,  (1)

where |vac⟩ represents the vacuum state and a† is the creation operator for photons with angular frequency ω, with the subscripts s and i denoting the signal and idler photons, respectively. The coefficient A absorbs all constants and slowly varying functions of frequency. The spectral property of the photon pairs is determined by the joint spectral amplitude (JSA),

f(ω_s, ω_i) = α(ω_s, ω_i) φ(ω_s, ω_i),  (2)

where α(ω_s, ω_i) represents the pump spectral function and φ(ω_s, ω_i) = sinc(ΔkL/2) exp(−iΔkL/2) is the phase-matching function, with L denoting the interaction length.

In the conventional co-propagating SPDC process, as shown in Fig. 1a, the signal (s) and idler (i) photons propagate in the same direction as the pump (p) photon, and the phase mismatch Δk is written as

Δk_co = k_p − k_s − k_i − k_G,  (3)

with the mth-order reciprocal wave vector k_G = 2πm/Λ, where Λ denotes the poling period. In the counter-propagating SPDC process, as shown in Fig. 1b, the signal photons travel in the forward direction along with the pump while the idler photons travel in the opposite direction. The phase mismatch is then given by

Δk_counter = k_p − k_s + k_i − k_G.  (4)

Figure 1. (a) Co-propagating and (b) counter-propagating QPM SPDC. The inverted (−χ(2)) and background positive (+χ(2)) domains alternate with a period of Λ; the purple, red, and orange arrows represent the pump (k_p), signal (k_s), and idler (k_i) wave vectors, respectively, with the reciprocal wave vector k_G denoted by the blue arrow.

We define frequency offsets δ_j ≡ Ω_j − ω_j, with j = p, s, i, where Ω_j are the central frequencies satisfying the perfect phase-matching condition Δk = 0. To analyze the JSA, we expand the phase mismatch to first order in δ_j, obtaining

Δk_co ≈ (k′_s − k′_p) δ_s + (k′_i − k′_p) δ_i,  (5)

Δk_counter ≈ (k′_s − k′_p) δ_s − (k′_i + k′_p) δ_i,  (6)

for the co-propagating and counter-propagating SPDC processes, respectively. Note that higher-order dispersion can be neglected for the backward-wave-type phase matching 11-18. Here k′_j (j = p, s, i) are the inverses of the group velocities u_j at the central frequencies Ω_j, namely

k′_j = 1/u_j = (dk_j/dω)|_(ω=Ω_j).  (7)

In the approximation of Eqs. (5) and (6), the phase-matching function φ(ω_s, ω_i) is a linear function of δ_s and δ_i, oriented at a group-velocity angle θ with respect to the ω_i-axis given by

tan θ_co = (k′_s − k′_p)/(k′_p − k′_i),  (8)

tan θ_c = (k′_s − k′_p)/(k′_p + k′_i),  (9)

for co-propagating and counter-propagating SPDC, respectively. We can see that the angle is related to the group velocities of the pump, signal, and idler photons, and thus can be engineered in some specific wavelength ranges 4-10. For the counter-propagating case, however, θ_c keeps a small value over a large wavelength range. The angle θ_c can be characterized in the temporal domain by introducing the following two characteristic temporal scales 12,13:

τ_s = (k′_s − k′_p) L,   τ_i = (k′_p + k′_i) L.  (10)

The scale τ_s represents the "small" temporal separation between the pump and the co-propagating signal waves induced by the group-velocity mismatch.
The other scale τ_i describes the "large" temporal separation between the pump and the counter-propagating idler waves, determined by the traveling time of the pulse centers through the waveguide. The angle θ_c given by Eq. (9) can therefore be rewritten as

tan θ_c = τ_s / τ_i.  (11)

In the limit θ_c → 0 the JSA given by Eq. (2) is separable 11-13, but this is not a sufficient condition because of the role of the pump spectral function. It has been demonstrated 12,13 that the JSI approaches a factorized form provided that the pump pulse duration τ_p satisfies τ_i ≫ τ_p ≫ τ_s. Note that there is no extra requirement on the specific spectral shape of the pump light. Moreover, this condition merely sets a limitation on the temporal scales, without any confinement on the material, dispersion, or working wavelength, provided the central frequencies satisfy the perfect phase-matching condition Δk = 0.

Experiment and results

In our experiment, we utilize a 10-mm-long PPKTP waveguide with a poling period of Λ = 1.3 µm, which satisfies the third-order QPM condition for type-II counter-propagating SPDC. Fixing the temperature of the waveguide at 70 °C, we expect to obtain the required frequency-nondegenerate SPDC, H_p (784.5 nm) → H_s (1585.5 nm) + V_i (1553.08 nm), with H and V denoting horizontal and vertical polarization, respectively. Based on the temperature-dependent Sellmeier equation 29, we obtain the two temporal scales τ_i = 73 ps and τ_s = 0.7 ps. We set the pump pulse duration τ_p = 2 ps to satisfy the condition τ_i ≫ τ_p ≫ τ_s for frequency-uncorrelated photon-pair generation.

The experimental setup is shown in Fig. 2. A femtosecond laser from a Ti:Sapphire oscillator with a center wavelength of 784.5 nm first passes a combination of a half-wave plate (HWP) and a polarization beam splitter (PBS) to adjust the power, and is then filtered to 2-ps pulses by two band-pass filters. After a dichroic mirror (DM) it is coupled into the waveguide. The forward-propagating signal photon is coupled into a superconducting nanowire single-photon detector (SNSPD1) through port a, with a long-pass filter (LPF) removing the pump light. The backward-propagating idler photon is coupled into SNSPD2 after reflection at the DM. A time-to-digital converter (TDC) is used for the two-photon coincidence measurement. When the pump power coupled into the waveguide is 11.3 mW, a coincidence counting rate of 870 Hz is measured. Taking into account the waveguide-to-fiber coupling efficiency, the transmission loss in the fiber connection from the source to the detectors, and the detector efficiency, the total coupling efficiency for the signal or idler photon is estimated to be 6%, so we estimate an intrinsic photon-pair generation rate of about 2.1 × 10⁴ Hz/mW.

We then characterize the spectral correlation via the JSI 27, namely the modulus square of the JSA given by Eq. (2). A traditional and direct way to measure the JSI is spectrally resolved single-photon coincidence measurement. This method is time consuming and has a low resolution, due to the low generation rate of photon pairs. Here we employ the method of "stimulated emission tomography" 23 to characterize the JSI, which relies on the relationship between the spontaneous process and its corresponding stimulated process and is possible with classical detectors, enabling rapid measurement of the JSI with an improved signal-to-noise ratio 24-27. For SPDC the corresponding stimulated process is the DFG process, in which a seed signal or idler pulse is injected together with a pump pulse.
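To make the separability condition concrete, the JSI can be modeled numerically and fed through a Schmidt decomposition. The following is a minimal sketch, not the authors' code: it assumes a Gaussian pump envelope of duration τ_p = 2 ps (the exact pump shape is an assumption) and uses Eq. (6) in the form ΔkL = τ_s δ_s − τ_i δ_i with the paper's scales τ_s = 0.7 ps and τ_i = 73 ps, so that tan θ_c ≈ 0.7/73 ≈ 0.01. The grid sizes and ranges are illustrative. The separable linear phase factor exp(−iΔkL/2) factorizes into signal and idler parts and is omitted, since it does not affect the Schmidt spectrum.

```python
import numpy as np

# Temporal scales from the paper; Eq. (6) gives Dk*L = tau_s*ds - tau_i*di
tau_s, tau_i, tau_p = 0.7e-12, 73e-12, 2e-12   # seconds

# Angular-frequency offset grids (rad/s), spanning each relevant bandwidth
ds = np.linspace(-4, 4, 400) / tau_p           # signal ~ pump bandwidth
di = np.linspace(-4, 4, 400) / tau_i           # idler ~ phase-matching width
DS, DI = np.meshgrid(ds, di, indexing="ij")

alpha = np.exp(-(tau_p * (DS + DI))**2 / 2)    # Gaussian pump (assumption)
# np.sinc(x) = sin(pi x)/(pi x), so divide the argument Dk*L/2 by pi
phi = np.sinc((tau_s * DS - tau_i * DI) / (2 * np.pi))
jsa = alpha * phi

# Schmidt decomposition via SVD of the discretized JSA
s = np.linalg.svd(jsa, compute_uv=False)
lam = (s / np.linalg.norm(s))**2               # normalized Schmidt weights
purity = np.sum(lam**2)                        # heralded single-photon purity
print(f"Schmidt number K = {1/purity:.3f}, purity = {purity:.4f}")
```

With these scales the computed purity comes out very close to unity, consistent with the factorized JSI predicted by the τ_i ≫ τ_p ≫ τ_s condition.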
As illustrated in Fig. 2, a wavelength-tunable continuous-wave laser (Santec-550) within the idler bandwidth is used as the idler seed to stimulate the emission of signal photons. After a DM it is coupled into the waveguide through port b, in the opposite direction to the original SPDC pump laser. The signal light is coupled into fiber through port b and then directed into an optical spectrum analyzer (OSA) after reflection by the DM. By tuning the idler seed wavelength from 1552.2 to 1554 nm with a step spacing of 0.01 nm, we capture the spectrum of the signal light using the OSA with a spectral accuracy of 0.065 nm. The experimental result is shown in Fig. 3a, with a particular example shown in Fig. 3b for the seed wavelength set to 1553.08 nm.

From the JSI distribution shown in Fig. 3a, we can see that the JSI behaves as an approximate ellipse with its principal axes aligned along the Ω_s and Ω_i axes. The bandwidth of the idler photons is about 0.06 nm, namely 7.46 GHz, consistent with the phase-matching bandwidth. On the other hand, the bandwidth of the signal photons is about 2 nm, corresponding to 0.23 THz, which matches the pump light bandwidth well. The result indicates that the JSI is factorable, with the signal and idler spectra governed by the energy-conservation and momentum-conservation (phase-matching) functions, respectively. Our result is in good agreement with the theoretical prediction by Gatti et al. 12,13. To further evaluate the spectral uncorrelation, we perform the Schmidt decomposition 30 on the JSI, from which we estimate the heralded single-photon purity to be 0.996.

Conclusion

We demonstrate the generation of frequency-uncorrelated photon pairs using counter-propagating SPDC in a PPKTP waveguide with a poling period on the order of the interaction wavelength. By characterizing the corresponding DFG process, we obtain a high-precision JSI image with a heralded single-photon purity of 0.996, estimated by Schmidt decomposition. The underlying physics of our method is the spectral property of the backward-type SPDC phase matching, and thus the method is not strictly limited by material, dispersion, or working wavelength. Moreover, we use the QPM technique to realize the SPDC source, so the source is flexible in its wavelength choice given advanced fabrication techniques 31. In particular, the backward-propagating idler photon has a narrow bandwidth of 7.46 GHz determined by the phase matching, while the forward-propagating signal photon has a broad bandwidth of 0.23 THz similar to the pump light. The energy-time entanglement between GHz and THz photon pairs may have unique applications, for instance frequency-multiplexed heralded single-photon generation 28. We hope our approach stimulates further such investigations.

Figure 3. Experimental result of the JSI measurement. (a) JSI obtained by combining signal spectra measured at 180 idler seed wavelengths. The horizontal resolution depends on the seed linewidth and sweep step; the step size is set to 0.01 nm. The vertical resolution of 0.065 nm is determined by the resolution of the optical spectrum analyzer. (b) Spectral profile of the signal light measured when the idler seed wavelength is set to 1553.08 nm. The black dots represent the data obtained from the spectrometer and the red solid curve is a Gaussian fit.
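As a quick arithmetic check (ours, not from the paper), the quoted wavelength and frequency bandwidths are consistent under the standard conversion:

\[
\Delta\nu = \frac{c\,\Delta\lambda}{\lambda^{2}}:\qquad
\Delta\nu_i \approx \frac{(3\times 10^{8}\,\mathrm{m/s})(0.06\,\mathrm{nm})}{(1553.08\,\mathrm{nm})^{2}} \approx 7.5\,\mathrm{GHz},\qquad
\Delta\nu_s \approx \frac{(3\times 10^{8}\,\mathrm{m/s})(2\,\mathrm{nm})}{(1585.5\,\mathrm{nm})^{2}} \approx 0.24\,\mathrm{THz},
\]

in agreement with the reported 7.46 GHz and 0.23 THz (the small differences reflect the rounded wavelength widths).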
A Hybrid DC-DC Quadrupler Boost Converter for Photovoltaic Panels Integration into a DC Distribution System

This paper presents a non-isolated DC-DC boost topology with a high voltage-gain ratio for renewable energy applications. The presented converter is suitable for converting the voltage from low-voltage sources, such as photovoltaic panels, to higher voltage levels. The proposed converter consists of a multiphase boost stage with an interleaving switching technique and a voltage multiplier cell, reaching the required voltage level at a reduced duty cycle. The interleaved boost stage consists of two legs and can be fed from either a single source or multiple voltage sources, with the ability to control each source separately. The voltage multiplier cell increases the voltage level by charging and discharging its capacitors. Several advantages are associated with the converter, such as reduced voltage stress on the semiconductor elements and a scalable structure in which the number of voltage multiplier cells can be increased. The inductors in the interleaved boost stage share the input current equally, which reduces the conduction loss in the inductors. The input and the output of the converter share the same ground, and all active switches are low-side, so no feedback or signal isolation is required. The theory of operation and the steady-state analysis of the converter operating in the continuous conduction mode are presented. Component selection and efficiency analysis are presented and validated by comparative analysis and simulation results. A 0.195 kW experimental prototype was designed and implemented to convert the voltage from a 20 V input source to a 400 V output load at 50 kHz. The test results show the high performance of the converter, with a maximum efficiency above 97%.

Introduction

The number of renewable energy source (RES) installations has been increasing since the end of the 20th century. Several factors contribute to the increase in RES adoption. First, renewable energy sources are a viable solution to both the energy shortage and environmental pollution. Second, material prices and manufacturing costs have been declining substantially [1,2]. Several programs and projects are subsidized by governments to stimulate the energy markets, such as the Million Solar Roofs Initiative in 1997 [3] and the Rural Energy for America Program [4]. The growth of the renewable energy market has driven the research and development of applications and technologies that enable RES deployment, such as DC microgrids and DC distribution systems.

The DC distribution system has been attracting researchers' attention due to the advantages it has over the AC distribution system. The DC distribution system requires fewer converter units and offers, for instance, high efficiency, high power quality, low cost, and suitability for renewable energy integration [5,6]. Most renewable energy sources, such as photovoltaic panels, feature a low output voltage, which makes integration into a DC distribution system challenging. PV panels typically have a voltage range of 15-45 V [7], while the DC distribution system operates at 360-960 V. Therefore, a step-up DC-DC conversion unit with a large voltage-gain ratio is needed to facilitate the integration. The simplest step-up topology is the traditional boost converter.
The traditional boost converter's power stage contains only four components: a coil, a low-side MOSFET, a diode, and a capacitor [8]. However, achieving a high voltage-gain ratio with the traditional boost converter necessitates operation at a high duty cycle. Ideally, the voltage gain could be very high as the duty cycle approaches unity; in practice, the voltage gain at high duty cycles becomes insufficient due to the inductor and MOSFET conduction losses [9]. The traditional boost converter is also not a preferred solution for providing a high output voltage because the voltage stress across the diode is equal to the output voltage, which might compel the designer to select inefficient and expensive components. In addition, the critical inductance that ensures continuous-conduction-mode operation is large, so the power density is decreased [10]. Thus, multilevel converters such as the three-level step-up converter were proposed mainly to minimize the size of the magnetic elements and the voltage stress across the components [11,12]. Nevertheless, the three-level step-up converter's gain is similar to that of the traditional boost converter.

Cascading multiple boost converters allows operation at low duty cycles and enhances the overall voltage gain [13,14]. However, the efficiency of such an approach is lower because the power is processed multiple times, and the output diode is required to block the high output voltage. Flying-capacitor multilevel converters can boost the input voltage to the desired output voltage with reduced voltage stress across their internal components. Moreover, the inductor required to ensure continuous conduction operation is very small due to the virtual frequency seen by the inductor, which is several times higher than the switching frequency depending on the number of stages [15,16]. The flying-capacitor converter depends on phase-shift modulation: the higher the number of voltage stages, the higher the necessary minimum duty cycle. Increasing the number of stages limits the duty cycle to a narrow operating range, making the converter unsuitable for applications such as tracking control and load matching.

Another approach to increasing the voltage gain is to employ a coupled inductor or transformer, which can also provide isolation [17-19]. The use of magnetic devices makes the output voltage a function of the turns ratio, which allows the design at any desired duty ratio. The disadvantage of coupled inductors or transformers is the voltage spikes across the semiconductor switches caused by the leakage inductance; extra snubbing circuitry is required to circulate the energy. A transformer or coupled inductor also takes up a large area of the hardware prototype, and hence the converter's power density is reduced.

Several research papers introduced multiple boost converters with an interleaving technique hybridized with switched-capacitor circuits [20,21]. Such a method can significantly enhance the topology, as the switched capacitor increases the power density and minimizes the size of the magnetic elements. However, switched-capacitor circuits require a complicated driving circuit and an advanced control scheme to eliminate the capacitors' voltage mismatches. Replacing the switched-capacitor cell with a voltage multiplier cell removes the complexity of the gate-drive circuitry and the signal isolation, as in [22-25].
However, the voltage stress across the components is still high and the current sharing between phases is not equal, which compromises the efficiency. These limitations in existing topologies motivate the research in this paper. The proposed converter comprises a two-phase boost stage with an interleaving technique, an intermediate capacitor, and voltage multiplier cells. The advantages of the presented converter are:

• The two-phase boost stage with interleaving reduces the input current ripple, doubles the ripple frequency so that it is easy to filter, and allows precise current measurements to enhance maximum power point tracking.
• The converter offers a high voltage gain and, at the same time, low voltage stress across both active and passive components.
• The proposed converter has a modular structure and can be extended to further reduce the operating duty cycle and the voltage stresses across the components.
• The output of the converter shares its ground with the input sources. Thus, the output voltage can be sensed through a voltage divider, with no need for expensive differential voltage sensors or an isolated feedback loop.
• The proposed converter can operate in continuous conduction mode (CCM) with a smaller inductance; therefore, a higher power density can be achieved.
• The average currents of both inductors are equal, so the conduction loss is at its minimum, since the conduction loss is a quadratic function of the inductor RMS currents.

The rest of the paper is organized as follows: Section 2 presents the operation principle and the derivation of the steady-state equations. Section 3 presents the converter design and efficiency analysis. In Section 4, a comparative analysis with several high-voltage-gain converters is presented. Section 5 presents simulation results, details about the hardware implementation, and experimental results. Finally, the summary and key points are presented in Section 6.

Principle of Operation and Derivation of Steady-State Equations

The converter consists of a two-phase boost stage, an intermediate capacitor, and a diode-capacitor cell to multiply the voltage. The two-phase boost stage uses two low-side MOSFETs and an interleaving technique to share the input current between the inductors. The interleaving technique reduces the magnetic volume and increases the frequency of the source current ripples, easing their filtering. Figure 1a shows the proposed converter with a one-stage voltage multiplier cell. The voltage multiplier cell, shown in Figure 1b, comprises three diodes and two capacitors. The proposed converter has a general, flexible structure; that is, the voltage gain can be increased by arranging voltage multiplier cells consecutively, as shown in Figure 1c. However, increasing the number of voltage multiplier cells increases the total conduction loss of the diodes. Throughout this paper, a single voltage multiplier cell stage is used for the complete analysis and implementation.

The proposed converter has three modes of operation, which are governed by two control signals, as shown in Figure 2. Mode 1, where both switches are on, always comes between mode 2 and mode 3. To simplify the analysis of the circuit in these modes, a few assumptions were made: (1) the elements are lossless; (2) the converter operates in the steady state; (3) the duty cycles of both MOSFETs are equal and their gating signals are out of phase; and (4) all capacitors are large enough to neglect the voltage ripples.
Mode 1: Both MOSFETs Are Conducting

Mode 1 occurs twice during a switching period, in t0-t1 and t2-t3. Both MOSFETs are conducting in this mode, and both inductors draw energy from the input source. Hence, all diodes are reverse-biased and OFF, and the voltage multiplier cell is disconnected from the interleaved boost stage. The equivalent circuit of this time interval is illustrated in Figure 3a, from which the state equations of this interval follow.

Mode 2: S1 Is OFF and S2 Is ON

In this mode, diodes D2 and Do are forward-biased and conducting. The inductor L2 is still drawing energy from the source, while the energy of L1 is being transferred to the voltage multiplier cell capacitors. The diodes D1 and D3 are reverse-biased and blocking in this interval. The capacitors C1 and C2 are discharged to the output load and to capacitor C3. The equivalent circuit of the converter in this interval is shown in Figure 3b, from which the state equations follow.

Mode 3: S1 Is ON and S2 Is OFF

This mode is the opposite of the previous mode. The diodes D1 and D3 are forward-biased and conducting, while the diodes D2 and Do are reverse-biased and OFF. The capacitor C1 draws energy from the input voltage and the inductor L2. Inductor L1 is charged from the input voltage. The capacitors C2 and C3 are connected in parallel, and therefore the energy in C3 is discharged to C2. The equivalent circuit of this interval is shown in Figure 3c, from which the state equations follow.

Steady-State Static Voltage Gain

The volt-second balance is used to derive the steady-state equations; thus, the average voltage across each inductor over a switching period is zero. From Figure 3 and the inductor voltage equations, one can find the relationship between the capacitor voltages and the input voltage, stated in (7). By solving (7), one obtains the voltages across the capacitors. With duty cycle D, the voltage of the intermediate capacitor C1 is

V_C1 = V_in / (1 − D),

the voltage across the voltage multiplier cell capacitors is

V_C2 = V_C3 = 2 V_in / (1 − D),

and the output voltage is

V_o = 4 V_in / (1 − D).

In the case of N voltage multiplier cells, the static voltage gain of the converter is

V_o / V_in = (2N + 2) / (1 − D).

The ideal voltage gain of the converter for various numbers of voltage multiplier cells is shown in Figure 4. One can obtain a high voltage gain at a reduced duty ratio by adding extra voltage multiplier cells. However, increasing the number of voltage multiplier cells increases the bill of materials and the cost. The primary source of non-idealities is the diodes; the voltage gain accounting for the diode forward voltage (Vf) is derived in the same way.

The analysis detailed above is for the case of one independent source and the same duty cycle for both MOSFETs. The presented converter can also take power from multiple independent sources, where each independent source is connected to one phase. For example, two different PV panels with different voltage levels can be connected in parallel and controlled separately. The connection of two independent sources is illustrated in Figure 5. Each phase can work at a duty cycle different from the other, which makes it possible to track the maximum power point of each PV panel individually. Table 1 summarizes the output voltage for the cases of two independent sources and various duty cycles.

Figure 5. The proposed converter with two independent input sources.
Both sources share the ground with the output.

Converter Design and Efficiency Analysis

The selection of the most suitable components ensures the converter's proper operation and enhances the quality of the overall design. This section presents information about component ratings, maximum stresses, and currents.

Inductor Selection

As previously mentioned, the input current is equally shared among the phases, so the average current passing through each inductor is I_L = I_in / 2. The ripple of the inductor current is ΔI_L = V_in D / (L f_s). The proposed converter is intended to work in the continuous conduction mode (CCM), and the critical inductance that ensures CCM operation follows from these two relations. In practice, however, the inductors are usually selected based on the desired tolerance of the current ripple, which is typically less than 30%. The peak current of the inductor is I_L,peak = I_L + ΔI_L/2, and its RMS current is I_L,rms = sqrt(I_L² + ΔI_L²/12).

MOSFET Selection

The voltage stress across the MOSFETs is V_S = V_in / (1 − D). Because the input current is shared equitably among the inductors, the average switch current follows directly, and both the average and the effective (RMS) values of the MOSFET currents can be generalized for N voltage multiplier cells.

Diode Selection

The maximum voltage stress across the blocking diodes is 2 V_in / (1 − D), and the voltage stress on the output diode is V_in / (1 − D); both expressions can be generalized for N cells. The average current passing through the diodes is equal to the output current, and the RMS current follows from the conduction intervals.

Capacitor Selection

Capacitors are required to store energy during the off-states and to assist with the multiplication of the voltage. The RMS current through the voltage multiplier cell capacitors C2 and C3 is not affected by the number of stages; the current of the intermediate capacitor C1, on the other hand, depends on the number of voltage multiplier cells. The capacitance is selected based on the tolerated voltage ripple. The output capacitor needs to be large enough to supply the load during mode 1; the required output capacitance is C_o = I_o D / (f_s ΔV_o), where f_s is the switching frequency and ΔV_o is the tolerated voltage ripple.

Efficiency Analysis

The conduction power loss in the inductors is determined by the RMS currents, P_L = I²_L1,rms DCR_1 + I²_L2,rms DCR_2, where DCR_1 and DCR_2 are the DC resistances. In the case of I_L1,rms = I_L2,rms and DCR_1 = DCR_2 = DCR, the total conduction loss of the inductors is P_L = 2 I²_L,rms DCR.

The power loss in the MOSFETs can be approximated as two parts: switching loss and conduction loss. The switching loss of a MOSFET is estimated from its output capacitance C_oss and the ON and OFF transition times T_ON and T_OFF. The conduction part is P_S,conduction = I²_S1,rms R_1(on) + I²_S2,rms R_2(on), where R_1(on) and R_2(on) are the ON resistances of S1 and S2, respectively.

The power loss in the diodes is modeled with the forward voltage V_f and the forward resistance r_f of each diode, P_D = V_f I_D,avg + r_f I²_D,rms. The power loss in the capacitors is

P_C,total = Σ_n I²_Cn,rms ESR_n,

where ESR is the equivalent series resistance of the capacitor.
The total power loss is

P_loss = P_D,total + P_C,total + P_S,conduction + P_SW + P_L,

and the power-stage efficiency of the converter is η = P_o / (P_o + P_loss).

Comparative Analysis

Several high-voltage-gain DC-DC step-up converters for renewable energy applications can be found in the literature [26-32]. In this section, the proposed converter is compared only to converters that have a shared ground between the input and the output and that have no floating active switch or coupled inductors. The selected converters are compared to the proposed converter in terms of the number of components, the number of inductive and capacitive storage elements, the voltage stress across the switching devices, and the voltage gain. Table 2 shows the comparison. The proposed converter has a higher voltage gain than the conventional boost and interleaved boost converters, and lower voltage stress across its elements. The converter in [33] has higher voltage stress across its components than the proposed converter. The proposed converter has fewer components and a higher voltage gain than the converter in [34]. The converter in [35] has more components than the proposed converter and a slightly higher voltage gain; moreover, the input current in [35] is not equally shared between the inductors. The conduction loss of the inductors in an interleaved boost converter is lowest when the input current is shared equally among the inductors. Figure 6 shows the difference in inductor conduction power loss between a converter with equal current sharing and one without; the difference can be up to 20 W at a 5 A load current and DCR = DCR_1 = DCR_2 = 0.1 Ω.

Figure 6. The difference in inductors' conduction power loss between a two-phase interleaved boost converter where the input current is equally shared between the inductors and one where it is not.

Simulation and Experimental Results

The operation of the proposed converter was confirmed by a simulation and experimental study. The parameters used in the simulation are listed in Table 3. In addition to the simulation parameters, small parasitic elements were included to avoid singular loops and allow better simulation performance. The inductor voltage and current waveforms are shown in Figure 7; the average and RMS values of each inductor current are 5 A and 5.2 A, respectively. Figure 8 shows the voltage stress across the active switches and diodes: the maximum voltage across the active switches is 100 V, the maximum voltage across the voltage multiplier cell diodes is 200 V, and the voltage stress across the output diode is 100 V. The current passing through the MOSFETs is shown in Figure 9, where the RMS currents of the switches S1 and S2 are 5.25 A and 5.28 A, respectively. The average current passing through each diode is 0.5 A, and the effective value of the current is 1.11 A. The waveforms of the capacitor currents are shown in Figure 10; the RMS values of the currents of C1, C2, C3, and Co are 3.3 A, 1.65 A, 1.65 A, and 1 A, respectively. The voltages across the capacitors are shown in Figure 11: the voltage across the intermediate capacitor C1 is 100 V, and the voltage across capacitors C2 and C3 is 200 V. The efficiency simulation was performed using the equations in Section 3.5.
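As a numeric sanity check of the steady-state relations above, the sketch below evaluates the reconstructed gain and stress expressions at the prototype's operating point (20 V to 400 V, one voltage multiplier cell, 100 µH coils from Table 4, 50 kHz). The results reproduce the simulated 100 V/200 V device stresses and the ~5 A average inductor current; the DCR value is an illustrative assumption, not a datasheet figure.

```python
# Steady-state design check for the quadrupler boost (one VM cell, N = 1),
# using the gain/stress relations reconstructed in Sections 2 and 3.
V_IN, V_OUT, P_OUT, F_SW, N = 20.0, 400.0, 195.0, 50e3, 1

D = 1 - (2 * N + 2) * V_IN / V_OUT           # from G = (2N + 2)/(1 - D)
I_IN = P_OUT / V_IN                          # lossless input current
I_L = I_IN / 2                               # equal sharing between phases

V_SW = V_IN / (1 - D)                        # MOSFET stress (= V_C1)
V_D_BLOCK = 2 * V_IN / (1 - D)               # blocking-diode stress
L = 100e-6                                   # 100 uH coils (Table 4)
dI_L = V_IN * D / (L * F_SW)                 # inductor current ripple

DCR = 0.1                                    # ohms, assumed for illustration
I_L_RMS = (I_L**2 + dI_L**2 / 12) ** 0.5
P_L_COND = 2 * I_L_RMS**2 * DCR              # both inductors

print(f"D = {D:.2f}, V_sw = {V_SW:.0f} V, V_D = {V_D_BLOCK:.0f} V")
print(f"I_L = {I_L:.2f} A, ripple = {dI_L:.2f} A, P_L = {P_L_COND:.2f} W")
```

Running this gives D = 0.80, a 100 V switch stress, and a 200 V blocking-diode stress, matching the simulated waveforms described above.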
The loss breakdown and its percentage with respect to the total power versus the load power are shown in Figure 12. The diodes and MOSFETs share the majority of the power loss; their losses increase with the load power, and they account for more than 82% of the total power loss at full load. The inductors' power loss comes next after the switching elements, contributing about 16% of the total power loss at full load. Capacitors with low ESR have insignificant power loss compared to the other elements.

The proposed converter was experimentally tested in the laboratory to verify its operation. A 195-watt hardware prototype was designed and constructed to convert 20 V, supplied by an N5700 programmable power supply, to a 400 V DC load. Figure 13 shows the hardware setup of the experiment. The programmable electronic load BK8502 is used as the load, and an auxiliary voltage source is used to power the gate-drive circuits. The power stage was constructed using the components listed in Table 4. The MOSFETs are implemented with the IPA105N15N3, which has a voltage rating of 150 V and low conduction loss due to its low ON resistance. The 60B104C coils are used for L1 and L2; their inductance of 100 µH ensures CCM operation and a smooth input current. The capacitors are all implemented with B32674D3106K film capacitors of 10 µF, capable of operation at higher voltages. All diodes are implemented with the MBR40250G Schottky diode, which has a low forward voltage and a fast reverse recovery time.

The experimental results are shown in Figures 14-16. Figure 14 shows the MOSFET control signals, provided by the signal generator, and the voltage stresses across the MOSFETs and diodes. The voltages across the capacitors and their ripples are shown in Figure 15; the ripples of the internal capacitor voltages are all less than 1 V, and the magnitude of the output voltage ripple is less than 0.2 V. Figure 16 shows the currents in the interleaved boost stage. As in the simulation, the average value of the inductor currents is 5 A. Due to the out-of-phase operation of the phases, the input current has a higher ripple frequency and smaller ripples. The converter's efficiency has a maximum value of about 97%, which occurred at 80 W; at full load, the efficiency is around 94.5%. The efficiency can be further improved by selecting more efficient switching elements.

Conclusions

This paper has presented an interleaved high-voltage-gain step-up DC-DC topology with voltage multiplier cells to convert 20 V to 400 V. The proposed converter has a peak efficiency above 97% at 80 W, and its full-load efficiency is roughly 94%. The converter's operation was explained by detailed analysis and verified by simulation and experimental results. The proposed converter has several advantages: a high voltage-gain ratio, low voltage stress across the switching elements, and high efficiency. The input current is smoother than that of the traditional boost converter, which suits input-current sensing and accurate measurements. Future work includes controlling the proposed converter with a maximum power point tracking controller and integrating the converter into a 400 V distribution bus or connecting it to an inverter to provide AC power to the main grid.
Improving the Listening Ability of Elementary School Students Through the Use of Augmented Reality-Based Learning Media

This research is motivated by the low listening ability of elementary school students. The purpose of this study is to determine the improvement in the listening skills of fifth-grade elementary school students using augmented reality-based learning media. This study uses an experimental model with a pretest-posttest control group design and a sample of 60 students. The instrument used in this study is a listening ability test; the test results are analyzed by examining the differences in listening ability between students who learn with augmented reality-based learning media and students who learn with conventional learning media. The results showed that (1) there was a very significant difference in listening ability between the experimental class and the control class.

INTRODUCTION

Listening is one of the language skills that everyone should have, and it is one aspect of language skills that is very important for students to learn (Laeli, 2021; Metruk, 2019; Mutasim, 2020). Listening skills are a form of responsive language skill: listening is the process of attending to oral symbols with full attention, understanding, appreciation, and interpretation in order to obtain information, capture the content and message, and understand the meaning of the communication conveyed by the speaker through spoken language (Afriyuninda & Oktaviani, 2021; Saragih, 2022). Listening skills are very important language skills and must be taught earliest, before the other language skills (Basri et al., 2020; Sabri et al., 2020). Students who lack the skills to listen to the lessons given by the teacher will have difficulty following the learning itself (Mailawati & Anita, 2022; Pham, 2021). Several authors (Coskun & Uzunyol-Köprü, 2021; Eriani & Dimyati, 2019; Hajerah, 2019; Intan et al., 2022; Mankel et al., 2020; Nurhanani et al., 2020) suggest that of the communication activities carried out every day, about 45% of the time is used for listening, 30% for speaking, 16% for reading, and 9% for writing. Others put the split at 50% for listening and 50% for speaking, reading, and writing (Girsang et al., 2019; Munar & Suyadi, 2021; Rahmat & Sumira, 2020).

In the communication process that takes place in school learning, teachers and students must be able to use listening skills well. In learning activities, students must be able to capture and correctly understand the information conveyed by the teacher and by other students. If students do not have effective listening skills, they will misunderstand and misinterpret the information, which results in acquiring incorrect knowledge. Based on the description of the problem above, efforts are needed to improve the quality of listening instruction. One effort that can be used to improve students' listening skills is augmented reality-based learning media. Augmented Reality (AR) is a technology that complements and overlays the real world with virtual information (Garzón, 2021). AR learning media can visualize abstract concepts and construct object models, which makes AR a more effective medium for meeting learning objectives (Billinghurst, 2002; Elmqaddem, 2019; Subhashini et al., 2020).
Research on the application of augmented reality has been carried out by previous researchers. The research conducted by Subhashini et al. (2020) assists students by encouraging them to learn new ideas using graphic guides; besides schools, such applications are also used in commerce, the travel industry, games, and medicine. Lee (2012) describes augmented reality as a projection of the future of education and training. Bower et al. (2014) suggest that augmented reality can encourage students to develop higher-order thinking skills by presenting the real world in virtual form and by presenting real problems through the use of digital technology. With augmented reality, students can interact with visual objects as if they were integrated with the real world, so that the interaction can be seen in real terms (Bower et al., 2014; Radu, 2014; Wang et al., 2018). In line with this, the application of augmented reality in education has the advantage of being an educational medium with considerable influence: students who study learning materials with augmented reality understand the material more easily than students who do not (Ariani et al., 2019; Selviana et al., 2020; Widayati & Simatupang, 2019). In reality, however, augmented reality media has not yet been implemented or applied in learning. The application of augmented reality in education can thus be a solution for teachers in helping provide knowledge and understanding to students. Based on the explanation above, the researchers were interested in studying the improvement of elementary school students' listening ability through the use of augmented reality-based learning media. This study aims to examine the effectiveness of augmented reality media in listening instruction for elementary school students.

METHODS

This study aims to determine how effective the use of augmented reality learning media is in listening instruction for elementary school students. This research is an experimental design with a pretest-posttest control group design, in which two groups are given a pretest before learning and a posttest after learning; the design serves to determine whether there is a difference between the control group and the experimental group. The experimental class was given treatment using augmented reality learning media, while the control class used conventional learning media. The respondents in this study were fifth-grade elementary school students. The sampling technique used in this study is purposive sampling, that is, sampling with certain considerations. The reason for using purposive sampling is to take two classes that are homogeneous in their abilities and can represent the characteristics of the population. The selected sample is 60 respondents. The instrument used in this study was a listening ability test. According to Dole (2020), the listening indicators are divided into five stages, namely listening, paying attention, perceiving, assessing, and responding. An almost identical view is given by Pebriana et al. (2018), who explain that indicators can be used as assessment material in the learning process.
Listening thus comprises a listening stage, an understanding stage, an interpreting stage, an evaluating stage, and a responding stage. Before conducting the research, the researchers tested the instrument to measure the validity of the instrument that had been prepared. The data collected in this study were obtained through a pretest and a posttest: the pretest measures the initial ability before learning begins, and the posttest measures the students' ability after learning is complete. The pretest and posttest were given to both the control class and the experimental class. A difference test of the average initial performance was then carried out for each group, to determine whether there was a difference in the average initial achievement of the two groups; the test used is an independent-samples test.

RESULTS AND DISCUSSION

The results of the data analysis in this study show how the use of augmented reality learning media improves the listening skills of grade V elementary school students, as can be seen in Table 1. The average score of the students before the treatment, in both the experimental and control classes, was 29.33. The average score after the treatment was 94.67 in the experimental class and 72.33 in the control class. So, descriptively, there is a difference in the averages before and after the use of augmented reality media with elementary school students.

Next, a paired-samples t-test was conducted for the experimental class and the control class. Based on the t-test results, the significance value of the experimental class is 0.000, which is less than 0.05 (sig. 2-tailed < 0.05), so it can be stated that in the experimental class there is a difference in the students' abilities before and after using augmented reality media. The significance value in the control class is also 0.000, which is less than 0.05, so in the control class there is likewise a difference in the students' listening skills before and after learning.

Next, a simple linear regression test was conducted with the aim of testing the effect of one independent variable on the dependent variable and of estimating how big the impact is. The results of the simple linear regression test can be seen in Table 3: the correlation value (R) is 0.414 and the coefficient of determination (R square) is 0.171. Based on these statistical tests, it can be concluded that augmented reality media has an influence on elementary school students. To quantify the influence of augmented reality media on students' listening skills, the effect size was then calculated. Based on these calculations, the effect size is 1.32, a percentile standing of 90%, which places it in the high category of the interpretation table. It can therefore be concluded that the influence of the use of augmented reality media on students' listening skills corresponds to 90% and is classified as high.
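For readers who want to reproduce this style of analysis, the sketch below shows a paired-samples t-test and one common effect-size convention (Cohen's d) in Python. The score arrays are hypothetical, generated only to match the reported class means (29.33 pre, 94.67 post); the paper's raw data and its exact effect-size formula are not available, so treat the numbers as illustrative.

```python
import numpy as np
from scipy import stats

# Hypothetical pretest/posttest scores for one class of 30 students,
# drawn to match the reported averages (29.33 pre, 94.67 post)
rng = np.random.default_rng(1)
pre = np.clip(rng.normal(29.33, 8.0, 30), 0, 100)
post = np.clip(rng.normal(94.67, 4.0, 30), 0, 100)

# Paired-samples t-test, as used for the within-class comparison
t_stat, p_val = stats.ttest_rel(post, pre)
print(f"paired t = {t_stat:.2f}, p = {p_val:.4f}")  # p < 0.05 -> significant

# Cohen's d on the gain scores, one common effect-size convention
gain = post - pre
d = gain.mean() / gain.std(ddof=1)
print(f"Cohen's d = {d:.2f}")  # d > 0.8 is conventionally 'large'
```

An effect size above 0.8 is conventionally interpreted as large, which is consistent with the high-category classification reported above.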
The results above are in line with research carried out by Saidin et al. (2015), which showed that the application of AR in a number of learning fields, including Medicine, Chemistry, Mathematics, Physics, Geography, Biology, Astronomy, and History, has positive potential and advantages that can be adapted to the world of education. Likewise, Elmqaddem (2019) states that the nature of AR and VR promises new teaching and learning models that better meet the needs of 21st-century learners. Based on this explanation, it can generally be concluded that the statistical tests show a difference in listening ability between students who learn with augmented reality learning media and students who learn with conventional learning media. This can be seen from the significance value of the difference test, a non-parametric test, of 0.000, which is smaller than 0.05; thus the difference in students' listening abilities is highly significant. Apart from the statistical analysis, the difference in students' listening abilities can be seen in the average final-test scores of the two classes: the average score in the experimental class is 95.50, while the average listening-ability score in the control class is 71.17. Increasingly advanced technology has had an impact on all fields, one of them through the presence of augmented reality. Erbas & Atherton (2020) explain that augmented reality is a technology that allows users to interact from the real world with computer-generated objects. Augmented reality technology is also capable of creating an environment in which virtual objects support real conditions. Meanwhile, Sontay and Karamustafaoğlu (2021) note that augmented reality applications have been prepared with 4D technology. Bower et al. (2014) and Uluyol and Eryilmaz (2015) call for new research on augmented reality, especially in the field of education, because the unique features of augmented reality can provide innovation for students and teachers. Based on this explanation, it can generally be concluded that augmented reality learning media can be used to improve the listening ability of elementary school students.

CONCLUSION

Based on the results of the research and discussion presented above, it can be concluded that augmented reality learning media can be used to improve the listening skills of fifth-grade elementary school students. Augmented reality learning media can present and visualize abstract learning materials in real form through the use of technology in the learning process. In addition, the use of augmented reality-based learning media can encourage students to develop critical thinking skills and to visualize abstract concepts.
2,997.4
2023-01-01T00:00:00.000
[ "Education", "Computer Science" ]
How Linguistic Chickens Help Spot Spoken-Eggs: Phonological Constraints on Speech Identification It has long been known that the identification of aural stimuli as speech is context-dependent (Remez et al., 1981). Here, we demonstrate that the discrimination of speech stimuli from their non-speech transforms is further modulated by their linguistic structure. We gauge the effect of phonological structure on discrimination across different manifestations of well-formedness in two distinct languages. One case examines the restrictions on English syllables (e.g., the well-formed melif vs. ill-formed mlif); another investigates the constraints on Hebrew stems by comparing ill-formed AAB stems (e.g., TiTuG) with well-formed ABB and ABC controls (e.g., GiTuT, MiGuS). In both cases, non-speech stimuli that conform to well-formed structures are harder to discriminate from speech than stimuli that conform to ill-formed structures. Auxiliary experiments rule out alternative acoustic explanations for this phenomenon. In English, we show that acoustic manipulations that mimic the mlif-melif contrast do not impair the classification of non-speech stimuli whose structure is well-formed (i.e., disyllables with phonetically short vs. long tonic vowels). Similarly, non-speech stimuli that are ill-formed in Hebrew present no difficulties to English speakers. Thus, non-speech stimuli are harder to classify only when they are well-formed in the participants' native language. We conclude that the classification of non-speech stimuli is modulated by their linguistic structure: inputs that support well-formed outputs are more readily classified as speech.

Introduction

Speech is the preferred carrier of linguistic messages. All hearing communities use oral sound as the principal medium of linguistic communication (Maddieson, 2006); from early infancy, people favor speech stimuli over various aural controls (e.g., Vouloumanos and Werker, 2007; Shultz and Vouloumanos, 2010; Vouloumanos et al., 2010); and speech stimuli may engage many so-called language areas in the brain to a greater extent than non-speech inputs (Molfese and Molfese, 1980; Vouloumanos et al., 2001; Liebenthal et al., 2003; Meyer et al., 2005; Telkemeyer et al., 2009; but see Abrams et al., 2010; Rogalsky et al., 2011). The strong human preference for speech suggests that the language system is highly tuned to speech. This is indeed expected by the view of the language system as an adaptive processor, designed to ensure a rapid automatic processing of linguistic messages (Liberman et al., 1967; Fodor, 1983; Liberman and Mattingly, 1989; Trout, 2003; Pinker and Jackendoff, 2005). But surprisingly, the preferential tuning to speech is highly flexible. And indeed, linguistic phonological computations apply not only to aural language, but also to printed stimuli read silently (e.g., Van Orden et al., 1990; Lukatela et al., 2001, 2004). Moreover, many natural languages take manual signs as their inputs, and such inputs spontaneously give rise to phonological systems that mirror several aspects of spoken language phonology (Sandler and Lillo-Martin, 2006; Sandler et al., 2011; Brentari et al., 2011). Finally, phonological computations are widely viewed as productive and algebraic (Prince and Smolensky, 1993/2004; Pinker, 1994). Existing research has shown that such computations might apply to a wide range of inputs - both inputs that are perceived as speech, and those classified as non-speech-like.
To the extent the system is interactive, it is thus conceivable that the classification of an input as "linguistic" might be constrained by its output - namely, its structural well-formedness. Such top-down effects could take several forms. On a weaker, attention-based explanation, ill-formed stimuli are less likely to engage attentional resources, so they allow for a more rapid and accurate classification of the stimulus, be it speech or non-speech. A stronger interactive view asserts that the output of the computational system can inform the interpretation of its input - the stronger the well-formedness of the output (e.g., harmony; Prince and Smolensky, 1997), the more likely the input is to be interpreted as linguistic. Accordingly, ill-formedness should facilitate the rapid classification of non-speech inputs, but impair the classification of speech stimuli. While these two versions differ in their accounts of the classification of speech inputs, they converge on their predictions for non-speech stimuli: ill-formed inputs will be more readily classified as non-speech compared to well-formed structures. Past research has shown that the identification of non-speech stimuli is constrained by several aspects of linguistic knowledge. Azadpour and Balaban (2008) observed that people's ability to discriminate non-speech syllables from each other depends on their phonetic distance: the larger the phonetic distance, the more accurate the discrimination. Moreover, this sensitivity to the phonetic similarity of non-speech stimuli remains significant even after statistically controlling for their acoustic similarity (determined by the Euclidean distance among formants). Subsequent research has shown that the identification of non-speech stimuli is constrained by phonological knowledge as well. Participants in these experiments were presented with various types of auditory continua - either natural speech stimuli, non-speech stimuli, or speech-like controls - ranging from a monosyllable (e.g., mlif) to a disyllable (e.g., melif), and they were instructed to identify the number of their "beats" (a proxy for syllables). Results showed that syllable count responses were modulated by the phonological well-formedness of the stimulus, and the effect of well-formedness obtained regardless of whether the stimulus was perceived as speech or non-speech. These results demonstrate that people can compute phonological structure (a property of linguistic messages) for messengers that they classify as non-speech. But other aspects of the findings suggest that the structure of the message can further shape the classification of the messenger. The critical evidence comes from the comparison of speech and non-speech stimuli. As expected, responses to speech and non-speech stimuli differed - a difference we dub the "speechiness" effect. But remarkably, the "speechiness" effect was stronger for well-formed stimuli compared to ill-formed ones. Well-formedness, here, specifically concerned the contrast between monosyllables (e.g., mlif) and their disyllabic counterparts (e.g., melif) in two languages: English vs. Russian. While English allows melif-type disyllables, but not their monosyllabic mlif-type counterparts, Russian phonotactics are the opposite - Russian allows sequences like mlif, but bans disyllables such as melif (/məlif/).
The experimental results showed that the Russian and English groups were each sensitive to the status of the stimuli as speech or non-speech, but the "speechiness" effect depended on the well-formedness of the stimuli in the participants' language. English speakers manifested a stronger "speechiness" effect for melif-type inputs, whereas for Russian speakers, this effect was more robust for monosyllables - structures that are well-formed in their language. Interestingly, this well-formedness effect obtained irrespective of familiarity - for both mlif-type items (which are both well-formed and attested in Russian) and mdif-type items, which are structurally well-formed (Russian exhibits a wide range of sonorant-obstruent onsets) but happen to be unattested in this language. These results suggest that the classification of auditory stimuli as speech depends on their linguistic well-formedness: well-formed stimuli are more "speech-like" than ill-formed controls. Put differently, structural properties of the linguistic message inform the classification of the messenger. The following research directly tests this prediction. Participants in these experiments were presented with a mixture of speech and non-speech stimuli, and they were asked simply to determine whether or not the stimulus sounds like speech. The critical manipulation concerns the well-formedness of those stimuli. Specifically, we compare responses to stimuli generated from inputs that are either phonologically well-formed or ill-formed. Our question here is whether the ease of discriminating speech from non-speech might depend on the phonological well-formedness of these non-speech stimuli. The precise source of this well-formedness effect (whether it is due to a weaker effect of attention-grabbing, or a strong top-down interaction) is a question that we defer to the Section "General Discussion." For this reason, we make no a priori predictions regarding the effect of well-formedness on speech stimuli. Our goal here is to first establish that well-formedness modulates the classification of non-speech inputs. To the extent that well-formed stimuli are identified as speech-like, we expect that participants should exhibit consistent difficulty in the classification of non-speech stimuli whose structure is well-formed. Well-formed stimuli, however, might also raise difficulties for a host of acoustic reasons that are unrelated to phonological structure. Our investigation attempts to distinguish phonological well-formedness from its acoustic correlates in two ways. First, we examine the effect of well-formedness across two different languages, using two manifestations that differ on their phonetic properties - Experiment 1 examines the restrictions on syllable structure in English, whereas Experiment 3 explores the constraints on stem structure in Hebrew. Second, we demonstrate that the effect of well-formedness is dissociable from the acoustic properties of the input. While Experiments 1 and 3 compare well-formed stimuli to ill-formed counterparts, Experiments 2 and 4 each apply the same phonetic manipulations to stimuli that are phonologically well-formed. Specifically, Experiment 2 shows that a phonetic manipulation comparable to the one used in Experiment 1 fails to produce the same results for stimuli that are well-formed, whereas Experiment 4 demonstrates that the difficulties with non-speech stimuli that are well-formed in Hebrew are eliminated once the same stimuli are presented to English speakers.

PART 1: ENGLISH SYLLABLE STRUCTURE CONSTRAINS THE CLASSIFICATION OF NON-SPEECH STIMULI

Experiments 1-2 examine whether the classification of acoustic stimuli as speech depends on their well-formedness as English syllables. Participants in this experiment were presented with a mixture of non-speech stimuli and matched speech-like controls, and they were simply asked to determine whether or not each stimulus sounds like speech. Of interest is whether the classification of the stimulus as speech depends on its well-formedness. To examine this question, we simultaneously manipulated both the phonological structure of the stimuli and their speech status. Phonological well-formedness was manipulated along nasal-initial continua ranging from well-formed disyllables (e.g., /məlIf/, /mədIf/) to ill-formed monosyllables (e.g., /mlIf/, /mdIf/). To generate these continua, we first had a native English talker naturally produce a disyllable that began with a nasal-schwa sequence - either one followed by a liquid (e.g., /məlIf/) or one followed by a stop (e.g., /mədIf/). We then gradually excised the schwa in five steps until, at the last step, the schwa was entirely removed, resulting in a CCVC monosyllable - either /mlIf/ or /mdIf/ (we use C and V for consonants and vowels, respectively). The CCVC-CəCVC continuum thus presents a gradual contrast between well-formed inputs (in step 6, CəCVC) and ill-formed ones (in step 1, CCVC). Among these two CCVC monosyllables, mdIf-type stimuli are worse formed than their mlIf-type counterparts (Berent et al., 2009). Although past research has shown that English speakers are sensitive to the ml-md distinction given these same materials - both speech and non-speech (Berent et al., 2009) - it is unclear whether this subtle contrast can modulate performance in a secondary speech-detection task. Our main interest, however, concerns the contrast between CCVC monosyllables (either mlif or mdif) and their CəCVC counterparts. Across languages, complex onsets (in the monosyllable mlif) are worse formed than simple onsets (in the disyllable melif; Smolensky, 1993/2004). Nasal-initial complex onsets, moreover, are utterly unattested in English. Accordingly, the monosyllables at step 1 of our continuum are clearly ill-formed relative to the disyllabic endpoints. Our question here is whether the ill-formedness of those monosyllables would facilitate their classification as non-speech. To address this question, we next modified those continua to generate non-speech inputs and speech controls. Non-speech stimuli were produced by resynthesizing the first formant of the natural speech stimuli. To assure that differences between non-speech and speech-like stimuli are not artifacts of the re-synthesis process, we compared those non-speech stimuli to more speech-like inputs that were similarly filtered. If well-formed stimuli are more speech-like, then non-speech responses should be harder to make for well-formed non-speech stimuli - those corresponding to the disyllabic items - compared to ill-formed monosyllables. Accordingly, the identification of non-speech stimuli should be modulated by vowel duration. Experiment 1 verifies this prediction; Experiment 2 rules out alternative non-linguistic explanations for these findings.

Method

Participants. Ten native English speakers, students at Northeastern University, took part in the experiment in partial fulfillment of course requirements.

Materials. The materials included the three pairs of nasal C₁C₂VC₃-C₁əC₂VC₃ non-speech and speech-control continua used in the earlier study described above. Members of each pair were matched for their rhyme and the initial consonant (always an m), and contrasted on the second consonant - either l or d (/mlIf/-/mdIf/, /mlεf/-/mdεf/, /mlεb/-/mdεb/). To generate those continua, we first had an English talker naturally produce the disyllabic counterparts of each pair member (e.g., /məlIf/, /mədIf/) and selected disyllables that were matched for length, intensity, and the duration of the pretonic schwa. We next continuously extracted the pretonic vowel at zero crossings in five steady increments, moving from its center outwards. This procedure yielded a continuum of six steps, ranging from the original disyllabic form (e.g., /məlIf/) to an onset cluster, in which the pretonic vowel was fully removed (e.g., /mlIf/). The number of pitch periods in Stimuli 1-5 was 0, 2, 4, 6, and 8, respectively; Stimulus 6 (the original disyllable) ranged from 12 to 15 pitch periods. These natural speech continua were used to generate non-speech stimuli and speech-like stimuli using the procedure detailed in that study. Briefly, non-speech materials were generated by deriving the first formant contours from spectrograms of the original speech stimuli (256 point DFT, 0.5 ms time increment, Hanning window) using a peak-picking algorithm, which also extracted the corresponding amplitude values. A voltage-controlled oscillator modulated by the amplitude contour was used to resynthesize these contours back into sounds, and the amplitude of the output was adjusted to approximate the original stimulus. The more "speech-like" controls were generated using a digital low-pass filter with a slope of −85 dB per octave above a cutoff frequency that was stimulus-dependent (1216 Hz for /məlIf/- and /mədIf/-type items, 1270 Hz for /məlεf/-, 1110 Hz for /məlεb/-, 1347 Hz for /mədεf/-, and 1250 Hz for /mədεb/-type items), designed to reduce but not eliminate the speech information available at frequencies higher than the cutoff frequency. This manipulation was done as a "control" manipulation, to acoustically alter the stimuli in a similar manner to the non-speech stimuli while preserving enough speech information for these items to be identified as (degraded) speech. Previous testing using these materials confirmed that they were indeed identified as intended (speech or non-speech) by native English participants. Figure 1 provides an illustration of the non-speech materials and controls; a sample of the materials is available at http://www.psych.neu.edu/faculty/i.berent/publications.htm. The six-step continuum for each of the three pairs was presented in all six durations for both non-speech stimuli and speech controls, resulting in a block of 72 trials. Each such block was repeated three times, yielding a total of 216 trials. The order of trials within each block was randomized.

Procedure. Participants wore headphones and were seated in front of the computer screen. Each trial began with a message indicating the trial number. Participants initiated the trial by pressing the spacebar, which, in turn, triggered the presentation of a fixation point (+, presented for 500 ms) followed by an auditory stimulus. Participants were asked to determine as quickly and accurately as possible whether or not the stimulus corresponded to speech, and to indicate their response by pressing one of two keys on the computer's numeric keypad (1 = speech, 2 = non-speech). Their response was timed relative to the onset of the stimulus. Slow (slower than 1000 ms) and inaccurate responses triggered a warning message from the computer. Prior to the experiment, participants received a short practice session with similar items that did not appear in the experimental session.

Results and Discussion. Outliers (correct responses falling 2.5 SD beyond the mean, or faster than 200 ms, less than 3% of the total correct responses) were removed from the analyses of response time. Mean response time and response accuracy are provided in Table 1. An inspection of those means confirmed that participants indeed classified the speech and non-speech stimuli as intended (M = 97%). To determine whether speech and non-speech inputs were affected by syllable structure, we first compared speech and non-speech inputs by means of a 2 speech status × 6 vowel duration × 2 continuum type (ml vs. md) ANOVA. Because each condition in this experiment includes only three items, these analyses were conducted using only participants as a random variable. The analysis of response accuracy only produced a significant main effect of speech status [F(1, 9) = 7.12, MSE = 0.0052, p < 0.03], indicating that people responded more accurately to speech-like stimuli compared to their non-speech counterparts. No other effect was significant (all p > 0.11). The analysis of response time, however, yielded a reliable effect of vowel duration [F(5, 45) = 5.20, MSE = 978, p < 0.0008] as well as a significant three-way interaction [F(5, 45) = 2.42, MSE = 1089, p = 0.050]. No other effect was significant (all p > 0.13). We thus proceeded to investigate the effect of linguistic structure for speech and non-speech stimuli separately by means of 2 continuum type × 6 vowel duration ANOVAs. Figure 2 plots the effect of vowel duration on speech-like and non-speech stimuli. An inspection of the means suggests that, as the duration of the vowel increased, people took longer to respond to non-speech stimuli. In contrast, response to speech-like stimuli was not monotonically linked to vowel duration. The ANOVAs (6 vowel duration × 2 continuum) conducted on speech-like stimuli produced no significant effects in either response time (all p > 0.15) or accuracy (all F < 1). The linguistic structure of the stimuli also did not reliably affect response accuracy to non-speech inputs (all p > 0.15). In contrast, response time to non-speech stimuli was reliably modulated by their linguistic structure. The 6 vowel duration × 2 continuum ANOVA on non-speech stimuli yielded a significant effect of vowel duration [F(5, 45) = 4.12, MSE = 979, p < 0.005]. Tukey HSD tests revealed that responses to fully monosyllabic stimuli (in step 1) were faster than responses to disyllabic stimuli (in step 6, p < 0.001), and marginally so relative to steps 5 (p < 0.07) and 4 (p < 0.07). The same ANOVA also yielded marginally significant effects of continuum type [F(1, 9) = 3.90, MSE = 897, p < 0.09] and a vowel duration × continuum type interaction [F(5, 45) = 2.24, MSE = 2070, p < 0.07]. Tukey HSD tests indicated that md-type continua produced slower responses than their ml-type counterparts at step 4 only (p < 0.009). This effect, however, did not concern monosyllables in step 1, so it likely reflects the acoustic properties of some of the md-items, rather than their phonological structure. Because the silence associated with stop consonants promotes discontinuity in the phonetic signal, the phonetically bifurcate md-stimuli might be more readily identified as disyllabic. Such phonetic cues might be particularly salient when the duration of the pretonic vowel is otherwise ambiguous - toward the middle of the vowel continuum. For this reason, middle-continuum md-stimuli might be considered better formed than ml-controls. The main finding of Experiment 1 is that non-speech stimuli are harder to classify when they correspond to well-formed syllables compared to ill-formed ones. Thus, well-formedness impairs the identification of non-speech stimuli.

Figure 2 | Response time to speech and non-speech stimuli as a function of vowel duration (in Experiment 1). Error bars reflect confidence intervals, constructed for the difference among means along each of the vowel duration continua (i.e., speech and non-speech).
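As a rough illustration of the excision procedure described under Materials, the sketch below removes a centered portion of a vowel at zero crossings. It is a simplified Python approximation under stated assumptions: the file name and schwa boundaries are hypothetical, the cut sizes are proportional rather than counted in pitch periods as in the actual stimuli, and a mono recording is assumed.

```python
import numpy as np
from scipy.io import wavfile

rate, x = wavfile.read("melif.wav")          # hypothetical mono recording
v0, v1 = int(0.08 * rate), int(0.15 * rate)  # assumed schwa boundaries

def zero_crossings(sig):
    """Indices where the waveform changes sign."""
    s = np.signbit(sig).astype(np.int8)
    return np.where(np.diff(s) != 0)[0]

def excise_center(sig, v0, v1, frac):
    """Remove the central `frac` share of the vowel sig[v0:v1], cutting
    at the zero crossings nearest the desired edges to avoid clicks."""
    center = (v0 + v1) // 2
    half = int((v1 - v0) * frac / 2)
    zc = zero_crossings(sig)
    lo = zc[np.argmin(np.abs(zc - (center - half)))]
    hi = zc[np.argmin(np.abs(zc - (center + half)))]
    return np.concatenate([sig[:lo], sig[hi:]])

# Step 6 is the intact disyllable; step 1 removes the vowel entirely,
# yielding the mlif-type onset cluster.
fracs = {1: 1.0, 2: 0.8, 3: 0.6, 4: 0.4, 5: 0.2, 6: 0.0}
continuum = {step: excise_center(x, v0, v1, f) for step, f in fracs.items()}
```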
EXPERIMENT 2

The difficulties in responding to well-formed non-speech stimuli could indicate that the classification of non-speech is modulated by phonological well-formedness. Such difficulties, however, could also result from non-linguistic reasons. One concern is that the longer responses to the well-formed disyllables are an artifact of their longer acoustic duration. This explanation, however, is countered by the finding that the very same duration manipulation had no measurable effect on the identification of speech stimuli, so it is clear that response time does not simply mirror the acoustic duration of the stimuli. Our vowel duration manipulation, however, could have nonetheless affected other attributes of these stimuli that are unrelated to well-formedness. One possibility is that the acoustic cues associated with vowels are more readily identified as speech, so stimuli with longer vowels are inherently more speech-like than short-vowel stimuli. Another explanation attributes the "speechiness" of the disyllabic endpoints to splicing artifacts. Recall that, unlike the other five steps, the sixth endpoint was produced naturally, unspliced. Its greater resemblance to speech could thus result from the absence of splicing. Experiment 2 addresses these possibilities by dissociating these two acoustic attributes from linguistic well-formedness. To this end, Experiment 2 employs the same vowel manipulation used in Experiment 1, except that the excised vowel was now the tonic (e.g., /I/ in mədIf), rather than the pretonic vowel (i.e., the schwa). We applied this manipulation to the same naturally produced disyllables used in Experiment 1, and we gradually decreased the vowel duration along the same six continuum steps employed in Experiment 1, such that the difference between steps 1 and 6 in the two experiments was closely matched. This manipulation thus replicates the two acoustic characteristics of the pretonic vowel continua - it gradually decreases the acoustic energy of a vowel, and it contrasts spliced vowels (in steps 1-5) with unspliced ones (in step 6). Unlike Experiment 1, however, this manipulation did not fully eliminate the vowel but only reduced its length, such that the short- and long-endpoints were clearly identified as disyllables. Since the vowel endpoints do not contrast phonologically in English, the increase in the duration of the tonic vowel (in Experiment 2) does not alter the phonological structure of these stimuli. If the difficulty responding to non-speech stimuli with longer pretonic vowels (in Experiment 1) is due to the acoustic properties of vowels, then non-speech stimuli with longer tonic vowels (in Experiment 2) should be likewise difficult to classify. Similarly, if the advantage of non-speech stimuli in steps 1-5 (relative to the unspliced sixth step) results from their splicing, then these spliced steps should show a similar advantage in the present experiment. In contrast, if the speechiness of disyllables is due to their phonological well-formedness, then responses to non-speech stimuli in Experiment 2 should be unaffected by vowel duration. Such well-formedness effects, moreover, should also be evident in the overall pattern of responses to non-speech stimuli. Because the non-speech stimuli used in this experiment are all well-formed disyllables, we expect their structure to inhibit the non-speech response. Consequently, participants in Experiment 2 should experience greater difficulty in distinguishing non-speech stimuli from their speech-like counterparts.

Method

Participants. Ten native English speakers, students at Northeastern University, took part in this experiment in partial fulfillment of a course requirement.

Materials and Procedure. The materials corresponded to the same three pairs of naturally produced disyllables used in Experiment 1. For each such disyllable, we gradually decreased the duration of the tonic vowel using a procedure identical to the one applied to the splicing of the pretonic vowel in Experiment 1. We first determined the portion of the tonic vowel slated for removal by identifying a segment of 70 ms (12-14 pitch periods), matched to the duration of the pretonic vowel (M = 68 ms, ranging from 12 to 15 pitch periods). This segment was measured from the center of the vowel outwards, and it included some coarticulatory cues. The entire tonic vowel (unspliced) was presented in step 6. We next proceeded to excise this segment in steady increments, such that steps 5-1 had 8, 6, 4, 2, and 0 pitch periods remaining out of the pitch periods slated for removal. Despite removing a chunk of the tonic vowel, the items in step 1 were clearly identified as disyllabic, and their remaining tonic vowel averaged 38 ms (7.16 pitch periods). The resulting speech continua were next used to form non-speech and speech-control stimuli using the same method as in Experiment 1. The procedure was the same as in Experiment 1.

Results and Discussion. Outliers (responses faster than 2.5 SD from the means, less than 3% of all correct observations) were excluded from the analyses of response time. Mean response time for speech and non-speech stimuli as a function of the duration of the tonic vowel is presented in Figure 3 (the accuracy means are provided in Table 2). An inspection of means showed no evidence that responses to non-speech stimuli were monotonically linked to the duration of the tonic vowel. A 2 speech status × 2 continuum type × 6 vowel duration ANOVA yielded a reliable effect of continuum type [response accuracy: F(1, 9) = 5.23, MSE = 0.007, p < 0.05; response time: F(1, 8) = 10.69, MSE = 1413, p < 0.02], indicating that md-type stimuli were identified more slowly and less accurately than their ml-type counterparts. Because this effect did not depend on vowel duration, the difficulty with md-type stimuli is most likely due to the acoustic properties of those stimuli, rather than their phonological structure. The only other effect to approach significance was that of speech status (speech vs. non-speech) on response time [F(1, 8) = 4.97, MSE = 4757, p < 0.06]. No other effects were significant (p > 0.17). Unlike Experiment 1, where speech stimuli were identified more readily than non-speech, in the present experiment, speech stimuli produced slower responses than their non-speech counterparts. This finding is consistent with the possibility that well-formed non-speech stimuli tend to be identified as speech, and consequently, they are harder to discriminate from speech-like inputs. Indeed, the discrimination (d′) of speech from non-speech was lower in Experiment 2 (d′ = 2.69) relative to Experiment 1 (d′ = 3.82). Crucially, however, unlike Experiment 1, response to non-speech stimuli in the present experiment was not modulated by vowel duration. A separate 2 continuum type × 6 vowel duration ANOVA of the non-speech stimuli confirmed that responses to non-speech inputs were unaffected by vowel duration (F < 1, in response time and accuracy); the interaction also did not approach significance (F < 1, in response time and accuracy). Given that the tonic vowel manipulation (in Experiment 2) closely matched the pretonic vowel manipulation (in Experiment 1), the confinement of the vowel effect to non-speech stimuli in Experiment 1 suggests that this effect specifically concerns the well-formedness of non-speech inputs, rather than vowel duration per se. To further bolster this conclusion, we next compared the responses to non-speech across the two experiments using a 2 Experiment × 2 continuum type × 6 vowel duration ANOVA (see Figure 4)¹. The analysis of response time yielded a significant effect of continuum type [F(1, 17) = 6.08, MSE = 977, p < 0.03] and a reliable three-way interaction [F(5, 85) = 2.42, MSE = 1142, p < 0.05]. No other effects were significant (all p > 0.19). To interpret this interaction, we next examined responses to the two continuum types (ml vs. md) separately, using 2 Experiment × 6 vowel duration ANOVAs. The analysis of the ml-continuum yielded no reliable effects (all p > 0.14). In contrast, the md-continuum yielded a marginally significant interaction [F(5, 85) = 2.23, MSE = 1392, p < 0.06]. No other effect was significant (all p > 0.32). We further interpreted the effect of vowel duration by testing for the simple main effect of vowel duration for the tonic vs. pretonic vowels, separately (in Experiment 2 vs. 1). Vowel duration was significant only for the pretonic vowel condition [F(5, 45) = 5.47, MSE = 746, p < 0.0006], but not in the tonic vowel condition [F(5, 40) < 1, MSE = 2118]. The attenuation of the tonic-pretonic contrast for the ml-continuum is likely due to phonetic factors. As noted earlier, the md-items exhibit a phonetic bifurcation due to the silence associated with the stop, and for this reason, disyllabicity might be more salient for md-items. The absence of a vowel effect in the ml-continuum indicates that merely increasing the duration of the vowel - whether it is tonic or pretonic - is insufficient to impair the identification of non-speech stimuli. Results with the md-continuum, however, clearly show that vowel duration had distinct effects on tonic and pretonic stimuli. While increasing the duration of the pretonic vowel impaired the identification of non-speech, the same increase in vowel length had no measurable effect when it concerned the tonic vowel - a phonetic contrast that does not affect well-formedness. The finding that identical vowel manipulations affected the identification of non-speech stimuli in a selective manner - only when it concerned the pretonic vowel, and only with the md-continuum - confirms that this effect is inexplicable by vowel duration per se. Merely increasing the duration of a vowel is insufficient to impair the classification of non-speech stimuli as such. Together, the findings of Experiments 1-2 suggest that well-formed structures are perceived as speech-like.

Figure 3 | Response time to speech and non-speech stimuli as a function of vowel duration (in Experiment 2). Error bars reflect confidence intervals, constructed for the difference among means along each of the vowel duration continua (i.e., speech and non-speech).

PART 2: IDENTITY RESTRICTIONS ON HEBREW STEMS EXTEND TO NON-SPEECH STIMULI

Experiments 3-4 further test this hypothesis by seeking converging evidence from an unrelated phenomenon in another language - Hebrew. To demonstrate that the effect of well-formedness on non-speech is not specific to conditions that require its comparison to edited speech stimuli (resynthesized or spliced), we compared non-speech stimuli with naturally produced speech. As in the case of English, we compared the ease of speech/non-speech discrimination for stimuli that were either phonologically well-formed or ill-formed. In the case of Hebrew, well-formedness is defined by the location of identical consonants in the stem - either initially (e.g., titug), where identical consonants are ill-formed in Semitic languages, or finally (e.g., gitut), where they are well-formed. Accordingly, the phonetic characteristics of well-formedness differ markedly from the ones considered for English. To the extent that stimuli corresponding to well-formed structures are consistently harder to classify as non-speech (across different phonetic manifestations and languages), such a convergence would strongly implicate phonological structure as the source of this phenomenon. Like many Semitic languages, Hebrew restricts the location of identical consonants in the stem: AAB stems (e.g., titug), where identical consonants occur at the left edge, are ill-formed, whereas their ABB counterparts (with identical consonants at the right edge, e.g., gitut) are well-formed (Greenberg, 1950). A large body of literature shows that Hebrew speakers are highly sensitive to this restriction and freely generalize it to novel forms. Specifically, novel AAB forms are rated as less acceptable than ABB counterparts (Berent and Shimron, 1997; Berent et al., 2001a), and because novel AAB stems (e.g., titug) are ill-formed, people classify them as non-words more rapidly than ABB/ABC controls (e.g., gitut, migus) in the lexical decision task (Berent et al., 2001b, 2002, 2007b), and they ignore them more readily in Stroop-like conditions (Berent et al., 2005). Given that AAB Hebrew stems are clearly ill-formed, we can now turn to examine whether their structure might affect the classification of non-speech stimuli. If phonologically ill-formed stimuli are, in fact, more readily identifiable as non-speech, then ill-formed AAB Hebrew stems should be classified as non-speech more easily than their well-formed (ABB and ABC) counterparts. To examine this prediction, Experiment 3 compares the classification of three types of novel stems. Members of all three stem types are unattested in Hebrew, but they differ on their well-formedness. One group of stimuli, with an AAB (e.g., titug) structure, is ill-formed, whereas the two controls - ABB (e.g., gitut) and ABC (e.g., migus) - are well-formed. These items were recorded by a native Hebrew talker, and they were presented to participants in two formats: either unedited, as natural speech, or edited, such that they were identified as non-speech. Participants were asked to rapidly classify the stimulus as either speech or non-speech. If ill-formed stimuli are less speech-like, then non-speech stimuli with an AAB structure should elicit faster responses compared to their well-formed counterparts, ABB or ABC stimuli.

Method

Participants. Twenty-four native Hebrew speakers, students at the University of Haifa, Israel, took part in the experiment for payment.

Materials. The materials corresponded to 30 triplets of speech stimuli along with 30 triplets of non-speech counterparts. All materials were non-words, generated by inserting novel consonantal roots (e.g., ttg) in the vocalic nominal template C₁iC₂uC₃ - the template of mishkal Piʔul (e.g., ttg + C₁iC₂uC₃ → titug). In each such triplet, one stem had identical consonants at its left edge (AAB), another had identical consonants at the right edge (ABB), and a third member (ABC) had no identical consonants (e.g., titug, gitut, migus). Within a triplet, AAB and ABB forms were matched for their identical consonants (e.g., titug, gitut), and the ABB and ABC forms were further matched for the co-occurrence of their consonants in Hebrew roots. The speech stimuli were recorded naturally, by a native Hebrew speaker - these materials were previously used in Berent et al. (2007b; Experiment 6), and they are described there in detail (see Berent et al., 2007b, Appendix A, for the list of stimuli). As noted there, the three types of stimuli did not differ reliably on their acoustic durations [F < 1; for AAB items: M = 1191 ms (SD = 108 ms); for ABB items: M = 1195 ms (SD = 103 ms); for ABC items: M = 1171 ms (SD = 101 ms)]. We next generated non-speech stimuli by adding together three synthetic sound components derived from the original stimulus waveforms. The first, low-frequency component was produced by lowpass filtering the stimulus waveforms at 400 Hz (slope of −85 dB per octave) to isolate the first formant, and deriving a spectral contour of the first formant frequency values from spectrograms of the filtered speech stimuli (256 point DFT, 0.5 ms time increment, Hanning window) using a peak-picking algorithm, which also extracted the corresponding amplitude values to produce an amplitude contour. Next, this low-frequency spectral contour was shifted up in frequency by multiplying it by 1.47, and then resynthesized into a sound component using a voltage-controlled oscillator modulated by the amplitude contour. The second, intermediate-frequency sound component was produced by bandpass filtering the original stimulus waveforms between 2000 and 4000 Hz (slope of −85 dB per octave), and deriving a single spectral contour of the frequency values in this intermediate range using the same spectral analysis and peak-picking algorithm, which also extracted the corresponding amplitude values to produce an amplitude contour. Next, this intermediate spectral contour was shifted down in frequency by multiplying it by 0.79, and then resynthesized into a sound component using a voltage-controlled oscillator modulated by the amplitude contour. The third, high-frequency sound component was produced by bandpass filtering the original stimulus waveforms between 4000 and 6000 Hz (slope of −85 dB per octave), and deriving a single spectral contour of the frequency values in this high range in the same manner. These three components were then summed together with relative amplitude ratios of 1.0:0.05:2.0 (low-frequency component : intermediate-frequency component : high-frequency component) to produce the non-speech version of each stimulus. The structure of these non-speech stimuli and their natural speech counterparts is illustrated in Figure 5 (a sample of the materials is available at http://www.psych.neu.edu/faculty/i.berent/publications.htm). The experimental procedure was the same as in the previous experiments.
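The three-component resynthesis just described can be sketched as follows. This is an illustrative Python approximation, assuming a mono waveform x sampled at rate fs: the Butterworth filters, spectrogram settings, and sinusoidal oscillator stand in for the authors' −85 dB/octave filters, peak-picking algorithm, and voltage-controlled oscillator, and the function names are ours.

```python
import numpy as np
from scipy import signal

def band_component(x, fs, band, shift):
    """Filter x to `band` (Hz), track the dominant spectral peak and its
    amplitude over time, scale the frequency contour by `shift`, and
    resynthesize it with a sinusoidal (VCO-like) oscillator."""
    if band[0] == 0:
        sos = signal.butter(8, band[1], "lowpass", fs=fs, output="sos")
    else:
        sos = signal.butter(8, band, "bandpass", fs=fs, output="sos")
    y = signal.sosfiltfilt(sos, x)
    f, t, S = signal.spectrogram(y, fs, window="hann", nperseg=256,
                                 noverlap=192, mode="magnitude")
    peaks = f[np.argmax(S, axis=0)]   # peak frequency per analysis frame
    amps = S.max(axis=0)              # matching amplitude contour
    # Upsample both contours to audio rate, then integrate phase (VCO).
    n = len(x)
    frames = np.linspace(0, n - 1, len(t))
    ft = np.interp(np.arange(n), frames, peaks)
    at = np.interp(np.arange(n), frames, amps)
    phase = 2 * np.pi * np.cumsum(shift * ft) / fs
    return at * np.sin(phase)

def make_nonspeech(x, fs):
    low = band_component(x, fs, (0, 400), 1.47)      # F1, shifted up
    mid = band_component(x, fs, (2000, 4000), 0.79)  # shifted down
    high = band_component(x, fs, (4000, 6000), 1.00) # unshifted
    return 1.0 * low + 0.05 * mid + 2.0 * high       # 1.0 : 0.05 : 2.0
```

The 1.0 : 0.05 : 2.0 mixing ratio and the 1.47/0.79 frequency shifts are taken directly from the description above; everything else is a simplification.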
Results and Discussion

Figure 6 plots mean response time for speech and non-speech stimuli as a function of stem structure (the corresponding accuracy means are provided in Table 3). An inspection of the means suggests that speech and non-speech stimuli were readily identified as intended. These conclusions are borne out by the outcomes of the 2 speech status (speech/non-speech) × 3 stem-type (AAB/ABB/ABC) ANOVA. Since in this experiment the conditions of interest are each represented by 30 different items, these analyses were conducted using both participants and items as random variables. The ANOVAs yielded a reliable interaction [in response time: F(2, 46) = 6.30, MSE = 1967, p < 0.004; F(2, 58) = 7.78, MSE = 2452, p < 0.002; in response accuracy: both F < 1] as well as a marginally significant effect of speech status [in response accuracy: F₁(1, 23) = 3.03, MSE = 0.036, p < 0.10, F₂(1, 29) = 82.90, MSE = 0.001, p < 0.0001; in response time: F₁ < 1]. No other effect was significant (all p > 0.14). We next proceeded to interpret the interaction by testing for the simple main effects of stem structure and of speechiness, followed by planned orthogonal contrasts (Kirk, 1982). Tests of the simple main effect of speech status further indicated that non-speech stimuli promoted faster responses than speech stimuli given ill-formed AAB structures [F₁(1, 23) = 5.56, MSE = 5296, p < 0.03; F₂(1, 29) = 31.03, MSE = 2966, p < 0.0001], but not reliably so with their well-formed counterparts, either ABB [F₁(1, 23) < 1; F₂(1, 29) = 4.10, MSE = 3350, p < 0.06] or ABC (both F < 1) stems. These results demonstrate that ill-formedness facilitated the classification of non-speech stimuli.

Figure 6 | Mean response time of Hebrew speakers to speech and non-speech inputs as a function of their phonological well-formedness in Hebrew. Error bars reflect confidence intervals constructed for the difference between the three types of stem structures, constructed separately for speech and non-speech stimuli.

EXPERIMENT 4

The persistent advantage of non-speech stimuli that are phonologically ill-formed across different structural manifestations and languages is clearly in line with our hypothesis that "speechiness" depends, inter alia, on phonological well-formedness. The fact that similar acoustic manipulations failed to produce the effect given well-formed stimuli (in Experiment 2) offers further evidence that the advantage concerns phonological structure, rather than acoustic attributes. Experiment 4 seeks to further dissociate phonological structure from the acoustic properties of ill-formed stimuli using a complementary approach. Here, we maintained the acoustic properties by using the same stimuli as Experiment 3, but we altered their phonological well-formedness by presenting these items to a group of English speakers. English does not systematically restrict the location of identical consonants in stems, and our past research suggested that, to the extent English speakers constrain the location of identical consonants, their preference is opposite to Hebrew speakers', showing a slight preference for AAB forms (Berent et al., 2002, footnote 7). Clearly, English speakers should not consider AAB items ill-formed. If the tendency of Hebrew speakers to classify AAB stems as non-speech-like is due to the acoustic properties of these items, then the results of English speakers should mirror those of the Hebrew participants. If, in contrast, the easier classification of AAB stimuli as non-speech is due to their phonological structure, then the findings from English speakers should diverge from those of the Hebrew participants.

Method

Participants. Twenty-four English speakers, students at Northeastern University, took part in this study in partial fulfillment of a course requirement. The materials and procedure were identical to Experiment 3.

Results

Mean response time and response accuracy to speech and non-speech stimuli are provided in Figure 7 (the accuracy means are listed in Table 4). An inspection of the means suggests that, unlike Hebrew speakers, English participants' responses to non-speech stimuli were utterly unaffected by stem structure. Stem structure, however, did modulate responses to speech stimuli, such that stems with identical consonants produced faster responses than no-identity controls.

Discussion

The findings from Experiment 4 demonstrate that the processing of non-speech stimuli is modulated by linguistic knowledge. While Hebrew participants (in Experiment 3) responded reliably faster to non-speech AAB stems - stems that are ill-formed in their language - English participants in the present experiment were utterly insensitive to the structure of the same non-speech stimuli. And indeed, English does not systematically constrain the location of identical consonants in the stem. The selective sensitivity of Hebrew, but not English, speakers to the structure of non-speech stimuli demonstrates that this effect reflects linguistic knowledge, rather than the acoustic properties of those stimuli. While stem structure did not affect the responses of English participants to non-speech inputs, it did modulate their responses to speech stimuli: speech stimuli with identical consonants - either AAB or ABB - were identified faster than ABC controls. Since English speakers did not differentiate AAB and ABB stems, this effect must be due to reduplication per se, rather than to its location. Indeed, many phonological systems produce identical consonants by a productive grammatical operation of reduplication (McCarthy, 1986; Yip, 1988; Suzuki, 1998). It is thus conceivable that English speakers encode AAB and ABB stems as phonologically structured, and consequently, they consider such stems better formed than no-identity controls. The sensitivity of English speakers to consonant reduplication is remarkable for two reasons. First, reduplication is not systematically used in English, so the sensitivity of English speakers to reduplication might reflect the encoding of a well-formedness constraint that is not directly evident in their own language (for other evidence consistent with this possibility, see Berent et al., 2007a). Second, the finding that reduplicated (AAB and ABB) stems are more readily recognized as speech suggests that well-formedness affects not only non-speech stimuli but also the processing of speech inputs.
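For concreteness, the AAB/ABB/ABC coding used throughout Part 2 can be expressed as a small Python helper. This is our own toy illustration: the function names and the assumption that every item fits the CiCuC template are ours, not the authors'.

```python
def root_consonants(stem: str) -> tuple[str, str, str]:
    """Extract C1, C2, C3 from a C1iC2uC3 stem such as 'titug'."""
    assert len(stem) == 5 and stem[1] == "i" and stem[3] == "u"
    return stem[0], stem[2], stem[4]

def stem_type(stem: str) -> str:
    c1, c2, c3 = root_consonants(stem)
    if c1 == c2:
        return "AAB"  # left-edge identity: ill-formed in Hebrew
    if c2 == c3:
        return "ABB"  # right-edge identity: well-formed
    return "ABC"      # no identity: well-formed

assert stem_type("titug") == "AAB"
assert stem_type("gitut") == "ABB"
assert stem_type("migus") == "ABC"
```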
GENERAL DISCUSSION

Much research suggests that people can extract linguistic messages from auditory carriers that they classify as "non-linguistic" (e.g., Remez et al., 1981, 2001). Here, we examine whether structural aspects of linguistic messages can inform the classification of these messengers as linguistic. Four experiments gauged the effect of phonological well-formedness on the discrimination of non-speech stimuli from various speech controls. In Experiment 1, we showed that English speakers experience difficulty in the classification of non-speech stimuli generated from syllables that are phonologically well-formed (e.g., melif) compared to ill-formed counterparts (e.g., mlif). Experiment 3 replicated this effect using a second manifestation of well-formedness in a different language - the restrictions on identical consonants in Hebrew stems. Once again, participants (Hebrew speakers) experienced difficulties responding to non-speech stimuli that are well-formed in their language (e.g., gitut) compared to ill-formed controls (e.g., titug). The converging difficulties across diverse manifestations of well-formedness suggest that these effects are likely due to phonological structure, rather than the acoustic properties of these stimuli. Experiments 2 and 4 further support this conclusion by showing that acoustic manipulations similar to the ones in Experiments 1 and 3, respectively, fail to produce such difficulties once well-formedness is held constant. Specifically, Experiment 2 showed that merely increasing the duration of a vowel (a manipulation that mimics the mlif-məlif contrast from Experiment 1) is insufficient to impair the classification of non-speech stimuli once long- and short-vowel items are both well-formed in the participants' language (English). Similarly, Experiment 4 showed that non-speech items that are well-formed in Hebrew present no difficulties for English participants. The convergence across two different manipulations of well-formedness, on the one hand, and its divergence with the outcomes of similar (or even identical) acoustic manipulations, on the other, suggest that the observed difficulties with well-formed non-speech stimuli are due to productive linguistic knowledge. As such, these results suggest that structural properties of linguistic messages inform the classification of acoustic messengers as linguistic.

While our present results do not directly speak to the nature of the knowledge consulted by participants, previous findings suggest that it is inexplicable by familiarity - either statistical knowledge or familiarity with the coarse acoustic properties of the language (e.g., familiarity accounts such as Rumelhart and McClelland, 1986; Goldinger and Azuma, 2003; Iverson et al., 2003; Iverson and Patel, 2008; Yoshida et al., 2010). Recall, for example, that well-formed nasal-initial sequences exhibited a stronger speechiness effect even when the stimuli were unfamiliar, as these particular items (mdif-type monosyllables) - while structurally well-formed - happened to be unattested in the participants' language (Russian). Likewise, the restriction on identical Hebrew consonants generalizes across the board, to novel segments and phonemes (e.g., Berent et al., 2002), and computational simulations have shown that such generalizations fall beyond the scope of several non-algebraic mechanisms (Marcus, 2001; Berent et al., in press). The algebraic properties of phonological generalizations, on the one hand, and their demonstrable dissociation from acoustic familiarity, on the other, suggest that the knowledge available to participants specifically concerns grammatical well-formedness². Whether such algebraic knowledge modulates the identification of non-speech stimuli, specifically, remains to be seen. But regardless of what linguistic knowledge is consulted in this case, it is clear that some structural attributes of the linguistic message inform the classification of auditory stimuli as speech.

Why are well-formed non-speech stimuli harder to classify? Earlier, we proposed two possible loci for the effect of phonological well-formedness. One possibility is that well-formedness affects the allocation of attention to acoustic stimuli. In this account, well-formed stimuli engage attentional resources that are necessary for the speech-discrimination task, and consequently, the classification of non-speech stimuli suffers compared to ill-formed structures. On a stronger interactive account, well-formedness informs the evaluation of acoustic inputs by the language system itself. In this view, the output of the phonological grammar feeds back into the evaluation of the input, such that better-formed inputs are interpreted as more speech-like. While the weak attention view and strong interactive accounts both predict difficulties with well-formed non-speech stimuli, they differ with respect to their predictions for speech inputs. The strong interactive account predicts that well-formed speech stimuli should be easier to recognize as speech, whereas the attention-grabbing explanation predicts that well-formed speech stimuli should likewise engage attention resources, hence, they should be harder to classify than ill-formed counterparts. While our results are not entirely conclusive on this question, two observations favor the stronger interactive perspective. First, well-formedness impaired the identification of non-speech stimuli in Experiment 1, but it had no such effect on speech stimuli. The relevant (speech status × vowel duration) interaction, however, was not significant, so the interpretation of this finding requires some caution. Stronger differential effects of well-formedness obtained in Experiment 3. Here, well-formedness selectively impaired the classification of non-speech stimuli. Moreover, well-formedness produced the opposite effect on speech inputs (in Experiment 4): well-formed inputs with reduplication were identified more readily as speech. Although these conclusions are limited inasmuch as the contrasting findings for speech and non-speech stimuli come from different experiments (Experiments 3 vs. 4), these observations suggest that such effects can originate from structural descriptions computed by the language system itself. Although these findings leave open the possibility that the language system is specialized with respect to structures that it can compute, they do suggest that the selection of its inputs is not encapsulated.

² Note that our conclusions concern cognitive architecture, not its neural instantiation. The account outlined here is perfectly consistent with the possibility that phonological knowledge reshapes auditory brain areas, including low-level substrates. At the functional level, however, such changes support generalizations that are discrete and algebraic.
10,890.6
2011-05-11T00:00:00.000
[ "Linguistics", "Physics" ]
Sensor for Accumulated Charge Detection in Packaged Insulation Layer of Insulated Gate Bipolar Transistor Power Devices A compact system for detecting charge accumulation in the insulation layer of power electronic devices such as insulated gate bipolar transistor (IGBT) modules was developed. The amount of electric charge accumulated in IGBT modules at high dc voltages (100 to 5000 V) and in a wide temperature range (20 to 180 °C) was measured using the system, which consists of two units. The battery-operated sensing and transmission unit (500 × 700 × 300 mm³) was connected in series to an IGBT module to which various dc voltages were applied at various temperatures. The collected data were transmitted to a data receiving unit for analysis, which is electrically isolated from the sensing and transmission unit. The charge received as a function of time, Q(t), was analyzed to obtain parameter values that provide information on the insulation status of the IGBT module, such as the amount of initial charge, the absorption current, and the conduction current under various conditions. On the basis of the ratio Q(t)/Q₀, where Q₀ is the amount of initial charge, the amount of charge accumulated in the IGBT modules under high-stress conditions was obtained. The compact system for accumulated charge detection will be convenient for evaluating the insulation characteristics of IGBT modules, which operate under harsh conditions in real power electronic devices such as those used in electric vehicles.

Introduction

Power electronic transistors have been developed for use in a wide range of devices, (1) such as the thyristors used in high-voltage transmission grid systems, gate-commutated turn-off/gate turn-off (GCT/GTO) thyristors, high-voltage insulated gate bipolar transistor (IGBT) modules, (2) high-voltage intelligent power modules (HV-IPM), high-speed railway operation control systems, electric vehicles, and industrial robots. These power electronic devices operate under high electric fields and high temperatures. Therefore, heat radiation from the power chip of such a device is an important issue. (3,4) Figure 1 shows a cross-sectional view of an IGBT module. The power chip radiates large amounts of heat generated during high-voltage and high-current operation, and the heat spreader spreads the heat uniformly in the layer. The thermal energy is then conducted through the insulation sheet to the base plate, which is in contact with a radiator. Because the IGBT elements are assembled in a multilayer structure, each element is electrically isolated by an inserted insulation sheet, as shown in Fig. 1. To ensure the stable operation of these IGBT modules under harsh conditions, excellent insulation sheets with high thermal conduction have been developed by various methods, such as the impregnation of an insulation sheet with nanoparticles. (5,6) However, a nonuniform electric field develops in the insulation sheet because of electric charge accumulation under high-dc and high-temperature conditions, causing deterioration and electric breakdown of the insulation sheet. It is therefore very important to determine the electric charge accumulation characteristics in the insulation layer of power devices for the safe operation of IGBT modules. Conventionally, the characteristics of insulation materials have been evaluated by measuring their electric leakage current with a picoammeter. However, it is difficult to evaluate electric charge accumulation from such measurements.
Thus, the evaluation method was replaced: instead of depending on the leakage current measured using a picoammeter, the characteristics of charge accumulation were evaluated by the pulsed electroacoustic (PEA) method. (7,8) However, it was also difficult to use the PEA method for devices with complicated shapes, such as electric power cables and power devices such as IGBT modules. More recently, we have developed the Q(t) method for the evaluation of the characteristics of charge accumulation in insulation materials with complex shapes. For the measurement of electric charges accumulated in electric devices, a new measurement system using the Q(t) method has been proposed for products with complicated shapes. For example, the Q(t) method was applied to the evaluation of gamma-ray-irradiated coaxial cables by measuring the amount of residual electron-hole pair charge. (9) The method was also applied to the evaluation of the insulation of water-tree-deteriorated power coaxial cables. (10) In this study, the Q(t) method was applied to the evaluation of electric charge accumulation in the insulation layer of power devices, particularly in IGBT modules. Although the shape of IGBT modules is complex, we can obtain the amount of electric charge accumulated in the insulation layer under various conditions, for example, at various applied voltages (100 to 5000 V) and temperatures (25 to 180 ℃), using the charge accumulation evaluation system we developed. In this report, we describe the fabrication and practical application of the compact system for the evaluation of charge accumulation whose measurement principle is based on the Q(t) method. Principles of Measurement of Insulation Characteristics Conventionally, the characteristics of insulation materials have been evaluated by electric conduction measurement using a picoammeter. However, it is not easy to accurately evaluate the characteristics of insulation materials using the data obtained with a picoammeter. Figures 2 and 3 respectively show the measurement circuits and the measurement results of the conventional and new Q(t) methods for comparison. Conventional picoammeter method In the evaluation of the electric conduction of a dielectric material, a picoammeter is usually used to measure the leakage current of the insulation material under a dc electric field. As shown in Fig. 2, an instantaneous charging current [I_disp(t)] appears immediately after the application of a dc voltage (V_dc), then an absorption current [I_abs(t)] flows into the sample after the initial displacement current, and finally a conduction current [I_cond(t)] flows upon reaching equilibrium. (11) The electric conductivity κ of the test sample can be obtained by measuring the conduction current I_cond(t) and the applied voltage V_dc as
κ = I_cond d / (S V_dc), (1)
where S is the surface area of the measuring electrode and d is the thickness of the sample. The electric conductivity κ is defined by Eq. (2),
κ = enμ, (2)
where en is the electric carrier density (C/m³) and μ is the mobility of electric carriers (m²/Vs). It is assumed that en and μ are constants inside the test sample. Here, the electric conductivity κ of a test sample of polymeric insulation material is calculated. For a 100-μm-thick test sample, the surface area of the measuring electrode (S) is 5 cm², the applied voltage V_dc is 1000 V, and the conduction current is 10 pA. Then, the conductivity κ is 2 × 10⁻¹⁵ S/m.
In this measurement, the duration of voltage application to the test sample is usually about 5 min (300 s). We now consider the physical meaning of the obtained conductivity of 2 × 10⁻¹⁵ S/m. Here, the dielectric relaxation time (τ) is given as
τ = ε₀ε_r / κ, (3)
where ε₀ is the permittivity of free space, 8.854 × 10⁻¹² F/m, and ε_r is the relative permittivity of the test sample. Using ε_r = 2.2 and κ = 2 × 10⁻¹⁵ S/m in Eq. (3), we calculate the dielectric relaxation time τ to be on the order of 10⁴ s. The measurement time of 300 s is much shorter than this dielectric relaxation time. This means that the conduction current I_cond(t) has not reached the equilibrium state at 300 s. That is, the front of electric charge carriers that started from one electrode has not reached the opposite electrode, because en and E are nonuniform in the test sample. Therefore, it is not accurate to use Eq. (1) to calculate the electric conductivity: the value κ = 2 × 10⁻¹⁵ S/m was obtained relatively soon after the application of the test voltage. From the above argument, the charge movement in polymeric insulation materials is important in the evaluation of the characteristics of such materials. That is, it is important to determine whether electric charge accumulation occurs in the insulation material under a dc electric stress when evaluating the characteristics of insulation materials. Recently, the characteristics of electric insulation materials for use under high dc electric fields have been intensively studied because of the prevalent use of these materials in dc power systems, for example, in the dc-dc inverter systems of electric vehicles. To determine their characteristics, it is necessary to accurately evaluate the electric charge accumulation status in insulation materials under dc voltage application at operating temperatures. A new measurement system based on the Q(t) method is proposed to study the electric charge accumulation characteristics. The measurement principle of the Q(t) method is introduced next. Current integration Q(t) method Figure 3 shows the basic measurement concept of the Q(t) method. In the new method, the picoammeter shown in Fig. 2 is replaced with a capacitor C_INT for integrating the electric current in the circuit, as shown in Fig. 3. As a result, the electric current can be integrated as Q(t), which is given by Eq. (4),
Q(t) = Q₀ + ∫₀ᵗ I_abs(t′) dt′ + ∫₀ᵗ I_cond(t′) dt′, (4)
where Q₀ (= C_s V_dc) is the initial charge given by the product of the capacitance of the test sample (C_s) and the applied voltage (V_dc), I_abs(t) is the absorption current, and I_cond(t) is the conduction current, as shown in Figs. 2 and 3. The first term, Q₀ (= C_s V_dc), is the amount of charge at the electrode surfaces induced by the applied dc voltage. The second term represents the amount of electric charge obtained by integrating the absorption current I_abs(t) with respect to time, which indicates that electric charge carriers move in the sample and form space charges until the electric current reaches the equilibrium state. The third term represents the amount of electric charge obtained by integrating the conduction current I_cond(t) with time after reaching the equilibrium state. In this case, the amount of electric charge supplied from the voltage power source is equal to that of the conductive charge passing through the sample.
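To make these orders of magnitude concrete, the following short R snippet (a worked example using only the numbers quoted above, not part of the measurement software) evaluates Eqs. (1) and (3):

```r
# Worked example for Eqs. (1) and (3), using the sample values quoted in the text.
I_cond <- 10e-12   # conduction current (A)
d      <- 100e-6   # sample thickness (m)
S      <- 5e-4     # electrode surface area (m^2), i.e. 5 cm^2
V_dc   <- 1000     # applied dc voltage (V)

kappa <- I_cond * d / (S * V_dc)   # Eq. (1): electric conductivity
kappa                              # 2e-15 S/m, as quoted in the text

eps0  <- 8.854e-12                 # permittivity of free space (F/m)
eps_r <- 2.2                       # relative permittivity of the sample
tau   <- eps0 * eps_r / kappa      # Eq. (3): dielectric relaxation time
tau                                # ~9.7e3 s, i.e. on the order of 1e4 s,
                                   # far longer than the 300 s measurement window
```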
Figure 4 shows the conceptual diagram of the measurement system for evaluating the characteristics of electric charge accumulation in the insulation layer of the IGBT module, constructed on the basis of the concept of the Q(t) method (AD-9832A; A&D Company Limited). Dc voltages from 100 to 5000 V are applied to all the connected terminals of the IGBT module. The IGBT module is placed on a plate heater for temperature control. The base plate (see Fig. 1) of the IGBT module and the plate heater are connected to the ground. Therefore, in the Q(t) unit we developed, the integration capacitor C_INT is placed on the high-voltage side of the power source, as shown in Fig. 4. Fabrication of measurement system The Q(t) measurement system is composed of two units, as shown in Fig. 5. The first unit includes the capacitor C_INT for current integration, an analog-digital converter (ADC), and a transmission unit (ZigBee transmitter) with an antenna. Figure 6 shows the inside view of the electronic circuit of the Q(t) measurement system whose diagram is shown in Fig. 5. The second unit is a personal computer connected to a wireless receiver, which receives data from the transmission unit. Since all the power required for the transmitter unit is supplied by a battery (6 V, 300 mAh), the Q(t) unit can be used as a floating device. It can also be used for on-site measurements. To detect a wide range of small integrated currents (10⁻¹³–10⁻⁸ A) and guarantee stable measurements over a long time (1 h or longer), we chose a low-leakage integration capacitor (C_INT) made of polypropylene film (leakage relaxation time constant >10⁴ h) and an operational (OP) amplifier (Texas Instruments LMC6482/NOPB) with a high-impedance input (>10¹³ Ω). In addition, we chose a 16-bit ADC for the precise evaluation of the Q(t) data for IGBT devices in a wide range of applied voltages of 40 V–10 kV, as shown in Fig. 7. Equation (5) gives the relationship among the integration capacitance C_INT, the capacitance C_s of the test sample, the applied dc voltage V_dc, and the measured voltage V_INT across the capacitance C_INT; since the two capacitances are connected in series and carry the same charge,
C_INT V_INT = C_s (V_dc − V_INT). (5)
As the condition for measurements, V_dc (40 V–10 kV) >> V_INT (20 mV–5 V) is required, as shown in Fig. 7. We can then approximate C_INT from Eq. (5) as βC_s, where the coefficient β is the ratio of V_dc to V_INT. In this study, β was 2000. As the maximum range of measured voltages is V_INT = ±5 V, the maximum value of the AD conversion at V_INT = ±5 V is 15 bits, or 32768. The V_INT range of ±20 mV to ±5 V is then resolved to 3–5 digits, which guarantees highly accurate measurements. In the case of IGBT-M1 [Fuji Electric (serial number MBR50UA120)], C_s = 1.45 nF (see Fig. 7 as an example), and we obtained the value of the integration capacitance C_INT = 7.0 μF from the relation of Eq. (5) and Fig. 7. Here, we tested the stability of the Q(t) system for long-term measurement, the results of which are shown in Fig. 8. We chose a small capacitance value (C_INT = 0.1 μF). We then tested C_INT under voltages V_INT = +2.5, 0, and −2.5 V, and measured Q(t) for 12 h. The charge leakage ratios Q(t = 12 h)/Q₀ for V_INT = +2.5 and −2.5 V were 0.994 and 0.995, respectively. We confirmed that the Q(t) system is very stable, because practically no charge leakage was observed.
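The series-capacitor relation in Eq. (5) translates directly into the data reduction performed on each reading. The sketch below is our illustration of that step, with a hypothetical V_INT trace (the actual firmware of the unit is not published); it uses β = 2000 and C_s = 1.45 nF from the text.

```r
# Minimal sketch of the Q(t) data reduction implied by Eq. (5).
C_s   <- 1.45e-9            # sample capacitance (F), IGBT-M1 value from the text
beta  <- 2000               # ratio V_dc / V_INT assumed in the text
C_int <- beta * C_s         # integration capacitance approximated as beta * C_s
V_dc  <- 500                # applied dc voltage (V), so V_INT starts near V_dc/beta

t     <- seq(0, 180, by = 1)                 # time (s); hypothetical sampling
V_int <- (V_dc / beta) * (2 - exp(-t / 60))  # hypothetical digitized V_INT trace (V)

Q     <- C_int * V_int      # accumulated charge Q(t) = C_INT * V_INT
Q0    <- Q[1]               # initial induced charge, Q0 = C_s * V_dc by Eq. (5)
ratio <- Q / Q0             # charge ratio Q(t)/Q0 used throughout the analysis
tail(ratio, 1)              # approaches 2 for this made-up trace
```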
Verification of Q(t) measurement results Basic measurements were carried out to confirm the measurement results. The electric charge induced on the electrode surface (Q₀ = C_s V_dc) was measured immediately after applying a high voltage to the test sample. The result of the measurement at that time point indicates the amount of electric charge induced on the electrodes, which is proportional to the applied voltage V_dc. Figure 9 shows the linear relationship between Q₀ and V_dc at 25 ℃. In addition, we can obtain the capacitance of the insulation layer of IGBT-M1 from the slope of the relationship between Q₀ and V_dc in Fig. 9, which is C_s = 1.45 nF, consistent with Fig. 7. Figures 10 and 11 show Q(t), which indicates the amount of charge accumulated, at low and high temperatures, respectively. At a low temperature of 40 ℃ (Fig. 10), the Q(t) values were slightly higher than the Q₀ values over the wide range of applied voltages (100 to 5000 V). On the other hand, at a higher temperature of 180 ℃ (Fig. 11), the Q(t) values were significantly larger than the Q₀ values in this wide range of applied voltages. These results indicate that the electric charge accumulation characteristics strongly depend on the applied voltage and temperature of the module. Q(t)/Q₀ electric charge ratio To further study the charge accumulation characteristics, we selected IGBT-M1 as a test sample. The Q(t) at an applied voltage of 500 V at 160 ℃ was analyzed (Fig. 12). The initial amount of induced electric charge (Q₀) obtained was 335 nC. However, Q(t = 180 s) increased to 678 nC at 160 ℃ after the application of 500 V. Therefore, the electric charge ratio Q(t = 180 s)/Q₀ increased to 2.02. To discuss the characteristics of electric charge accumulation in the sample, Q(t = 180 s)/Q₀ was calculated, and the results are shown in Fig. 13. The electric charge ratio Q(t)/Q₀ is given by Eq. (6),
Q(t)/Q₀ = 1 + (1/Q₀) ∫₀ᵗ I_abs(t′) dt′ + (1/Q₀) ∫₀ᵗ I_cond(t′) dt′. (6)
The effect of temperature on the charge accumulation ratio is also shown in Fig. 13. The right-hand side of Eq. (6) has three terms. Immediately after voltage application In this regime, Eq. (7) holds,
Q(t)/Q₀ = 1, (7)
and the second and third terms on the right-hand side of Eq. (6) are zero because there is neither an absorption current [I_abs(t)] nor a conduction current [I_cond(t)]. All the electric charge is stored on the surface of the electrode, indicating that no electric charge accumulated in the insulation layer. In this case, the internal electric field is uniform throughout the insulation layer. The electric charge ratio Q(t = 180 s)/Q₀ is almost 1.05 in Fig. 13 at voltages from 100 to 5000 V at 40 ℃. Therefore, at near room temperature, no electric charge accumulation occurred in the IGBT-M1 tested over the wide range of applied voltages. Appearance of absorption current At temperatures from 60 to 140 ℃, the absorption current [I_abs(t)] appears owing to electric charge movement from the electrodes into the insulation layer until the movement reaches equilibrium. In this case, the third term on the right-hand side of Eq. (6), involving I_cond(t), can be taken as zero; therefore, Eq. (8) holds before the equilibrium state is reached:
Q(t)/Q₀ = 1 + (1/Q₀) ∫₀ᵗ I_abs(t′) dt′. (8)
The electric charge ratio in this state is within 1 < Q(t = 180 s)/Q₀ < 1.5 at voltages from 100 to 5000 V (Fig. 13). As electric charge accumulation proceeds in the insulation layer, the electric field distribution in this layer becomes non-uniform. This condition is caused by space charge accumulation. Period of conduction current During this period, the absorption current I_abs(t) is almost zero, and the conduction current I_cond(t) is dominant.
In Fig. 13, the electric charge ratio Q(t = 180 s)/Q₀ during this period is greater than 2.0 under applied voltages from 100 to 5000 V and high temperatures from 160 to 180 ℃. When the conduction charge carriers move in the insulation layer, the electric field distribution in the insulator becomes almost uniform again. During this period, we can apply Ohm's law, Eq. (1), to describe the characteristics of the insulation layer. Regarding the results shown in Fig. 13, the electric charge ratio Q(t = 180 s)/Q₀ was analyzed only for 180 s. If we increase the measurement time to, for example, 3600 s, we may obtain a different Q(t = 3600 s)/Q₀ ratio for the IGBT-M1 sample tested. Figure 14 shows Q(t = 180 s)/Q₀ measured over a wide range of temperatures from 40 to 180 ℃. The Q(t) at a low temperature of 40 ℃ is shown in Fig. 10, where the electric charge ratio Q(t = 180 s)/Q₀ was almost 1.05 (see Fig. 13) even when the applied voltage V_dc was increased to 5000 V. From this result, we can conclude that only a small amount of space charge accumulated near the electrodes at this temperature. On the other hand, at a higher temperature of 180 ℃ (Fig. 11), the electric charge ratios obtained from the results in Fig. 11 are from 2.4 to 3.0 when the applied voltage V_dc was 5000 V (Fig. 14). We assume that space charge accumulation occurs in the insulation layer and a conduction current forms during this period. The electric charge ratio Q(t = 180 s)/Q₀ in Fig. 13 shows that the charge accumulation is more strongly dependent on the temperature of the sample than on the applied voltage. Figure 14, which is redrawn from the results of Fig. 13, shows the dependence of the electric charge accumulation, in terms of the ratio Q(t = 180 s)/Q₀, on temperature. The accumulated charge ratio at 40 ℃ is close to 1 at 180 s from the initiation of voltage application. However, at temperatures of 80 and 140 ℃, the ratios were 1.2 and 1.5, respectively. Finally, the ratios Q(t = 180 s)/Q₀ at the same time at higher temperatures of 160 and 180 ℃ were 1.9 and over 2.5, respectively. Comparison between different IGBT modules We used two types of test sample, IGBT-M1 and IGBT-M2 [Mitsubishi intelligent power module (serial number PM30RSF060)], which had different insulation characteristics because they were produced by different manufacturers. We presented the results of the analysis of the electric charge accumulation characteristics in the insulation layer of IGBT-M1 in the above section. We now compare the charge accumulation characteristics of the insulation between IGBT-M1 and IGBT-M2. The dependence of the electric charge accumulation ratios of IGBT-M1 and IGBT-M2 on temperature is shown in Figs. 14 and 15, respectively. In the case of IGBT-M2, Q(t = 180 s)/Q₀ started to increase when the module temperature rose above 40 ℃, and the ratio reached 1.5 when the temperature further increased to 60 ℃. However, in the case of IGBT-M1, the ratio started to increase gradually but did not reach 1.5 until the temperature increased to 140 ℃. These results indicate that the electric charge accumulation characteristics of IGBT-M1 are better than those of IGBT-M2. Finally, we discuss the physical meaning of the charge accumulation characteristics in IGBT modules. Generally, electric charge accumulates between two conducting materials with an insulating material in the middle, which can be represented as a simple capacitor.
For the purpose of this study, the IGBT modules used can be represented by such a capacitor. Figure 1 simply represents the concept of an IGBT module and does not necessarily indicate the possible locations of charge accumulation. Charge accumulation in IGBT modules should strongly depend on the insulating materials used for the insulation structure and molding. Recently, ceramic materials such as Si₃N₄ and AlN, which have high thermal conductivity and excellent insulating properties, have been widely used as insulators for power modules. (12–15) Silicone gel and epoxy resin are also used as packaging materials. Thus, there must be marked differences in insulating properties among IGBT modules. Our experimental results only indicate the amount of charge accumulated in a bulk module, which can be used to identify the IGBT modules that are prone to electrical breakdown. Our experimental results do not provide information regarding the exact locations of charge accumulation in modules, and this is a limitation of the device developed in this study. It would be ideal if we could probe the internal structures and find the exact locations of charge accumulation; however, this is beyond the scope of this study. Conclusions The electric charge accumulation characteristics of the insulation layer of IGBT modules under electric and temperature stresses were studied using a new two-unit charge detection system we developed. The measurement principle of the system is based on the Q(t) method we developed. Two types of power module, IGBT-M1 and IGBT-M2, which had different insulation layer characteristics, were used as test samples. The modules were subjected to various electric and thermal stresses, namely, dc voltages from 100 to 5000 V and temperatures from 40 to 180 ℃. The electric charge ratio Q(t = 180 s)/Q₀ for IGBT-M1 after the start of the application of an electric voltage of 1000 V at 80 ℃ is 1.2. On the other hand, the Q(t = 180 s)/Q₀ for IGBT-M2 under the same conditions is 4.1. These results indicate that IGBT-M1 has better insulation characteristics than IGBT-M2. For our future work, we plan to fabricate a compact one-chip charge-accumulation-sensing unit that can be used for the on-site evaluation of the insulation characteristics of IGBT modules and other power semiconductor devices.
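As a compact summary of the regimes discussed around Eqs. (6)–(8), the following illustrative R helper (the function name and the exact thresholds are ours, interpolated from the text; the band between 1.5 and 2.0 is not explicitly classified in the paper) maps a measured Q(t = 180 s)/Q₀ ratio to an insulation status:

```r
# Illustrative classifier for the insulation status of a module, based on the
# Q(t = 180 s)/Q0 regimes discussed in the text (function name is ours).
classify_accumulation <- function(ratio) {
  if (ratio < 1.1) {
    "charge stored only at the electrode surfaces: no accumulation (Eq. (7))"
  } else if (ratio < 1.5) {
    "absorption-current regime: space charge forming in the insulation (Eq. (8))"
  } else if (ratio < 2.0) {
    "intermediate regime: absorption and conduction currents both contribute (assumption)"
  } else {
    "conduction-current regime: strong accumulation, elevated breakdown risk"
  }
}

classify_accumulation(1.05)  # IGBT-M1 at 40 degrees C (value from the text)
classify_accumulation(2.02)  # IGBT-M1 at 500 V and 160 degrees C (value from the text)
```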
5,395.6
2019-08-30T00:00:00.000
[ "Engineering", "Physics" ]
An Evaluation of the Impact of Multicollinearity on the Performance of Various Robust Regression Methods Objectives: To examine the performance of several regression methods comprising Ordinary Least Squares (OLS) and certain robust methods, including M-regression, Least Median of Squares (LMS), Least Trimmed Squares (LTS), MM-estimation and S-estimation, under fluctuating levels of collinearity, using the criteria Total Absolute Bias (TAB) and Total Mean Square Error (TMSE) together with some graphical tools. Methods/Statistical Analysis: Robust regression methods ensure good performance even when the fundamental assumption of normality is not satisfied. The presence of multicollinearity, however, affects the results of robust regression methods and can render them unsatisfactory. A quantitative evaluation of these techniques is provided using the criteria TAB and TMSE. Results are summarised using box plots of absolute bias, along with graphs of TAB and TMSE. Findings: The results show that for minor levels of collinearity the effect is low and similar across methods, but at greater levels of collinearity the effect is high and, performance-wise, the methods give quite incompatible results. It is also illustrated that a greater magnitude of collinearity combined with higher percentages of outliers ranks the underlying methods quite differently, with the MM-estimation method turning out to be the poorest. Conclusion: While applying any statistical method it is necessary to consider all the assumptions underlying that method, as well as every aspect of the data, to avoid misleading results. It is illustrated that the MM-estimation method, although the best candidate for higher percentages of outliers alone, becomes the poorest when a high level of collinearity is simultaneously present; hence robust ridge techniques need to be adopted. Introduction Regression analysis models the relationship between variables through an appropriate mathematical equation. In case the regression model satisfies certain basic assumptions, the OLS estimators happen to be best linear unbiased estimates 1,2 . These estimates are highly sensitive to violations of the assumptions, and even a single contaminated observation can render the OLS estimator unreliable. Researchers have therefore developed alternative estimation procedures, known as robust regression methods, which are robust to outliers. Multicollinearity likewise decreases the reliability of estimates, potentially affecting estimation, forecasting and hypothesis testing. Robust regression is a good alternative in case of violation of normality, but such methods are suspected to lose performance when the problem of non-normality is joined by multicollinearity concurrently. In this paper, efforts are made to evaluate the performance of various methods (OLS, M, LTS, LMS, S, and MM) under various simulation settings, specifically investigating the influence of collinearity levels on the performance of these methods. Ordinary Least Squares Method The least squares method is the generally used technique to estimate the parameters of the model. In this technique, estimates of the parameters are obtained by the principle of minimizing the sum of squared residuals. In case the linear regression model satisfies the basic assumptions, the OLS estimators stay best linear unbiased 1 .
This method provides an explicit estimate of the true values from the observed data as
β̂_OLS = (XᵀX)⁻¹ Xᵀy.
The logic behind the frequent use of this method is its computational ease, but unfortunately it depends upon a restrictive set of assumptions and is now criticized to a great extent for lacking robustness 3 . M-estimation Method M-estimation, a common robust regression procedure, was primarily introduced in 2 . This technique is in a sense a general form of least squares, substituting the quadratic loss function by a function ρ(·) that is symmetric and continuous, with a unique minimum at zero 4,11 . The function ρ(·) may be chosen in such a way that it denotes some weighting scheme for the residuals. The set of normal equations to be solved is given by the system
Σᵢ xᵢ ψ(eᵢ/s) = 0,
where ψ(e) = ρ′(e) is the derivative of the loss function and s is an estimate of the residual scale. The choice of a monotone ψ will not weight discrepant values as much as least squares, whereas a re-descending ψ function results in a weighting scheme that assigns weights in decreasing order up to a definite distance (e.g. 3σ) and then declines the weight to zero as the distance increases further. Some of the proposals for ρ(·), ψ and the weight function are given in Table 1. Least Median of Squares (LMS) Regression LMS regression was suggested by 3 , using the idea of minimizing the median of the squared residuals rather than their sum. Under this procedure the estimates of the model parameters are provided by
β̂_LMS = argmin_β medᵢ eᵢ²(β).
The LMS regression estimator, attaining a high breakdown point of almost 0.5, was the first equivariant estimator to do so. Although the LMS estimator is robust to outliers in the y-direction as well as in x-space, it has the drawback that its efficiency is quite low compared with least squares in the instance of Gaussian errors. Due to this deficiency the LMS estimator has very little direct use, but it is often used as an initial estimator for diagnostic purposes or in some other robust techniques 5 . LTS Regression The LTS regression method is an alternative robust regression technique suggested by Rousseeuw 3 . To elude outliers, this procedure minimizes the sum of squared residuals after the largest squared residuals are trimmed. The LTS regression estimator is given by
β̂_LTS = argmin_β Σᵢ₌₁ʰ e₍ᵢ₎²(β),
where e₍₁₎² ≤ … ≤ e₍ₙ₎² are the ordered squared residuals and h is the number of residuals retained after trimming; with a suitable choice of h this method achieves a breakdown point close to 50%. A difficulty of the LTS method is the work required for sorting the squared residuals in its objective function 5 . Several algorithms have been suggested in the literature for this approach: a simulated-annealing-based LTS algorithm developed in 14 , the 'Feasible Set Algorithm' of 12 , and another algorithm called FAST-LTS, given in 13 , which is much faster than all the existing algorithms. The high statistical efficiency and faster rate of convergence of LTS over LMS make LTS a more appropriate nominee than LMS as an initial step for two-stage estimators such as the MM-estimator and the generalized M-estimators 13,15,16 . In 17 , an L1 penalty is imposed on the LTS estimator and a sparse estimator is developed. S-Estimator (S-Regression) The S-estimator, an alternative estimator possessing a high breakdown point, is suggested in 3 . The S-estimator minimizes an M-estimate of the residual scale. This method estimates the true values as
β̂_S = argmin_β s(e₁(β), …, eₙ(β)),
where the scale estimate s is defined implicitly by (1/n) Σᵢ ρ(eᵢ/s) = K. A dominant choice of ρ is Tukey's biweight function,
ρ(u) = u²/2 − u⁴/(2c²) + u⁶/(6c⁴) for |u| ≤ c, and ρ(u) = c²/6 otherwise.
The S-estimator may possess a high breakdown value of 50% if K and ρ satisfy K/ρ(c) = 1/2, where c is the tuning constant. The S-estimator possesses the properties of high breakdown and asymptotic normality.
The compromise between breakdown point and efficiency is determined by the choice of the tuning constant c and K. In 18 it is concluded that the S-estimator under Gaussian errors can achieve an efficiency of 0.33 with a breakdown of 50%. In 19 , fast-S, an approximating algorithm for obtaining the S-estimator of regression, is suggested. MM-Estimation MM-estimation, recommended in 4 , is a special brand of M-estimation: a multistage estimator combining the high breakdown of an initial robust estimator with the high efficiency of a subsequent M-step. The calculation of the MM-estimator involves considering a high-breakdown-point initial estimator, computing an M-estimate of the residual scale, and obtaining an M-estimate of the true values on the basis of that scale. The algorithm for the MM-estimation procedure may be given as: 1. Compute an initial high-breakdown estimate β_initial (e.g. via LTS or S-estimation). 2. Compute an M-estimate of scale, S_m, from the residuals of β_initial. 3. Using β_initial from the first step and the residual scale estimate S_m from the second step, obtain β_final as a solution to
Σᵢ xᵢ ψ(eᵢ/S_m) = 0
for a particular value of the tuning constant c₀. The MM-estimator enjoys high efficiency and a high breakdown value (50%), but unluckily may be influenced by the occurrence of high-leverage observations [20][21][22] . In 23 a robust version of the ridge estimator, referred to as the Weighted Ridge MM-estimator (WRMM), is offered by combining weighted ridge regression with MM-estimation; a penalized MM-estimator called MM-lasso, using the L1 penalty and the mechanism of MM-estimation, is given in 24 . Simulation Studies To compare the performance of the different methods, numerous simulation options are examined. A simulation structure is implemented to allow non-normality and multicollinearity together. The particulars of the various aspects considered in the simulation settings are: Methods evaluated: the methods assessed under the various simulation settings include OLS, M-estimation, LMS, LTS, S and MM. Sample size: in the different simulation settings we considered sample sizes of 50, 100, 150 and 200. Number of predictor variables: for fitting models using different multicollinearity levels and fractions of outliers, the number of predictors (P) is considered at 2, 4 and 6. Fractions of outliers: in the various simulation settings, the focus is particularly on y-outliers. While judging the performance of the different methods, numerous fractions of outliers (particularly 10%, 20%, 30% and 40%) are generated in the data sets. To allow different levels of collinearity, the values of the predictors are generated by means of a methodology used in [25][26][27][28] . To generate the explanatory variables the following mechanism is used:
x_ij = (1 − ρ²)^(1/2) z_ij + ρ z_i(p+1), i = 1, …, n, j = 1, …, p, (12)
where z_ij is a standard normal variate and ρ is specified in such a way that the correlation coefficient between any two predictor variables is maintained at ρ². The scatterplot matrices of the predictors X₁, X₂, X₃ and X₄ generated by the system in Eq. (12) with different values of ρ are given in Figures 1–2, and the correlation matrix in Table 2. For the error distributions considered in Case I to Case V, at n = 50, 100, 150, 200 and P = 2, 4, 6, although the values of TMSE vary, TMSE increases with an increase in the value of P and decreases as the sample size grows. With respect to performance, a nearly similar pattern is observed for all the methods for each value of n (50, 100, 150, 200) and P (2, 4, 6).
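A minimal R sketch of the simulation design just described is given below (our illustration; the study's own code is not published). It generates collinear predictors via Eq. (12), contaminates the errors as in Case II, fits the six estimators with the MASS package, and accumulates TMSE and TAB over replications; the TMSE/TAB formulas shown are our reading of the criteria as total squared and total absolute deviations of the coefficient estimates, averaged over replications.

```r
library(MASS)  # rlm() for M/MM-estimation, lqs() for LMS/LTS/S-estimation

set.seed(1)
n <- 200; p <- 4; rho <- 0.99; reps <- 100
beta <- rep(1, p)                       # true coefficients
tmse <- tab <- setNames(numeric(6), c("OLS", "M", "LMS", "LTS", "S", "MM"))

for (r in seq_len(reps)) {
  # Eq. (12): x_ij = sqrt(1 - rho^2) z_ij + rho z_i(p+1), corr(x_j, x_k) = rho^2
  Z <- matrix(rnorm(n * (p + 1)), n, p + 1)
  X <- sqrt(1 - rho^2) * Z[, 1:p] + rho * Z[, p + 1]
  # Case II errors: 0.9 N(0,1) + 0.1 N(10,1), i.e. 10% y-outliers
  e <- ifelse(runif(n) < 0.9, rnorm(n), rnorm(n, mean = 10))
  y <- X %*% beta + e
  d <- data.frame(y = y, X)

  fits <- list(OLS = coef(lm(y ~ . - 1, d)),
               M   = coef(rlm(y ~ . - 1, d, method = "M", maxit = 100)),
               LMS = coef(lqs(y ~ . - 1, d, method = "lms")),
               LTS = coef(lqs(y ~ . - 1, d, method = "lts")),
               S   = coef(lqs(y ~ . - 1, d, method = "S")),
               MM  = coef(rlm(y ~ . - 1, d, method = "MM", maxit = 100)))
  for (m in names(fits)) {
    tmse[m] <- tmse[m] + sum((fits[[m]] - beta)^2) / reps  # total MSE
    tab[m]  <- tab[m]  + sum(abs(fits[[m]] - beta)) / reps # total absolute bias
  }
}
round(rbind(TMSE = tmse, TAB = tab), 3)
```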
It is observed that for lower collinearity levels, the values of TMSE for all techniques are not very divergent, but at greater collinearity levels, the TMSE values for all the methods are quite different. The graphs of TMSE for OLS, M, LTS, LMS, MM and S for different collinearity levels and outlier percentages for n = 200 are given in Figure 3. Figure 3 reveals that at smaller levels of multicollinearity, increasing percentages of outliers cause the TMSE to increase with little difference among methods, but at higher levels of multicollinearity the TMSE values increase markedly, with wider differences, and give a different ranking of the methods. In Figure 4, a graphical analysis for OLS, M, LMS, LTS, S and MM is given for the second performance measure, total absolute bias. The results from Figure 4 are consistent with the results in Figure 3. The results over the two performance measures, TMSE and TAB, for the different scenarios are given in Tables 3 and 4, respectively. In Figure 5, a graphical analysis of box plots for the various situations is also given. Conclusions In this study, efforts are made to compare the performance of different regression methods under the influence of varying levels of multicollinearity. A case-wise discussion of the performance of the different methods is as under: Case I: ε ~ N(0, 1): for the standard normal error distribution, at lower collinearity levels the performance of all the methods is very similar, but as the level of collinearity grows they behave quite differently. Moreover, it is evident that at higher collinearity levels LTS and LMS seem the poorest, S the next, whereas OLS, M and MM appear to perform reasonably well. Case II: ε ~ 0.9N(0, 1) + 0.1N(10, 1): the error distribution considered in this case allows 10% outliers in the y-direction. At lower levels of collinearity OLS appears markedly different, while the other methods behave fairly alike. At higher collinearity levels with this 10% fraction of outliers, OLS appears the worst, (LTS, LMS and S) the next, and (M, MM) perform sensibly well, with MM the finest of all. Case III: ε ~ 0.8N(0, 1) + 0.2N(10, 1): for a fraction of 20% outliers, OLS is markedly different at all levels, while the remaining methods perform nearly similarly at lower collinearity levels but behave very differently at greater collinearity levels. In this case the ranking of the methods is nearly similar to that in Case II. Case IV: ε ~ 0.7N(0, 1) + 0.3N(10, 1): in this case, at lower levels of collinearity there seem to be two categories, (M-estimation and OLS) and (LTS, LMS, S and MM); all the methods together show quite diverse behavior at higher collinearity levels. Particularly at higher levels of collinearity MM is the best, with (LTS, LMS and S) the succeeding set of best, whereas the performance of OLS and M is very poor and M-estimation gives the worst result. Case V: ε ~ 0.6N(0, 1) + 0.4N(10, 1): in this case, 40% outliers together with lower to moderate collinearity levels result in higher values of TMSE for the OLS and M methods, MM is the next method giving subsequently higher values of TMSE, and the other methods (LTS, LMS and S) have low and nearly similar values. However, 40% outliers considered with a high multicollinearity level ranks the methods quite differently: the MM method, which is the unsurpassed one in all other cases, turns out to be the worst among all, M and OLS form the second set resulting in higher TMSE values, and the other three methods (LTS, LMS and S) generate relatively small values of TMSE.
3,108
2019-07-01T00:00:00.000
[ "Mathematics" ]
ATfiltR: A solution for managing and filtering detections from passive acoustic telemetry data Acoustic telemetry is a popular and cost-efficient method for tracking the movements of animals in the aquatic ecosystem. But data acquired via acoustic telemetry often contains spurious detections that must be identified and excluded by researchers to ensure valid results. Such data management is difficult, as the amount of data collected often surpasses the capabilities of simple spreadsheet applications. ATfiltR is an open-source package programmed in R that allows users to integrate all telemetry data collected into a single file, to conditionally attribute animal data and location data to detections, and to filter spurious detections based on customizable rules. Such a tool will likely be useful to new researchers in acoustic telemetry and enhance the reproducibility of results. • ATfiltR compiles telemetry files and identifies and stores elsewhere all data that was collected outside of your study period (e.g. when your receivers were on land for servicing). • As spurious detections are unlikely to appear sequentially in the data, ATfiltR finds all detections that occurred only once (per receiver or in the whole array) within a user-designated time period and stores them elsewhere. • ATfiltR identifies detections that are impossible given the animals' swimming speeds and the receivers' detection range and stores them elsewhere. Method details Background Acoustic telemetry has become a key method in the field of aquatic movement ecology thanks to its cost efficiency and the spatial and temporal resolutions at which it allows organisms to be tracked, even in difficult-to-access habitats [10] . Acoustic telemetry uses underwater acoustic receivers, enabling researchers to record the local presence, and sometimes the exact location, of animals fitted with transmitters that emit encoded acoustic signals (i.e. acoustic tags) [8] . While the hardware and sampling methods differ across studies, acoustic telemetry can be broadly classified into two categories: active acoustic telemetry, where animals are tracked in real-time by a mobile receiver (e.g.
from a boat); and passive acoustic telemetry, where several remote receivers are deployed underwater at fixed positions and continuously listen for and record detections and related data, which can regularly be downloaded. Passive acoustic telemetry is broadly favored as it allows tracking animals around the clock, independent of the presence of researchers on site, and has an excellent ratio of data acquired to labor intensity [8] . But passive acoustic receivers have been known to record false positives (i.e. false or spurious detections; [13] ), which can significantly impact the interpretation of the results [2] . As a result, data handling and filtering is a critical step for passive acoustic telemetry studies [6 , 7] . Because data is acquired continuously, traditional spreadsheet applications are usually insufficient to handle the datasets, and the work has to be done using algorithms in programming languages instead, which is an especially time-consuming task [7] . In this context, and to improve the reproducibility of passive acoustic telemetry studies, the package actel was released for R in 2021 [4 , 12] . actel allows users to process and filter passive acoustic telemetry data in a systematic and reproducible way (among other things) and, naturally, we turned to this solution for our own data (Dhellemmes et al. under review, Fisheries Research). Unfortunately, with over 29 million data points collected from more than 300 animals at 145 geographical locations, we found actel not to be a perfectly appropriate solution in our case. User input is central to actel , which identifies potentially problematic detections in the data, displays them and lets the user decide whether the detections should be erased. While this allows for very fine tuning of the filtering process, it can become extremely time-consuming when datasets are large, and user decisions might be inconsistent over time, leading to potential biases. Here, we present ATfiltR , an alternative open-source R package that processes passive telemetry datasets and filters spurious detections according to a fully customizable set of rules. The package consists of a suite of five functions, one of which allows the user to prepare their data, following processing in ATfiltR , for use in actel , allowing researchers to potentially use ATfiltR for coarse batch processing and actel for finer filtering and data exploration. Description of the tool ATfiltR consists of five functions, each described below, that can be used consecutively to handle and filter passive acoustic telemetry data. The five functions are designed to be used in sequential order, and will, respectively, (1) load and organize raw detection files, (2) load and organize animal metadata and receiver deployment files, while attributing this information to the detection data and identifying detections outside of receiver deployments, (3) identify and filter unlikely solitary detections within a specified duration, (4) identify and filter detections that occur at impossible swimming speeds, and (5) prepare the filtered dataset for further analysis with the actel package. One important detail is that ATfiltR is mostly project-based. This means that it is meant to operate within an RStudio project [1] , which is an automatic way to set the root directory in which the work should be performed. This allows ATfiltR to be used directly across machines and collaboratively (e.g. through cloud-based directories) without having to amend the working directory.
Two of the functions ( findSolo() and speedCheck() , described below) can be used outside of a project, if users are only interested in using one or both independently of the rest of the available functions. ATfiltR was partly created using the data.table syntax [3] , an alternative to the native R syntax allowing for higher processing speed, specifically for large datasets. compileData() This function identifies all data files that have the extension indicated by file.ext within the folder indicated by detection.folder ( Table 1 ). If files are found, one is loaded in R, and a dialogue with the user starts to identify key features in the data ( Fig. 1 ): (1) the row in which the column names are stored (with the option for the user to input their own column names), (2) whether some of the first rows should be omitted from the data, (3) whether some of the columns should be omitted from the data, (4) whether the date and the time are in separate columns (and then in which columns dates and times are stored), (5) which column contains the IDs of the transmitters, and (6) which column contains the ID of the receiver. Table 1. Arguments used in the different ATfiltR functions (in the original table, check marks indicate which of the five functions use each argument; that mapping could not be recovered here). detection.folder: character string; the name of the folder containing the detection data files. data.folder: character string; the name of the folder containing the other data files (deployment, spatial and animal data); can be the same as detection.folder. file.ext: character string; the extension of the detection data files stored in the detection.folder. sep.type: character string; the character that delimits columns in the detection data files (for compileData) and in the other data files (for wWindow). save: TRUE or FALSE; should the data be saved in the detections.folder post-processing? If FALSE, the data is only kept in the R environment. remove.duplicates: TRUE or FALSE; should the duplicates found in the data be removed? save.duplicates: TRUE or FALSE; should the duplicates be saved in the detections.folder post-processing? split: TRUE or FALSE; should the data be compiled in small batches? This is recommended for large data files, as it prevents the software from running out of memory. save.out.of.deployment: TRUE or FALSE; should the data obtained outside the deployment period be saved in the detections.folder post-processing? save.unknown.tags: TRUE or FALSE; should the data from unknown tags be saved in the detections.folder post-processing? discard.first: numeric; how many of the first hours after deploying a tag should be discarded (e.g. 24 = the first 24 h will be discarded)? If save.unknown.tags is TRUE, the discarded data will be saved in the unknown tags file. This dialogue is used because different acoustic telemetry systems may have slightly different formats, and we wanted ATfiltR to be usable across platforms with minimal reformatting required from the user. The format of the time and date is especially critical in such spatio-temporal data, and the integration of lubridate 's parse_date_time function allows a variety of different formats to be handled [5] . Once the dialogue is over, ATfiltR loads all the other data files (in batches if split = T ), formats them, and compiles them into one single RData file, which is saved in the detection.folder if save = T . The user also has the option of letting ATfiltR identify duplicated data points (detections of the same transmitter ID, on the same receiver, at the same date and time), removing them and saving them in a separate file.
wWindow() wWindow uses the previously compiled data frame (the most recently created data frame compiled via compileData that is found in the detection.folder is automatically loaded) and attributes animal data and receiver location data to the data frame. To do that, the user needs to provide three files that are saved in the data.folder ( Table 1 ). Everyone handles and enters data slightly differently, so we designed ATfiltR to work with user input to identify which data sets to use, which columns contain the relevant information, etc. This way, users can use ATfiltR with minimal prior reformatting of the data. The three necessary files contain the spatial data, the deployment data, and the animal data. We now describe the conditions these files must meet to be usable by ATfiltR . Spatial data must contain a longitude column, a latitude column (in any format), a station name column, which represents the unique name of each location where receivers have been deployed, and (optionally) a range category column, which indicates the names of the categories used when attributing each receiver's range (if receivers have different ranges; only relevant for the later speedCheck() ). Each row corresponds to one location (i.e. station) at which receivers have been deployed. Deployment data must contain a column with the receiver ID (in the same format as in the detection data), the name of the location at which it is deployed (station name, corresponding to the names in the spatial data file), the date and time at which the receiver is deployed, and the date and time at which it is retrieved ( Fig. 2 ). Each row corresponds to one deployment event (from deployment to retrieval) for a receiver. Receivers that are redeployed multiple times require multiple rows. Animal data must contain a transmitter ID column (in the same format as in the detection data), a unique animal ID column (to allow transmitters to be deployed consecutively in multiple animals), and a date and time column (date and time of tag implantation). Each row corresponds to the tag implantation of one animal. If there are more rows per individual (e.g. recapture events), users may indicate a tag.status column to keep track of the events. If the animal data includes a column with the name of the location where the animal was captured, that column can also be identified. All data files may contain more columns than the necessary ones. The names of the files and the names of the columns do not matter, as users may choose the files they wish to use and will indicate which columns are relevant ( Fig. 3 : example of a user-input event; the program asks the user to identify the file that they wish to use each time). Once this is done, ATfiltR saves the three data files in a standard format in the data.folder . This means that the function can be used again without needing any user input: ATfiltR will automatically load the standard files and proceed with the data processing. wWindow then attributes the receiver station names and the longitude and latitude data to all rows, and identifies data that is collected outside of a deployment "window" (the time span between the deployment and the retrieval of a receiver). Similarly, the animal IDs are attributed, and any data collected from an undeployed transmitter (or from other transmitters in the area which are not included in the animal data) is identified. Other columns (e.g. body size) in the animal file may also be added to the detection data.
The data is then saved in a new, timestamped RData file, which is saved in the detection.folder if save = T . findSolo() When animals occur within the range of a receiver, they are usually there long enough to log multiple detections on that receiver. Consequently, spurious detections are commonly identified as detections that are recorded only once within a certain time frame (i.e. solitary detections). The time frame used, and whether the detection should be counted only once on a given receiver or on the whole array, differ from study to study. For instance, Meyer et al. [11] identified spurious detections as detections recorded only once on the whole array during a 24-h time frame. Kessel et al. [9] considered detections that occurred only once on a given receiver within a 1-h period as spurious. findSolo is a fully customizable rule-based tool allowing researchers to identify spurious detections ( Fig. 4 ). It uses the data previously generated via compileData and wWindow (and automatically loads the most recent file). The user can indicate whether solitary detections should be considered on a per-receiver basis or over the whole array with per.receiver (TRUE or FALSE). The time frame can also be indicated by using the delay argument ( Table 1 ). Once the solitary detections are found, the user has the option to save them separately ( save.solo = TRUE). The data is then saved in a new, timestamped RData file, which is saved in the detection.folder if save = T . This function can be used outside of an RStudio project by indicating project = F and the appropriate data.file (object that contains the data to be used), ID.col (name of the column containing the animal ID), DateTime.col (name of the column containing the timestamp) and Station.col (name of the column containing the location ID). speedCheck() Spurious detections may also be identified if they are logged on receivers that are at a distance that animals could not feasibly have traveled from their previous location within the time frame at which they were logged. speedCheck uses the distance between receivers, the theoretical swimming speed of the animals (customizable) and the receiver range (customizable) to estimate the feasibility of detections. Fig. 5: example dialogue with the program ensuring that speed calculations are reasonable; here a fish of 616 mm total length is calculated to have a speed of 8457 m·h⁻¹ via the critical speed formula for fish in [14] (base = 0.019, factor = "TL", exponent = 0.75). Fig. 6: example of the iterative process used to identify speed errors. The function requires a distance matrix among the stations, with the column names and the row names in the matrix corresponding to each station name in the spatial data file. If no landmasses are present in the study area, users may use the Haversine formula to calculate the matrix themselves. Otherwise, we recommend using actel::distancesMatrix() with actel = F on the spatial data file [4] . If receivers have different ranges, these may be attributed using a range file containing a column with the range category, the time step (if range varies through time) and the range in meters. The speed (m·h⁻¹) of each animal can be calculated via an equation, allowing speed to scale with body size, for example ( Fig. 5 ). Speed can also be the same for all animals, in which case the user can indicate factor = NA, exponent = NA (the default setting) and give the speed they wish to use, in m·h⁻¹, in the base argument.
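As a concrete check of the speed formula parameterized in the Fig. 5 example (our arithmetic only; we assume the formula yields a speed in m/s that is then converted to m·h⁻¹):

```r
# Critical-speed formula as parameterized in the Fig. 5 example:
# speed = base * TL^exponent, with TL the total length in mm.
base <- 0.019; TL <- 616; exponent <- 0.75
speed_ms <- base * TL^exponent   # ~2.35 m/s (assuming the formula yields m/s)
speed_mh <- speed_ms * 3600      # ~8457 m/h, matching the value in the caption
speed_mh
```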
speedCheck operates by identifying detections that occur at an unreasonable speed, removing them, and reiterating this process until no detections logged above the threshold speeds remain ( Fig. 6 ); the removed detections are saved elsewhere if save.speedy = T . Similarly to findSolo() , this function can be used outside of an RStudio project by indicating project = F and the appropriate data.file (object that contains the data to be used), ID.col (name of the column containing the animal ID), DateTime.col (name of the column containing the timestamp) and Station.col (name of the column containing the location ID). toActel() actel requires specifically formatted input. ATfiltR has already identified the relevant columns in the detection file and in the animal, deployment and spatial data, so it can perform the formatting for the user. Users may pick which detection file (already processed through ATfiltR ) to use, and the function automatically creates an actel.detection.RData file, a biometrics.csv , a deployments.csv and a spatial.csv in a directory of the user's choice. These files can be directly used for basic post-processing in the actel package, including its data exploration tools. Test of the method We created a realistic test dataset consisting of detections of four animals across three receivers, and ran all functions successfully. Detailed code and results can be found in the Appendix. Conclusions With ATfiltR, we were able to filter our own passive acoustic telemetry data in a fast, stable and reproducible way across different projects with minimal reformatting. ATfiltR can be used as a standalone solution or as a preliminary step before using the package actel . Because each project is different, users may not find ATfiltR fully compatible with their own data (just as we found our data difficult to filter in actel ). This package is an open-source cooperative project, hosted on GitHub, and we expect changes to the main functions as user needs arise; users may reuse parts or all of the code we developed, as well as suggest corrections to the functions. The publication of ATfiltR, as well as that of actel previously, will speed up the publication process of passive acoustic telemetry projects, as researchers may handle their data without having to develop a proprietary algorithm. It may also improve replicability, as the code used to filter the data can be easily described and published. Ethics statements None. Funding This work was supported by the European Maritime and Fisheries Fund and the State of Mecklenburg-Vorpommern, Ministry of Agriculture and Environment (grant/award numbers MV-I.18-LM-004 and B730117000069; BODDENHECHT). Declaration of Competing Interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. Data availability The code and test data are shared.
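For orientation, here is a sketch of a full ATfiltR session as we read the description above. Folder names and argument values are hypothetical, the exact function signatures should be checked against the package documentation, and only argument names mentioned in the text and Table 1 are used.

```r
# Hypothetical end-to-end ATfiltR session, run inside an RStudio project.
library(ATfiltR)

# (1) Compile all raw detection files found in the detections folder
compileData(detection.folder = "detections", file.ext = "csv", sep.type = ",",
            split = TRUE, remove.duplicates = TRUE, save = TRUE)

# (2) Attribute animal, deployment and spatial data; flag out-of-deployment
#     detections and unknown tags
wWindow(detection.folder = "detections", data.folder = "data", save = TRUE)

# (3) Remove solitary detections: here, single hits on a given receiver
#     within the chosen delay window
findSolo(per.receiver = TRUE, delay = 1, save.solo = TRUE, save = TRUE)

# (4) Remove detections implying impossible swimming speeds, scaling speed
#     with body size as in the Fig. 5 example
speedCheck(base = 0.019, factor = "TL", exponent = 0.75,
           save.speedy = TRUE, save = TRUE)

# (5) Export the filtered data and metadata for further processing in actel
toActel()
```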
4,667.6
2023-05-01T00:00:00.000
[ "Computer Science" ]
The Axiflavon We show that solving the flavor problem of the Standard Model with a simple $U(1)_H$ flavor symmetry naturally leads to an axion that solves the strong CP problem and constitutes a viable Dark Matter candidate. In this framework, the ratio of the axion mass and its coupling to photons is related to the SM fermion masses and predicted within a small range, as a direct result of the observed hierarchies in quark and charged lepton masses. The same hierarchies determine the axion couplings to fermions, making the framework very predictive and experimentally testable by future axion and precision flavor experiments. Three of the major open questions in particle physics are (i) the strong CP problem (why is the QCD θ angle so small?), (ii) the origin of Dark Matter (DM), and (iii) the Standard Model (SM) flavor puzzle (why are the masses of fermions so hierarchical?). The first problem can be elegantly addressed by the QCD axion: the pseudo Goldstone boson of an approximate global U(1) symmetry that has a color anomaly [1][2][3]. The two main classes of axion models based on this mechanism are usually referred to as the KSVZ [4,5] and the DFSZ [6,7] axion solutions. It is well known [8][9][10] that in most regions of the parameter space the QCD axion serves as a viable DM candidate. Finally, the SM flavor problem can be elegantly resolved by introducing approximate flavor symmetries, which are spontaneously broken at large scales as in the original Froggatt-Nielsen (FN) mechanism [11]. In this letter we propose a unified framework where the approximate symmetry of the QCD axion is identified with the simplest flavor symmetry of the FN mechanism (the setup is the minimal realization of an old idea by F. Wilczek [12] that axion and flavor physics could be connected). The structure of quark and lepton masses and mixings follows from a spontaneously broken U(1)_H flavor symmetry which generically has a QCD anomaly. The resulting Nambu-Goldstone boson, the axiflavon, automatically solves the strong CP problem by dynamically driving the theory to a CP-conserving minimum [13]. Non-thermal production of the axiflavon from the misalignment mechanism can then reproduce the observed DM relic density, provided that the U(1)_H breaking scale is sufficiently large. This simultaneous solution of the flavor, strong CP and DM problems leads to sharp predictions for the properties of the axiflavon that can be tested experimentally.¹ Of particular importance is the axion coupling to photons, which is determined by the ratio E/N, i.e., the ratio of the electromagnetic over the QCD anomaly coefficient. This ratio is essentially a free parameter in generic axion models (see Refs. [15,16] for a recent discussion). In the axiflavon setup E/N is directly related to the U(1)_H charges of the SM fermions and thus to the hierarchies between SM fermion masses. Despite the considerable freedom in choosing these charges in the simplest U(1)_H model, we find a surprisingly sharp prediction for E/N centered around 8/3, the prediction of the simplest DFSZ model: E/N ∈ [2.4, 3.0] at 99.9% probability. This result is a direct consequence of the strong hierarchies in up- and down-type quark masses and only weak hierarchies in the ratio of down-quark to charged-lepton masses. A similarly restrictive range for E/N can be found also in a broad class of models with non-minimal flavor symmetries like U(2) (which are more predictive in the fermion sector).
The above range for E/N can be translated into a prediction for the ratio of the axion-photon coupling $g_{a\gamma\gamma}$ to the axion mass $m_a$: $g_{a\gamma\gamma}/m_a \in [1.0, 2.2] \cdot 10^{-16}~\mathrm{GeV}^{-1}/\mu\mathrm{eV}$. (A similar approach has been proposed in Ref. [14], where the requirement of gauge coupling unification was combined with the KSVZ axion solution to the strong CP and DM problems in order to determine the phenomenology of the so-called unificaxion.) For axion masses in the natural range for axion DM, $m_a \approx (10^{-3} \div 0.1)$ meV, this region will be tested in the near future by the ADMX experiment. The axiflavon can also be tested by precision flavor experiments looking for the decay $K^+ \to \pi^+ a$. Indeed, the flavor-violating couplings of the axiflavon to quarks are also related to quark masses, but in contrast to E/N they are more sensitive to the model-dependent O(1) coefficients $\kappa_{sd}/N \sim O(1)$ defined below. In the natural range of axion DM this decay can be within the reach of the NA62 and ORKA experiments, depending on the model-dependent coefficients. We summarize our results, along with the present and expected experimental constraints, in Fig. 1 at the end of this letter. SETUP We assume that the masses of the SM fermions come from the vacuum expectation value (vev) $v = 174$ GeV of the SM Higgs H, while the hierarchies of the Yukawa couplings are due to a global horizontal symmetry $U(1)_H$ under which the fermion fields carry generation-dependent charges. Here $Q_i$ and $L_i$ are the quark and lepton electroweak doublets, the remaining fields are $SU(2)_L$ singlets, and $i = 1, 2, 3$ is the generation index. For simplicity we assume that the Higgs does not carry a $U(1)_H$ charge, so that the flavor hierarchies are explained entirely by the fermion sector. This assumption will be relaxed below. The $U(1)_H$ symmetry is spontaneously broken at a very high scale by the vev $V_\Phi$ of a complex scalar field $\Phi$ with $U(1)_H$ charge of $-1$. All other fields in the model, the FN messengers, have masses of $O(\Lambda) \gg V_\Phi \gg v$ and can be integrated out. Note that $\Lambda$ is a scale above $U(1)_H$ breaking, implying that the fermionic FN messengers are vector-like under the $U(1)_H$. The Yukawa sector in the resulting effective theory is then given by Froggatt-Nielsen operators in which each Yukawa bilinear is dressed by a power of $\Phi/\Lambda$ fixed by its total $U(1)_H$ charge, with coefficients $a^{u,d,e}_{ij}$ that are complex numbers, assumed to be O(1). Setting $\Phi$ to its vev, $\langle\Phi\rangle = V_\Phi/\sqrt{2}$, gives the SM Yukawa couplings as O(1) coefficients times powers of the small parameter $\epsilon \equiv V_\Phi/(\sqrt{2}\Lambda)$. The hierarchy of masses follows from the $U(1)_H$ charge assignments, with fermion mass ratios and mixing matrix elements given by appropriate powers of $\epsilon$, $V_{ij} \sim \epsilon^{|q_i - q_j|}$, where V is the CKM matrix in the quark sector and the PMNS matrix in the charged lepton sector. The observed CKM structure is typically obtained for $\epsilon$ of the order of the Cabibbo angle, $\epsilon \sim 0.23$. The exact values of the $U(1)_H$ charges can be obtained from a fit to fermion masses and mixings, and are subject to the uncertainties in the unknown O(1) numbers $a^{u,d,e}_{ij}$. As we are going to demonstrate, these uncertainties only weakly influence the main phenomenological predictions. Note that the pattern of masses and mixings in the neutrino sector can also be explained in this setup; however, this sector of the SM is irrelevant for the prediction of the color and electromagnetic $U(1)_H$ anomalies. The field $\Phi$ contains two excitations, the CP-even flavon, $\phi$, and the CP-odd axiflavon, $a$: $\Phi = \frac{1}{\sqrt{2}}(V_\Phi + \phi)\, e^{i a/V_\Phi}$. The flavon field $\phi$ has a mass $m_\phi \sim O(V_\Phi)$, and thus is not directly relevant for low-energy phenomenology and can be integrated out. The axiflavon, $a$, is a Nambu-Goldstone boson. It is massless at the classical level, but receives a nonzero mass from the breaking of $U(1)_H$ by the QCD anomaly.
Its couplings to SM fermions $F_i$ are proportional to the respective $U(1)_H$ charges and suppressed by the breaking scale $V_\Phi$. These couplings are in general not diagonal in the fermion mass eigenstate basis, due to the generation dependence of the charges. This induces flavor changing neutral currents (FCNCs), which are experimentally well constrained and will be discussed in the next section. Note that several axion models with flavor-violating couplings to fermions have been proposed in the literature, see e.g. [18][19][20][21][22][23][24][25][26]. In the axiflavon setup they are directly related to the SM fermion masses and thus predicted up to O(1) uncertainties. The axiflavon couplings to gluons and photons are controlled by the color and electromagnetic anomalies, $\mathcal{L} \supset \frac{\alpha_s}{8\pi} \frac{a}{f_a}\, G^{\mu\nu}\tilde{G}_{\mu\nu} + \frac{E}{N}\frac{\alpha_{\rm em}}{8\pi} \frac{a}{f_a}\, F^{\mu\nu}\tilde{F}_{\mu\nu}$, where $\tilde{G}^{\mu\nu} = \frac{1}{2}\epsilon^{\mu\nu\rho\sigma} G_{\rho\sigma}$ and we have switched to the standard axion notation, introducing $f_a = V_\Phi/(2N)$. The two anomaly coefficients, N and E, are completely determined by the $U(1)_H$ charges of the SM fermions in the minimal scenario where these are the only states with chiral $U(1)_H$ charge assignments (see the more detailed discussion below). Interestingly, these coefficients can be directly related to the determinants of the fermion mass matrices as [27][28][29] $\det m_u \det m_d = \alpha_{ud}\, v^6\, \epsilon^{2N}$ and $\det m_d/\det m_e = \alpha_{de}\, \epsilon^{\frac{8}{3}N - E}$, where the quantities $\alpha_{ud} = \det a^u \det a^d$ and $\alpha_{de} = \det a^d/\det a^e$ contain the O(1) uncertainties, given by the anarchical coefficients in Eq. (4). Taking fermion masses at $10^9$ GeV from Ref. [30], one finds $\det m_u \det m_d/v^6 \approx 5 \cdot 10^{-20}$ and $\det m_d/\det m_e \approx 0.7$, which makes it clear that up to small model-dependent corrections we have $E = 8/3\, N$, and so we are close to the simplest DFSZ axion solution [31]. Indeed, the phenomenologically relevant ratio E/N is independent of $\epsilon$ and given by $\frac{E}{N} = \frac{8}{3} - \frac{2 \log\left[\det m_d/(\alpha_{de} \det m_e)\right]}{\log\left[\det m_u \det m_d/(\alpha_{ud}\, v^6)\right]}$. The most natural values for the coefficients are $\alpha_{ud} = \alpha_{de} = 1$, in the sense that the Yukawa hierarchies are entirely explained by $U(1)_H$ charges, giving $E/N \approx 2.7$. To estimate the freedom from O(1) uncertainties, we simply take flatly distributed O(1) coefficients in the range [1/3, 3] with random sign, resulting in a narrow 99.9% range for E/N around this value, while the axion mass induced by the QCD anomaly is given by [33] $m_a = 5.7~\mu\mathrm{eV}\left(10^{12}~\mathrm{GeV}/f_a\right)$. It is remarkable that the prediction for E/N in Eq. (15) is largely insensitive to the details of the underlying flavor model. We therefore briefly review the underlying assumptions that lead to the above results and discuss their relevance and generality. First of all, we are assuming positive fermion charges. This assumption can be relaxed to the extent that just the sums of charges in each Yukawa entry are positive, or equivalently that only $\Phi$ enters in the effective operators but not $\Phi^*$. This assumption follows naturally from holomorphy of the superpotential, if we embed the setup into a supersymmetric model in order to also address the hierarchy problem. Our second assumption was that only the fermion fields and the flavon carry $U(1)_H$ charges. Finally, we have assumed that only light fermions contribute to the QCD and electromagnetic anomalies, i.e., that all the other fields in the model are either bosons or vector-like fermions under $U(1)_H$. This is a natural feature of the FN messengers needed to UV-complete the effective setup in Eq. (4); see also the explicit UV completions in Refs. [34,35]. We also note that the same prediction for E/N holds in any flavor model where a global, anomalous U(1) factor exclusively determines the determinant of the SM Yukawa matrices.
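As a worked numerical check of the E/N relation above (the formula is a reconstruction consistent with the quoted determinant relations; we take the natural choice $\alpha_{ud} = \alpha_{de} = 1$ and the quoted determinant values):
\[
\frac{E}{N} \;=\; \frac{8}{3} - \frac{2\ln(0.7)}{\ln\!\left(5\cdot 10^{-20}\right)} \;\approx\; 2.667 - \frac{2\,(-0.357)}{-44.4} \;\approx\; 2.667 - 0.016 \;\approx\; 2.65\,,
\]
in agreement with the quoted $E/N \approx 2.7$ and close to the DFSZ value $8/3$.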
For example, in $U(2)$ flavor models [36][37][38][39], where the three fermion generations transform as $2+1$, one has an $SU(2)$-breaking flavon and a $U(1)$-breaking flavon. In the supersymmetric realization, or upon imposing positive charge sums in the non-SUSY realizations, one finds texture zeros for the 11, 13 and 31 entries of the Yukawa matrices. The determinant is therefore given by the 12, 21 and 33 entries, which are $SU(2)$ singlets and therefore depend only on $U(1)$ charges, resulting in the same prediction for E/N when the $U(1)$-breaking flavon contains the axiflavon (and the $SU(2)$ is gauged). Finally, we comment on the modification of the E/N range in the context of an additional light Higgs doublet, restricting for simplicity to the case of a 2HDM of Type II. Then Eq. (12) is modified by the rescaling $v^6 \to \sin^3\beta\,\cos^3\beta\; v^6$, where $\tan\beta = v_u/v_d$ is the ratio of Higgs vevs. Large values of $\tan\beta$ can reduce the suppression of the model-dependent term in Eq. (14); we find essentially the same 99.9% ranges for $\tan\beta = 20$, while for $\tan\beta = 50$ the range is slightly increased. PHENOMENOLOGY Being a QCD axion, the axiflavon is a very light particle with a large decay constant, making it stable on cosmological scales. Assuming that the phase transition corresponding to the $U(1)_H$ breaking happens before inflation, the energy density stored in the axion oscillations can be easily related to the present Dark Matter (DM) abundance [8][9][10]. For a given axion mass below roughly $10^{-5}-10^{-4}$ eV it is then always possible to choose a misalignment angle $\theta$ to get the correct dark matter abundance $\Omega_{\rm DM} h^2 \approx 0.12$. The axion domain wall problem is automatically solved in this setup, but interesting constraints can arise from isocurvature perturbations [40]. We show in Fig. 1 present and future bounds on the axiflavon, both from axion searches and from flavor experiments, in terms of its mass $m_a$ and its coupling to photons $g_{a\gamma\gamma}$. In this plane one can appreciate how the allowed range of E/N is considerably reduced compared to the standard axion window [32]. Assuming that the axiflavon also accounts for the total DM abundance, we give the corresponding value of $\theta$ for a given mass. In the high-mass region with $m_a \sim 0.1-10$ meV, stringent bounds on the axiflavon come from its couplings to fermions and are hence independent of $g_{a\gamma\gamma}$. A mild lower bound on the axiflavon decay constant $f_a$ can be derived from the axiflavon coupling to electrons, which affects white dwarf cooling [41]. This bound cuts off our parameter space at around $m_a \sim 10$ meV. A stronger bound comes from the flavor-violating coupling of the axiflavon to down and strange quarks, $a\bar{s}d$, which induces the decay $K^+ \to \pi^+ a$ (bounds from kaon decays are more restrictive than the bounds from kaon mixing); the rate depends on the kaon and pion masses $m_{K,\pi}$ and on $B_s = 4.6(8)$, the nonperturbative parameter related to the quark condensate [42]. The 90% CL combined bound from E787 and E949, $\mathrm{BR}(K^+ \to \pi^+ a) < 7.3 \cdot 10^{-11}$ [43], then gives a lower bound on $f_a$. Defining $|\lambda^d_{21} + \lambda^{d*}_{12}| \equiv 2\kappa_{sd}\sqrt{m_d m_s}/(2N f_a)$, the $\kappa_{sd}/N \sim O(1)$ are model-dependent coefficients controlled by the particular flavor charge assignments, and quark masses are taken at $\mu \sim 2$ GeV. Similarly, in the B sector we find a bound from $B^+ \to K^+ a$, with the form factor $f^K_0(0) = 0.331$ [44]. Defining the shorthand notation $|\lambda^d_{32} + \lambda^{d*}_{23}| \equiv 2\kappa_{bs}\sqrt{m_b m_s}/(2N f_a)$, this gives a branching ratio controlled by $\kappa_{bs}/N \sim O(1)$. A bound $\mathrm{BR}(B^+ \to K^+ a) < 10^{-6} \div 10^{-8}$, potentially in the reach of Belle II, would translate into $m_a < (8 \div 80)$ meV $\times\, N/\kappa_{bs}$.
A careful experimental analysis of this decay would be very interesting, as suggested also in Ref. [23]. The solid blue line in Fig. 1 shows the lower bound on $m_a$ from flavor-violating kaon decays for $\kappa_{sd}/N = 1$. The reach on $\mathrm{BR}(K^+ \to \pi^+ a)$ is expected to be improved by a factor of ~70 by NA62 [45,46] (and possibly also ORKA [47] and KOTO [48]), giving sensitivity to scales as high as $f_a \sim \kappa_{sd}/N \times 6.3 \cdot 10^{11}$ GeV. The expected sensitivity on the axion mass for $\kappa_{sd}/N = 1$ is shown by the dashed blue line in Fig. 1. Therefore, future flavor experiments will probe the axiflavon parameter space in the interesting region where it can account for the dark matter relic abundance with $\theta \sim O(1)$. Going to lower axiflavon masses, below 0.1 keV, the phenomenology becomes essentially identical to that of the original DFSZ model, but with a sharper prediction for the value of E/N, given in Eq. (15). This corresponds to the brown band in Fig. 1. The gray shaded regions in Fig. 1 summarize the present constraints on axion-like particles. An upper bound on the photon coupling for the full range of masses of our interest comes from its indirect effects on stellar evolution in Globular Clusters [49]. A comparable bound is set by the CAST experiment [50]. Stronger constraints for axions lighter than 0.1 µeV can be derived from the lack of a gamma-ray signal emitted from the supernova SN1987A [51] and from the bounds on spectral irregularities from the Fermi-LAT and H.E.S.S. telescopes [52,53]. The region of very low axion masses, below $10^{-5}$ µeV, is disfavoured by black hole superradiance, independently of the photon coupling [54]. In the axion mass region between 1 µeV and 100 µeV, present bounds from the ADMX experiment [55] do not yet constrain the axiflavon band. This is a well-known feature of the original DFSZ model with E/N = 8/3 that is shared by the axiflavon and further motivates future developments in microcavity experiments. In Fig. 1 we also display the projections for the various future axion experiments. The combination of the upgraded ADMX experiment and its High Frequency version [56] can probe a wide range of the axiflavon parameter space in the mass window between 1 µeV and 100 µeV. This region is strongly preferred because the correct axion abundance can be obtained without a tuning of the initial misalignment angle. Dielectric haloscopes [57] have a reach similar to that of ADMX-HF and are not displayed in the plot. The IAXO experiment [58] instead gives a bound only at large axiflavon masses, $m_a \gtrsim$ meV. Such large masses are already robustly ruled out by flavor-violating kaon decays. The low-mass window of the axiflavon band, $m_a \lesssim 0.1$ µeV, will be probed by the resonant ABRACADABRA experiment and its upgrade [59]. Interestingly, the axiflavon band lies below the reach of the first phase of the broadband ABRACADABRA experiment. Axiflavon masses below $10^{-3}$ µeV will eventually be probed in the final phase of the CASPEr experiment [60]. In conclusion, the axiflavon parameter space is considerably narrower than that of KSVZ/DFSZ models, as visible in Fig. 1, and will be covered over a wide range of masses by a combination of future axion searches and kaon experiments. In the high-mass window with $10^{-6}~\mathrm{eV} \lesssim m_a \lesssim 10^{-4}~\mathrm{eV}$, the comparable projected reaches of ADMX-HF and future kaon experiments leave the exciting possibility of telling apart the axiflavon scenario from other QCD axions.
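As a rough numerical cross-check of the quoted $g_{a\gamma\gamma}/m_a$ band (a sketch, not taken from the letter: it assumes the standard QCD-axion relations $m_a \simeq 5.7~\mu\mathrm{eV}\,(10^{12}~\mathrm{GeV}/f_a)$ and $g_{a\gamma\gamma} = \frac{\alpha}{2\pi f_a}(E/N - 1.92)$, evaluated at the central value E/N = 8/3):

# Sketch: ratio g_agamma/m_a for a QCD axion with E/N = 8/3
alpha    <- 1/137.036          # fine-structure constant
E_over_N <- 8/3
fa <- 1e12                     # decay constant in GeV (cancels in the ratio)
ma <- 5.7e-6 * (1e12/fa)       # axion mass in eV
g  <- alpha/(2*pi*fa) * (E_over_N - 1.92)  # photon coupling in GeV^-1
g / (ma/1e-6)                  # ~1.5e-16 GeV^-1 per micro-eV, inside [1.0, 2.2]e-16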
Note Added: While this manuscript was being completed, another paper [61] was submitted to the arXiv that presents an explicit implementation of the same idea.
4,405.8
2016-12-23T00:00:00.000
[ "Physics" ]
Provenance and Pollution Status of River Sediments in the Danube Watershed in Serbia: Heavy metals as environmental pollutants can have natural or anthropogenic origin. To determine river sediment pollution status, it is crucial to have appropriate reference samples, free of anthropogenic impact, and natural reference samples should be used wherever and whenever possible. The collection of reference samples should be performed in the vicinity of the research area, in a place that belongs to the same geological environment and is undisturbed by human activity. The main purpose of this study was to compare concentrations of heavy metals from different rivers with background values, to show that the usage of natural background values is the best option when assessing pollution status, but also to underline that the natural background values have to correspond to the analyzed sediments. In this study, 5 river sediments from the Sava, 17 from Great War Island (GWI), 11 from the Danube, 24 from the Tisa, 47 from the Tamiš, and 11 from the Timok were evaluated relative to reference samples from the Sava and Tisa Rivers. The results indicate that geological origin has a strong influence on the content of heavy metals in river sediments, primarily regarding concentrations of Ni and Co. Furthermore, the Tamiš, Tisa, Sava, and Danube sediments are under strong anthropogenic influence. Introduction Rivers, which deliver approximately 20 billion metric tons of transported sediment to the oceans annually, play a key role in Earth's surface processes, marine sedimentation, and biogeochemical cycles in the oceans [1,2]. Anthropogenic inputs of pollutants can considerably change the composition of river waters and sediments [3]. The capacity of sediment to adsorb and retain contaminants depends on its characteristics, such as the surface area and surface properties of the particles [4]. Heavy metals tend to accumulate in sediments and cause major environmental problems in river catchments [5]. Once heavy metals are discharged into a river system, they are distributed between the aqueous phase and the bed sediment [6]. The main sources of heavy metals in drainage basins are the weathering of rocks and anthropogenic activities, and it is essential to distinguish between them. Understanding the source of heavy metals in river sediments is vital for understanding their impact on water ecosystems [7]. The main anthropogenic sources of heavy metals are mining and smelting, disposal of effluents containing heavy metals, industrial waste, and the haphazard use of fertilizers and pesticides that contain heavy metals [6]. The concentrations of heavy metals are affected by the sediment mineralogy and grain size. The content of organic matter, the clay-sized fraction, and the surface area control microelement mobility [8]. Microelements are adsorbed by organic substances and by Fe and Mn oxides, and the adsorption capacity increases as particle size decreases [4]. Adsorbed heavy metals can be released back into the environment by processes dependent on the pH and redox potential. The geochemistry of river sediments depends on various controlling processes. To evaluate whether heavy metal contamination has occurred in sediments, it is necessary to compare the obtained results with a background concentration. Matchullat et al.
[9] argued that there cannot be an unambiguous definition of a background value and that it is "in principle almost impossible to quantify a true background value beyond doubt". Furthermore, global background data should be used only for global models, and they are practically useless in answering regional or local problems [9]. Both obtaining and using any of the suggested background values is challenging. Statistical methods are considered the most objective by some authors, e.g., [21], but they can be used only on large datasets containing both uncontaminated and contaminated sites; still, one must bear in mind that the results are influenced by the number of polluted samples and the amount of pollution. Using average shale values in areas where heavy-metal-bearing minerals naturally occur can lead to false anomalies. Furthermore, false anomalies can arise because concentrations of heavy metals tend to vary with grain size [19]. While using natural background values, it is essential to ensure that the geochemical characteristics of these samples are natural, i.e., that no anthropogenic contamination has occurred. Determining a natural background is a very important task, and representative sampling should be done wherever and whenever possible. Representative sampling should be done in an area close to the area of interest, i.e., with the same geological setting, but undisturbed by human action. In this study, we use two groups of reference samples. The first group contains eight samples from a Belgrade water source, which stratigraphically belong to the Quaternary (Pleistocene and Holocene); these reference samples are marked Sava BV. The second group consists of four alluvial sediments of the Tisa River, which stratigraphically correspond to the Holocene and were deposited in an abandoned meander; these reference samples are designated Tisa BV. Both groups satisfy the following criteria, crucial for natural background samples: 1. The background sediment samples petrologically correspond to the tested samples; they therefore structurally fall into the population of examined samples, which is, in this case, silty clays, clayey silts, and sandy-clayey silts. 2. These samples have an identical or similar sedimentological origin, i.e., they were deposited from alluvial systems. 3. The paleo-drainage areas of those alluvial systems partly or entirely correspond to the modern drainage areas of these rivers. 4. The mineralogical-petrographic compositions of the examined and reference samples are relatively uniform. 5. The reference samples do not show any anthropogenic influence. Dendievel et al. [5] argued that, despite the two main approaches used, one being regulatory monitoring at stream sites and the other being studies assessing pollution trends based on sediment cores taken at a certain river location, synthesis works at the scale of large rivers are rather rare. One of those rare large-scale studies of the Danube Basin was performed by Woitke et al. [22]. The Danube River, with a length of ~2800 km and a catchment area of ~817,000 km², is the second longest river in Europe [23]. Woitke et al. [22] showed that pollution was relatively low in the Austrian and Hungarian parts of the Danube, with an increase in heavy metal concentrations at the Iron Gate Reservoir, followed by a constant level or a slight decrease down to the Danube Delta.
Since up to 92% of Serbia lies within the Danube Basin, comprising ~10% of the total Danube Basin, it is necessary to explore the composition of its river sediments. Further, about 90% of all of Serbia's accessible water originates from outside its territory; therefore, international cooperation on water issues is vital for Serbia [24]. The study aims to show that, when analyzing heavy metals in river sediments, the origin of the eroded material reaching the rivers must be considered, so it is necessary to compare the obtained concentrations with properly selected background values. The main purpose of this study was to compare concentrations of heavy metals from different rivers with background values, to show that the usage of natural background values is the best option when assessing pollution status, but also to underline that the natural background values must correspond to the analyzed sediments. To prove the importance of this fact, the results of a multi-year examination of sediments from several locations, primarily from the Danube, Sava, Great War Island (GWI), Tisa, and Tamiš Rivers, are presented. Sediments from the Timok River are used as an example to further support this statement. Study Areas and Methods In this study, sediment samples from the Tisa, Tamiš, Sava, Danube, GWI, and Timok were analyzed to determine the origin of heavy metals (Figure 1). A total of 127 samples were collected and analyzed for textural and chemical characteristics.
The Tisa River Basin, the largest sub-basin of the Danube watershed, covers an area of 157,186 km², which is about 20% of the Danube Basin (Figure 1). The Tisa River Basin is divided into the mountainous Upper Tisa, with tributaries in Ukraine, Romania, and the eastern Slovak Republic, and the lowland part in Hungary and Serbia. Anthropogenic impacts along the Tisa River course are high, with permanent pollution from industrial activities, mainly municipal sewage discharges and agriculture [16]. In the past, the Tisa River has witnessed a large number of pollution accidents, the biggest of which occurred in February 2000. During this incident, about 100,000 m³ of water and sludge with a high concentration of cyanide and trace metals from flotation tailings from a gold mine in Baia Mare, Romania, reached the Tisa River and was further carried into the Danube River [25]. Along the 150 km Tisa River course in Serbia, a total of 24 surface sediment samples were collected [10]. The Tamiš is the largest river in the Banat region, in northeast Serbia (Figure 1). It originates from Semenik Mountain in Romania, flows through the Banat region, and flows into the Danube 30 km east of Belgrade. The Tamiš River, with its main course 340 km long, 118 km of it in Serbia, is a small Danube tributary. The Tamiš drains Quaternary, mostly silty, sediments. On several shorter sections of its course, the Tamiš meanders laterally, eroding higher relief, the loess plain, and the loess terrace, forming high riverbanks in the form of steep sections and slopes. The Tamiš River is polluted by water supply facilities, fish ponds, industry, agriculture, and urban settlements [26]. Forty-seven samples were collected from the Tamiš River for this study. Five sediment samples from the Sava River were collected 30 km upstream from Belgrade, and 17 samples were collected at the GWI; these are places of sediment accumulation at the confluence of the Sava and Danube Rivers in Belgrade [27,28].
Great War Island is a sedimentary, alluvial-accumulative island, formed by the slowing and settling of sediments at the confluence of the Sava and Danube Rivers, and it is constantly changing in shape and size (Figure 1). The total sediment thickness is estimated at about 25 m. Great War Island has a special status based on its position, because it lies directly on the international waterways of the Danube and Sava Rivers. It is one of the key points on the most important European waterway, which connects the North Sea with the Black Sea via the Rhine-Main-Danube canal. The GWI covers a total area of 210.8 ha and, located in the center of Belgrade, is of unique ecological, cultural, historical, and recreational importance. Djerdap Lake was chosen as the location for the collection of samples from the Danube River for two main reasons. The first is its uniform geological setting, represented by the Lower Carboniferous granitoids; the second is that Djerdap Lake represents a very favorable sediment archive, since water and sediment from the entire upstream Danube River Basin, shared by many countries with a total population of about 80 million, concentrate in the Serbian sector of the Danube, especially in Djerdap Lake [29]. Eleven samples from the Danube River were collected before Djerdap Lake, at the Serbia-Romania border [30]. The Timok River, also known as the Great Timok, is a river in eastern Serbia and a right tributary of the Danube River. For the last 15 km of its flow, it forms the border between eastern Serbia and western Bulgaria. The Timok River is 202 km long, and its watershed covers an area of 4626 km². The geological setting consists of Neogene sediments, granitoids, metamorphic rocks, gabbro, and limestones. Eleven sediment samples from the Timok River were collected for this study. As background values, we used the composition of core sediments from the Tisa River [10] and the Sava River [27]. The reference samples were those whose geochemical characteristics are known to be natural, i.e., without any anthropogenic influences. In terms of mineralogical and structural characteristics, they are the closest to modern sediments from the Tisa River. Four samples of fine-grained clastic sediments from boreholes were selected. Stratigraphically, the reference samples belong to the Holocene. However, they are genetically linked to alluvial systems that were fed from petrologically very diverse and very wide margins. Facially, these were deposited partly on flood plains and partly in abandoned meanders. The four core sediment samples were collected ~10 km from the main Tisa River flow. The eight Sava background samples were collected from the Belgrade water source area, which stratigraphically belongs to the Quaternary (Pleistocene and Holocene). The Pleistocene fine-grained clastites are related to the model of small braided rivers, and those of the Holocene are related to the paleo-Sava as a meandering river. The surface sediment samples were collected from the riverbanks using a small shovel, then stored in labeled plastic bags according to currently accepted international standards and transported to the laboratory [10,28,30]. To determine the composition of the sediments, grain size and chemical analyses were conducted. Grain size analysis was performed according to a standard wet sieving procedure for the sand fractions (using 1, 0.5, 0.25, 0.125, and 0.063 mm sieve sizes) and the standard pipette method for the <0.063 mm fraction [30].
The concentrations of macroelements were determined via X-ray fluorescence (XRF). For XRF analysis, sediment samples were dried to constant mass at 105 °C, mixed with tableting wax in an 80:20 ratio, and pressed into pellets. Semiquantitative and qualitative analysis was performed using a Spectro Xepos Energy-Dispersive XRF (EDXRF) instrument with a binary cobalt/palladium alloy thick-target anode X-ray tube (50 W/60 kV) and combined polarized/direct excitation. Determination of the heavy metal concentrations (As, Cd, Co, Cr, Cu, Ni, Pb, and Zn) in sediments was performed using an inductively coupled plasma optical emission spectrometer with axial view (Thermo Scientific iCAP 6000 series ICP-Spectrometer, Waltham, MA, USA). The emission wavelengths (nm) used for determination were as previously reported [28,30]. The contamination factor (C_f^i) was calculated as C_f^i = C_o^i / C_n^i, where C_o^i is the concentration of a heavy metal in the sediment and C_n^i is its background value. The C_f^i classes proposed by [31] are as follows: C_f^i < 1, low contamination factor; 1 ≤ C_f^i < 3, moderate contamination factor; 3 ≤ C_f^i ≤ 6, considerable contamination factor; C_f^i > 6, very high contamination factor. Factor analysis was applied to identify the source of heavy metals (natural or anthropogenic) in river sediments. The factor analysis of heavy metal concentrations was conducted using SPSS 17.0 (AppOnFly, Inc., San Francisco, CA, USA). Grain Size Composition The results of grain size analysis showed that most of the tested sediment samples from the Tisa, Tamiš, Sava, Danube, and Timok Rivers had a dominant silt fraction (0.05-0.005 mm). According to Shepard's [32] classification, these sediments are silts, clayey silts, sandy-clayey silts, and clayey-sandy silts (Figure 2). Overall, 65% of the investigated samples were defined as silts, and a smaller number of samples were dominated by clayey or sandy fractions.
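To make the classification concrete, a minimal sketch of the contamination-factor calculation defined in the Methods above (C_f^i = C_o^i / C_n^i with the classes of [31]); the element names and concentrations below are illustrative placeholders, and the C_f^i = 6 boundary is treated with a half-open convention for simplicity:

# Contamination factor Cf = Co/Cn and its class per the scheme in [31]
contamination_factor <- function(c_sample, c_background) {
  cf <- c_sample / c_background
  cls <- cut(cf, breaks = c(-Inf, 1, 3, 6, Inf), right = FALSE,
             labels = c("low", "moderate", "considerable", "very high"))
  data.frame(element = names(c_sample), Cf = cf, class = cls)
}

# Illustrative values only (ppm): measured sediment vs. background values
contamination_factor(c(Cd = 1.8, Zn = 330, Pb = 90),
                     c(Cd = 0.15, Zn = 75, Pb = 26))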
Content of Macroelements The content of macroelements in sediments from the Tisa, Tamiš, Sava, Danube, and Timok Rivers was determined to explore differences in the sources of the materials these rivers carry (Table 1). SiO2, which in river sediments is most often present as quartz but is also a main constituent of all silicate minerals, was highest in both recent and reference sediments from the Tisa River, followed by the Timok River (46.51-58.07%) (Table 1). Aluminum, sodium, and potassium are mostly bound in clay minerals and feldspars. In the analyzed river sediments, these elements varied within small ranges: Al2O3, 13.32-16.45%; Na2O, 0.67-1.94%; K2O, 1.52-2.71%. Calcium and magnesium are primarily found in carbonate minerals, while Mg is also bound to chlorite. The concentrations of Fe2O3, MnO, and TiO2, found in sulphides and silicates, and of P2O5, mostly bound with organic matter, showed little variation between samples and were, on average, in the ranges of 5.56-7.40%, 2.22-3.80%, 0.11-0.18%, and 0.10-0.94%, respectively (Table 1). However, it is interesting that the lower ends of these ranges were found in the Sava BV sediments and the highest in the Tamiš River sediments, indicating the different sources of geological material these rivers are draining. Content of Heavy Metals The average contents of heavy metals are given in Table 2. The average concentrations of zinc in the investigated samples ranged from 73.09 ppm in the Timok up to 353.91 ppm in the Danube sediments; those of copper from 28.71 ppm in the Sava up to 146.02 ppm in GWI; those of chromium from 12.69 ppm in the Tisa up to 126.27 ppm in the Timok sediments; those of nickel from 28.96 ppm in GWI to 82.17 ppm in the Danube; those of Pb from 24.27 ppm in the Timok samples to 91.45 ppm in the Danube sediments; those of Co from 11.39 ppm in the Sava to 29.82 ppm in the Timok samples; those of As from 9.84 ppm in the Sava to 28.10 ppm in the Tamiš; and those of Cd from 0.26 ppm in the Timok to 2.75 ppm in the Danube sediments. Discussion CaO and MgO had the highest concentrations in the Sava, GWI, Danube, and Timok Rivers, indicating limestones as one of the sources of drainage materials, but from different locations. The source of the material carried by the Sava River is the Dinaric Mountains, while the Timok River drains limestones from eastern Serbia (Figure 1).
Statistical analysis was conducted using the obtained results to further explore the provenance of the recent river sediments. The correlation between SiO2/Al2O3, as the main constituents of silicate minerals, and CaO/MgO, mostly originating in carbonate minerals, reveals subtle but important differences between the geological origins of the analyzed recent river sediments (Figure 3). The Sava, Danube, GWI, and lower-stretch Tamiš sediments had higher CaO/MgO values (>2%), indicating similar geological origin. Zinc, Cu, Ni, and Cr had the highest concentrations, while Cd had the lowest concentration in all tested river sediments. The contamination factor, representing the ratio between measured concentrations in river sediment and background values, can be an indication of pollution status (Table 3). The microelement concentrations of the Tisa River were compared with Tisa BV values, and the obtained results coincide with the results obtained by Štrbac et al. [10]. The concentrations of Cd were almost 12 times higher than the Tisa BV values, and those of Zn and Pb were 4.4 and 3.5 times higher, respectively. Sediments at GWI, which represents sediment accumulation at the Sava and Danube Rivers' confluence, had concentrations higher than the background values of Cu, Cd, Cr, Zn, and Pb, as established by Kašanin-Grubin et al.
[28]. Sediments of the Danube River showed a similar pollution status to the GWI sediments and the Sava River samples, having 8.2 times higher content of Cd, 3.5 times more Cr, and 2.1 times more Zn than the Sava BV samples (Table 3). Since we do not have natural background values for the Tamiš River, the concentrations of heavy metals in these sediments were compared with both Sava BV and Tisa BV. There is possible pollution with As, Cd, Cr, Cu, and Pb, but this cannot be stated as a fact, since the Tamiš River drainage area is geologically different from those of the Tisa and Sava Rivers (Figure 1). This is an example of why appropriate background values should be used when assessing the pollution status of a river. Factor analysis was used to find a possible grouping of microelements and to explain their origin. The first factor explained 35% of the data and grouped Cd, Cu, Pb, and Zn, which are elements that show strong anthropogenic influence. The second factor explained 23% of the data and grouped As, Cr, and Cu, which have mixed anthropogenic and geologic origin; in the third group, with 16% of the data, were Ni and Co, elements with geologic origin (Table 4). To further explore the origin of microelements in the tested river sediments, scatter graphs are presented in Figures 4-6. Based on the factor analyses, we chose the Pb-Zn scatter graph, with both parameters grouped under factor 1 as pollutants (Figure 4); the Cr-Cd scatter graph, with Cd being a parameter grouped under factor 1 as a pollutant and Cr being a parameter grouped under factor 2 for heavy metals with mixed geological and anthropogenic origin (Figure 5); and the Ni-Co scatter graph, with both elements grouped in the factor identified as being of geological origin (Figure 6). According to the factor analysis, Cd has an anthropogenic origin and Cr has mixed origins, which can also be seen from their scatter graph (Figure 5). The concentrations of Cd were highest in the Tisa, Danube, and Sava Rivers, followed by GWI. Tisa BV and Sava BV had the lowest concentrations of both elements. Sediments from the Timok River are interesting since they had high Cr and low Cd concentrations. This indicates that the Cr is of geological origin, since the Timok River is not under anthropogenic pressure. Sediments from the Tamiš River form two groups: one with Cd concentrations of <2 ppm and relatively high (up to 130 ppm) concentrations of Cr, which could indicate a geological origin, and the other with Cd > 2.5 ppm and Cr > 100 ppm, which could be a consequence of pollution. However, this cannot be stated as a fact without comparing these sediments with appropriate background values.
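A sketch of the factor analysis described above (the study used SPSS 17.0; shown here is an equivalent base-R call, with a simulated placeholder matrix standing in for the real sample-by-element concentration table):

# 'metals' stands in for the measured concentrations: one row per sediment
# sample, one column per element
set.seed(1)
metals <- as.data.frame(matrix(abs(rnorm(50 * 8, mean = 50, sd = 15)),
                               ncol = 8,
                               dimnames = list(NULL, c("As", "Cd", "Co", "Cr",
                                                       "Cu", "Ni", "Pb", "Zn"))))

# Three factors with varimax rotation; elements loading on the same factor
# are interpreted as sharing an origin (anthropogenic, mixed, or geologic)
fa <- factanal(x = metals, factors = 3, rotation = "varimax")
print(fa$loadings, cutoff = 0.4)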
The correlation between Ni and Co, as elements of geological origin, reveals the source material in the drainage basins. Previous studies have shown elevated concentrations of Ni in sediments of the Tisa and Sava Rivers, as well as GWI, which are confirmed by the data obtained in this study. The origins of these sediments are varieties of fine-grained clastic sediments, predominantly silts, loess, and clays. However, the concentration of Co differs with the geology of the drainage basin. Sediments from the Timok River, and partly from the Tamiš River, have high concentrations of Co, indicating their geological origin, represented by granite, gabbro, and limestones (Figure 1). Conclusions This study highlights the importance of the selection of background values when assessing the pollution status of river sediments. Geochemical analyses proved to be a valuable tool in determining the origin of eroded material. The correlation between SiO2/Al2O3, as main constituents of silicates, and CaO/MgO, as main constituents of carbonate minerals, revealed important differences between the geological origins of the analyzed recent river sediments. Factor analysis grouped the microelements in the river sediments according to their provenance. The first group consists of elements that have strong anthropogenic influence, in this case Cd, Cu, Pb, and Zn. The second group, with As, Cr, and Cu, is of mixed anthropogenic and geologic origin, and in the third group, Ni and Co are elements of geologic origin. This study proves that natural background values should be used whenever feasible, but it is necessary that these values are specific to the studied watershed. Figure 1. Map of Serbia with the analyzed rivers (right). Map of the Balkan Peninsula with mountain ranges (left).
Figure 4. The Pb-Zn scatter graph for river sediments and background values from the Tamiš, Tisa, Tisa BV (background values), Sava, Sava BV (background values), Great War Island (GWI), Danube, and Timok Rivers. The Pb-Zn scatter graph shows the pollution level of the analyzed river sediments. Sediments from the lower river course of the Tamiš, GWI, and Danube samples were heavily influenced by Pb. Zinc concentrations were highest in the Tamiš, Tisa, and Danube samples, indicating strong anthropogenic influence. Timok sediments and reference samples had the lowest Pb concentrations, at <40 ppm. Figure 5. The Cr-Cd scatter graph for river sediments and background values from the Tamiš, Tisa, Tisa BV (background values), Sava, Sava BV (background values), Great War Island (GWI), Danube, and Timok Rivers. Table 4. Factor analyses for elements in river sediments and background values.
7,072.8
2023-09-28T00:00:00.000
[ "Environmental Science", "Geology" ]
Phylogenetic analysis of simian Plasmodium spp. infecting Anopheles balabacensis Baisas in Sabah, Malaysia Background Anopheles balabacensis of the Leucosphyrus group has been confirmed as the primary knowlesi malaria vector in Sabah, Malaysian Borneo for some time now. Presently, knowlesi malaria is the only zoonotic simian malaria in Malaysia, with a high prevalence recorded in the states of Sabah and Sarawak. Methodology/Principal findings Anopheles spp. were sampled using the human landing catch (HLC) method at Paradason village in the Kudat district of Sabah. The collected Anopheles were identified morphologically and then subjected to total DNA extraction and polymerase chain reaction (PCR) to detect Plasmodium parasites in the mosquitoes. Identification of Plasmodium spp. was confirmed by sequencing the SSU rRNA gene with species-specific primers. MEGA4 software was then used to analyse the SSU rRNA sequences and build the phylogenetic tree for inferring the relationships between simian malaria parasites in Sabah. PCR results showed that only 1.61% (23/1,425) of the screened An. balabacensis were infected with one or two of the five simian Plasmodium spp. found in Sabah, viz. Plasmodium coatneyi, P. inui, P. fieldi, P. cynomolgi and P. knowlesi. Sequence analysis of the SSU rRNA of the Plasmodium isolates showed a high percentage of identity within the same Plasmodium sp. group. The phylogenetic tree based on the consensus sequences of P. knowlesi showed 99.7%-100.0% nucleotide identity among the isolates from An. balabacensis, human patients and a long-tailed macaque from the same locality. Conclusions/Significance This is the first study showing high molecular identity between the P. knowlesi isolates from An. balabacensis, human patients and a long-tailed macaque in Sabah. The other common simian Plasmodium spp. found in long-tailed macaques and also detected in An. balabacensis were P. coatneyi, P. inui, P. fieldi and P. cynomolgi. The high percentage identity of nucleotide sequences between the P. knowlesi isolates from the long-tailed macaque, An. balabacensis and human patients suggests a close genetic relationship between the parasites from these hosts. Introduction Anopheles species of the Leucosphyrus group have been identified as medically important vectors in the Southeast Asian region [1,2]. The Leucosphyrus group has three main subgroups: the Hackeri, Leucosphyrus and Riparis subgroups [3], with the Leucosphyrus subgroup further divided into the Dirus complex and the Leucosphyrus complex [2,4]. In Peninsular Malaysia, three species of the Leucosphyrus group, namely An. hackeri, An. cracens and An. introlatus, have been incriminated as primary vectors for P. knowlesi [5][6][7]. However, in East Malaysia, An. latens in Sarawak and An. balabacensis in Sabah have been confirmed as primary vectors for P. knowlesi [8,9]. A study in Cambodia in 1962 showed that An. balabacensis (later identified as An. dirus [10]) preferred biting humans over monkeys placed at ground level, but preferred monkeys at canopy level to monkeys on the ground [11]. A study in Sabah comparing human landing catch (HLC) and monkey baited trap (MBT) at ground level showed that more An. balabacensis were caught using HLC than MBT [12]. Recent studies showed that this species is more active during the early night, with a peak biting time between 7 pm and 8 pm [9,13], and that it prefers to bite outdoors rather than indoors [13].
Such biting behaviors, coupled with an abundant source of simian malaria parasites in the reservoir long-tailed macaques (Macaca fascicularis), contribute to An. balabacensis being an effective vector for transmitting P. knowlesi malaria in Sabah. Previous studies in Malaysia have shown that long-tailed macaques harbor at least five species of simian Plasmodium [14,15], all of which have also been detected in An. balabacensis [9,16]. In Sabah, besides P. knowlesi, the other simian malaria parasites recorded in An. balabacensis are P. coatneyi, P. inui, P. fieldi and P. cynomolgi [9,13]. Apart from recording these parasites in the mosquitoes, there are few studies on the phylogenetic relationships among the simian malaria parasites found in An. balabacensis, macaques and humans. In this study, we compare the partial nucleotide sequences of the SSU rRNA of simian malaria parasites isolated from An. balabacensis caught in the Kudat district of Sabah, from macaques, and from human patients with other published sequences of human and simian malaria parasites available in the GenBank database. Building a phylogenetic tree of these malaria parasites gives a clearer picture of their genetic relationships, especially for P. knowlesi isolated from long-tailed macaques, An. balabacensis and humans. Study area Kudat district, located at the northern tip of Borneo within the Kudat Division, is about 153 kilometers from Kota Kinabalu, the state capital of Sabah. Paradason village, where the study was conducted, is located in Kudat District, about 50 kilometers from Kudat town (Fig 1). Most of the villagers belong to the Rungus ethnic group, who depend on small-scale farming (paddy), oil palm and rubber plantations as their primary source of income. Sampling of Anopheles Anopheles mosquitoes were sampled monthly from October 2013 to December 2014 using the human landing catch (HLC) method. A total of 70 nights of sampling were performed, from 1800 to 0600 hours (12 hours). Two pairs of volunteers were assigned to work in shifts at a randomly selected habitat during each night of sampling. Anopheles were lured by the volunteers exposing their legs. The mosquitoes landing on the legs were caught by the volunteers using plastic specimen tubes (2 cm diameter x 6 cm), aided by a flashlight. Morphological identification of Anopheles species The next morning, the Anopheles mosquitoes were killed by keeping them in a freezer (-20˚C) for a few minutes and then gently pinned onto Nu poly strips using ultra-thin micro-headless pins. Species identification was done under a compound microscope using published keys [2,17,18]. After identification, each individual specimen was stored separately in a new microfuge tube and transported to the Faculty of Medicine & Health Sciences, Universiti Malaysia Sabah for further processing. Total DNA extraction of Anopheles Each individual Anopheles specimen was placed separately inside a sterilized mortar and the tissue homogenized using a sterile pestle. Total DNA was extracted from the tissues using the DTAB-CTAB method [19] with some modifications (for example, the incubation time was reduced to 30 minutes instead of overnight, and at the final precipitation step, before adding TE buffer, the DNA pellet was incubated at 45˚C to completely evaporate any residual ethanol). First, 600 μl of DTAB solution was added into the mortar and the tissue was ground using a pestle until homogenized.
Then, the homogenized tissue was transferred into a clean 1.5 ml microfuge tube and incubated at 68˚C for 30 min. Subsequently, 600 μl of chloroform was added to the microfuge tube, which was inverted ten times to mix the contents and centrifuged at 13,000 rpm for 5 min. Then, 400 μl of the upper aqueous layer was carefully transferred into a new clean 1.5 ml microfuge tube, mixed with 900 μl sterile dH2O and 100 μl CTAB solution by gently inverting the microfuge tube several times, and allowed to sit at room temperature for 5 min. The mixture was then spun at 13,000 rpm for 10 min. The supernatant was discarded and the DNA pellet was re-suspended in 300 μl of 1.2 M NaCl solution. Total DNA was precipitated by adding 750 μl of absolute ethanol and centrifuging at 13,000 rpm for 5 min. The supernatant was discarded, and the DNA pellet was washed with 500 μl of 70% ethanol and centrifuged at 13,000 rpm for 2 min. The DNA pellet was incubated at 45˚C for 10 min, re-suspended in 30 μl Tris-EDTA (pH 8.0) buffer and stored at -30˚C. Amplification of Plasmodium DNA The presence of malaria parasites in the mosquitoes was detected using nested PCR targeting the small subunit ribosomal RNA (SSU rRNA) gene of Plasmodium. The PCR primer pair rPLU1 and rPLU5 was used in the first PCR reaction, while another pair (rPLU3 and rPLU4) was used in the second PCR reaction [20]. As an internal control, another set of nested PCRs was performed separately to amplify the cytochrome c oxidase subunit II (COII) gene of Anopheles [12]. When a mosquito was confirmed positive for malaria parasites, the Plasmodium species was determined using species-specific primers. Both PCR reactions were performed in a 25.0 μl final volume. The reaction components were prepared by mixing 5.0 μl of 5X PCR buffer (Promega), 0.5 μl of 10 mM dNTPs (Promega), 3.0 μl of 25 mM MgCl2, 1.0 μl each of 10 μM forward and reverse primers, 0.3 μl of 5.0 U/μl Taq DNA polymerase (Promega), 2.0 μl of DNA template and sterile dH2O to make up the 25.0 μl final volume. After completion of the first PCR, 2.0 μl of the PCR product was used as the DNA template in the second PCR. The reactions were carried out using a thermal cycler (T100 Thermal Cycler, BioRad) with an initial denaturation at 95˚C for 5 min, followed by 35 cycles of denaturation at 94˚C for 1 min, annealing for 1 min and extension at 72˚C for 1 min, and one final extension step at 72˚C for 5 min. The annealing temperature was set at the optimal temperature for each set of primers (see S1 Table). The PCR products were analyzed by 1.5% agarose gel electrophoresis, stained with RedSafe nucleic acid staining solution (iNtRON Biotechnology), and visualized with a UV transilluminator. Cloning and sequencing of the SSU rRNA gene of simian Plasmodium The SSU rRNA genes of the five simian malaria parasite species from An. balabacensis caught in Paradason were cloned and sequenced. In addition, we included in the study blood samples from two P. knowlesi patients and two long-tailed macaques, one infected with P. knowlesi and the other with P. inui. To enlarge the data set, we included simian malaria parasites obtained from mosquitoes caught in three other villages (Tomohon, Mambatu Laut and Narandang) in Kudat district from another study. A new universal forward primer (UMSF) was used in combination with species-specific primers to amplify the SSU rRNA gene of Plasmodium. Details of the primers are provided in S2 Table.
Preparation of the reaction mixture and the programmed PCR conditions were as described above. After the PCR was completed, the PCR products were purified to remove impurities and excess reaction mixture using the MEGA quick-spin PCR & Agarose Gel DNA Extraction System (iNtRON Biotechnology, Korea) according to the manufacturer's procedure. Cloning of the SSU rRNA gene was done using pGEM-T Easy vectors (Promega, USA), and the plasmids were extracted from the transformed E. coli (JM109) using the DNA-spin Plasmid DNA Purification Kit (iNtRON Biotechnology, Korea), all according to the manufacturer's protocols. The extracted plasmid vectors were digested using the EcoRI restriction enzyme (Promega, USA) and sent to AITBIOTECH, Singapore for sequencing. Sequencing was carried out in both directions using forward and reverse M13 primers.

BLAST search of SSU rRNA sequences

The nucleotide sequences of the SSU rRNA of the 21 Plasmodium isolates in this study were aligned and compared with other SSU rRNA sequences available in the GenBank database to determine percentage identity, using the Basic Local Alignment Search Tool (BLAST) available online at https://blast.ncbi.nlm.nih.gov/Blast.

Sequence analysis and phylogenetic tree of SSU rRNA

The SSU rRNA sequences were standardized to a fixed region for analysis based on the UMSF and UNR primer binding sites. Further analysis was performed using MEGA software, version 4.1 [21]. The nucleotide sequences were multiple-aligned using the ClustalW method [22] incorporated in the software, and the number of variable nucleotides within each of the five Plasmodium spp. was determined. The phylogenetic tree was constructed using the neighbor-joining method [23], and the evolutionary distances were computed using the maximum composite likelihood model with a bootstrap test of 1000 replicates [24] and the pairwise deletion option. This method was adopted because it takes into account the different rates of evolution or substitution between nucleotides. The region selected for constructing the phylogenetic tree was nucleotides nt81 to nt1041, based on the published P. knowlesi sequence (AY327551) isolated in Kapit, Sarawak, where there was a large focus of infected people [25]. This region includes the binding sites for the universal forward (UMSF, used in this study) and reverse (UNR, [26]) primers of the SSU rRNA. In constructing the phylogenetic tree, Theileria spp. (AF162432) was used as the outgroup. Details of the other 66 nucleotide sequences used in constructing the phylogenetic tree are given in S3 Table. Plasmodium simium (AY579415) and P. brasilianum (AF130735, KT266778) were not included in the sequence analysis, as the selected region used in this study was not available for them in the GenBank database. A second phylogenetic tree was constructed using the consensus sequences of the five Plasmodium species found in Sabah to show the relationships among the Plasmodium isolates found in the macaques, An. balabacensis and humans.
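For readers who want to reproduce the distance-based tree-building step without MEGA, the following minimal sketch uses Biopython's neighbor-joining implementation instead. The sequences, identifiers and the simple identity-distance model here are placeholders, not the study's data: the published analysis used ClustalW alignment and the maximum composite likelihood model.

```python
# Minimal neighbor-joining sketch with Biopython; toy pre-aligned fragments
# stand in for the cloned nt81-nt1041 SSU rRNA region.
from Bio import Phylo
from Bio.Align import MultipleSeqAlignment
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor
from Bio.Seq import Seq
from Bio.SeqRecord import SeqRecord

aln = MultipleSeqAlignment([
    SeqRecord(Seq("ACGTACGTACGTACGTACGT"), id="Pk_An_balabacensis"),
    SeqRecord(Seq("ACGTACGAACGTACGTACGT"), id="Pk_macaque"),
    SeqRecord(Seq("ACGTACGTACGTACGTACGA"), id="Pk_human"),
    SeqRecord(Seq("ACTTACGTACCTACGTAGGT"), id="Theileria_outgroup"),
])

dm = DistanceCalculator("identity").get_distance(aln)  # pairwise p-distances
tree = DistanceTreeConstructor().nj(dm)                # neighbor-joining tree
tree.root_with_outgroup("Theileria_outgroup")          # root on the outgroup
Phylo.draw_ascii(tree)
```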
Ethical clearance

This project was approved by the National Medical Ethics Committee (NMRR), Ministry of Health Malaysia (Ref. NMRR-12-786-13048). All volunteers who carried out mosquito collections signed informed consent forms and were provided with antimalarial prophylaxis during the study period. Blood spots on Whatman filter paper were collected from adult patients by Kudat hospital personnel after the patients had signed informed consent forms. This human blood sample collection was also approved by the NMRR (Ref. NMRR-11-4539471). Blood spots on filter paper were collected by wildlife department personnel from ten wild macaques that had been captured for relocation purposes and kept in cages following the applicable animal-handling guidelines.

Abundance of Anopheles species

A total of 1,599 Anopheles individuals belonging to ten species were caught during 14 months of sampling (Table 1). Anopheles balabacensis was the dominant species in Paradason village, comprising 89.87% of the total catch, followed by An. barbumbrosus (5.75%), An. maculatus (1.38%) and An. donaldi (1.19%).

Infection of Anopheles specimens with malaria parasites

A total of 1,586 Anopheles mosquitoes (of which 1,425 were An. balabacensis) were tested for the presence of malaria parasites using the PCR method. Only 23 An. balabacensis (1.61%) were found to harbor malaria parasites, being infected with either one (78.3%) or two simian Plasmodium spp. (Table 2). Single infections were mostly by P. inui (n = 11). The Plasmodium species in Sabah show a high percentage identity within the same species group (98.4%–99.6%) but less between different species groups. The highest percentage identity (99.6%) was observed between the P. cynomolgi samples isolated from Tomohon, Membatu Laut and Paradason villages, while the lowest was for the P. coatneyi isolates (98.4%) obtained from Narandang and Paradason villages. The SSU rRNA sequences of Plasmodium spp. from Sabah also show high percentage identity with the same species from other Asian regions. Plasmodium coatneyi sequences showed 99% identity with P. coatneyi isolated from M. fascicularis in Kapit, Sarawak (FJ619094), as well as with the CDC (AB265790) and Hackeri (CP016248) strains. Plasmodium cynomolgi sequences showed 99%–100% identity with P. cynomolgi isolated from M. fascicularis in Kapit, Sarawak (FJ619084), and from other macaque species, viz. M. radiata (AB287290) of southern India and M. nemestrina (AB287289) from an unspecified Southeast Asian country. Similarly, P. fieldi has high percentage identity with P. fieldi isolated from M. fascicularis in Kapit, Sarawak (KC662444). Of interest is P. inui, which shows high identity (99%–100%) not only with isolates from Kapit (FJ619074) but also with P. inui isolated from M. fascicularis from South China (HM032051) and southern Thailand (EU400388), and with the strain Taiwan II isolate from M. cyclopis (FN430725). The P. knowlesi samples from Sabah showed 99% identity with P. knowlesi isolated from both humans (AY327551) and M. fascicularis (FJ619089) in Kapit, Sarawak, as well as with that from a Swedish traveler who was infected during his visit to Sarawak (EU807923) [27]. Further analysis of the P. knowlesi group using consensus sequences showed that there were three variable nucleotides between the P. knowlesi isolated from the long-tailed macaque and from humans, two between the long-tailed macaque and An. balabacensis isolates, but none between the An. balabacensis and human isolates (Fig 2). In the phylogenetic tree generated for 13 Plasmodium species infecting monkeys and humans (Fig 3), all 21 Plasmodium isolates obtained in the study were placed in the correct species groups. The P. knowlesi group was positioned below the P. coatneyi group, whereas P. inui, P. fieldi and P. cynomolgi were placed on the upper branches. In the phylogenetic tree depicting the relationships among the five Plasmodium species found in Sabah using consensus sequences, a similar tree topology was observed (Fig 4). Each Plasmodium group except the P. knowlesi group has two branches, each representing the host from which the Plasmodium was isolated. The P. knowlesi group, however, has three branches, with the isolates from An. balabacensis and the macaque closer to each other than to the isolates from humans.
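The percentage-identity figures quoted above reduce to a simple per-column comparison over the fixed SSU rRNA region. A minimal sketch with toy sequences follows; the gap-handling convention is our assumption, not a stated detail of the analysis.

```python
# Minimal sketch of pairwise percentage identity between two aligned sequences.
def percent_identity(seq_a: str, seq_b: str) -> float:
    """Percentage of matching positions between equal-length aligned sequences,
    ignoring columns where either sequence has an alignment gap ('-')."""
    assert len(seq_a) == len(seq_b), "sequences must be aligned to equal length"
    pairs = [(a, b) for a, b in zip(seq_a, seq_b) if a != "-" and b != "-"]
    matches = sum(a == b for a, b in pairs)
    return 100.0 * matches / len(pairs)

# Toy fragments: 1 mismatch in 50 compared positions -> 98.0% identity.
print(percent_identity("ACGT" * 12 + "AC", "ACGT" * 12 + "AG"))
```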
Discussion

In this study, we analyzed 21 partial SSU rRNA nucleotide sequences of five Plasmodium spp. isolated from An. balabacensis collected in Kudat district of Sabah, from infected humans and from a long-tailed macaque, together with other nucleotide sequences downloaded from the GenBank database. The results suggest that in Sabah there is a close genetic relationship among the P. knowlesi found in long-tailed macaques, An. balabacensis and humans. Plasmodium inui appears to be a common simian malaria parasite, found in 61% (14/23) of the infected An. balabacensis specimens. This was also the case in other investigations [9,28]. Hitherto, this simian malaria has not become zoonotic in humans, although it has been proven experimentally to be infective to monkeys through the bites of An. dirus [29]. The infection rate of P. knowlesi in An. balabacensis is low (0.14%, 2/1,425), with only two mosquitoes infected, in both cases together with other Plasmodium species. Nevertheless, P. knowlesi is the dominant Plasmodium species recorded among human cases in Sabah [30]. These cases were recorded mainly in rural areas near forests and among workers in the agricultural sector, viz. in oil palm estates and vegetable farms [13,31]. Sequence data of the SSU rRNA of Plasmodium confirm that the five species of simian Plasmodium commonly harbored by wild macaques in Malaysia are also found in An. balabacensis. BLAST results for Sabah's Plasmodium sequences showed high identity with other simian Plasmodium sequences published in the GenBank database, especially with the simian malaria parasites in long-tailed macaques in Kapit, Sarawak (FJ619069 and FJ619089). This suggests that a similar or closely related cluster of simian Plasmodium is circulating among the monkey populations and Anopheles mosquitoes in both Sabah and Sarawak. This is highly plausible, as these two states share a common boundary and there is continual movement of humans between them.

The total number of nucleotides in the analyzed region differed among the five simian Plasmodium spp. in Sabah, with P. knowlesi having a higher number. The differences in the total number of nucleotides in the SSU rRNA gene confer a unique signature on each Plasmodium species. Furthermore, the presence of conserved and variable sequences in the gene makes it suitable for species identification and phylogenetic study [32,33]. The percentage identity between consensus SSU rRNA sequences of the P. knowlesi isolates from the monkey, the mosquito and man was high (Fig 2). For example, 100% identity was observed between the P. knowlesi isolates from An. balabacensis and humans, 99.8% between An. balabacensis and the long-tailed macaque, and 99.7% between the long-tailed macaque and humans. This indicates great genetic similarity among the P. knowlesi found in the long-tailed macaque, An. balabacensis and human populations. However, it is not certain whether this indicates that the same cluster of P. knowlesi is circulating between these hosts, since we did not dissect the mosquitoes' salivary glands to detect sporozoites, or carry out RT-PCR targeting the specific mRNA transcripts of the sporozoite stage. Further study is therefore needed to determine this, using more P. knowlesi-positive An. balabacensis and analyzing other polymorphic markers or microsatellite loci of the parasite.
Different P. knowlesi haplotypes have been observed in the macaque and human populations in Kapit, Sarawak [14], as well as in the human population in Thailand [34]. Overall, the 13 Plasmodium species in the phylogenetic tree can be grouped into two main clusters, one containing P. vivax and the simian malaria parasites, and the other the human malaria parasites (Fig 3). Although P. simium (AY579415) and P. brasilianum (AF130735, KT266778) were not included in our analysis, as their nucleotide sequences in the GenBank database do not contain the analyzed region, P. simium is closely related to P. vivax [32] and can be placed in the first cluster, while P. brasilianum is closely related to P. malariae and can be placed in the second cluster. It may be noted that P. cynomolgi, P. fieldi and P. simiovale were not clearly resolved, as some of the isolates were grouped on different branches. This could be due to the high percentage of nucleotide identity (99.6%) among these three species. The consensus tree (Fig 4) of the Plasmodium species found in Sabah showed a very close relationship between the Plasmodium isolates from the monkey as the reservoir, An. balabacensis as the vector, and humans as the cases. This is supported by the high nucleotide identity (99.7%–100%) among the P. knowlesi isolates from these three organisms. Currently, in Sabah, An. balabacensis is the only species found to carry P. knowlesi. The phylogenetic analysis here indicates that the vector picks up the malaria parasites from monkeys and transmits them to humans when it feeds on them. However, much about the transmission dynamics of P. knowlesi is still unknown and needs to be unpacked. A clearer picture of the interrelationships of the simian malaria parasites found in An. balabacensis will help us understand more about Plasmodium itself. Future research may focus on the host–vector relationship, which requires longer nucleotide sequence analysis, so that new, informed alternatives for malaria elimination strategies targeting P. knowlesi as well as other simian malaria parasites may be formulated.
Magnetron Sputtered Silicon Coatings as Oxidation Protection for Mo-Based Alloys

Mo-based alloys with solidus temperatures around and above 2000 °C are attractive high-temperature structural materials for future applications in the hot section of gas turbines. However, their oxidation behavior is poor due to pesting starting at 600 °C and nonprotective oxide growth at temperatures above 1000 °C. To ensure sufficient oxidation resistance over a wide temperature range, protective coatings become inevitable. Herein, silicon coatings have been applied by magnetron sputtering on Mo-9Si-8B and on a titanium–zirconium–molybdenum alloy (TZM). The coating architecture is designed to minimize the intercolumnar gaps and porosity, thereby increasing the density. Specimens are tested at 800 and 1200 °C in air isothermally for up to 300 h. The focus is put on the chemical reactions at the coating–substrate interface, the phase formation, and the evolution of the thermally grown oxide. An initially globular SiO2 evolves into a uniform SiO2 layer providing excellent oxidation protection. The investigations reveal a rather slow interdiffusion between the coating and the alloys when tested in air. At the coating–substrate interface, exclusively the Mo3Si phase develops. Finally, the phase formation at the coating–substrate interface is studied in detail for various heat treatments in air and vacuum.

Introduction

The efficiency of a gas turbine can be improved by increasing the gas inlet temperature. [1][2][3] Nowadays, aeroengines are limited by the temperature capability of the materials used in the first stages of the high-pressure turbine. Ni-based superalloy blades are provided with internal cooling and coatings consisting of an outer thermal barrier layer and a bond coat to connect it to the respective substrate. Although they show excellent performance, a further increase in operating temperature with this material is unlikely due to the limit given by the solidus temperature of Ni-based superalloys. [2,3] Mo-based alloys with solidus temperatures around and above 2000 °C are attractive high-temperature structural materials to overcome those limits. Therefore, these alloys are potential candidates for future applications in the hot section of gas turbines. Alloys with a composition in the three-phase field of Mo_ss (Mo solid solution), Mo3Si, and Mo5SiB2 show favorable mechanical properties. [2][3][4] Mo_ss forms a continuous matrix that provides sufficient fracture toughness, whereas the intermetallic phases Mo3Si and Mo5SiB2 ensure promising creep resistance. Their low tensile ductility at room temperature and their oxidation behavior are still challenging. The oxidation and creep behavior of these alloys can be somewhat improved by alloying with titanium, [5][6][7][8] which is also the topic of a companion paper in the same issue (see Matthias Weber et al., Effect of Water Vapor on the Oxidation Behavior of the Eutectic High-Temperature Alloy Mo-20Si-52.8Ti, this issue). However, the general oxidation behavior of Mo-based alloys is poor due to evaporation of MoO3 at temperatures below 1000 °C, well known as the pesting regime, [9] and rapid oxide growth at temperatures above 1000 °C. To ensure sufficient oxidation resistance over a wide temperature range, protective coatings become inevitable. A thermochemically compatible interface between coating and alloy, as well as a coefficient of thermal expansion (CTE) close to that of the Mo-based alloys, are prerequisites for good coating performance.
Substantial improvements could be demonstrated by Mo-, Si-, and B-containing coatings applied by chemical vapor deposition (CVD) or physical vapor deposition (PVD) techniques. The CVD coatings were produced by (co-)pack cementation of Si; of Si and B; or of Si and Al. After heat treatments and testing, the coatings mostly show oxidation protection based on silicon dioxide, borosilicate, or Al2O3. [10][11][12][13] The magnetron sputtered PVD coatings with a thickness of around 5–10 μm developed in our previous work showed promising oxidation behavior. Three-phase coatings consisting of Mo5Si3, MoSi2, and MoB, as well as single-phase MoSi2 and MoB coatings, have been investigated. Mo5SiB2 (T2) was introduced as a diffusion barrier due to its high atomic packing density, to avoid early coating degradation by interdiffusion with the alloy. [2,14] In addition, aluminum-containing coatings based on Mo-70Al and Mo-47Si-24Al have been investigated as well, including the application of thermal barrier coatings on top. [15]

In this research, a different approach has been chosen. To establish a larger coating thickness by magnetron sputtering, boron was avoided in the procedure because of its poor sputter rate. Furthermore, a single-layer concept was pursued for simplicity. Silicon is widely accepted as an oxidation protection layer and is used by various researchers for SiC/SiC ceramic matrix composites as a bond coat in environmental barrier coating systems. [16,17] Therefore, Si has been used here as a PVD coating to protect Mo-based alloys. Its oxide SiO2 can emerge in various polymorphs; in most studies cristobalite is found. This so-called thermally grown oxide (TGO) is one of the slowest-growing oxides known to date, able to withstand several thousand hours of oxidation at temperatures above 1100 °C. Moreover, silicon shows favorable mechanical properties because it becomes ductile at temperatures above 600 °C. [18,19] The main emphasis of this study is to develop and characterize a single-layer coating based on silicon for the Mo-9Si-8B alloy to ensure oxidation protection up to 1200 °C.
Experimental Section

The Mo-based alloy (Mo-9Si-8B in at%) was fabricated at the Karlsruhe Institute of Technology (KIT) by arc melting the high-purity elements Mo, Si, and B with respective purities of 99.99%, 99.8%, and 99%. An arc melter of type AM/0.5 by Edmund Buehler GmbH was used, and the arc-melting procedure was performed in a water-cooled copper crucible under Ar atmosphere, as described in detail by Obert et al. [20] During arc melting of various Mo-Si-X alloys, the contamination by oxygen was routinely measured by carrier gas hot extraction. Typically, between 150 and 350 wt ppm of oxygen were detected, which is substantially lower than the values observed in PM-processed material, which are above 2000 wt ppm of oxygen. [21] A homogeneous elemental distribution was ensured by repeating the melting procedure multiple times, and the final chemical composition was measured and confirmed to vary by less than 0.5 wt% from the target chemical composition. As expected from the phase diagram, the produced Mo-9Si-8B alloy consisted of the phases Mo_ss (bcc), Mo3Si (A15), and Mo5SiB2 (T2). [22] For the coating trials, the alloy was used in the as-cast state with substrate dimensions of 10 × 15 × 2 mm. For comparison, a commercial titanium–zirconium–molybdenum alloy (TZM) with the nominal composition Mo-0.5Ti-0.08Zr-0.01-0.04C (in at%), provided by Plansee AG, Reutte, Austria, was used as well.

The Si coating was applied using a batch-type magnetron sputtering facility (Z400, Systec SVS vacuum coatings, Karlstadt, Germany). Before coating deposition, specimens were cleaned by Ar+-ion etching to activate the specimen surface. Two dense polycrystalline disks of Si with a diameter of 100 mm were utilized as targets, placed in a face-to-face arrangement. DC sputtering was performed at 1 kW target power to achieve a total coating thickness between 25 and 50 μm. After first successful tests with a 50 μm-thick Si coating, the thickness was halved because the interdiffusion between the coating and the Mo-Si substrates was slower than expected; the majority of the experiments were carried out on the thinner coatings. The total pressure during deposition was 0.45 Pa in Ar atmosphere (flow rate of 25 sccm). During the deposition process, the substrate temperature reached about 100 °C without additional heating. The deposition rate was around 6 μm h⁻¹. Samples were constantly rotated during the application of the coating, ensuring full coverage of the samples and a nearly constant all-around thickness. Due to variation in sample position with respect to the sputtering source, a slight variation in coating thickness was noticeable. As the Si coating was X-ray amorphous after the deposition process, a crystallization treatment was applied for 1 h at 900 °C in air. To achieve a nearly dense coating and to rapidly close the intercolumnar gaps, the specimens were heated rapidly during crystallization as well as during the initial oxidation. This temperature was chosen to guarantee the crystallization of Si, which was confirmed by high-temperature X-ray diffraction (XRD) investigations, and to allow comparison of the present results with previous data obtained on similar coatings applied on SiC substrates. Afterward, the coated specimens underwent isothermal oxidation testing for up to 300 h in lab air in a box furnace. Two temperatures were used for testing, 800 and 1200 °C, to investigate both the pesting and the high-temperature oxidation behavior.
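As a quick consistency check on the stated deposition parameters, the sputtering times implied by the roughly 6 μm h⁻¹ rate follow directly; this tiny sketch is illustrative only.

```python
# Back-of-envelope deposition times for the two coating thicknesses,
# assuming the ~6 um/h rate stated above holds throughout.
rate_um_per_h = 6.0
for thickness_um in (25.0, 50.0):
    print(f"{thickness_um:>5} um -> {thickness_um / rate_um_per_h:.1f} h of sputtering")
# -> roughly 4.2 h and 8.3 h, respectively
```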
The crystallization treatment and the isothermal oxidation were performed without a cooling period in between. To compare the phase formation at the interface between coating and alloy in vacuum (10⁻⁵ mbar) and in lab air, a comparison study was done with silicon-coated TZM. For all coatings, phase analyses were performed using XRD (Bruker D8 Advance, Cu Kα radiation, EVA/Topas 4.2 software package, Bruker AXS, Karlsruhe, Germany). Microstructural analyses were carried out by scanning electron microscopy (SEM) (DSM Ultra 55, Carl Zeiss NTS, Wetzlar, Germany) equipped with an energy-dispersive X-ray spectroscopy (EDS) system (Aztec, Oxford Instruments, Abingdon, UK). EDS analyses were performed at 15 kV. For further analysis, two lamellae were produced by a focused ion beam (FIB) (Dual Beam FEI Helios, FEI Philips, The Netherlands). To analyze the interdiffusion zone (IDZ), transmission electron microscopy (TEM) was carried out using imaging, EDS analyses, and electron diffraction measurements applying selected area diffraction (SAD) (Tecnai F30 TEM/STEM, FEI Philips, The Netherlands).

Isothermal Oxidation Behavior of the Si-Coated Mo-Based Alloys

The as-coated silicon coating possesses excellent adhesion on both substrate alloys. It has a columnar structure, evident in cross section, and a cauliflower pattern in top view (see Figure 1a,d for the TZM alloy substrate). This specimen serves as a reference micrograph against which the tested coatings can be compared with the initial as-coated state after deposition. XRD, not shown here, proved that the coatings were amorphous in this state. After the initial heat treatment for 1 h at 900 °C in lab air, the Si coatings crystallized. In Figure 1b, the coating is shown after 1 h of crystallization annealing at 900 °C and 10 h of isothermal testing at 1200 °C in air. The SEM cross section in Figure 1b shows that the intercolumnar gaps are closed already after 10 h of exposure due to the oxidation of silicon. A dense SiO2 TGO is visible on top of the surface of the coating, which also shows a bubbly structure (see Figure 1e). After 100 h of heat treatment, the only change in the coating system is the obvious growth of the SiO2 TGO, which increased in thickness up to 1.1 μm (Figure 1c). The top view shows smaller bubbles of SiO2 that now appear fully dense (see Figure 1f). In Figure 1b,c, the formation and growth of a thermally grown oxide are evident after 10 and 100 h of testing at 1200 °C on Mo-9Si-8B. By analyzing the thickness of the TGO with increasing time, as exemplified in Figure 2c, a parabolic growth rate of the thermally grown SiO2 can be confirmed (see Figure 3). The SiO2 consists of the cristobalite phase, as proven by XRD. The TGO was dense, and the initially formed bubbles seemed to be connected to the dense layer forming afterward. The bubbles were not included in the determination of the TGO thickness.
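Since the TGO thickening follows a parabolic law, a minimal fitting sketch can make this check concrete. Only the 1.1 μm reading at 100 h comes from the text above; the other (time, thickness) pairs and the fit-through-origin choice are illustrative assumptions, not measured values from the figures.

```python
# Sketch of the parabolic-growth check for the SiO2 TGO: fit x^2 = kp * t.
import numpy as np

t_h = np.array([10.0, 100.0, 300.0])    # exposure times at 1200 C, hours
x_um = np.array([0.4, 1.1, 2.0])        # TGO thickness; only 1.1 um @ 100 h is from the text

# Least-squares slope through the origin for x^2 versus t gives kp in um^2/h.
kp = np.sum(x_um**2 * t_h) / np.sum(t_h**2)
print(f"parabolic rate constant kp ~ {kp:.2e} um^2/h")
print("extrapolated x(1000 h) ~", np.sqrt(kp * 1000.0).round(2), "um")
```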
To investigate the potential influence of boron on the formation of the SiO2 bubbles, oxidation of the silicon coating was also investigated on a TZM substrate, which contains no boron. Figure 4a shows that after a crystallization treatment at 900 °C for 1 h and 30 min at 1200 °C, both done in air, no bubble-shaped structures were visible on the surface. After a total oxidation time of 2 h at 1200 °C, the surface is covered by bubbles that appear similar in morphology to those formed on Mo-9Si-8B (see Figure 4b). Again, the bubbles are well connected to the dense silica layer that forms underneath during prolonged oxidation of 5 h (see Figure 4c).

The TEM-EDS mapping in Figure 5 shows an FIB lamella of the silicon coating on a Mo-9Si-8B substrate tested for 10 h. The silicon coating is clearly visible, as is the SiO2 TGO. A weak molybdenum signal is detectable in some isolated areas. There is no clear accumulation of Mo within the bubbles, although some Mo signals appear locally in the gaps between the silica bubbles. To additionally investigate the potential impact of MoO3 sublimation, which starts at about 700 °C, on the coating behavior, an isothermal heat treatment was applied at 800 °C for 100 h after the crystallization treatment for 1 h at 900 °C. The result, shown in Figure 6, reveals a relatively dense coating on a Mo-9Si-8B substrate without a visible TGO. A few columns do not seem to be sealed by SiO2 but do not lead to oxidation of the substrate. The top view shows a quite dense coating replicating the substrate roughness and showing remaining signs of the typical PVD columnar structure, but to a much lower degree compared with the as-coated condition shown in Figure 1a,d. SEM-EDS (not shown here) proved that no Mo is detected in the coating.

Reaction between Coating and Substrate during Isothermal Oxidation

During oxidation, an IDZ can be observed that slowly grows with time. Figure 7 shows the increase in thickness of this zone after 10 h (Figure 7a), 100 h (Figure 7b), and 300 h (Figure 7c) at a testing temperature of 1200 °C. The growth of the IDZ versus the exposure time is shown in Figure 8, where it follows a parabolic growth rate. Although some pores appear at the interface between substrate and coating after 300 h, mainly located within the silicon in contact with the IDZ, the adhesion of the coating is still strong, which assures the oxidation-protective potential of the Si coating for the Mo-9Si-8B alloy. To clarify which phases have formed at the interface and in the IDZ, a sample tested for 300 h was analyzed in the TEM in different locations (see Figure 9). Figure 9a shows a high-angle annular dark-field (HAADF) image of the FIB lamella with the region of interest. The analysis clearly proves that the interdiffusion zone consists exclusively of the Mo3Si phase; two grains are shown in Figure 9b,c. Comparing two different lamellae showed that the IDZ is rather inhomogeneous in thickness. In Figure 9, the IDZ thickness is identical to the Mo3Si grain size, because the IDZ is only one grain thick there, whereas in other parts several Mo3Si grains form above each other, extending further into the substrate and thereby resulting in a thicker IDZ.

Most of the investigations in the literature on the interdiffusion of silicon and molybdenum are done with the pure elements under vacuum. [23] To separate the phases within the IDZ clearly from the phases in the substrate, and for a better comparison with the literature, further investigations were done on TZM coated with silicon. To study the influence of the annealing atmosphere on the interdiffusion behavior, a vacuum heat treatment was compared with a lab-air heat treatment, applying the same time and temperature. After vacuum annealing for 1 h at 900 °C and 5 h at 1200 °C, the phases Mo5Si3 and MoSi2 evolved to form the IDZ (see Figure 10a).
Furthermore, most of the original Si coating spalled off this IDZ, leaving it exposed to the atmosphere. In the EDS line scans using SEM (Figure 10), the substrate, the IDZ, and the coating were analyzed. The sample annealed in air shows the same IDZ already shown in Figure 7. There, the line scan (Figure 10b) illustrates the formation of Mo3Si in the IDZ. Furthermore, it reveals an oxygen content of about 38 at% (measured by EDS) in the IDZ of the specimen tested in air, whereas no oxygen was found in the IDZ of the vacuum-annealed sample (Figure 10a). Figure 11 shows the results of the XRD investigation. As the Si coatings used for this investigation were intentionally only 25 μm thick, the XRD information extracted from those samples also provides results on the phases within the IDZ located underneath the Si. Thus, diffraction peaks of the Mo3Si phase are also present for the air-tested sample, whereas the Mo5Si3 phase is detectable for the vacuum-tested sample. Due to the small peak heights of those phases, Figure 11 is shown with a logarithmic scale. Obviously, different phases developed within the IDZ after the vacuum (MoSi2, Mo5Si3) and the lab-air (Mo3Si) treatments, as confirmed by EDS and XRD. The development of Mo5Si3 and MoSi2 in the IDZ is conclusive for the vacuum-annealed specimen. For the lab-air-treated specimen, the interdiffusion phase shows exclusively the composition of Mo3Si with some excess oxygen.

Oxidation of Silicon on Molybdenum-Based Alloys

The applicability of silicon as an oxidation-protective coating on a Mo-9Si-8B alloy was successfully demonstrated for up to 300 h at 1200 °C. Figures 2 and 3 show the exclusively diffusion-controlled parabolic growth rate of the thermally grown SiO2 layer on silicon, which has already been studied in detail for other substrate materials. [24] XRD scans confirm that the TGO forming here is cristobalite, in accordance with most findings on silica formation on SiC and pure Si, although the temperatures for cristobalite formation given in the phase diagram under equilibrium conditions are higher. [24,25] In our previous work on HfO2-doped and pure silicon coatings on SiC substrates, we confirmed that a TGO consisting of cristobalite forms at about the same testing temperature of 1250 °C. [24] This is consistent with the findings of ref. [26], where potential reasons for favored cristobalite formation at lower temperatures are discussed in detail. The TGO growth rate found in the present work is of the same order as the one for PVD silicon coatings on SiC substrates, although the oxidation temperature was 50 °C higher there. [24] This implies that the Mo-based substrate does not much influence the oxidation kinetics of this coating. The SiO2 polymorph cristobalite undergoes a high/low-temperature phase transition with a CTE difference of about 7.2 × 10⁻⁶ K⁻¹ and a volume change of approximately 2.8%. This transition potentially causes tension between the SiO2 layer and the non-oxidized Si coating. [24,25,27] However, as no severe oxide spallation or cracking of the TGO was detected after the 300 h of testing applied here, the coating offers great potential for prolonged oxidation protection of Mo-based alloys. Interestingly, the growth of the silica starts by forming bubbles on top of the surface. After an incubation period of 30 min at 1200 °C, during which no bubbles form, they are already present after 120 min of oxidation (see Figure 4).
A dense and continuous TGO layer evolves as well, most likely in parallel to the formation of the bubbles. Silica bubbles were not observed for the same PVD silicon coating applied on SiC substrates. [24] The bubbles do not seem to grow or increase in number but stay nearly constant and unchanged over the entire annealing time of up to 300 h. The silica grows continuously underneath the bubbles, which do not change the local oxidation behavior substantially. This leads to the conclusion that some sort of gaseous species might form during the initial SiO2 formation and initiate the bubble formation. As the bubbly structure appears on a TZM substrate as well as on the Mo-9Si-8B alloy, boron can most likely be excluded, and molybdenum or the volatile MoO3 is suspected to be trapped in these bubbles. MoO3 forms and volatilizes already at temperatures around 650 °C. [8] After the growth of both the dense TGO and the interdiffusion zone, no MoO3 seems able to move to the surface any longer. Therefore, it is likely that small amounts of MoO3 diffuse through the silicon coating onto the surface only in the initial stage. The formation of a dense and continuous TGO layer is already completed after 5 h at 1200 °C. The TEM-EDS in Figure 5 provides at least some hints pointing toward molybdenum causing the initial formation of the silica bubbles. Figure 6 shows that the amount of MoO3 reaching the surface seems to be quite small, because no internal substrate oxidation can be found after 100 h at 800 °C. To the best knowledge of the authors, the effect of an initially bubbly silica formation on a silicon coating has not been published so far. More detailed research has to be conducted in the near future to give more insight into the formation mechanism of the bubbles.

In the initial stages of oxidation, the columnar structure of the silicon coating densifies by rapid formation of silica along the intercolumnar gaps. This densification seals the gaps and prevents further inward diffusion of oxygen, because all gaps are filled by silica, which possesses a rather low oxygen diffusivity. It is possible that those early stages of oxygen inward diffusion, silica formation in the intercolumnar gaps, transient formation of (volatile) Mo oxides, and formation of silica bubbles are interconnected, which is the topic of ongoing studies. However, as the Si coating can protect the Mo-9Si-8B substrate successfully from pest oxidation for 100 h at 800 °C, significant Mo-oxide volatilization might only set in at more elevated temperatures.

Interdiffusion of Silicon and Molybdenum in Different Atmospheres

The interdiffusion of silicon and molybdenum is widely investigated and understood in atmospheres that do not contain oxygen. In this case, silicon diffuses much faster into molybdenum than vice versa, forming the phases MoSi2, Mo5Si3, and lastly Mo3Si. MoSi2 is thereby the most dominant phase, followed by Mo5Si3; Mo3Si occurs later as a thin layer directly on the molybdenum. [12,[28][29][30][31] In the present investigation, silicon-coated TZM follows the same pattern when heat treated under vacuum (see Figures 10 and 11). The Mo-Si phase diagram confirms that all three phases can evolve at temperatures around 1200 °C. The study of Yoon et al. provides evidence that silicon is not able to adhere over longer annealing times to the Mo-silicide phases due to the large mismatch of the CTEs (Si: 3.8 × 10⁻⁶ K⁻¹, MoSi2: 9.5 × 10⁻⁶ K⁻¹). [32,33]
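To put rough numbers on these mismatch arguments, the following sketch evaluates the linear strains implied by the CTE values quoted in this section. The temperature window and the purely elastic, unconstrained estimate are illustrative assumptions; only the CTE values and the roughly 2.8% cristobalite transition volume change come from the text.

```python
# Rough thermal-mismatch strain estimates from the CTE values quoted above.
pairs = {
    "Si vs MoSi2": (3.8e-6, 9.5e-6),
    "Si vs Mo3Si": (3.8e-6, 6.03e-6),  # Mo3Si CTE quoted in the next paragraph, from [34]
}
dT = 1175.0  # K, assumed cooling window from 1200 C to room temperature

for name, (a1, a2) in pairs.items():
    print(f"{name}: mismatch strain over {dT:.0f} K ~ {abs(a1 - a2) * dT:.3%}")

# Linear strain implied by the ~2.8% cristobalite transition volume change:
print(f"cristobalite transition: linear strain ~ {0.028 / 3:.3%}")
```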
In contrast to the scenario under vacuum described earlier, under an oxygen-containing atmosphere such as lab air the silicon coating on the TZM substrate behaves differently in the current study. In this case, only the Mo3Si phase forms, predominantly by growing into the TZM substrate (see Figures 10 and 11). The CTE of Mo3Si is about 6.03 × 10⁻⁶ K⁻¹, [34] which fits better to the CTEs of the Mo-based alloys, silicon, and silica. Oxygen has been detected by the EDS line scan in substantial amounts in this phase. As there is no evidence of a second phase forming other than Mo3Si, the oxygen seems to be dissolved in this phase. The study also shows that the Mo3Si IDZ increases in thickness with time in air (Figure 8). Yoon et al. describe that the Mo3Si phase grows into the Mo substrate rather than forming in the contact zone between Si and Mo. The same has been found here, i.e., formation of Mo3Si mostly by inward diffusion of silicon and formation of the phase within the former Mo alloy (see Figure 7). Silicon is in this case the faster-diffusing species, as seen in the vacuum annealing, too. [35] This is in good agreement with the literature data, clearly indicating that silicon is the more mobile element in all relevant Mo-Si phases due to the much higher defect concentration within the Si sublattices. [23] A few studies have already aimed at explaining the influence of oxygen on the interdiffusion between those two elements. [28,32] Although none of those studies found the total suppression of MoSi2 and Mo5Si3 demonstrated here, the explanation given there seems to hold for the present case as well. Yoon et al. concluded that the MoSi2 growth rate and activation energy are heavily influenced by impurities like oxygen. [32] The growth rate decreases in an oxygen-containing atmosphere, which is in good agreement with the results found in this study. The present findings clearly show that no phase other than Mo3Si forms in an oxygen-containing atmosphere. Moreover, it dissolves substantial amounts of oxygen, because there is no known phase constituted by all three elements Mo, Si, and O. It could be possible that oxygen supports the formation of the Mo3Si phase rather than the Si-rich Mo silicides. How oxygen hinders the silicon from diffusing further into the molybdenum, thereby impeding the development of the silicon-rich Mo silicides, could not be finally clarified within this study. A similar phenomenon can, however, be found in Rastogi et al., [28] where SiO2 formed at the silicon/molybdenum interface hinders the formation of silicides. In the present study, no SiO2 has been found below the Si coating so far, which confirms the excellent oxidation protection of this PVD layer for Mo-based alloys.

Conclusion

In this study, a simple coating of pure Si has been successfully applied as a single layer for oxidation protection by magnetron sputtering on Mo-based alloys for the first time. The coating was able to protect the Mo-9Si-8B alloy for 300 h of isothermal exposure at 1200 °C in lab air due to the development of a dense thermally grown silica scale on top. The protective Si coating showed excellent adhesion on the Mo-9Si-8B substrate during prolonged oxidation. In the initial phase of oxidation, the SiO2 formed a bubbly structure. The silica bubbles did not change with time, whereas the dense and slowly growing TGO formed underneath and thickened with time.
At the intermediate temperature of 800 °C, pest oxidation could be successfully suppressed by the Si coating for up to 100 h. An interdiffusion zone consisting exclusively of the Mo3Si phase formed during annealing in air by silicon inward diffusion into the Mo-based alloy. This phase contains substantial amounts of oxygen. During 300 h of testing at 1200 °C, this slowly growing interdiffusion zone possesses a varying thickness on the order of 1–2 μm. Testing in vacuum showed the formation of an interdiffusion zone about 8 μm thick, containing the MoSi2 and Mo5Si3 phases, in which no oxygen was detected. It seems reasonable that oxygen-containing atmospheres favor the formation of Mo3Si while suppressing the MoSi2 and Mo5Si3 phases.
Chord diagrams, exact correlators in spin glasses and black hole bulk reconstruction

The exact 2-point function of certain physically motivated operators in SYK-like spin glass models is computed, bypassing the Schwinger-Dyson equations. The models possess an IR low-energy conformal window, but our results are exact at all time scales. The main tool developed is a new approach to the combinatorics of chord diagrams, allowing us to rewrite the spin glass system using an auxiliary Hilbert space, and Hamiltonian, built on the space of open chord diagrams. We argue the latter can be interpreted as the bulk description and that it reduces to the Schwarzian action in the low-energy limit.

The SYK model at large N turns out to be solvable, in the IR and other limits, and exhibits several interesting features such as a conformal regime [1] and a maximal chaos exponent [2][3][4][6]. In this work, we will be interested in a particular class of spin glass models introduced in [35], which are close relatives of the SYK model, and derive a formula for the exact 2-pt function of certain operators.

The model is the following. Consider n sites with a spin-1/2 degree of freedom on each. Denote the Pauli matrices acting on site i = 1, 2, ..., n by σ_i^{(a)}, with a = 1, 2, 3. Given an integer p, we define a random Hamiltonian H^{(p)} as follows. Let e = (i_1, ..., i_p) be a vector of length p of distinct integers defining a subset of the n sites, and let a = (a_1, ..., a_p) be a second vector of length p, with entries being either 1, 2 or 3. Denoting the pair (a, e) by J, we define

\sigma_J = \sigma^{(a,e)} = \sigma^{(a_1)}_{i_1} \sigma^{(a_2)}_{i_2} \cdots \sigma^{(a_p)}_{i_p}, \qquad (1.1)

and the spin glass Hamiltonian is

H^{(p)} = \frac{1}{\sqrt{3^p \binom{n}{p}}} \sum_J \alpha_J \, \sigma_J, \qquad (1.2)

where the sum runs over all possible J's, the α_J are independent Gaussian variables with zero mean and unit standard deviation, and the normalization is such that the second moment of the density of states below is unity (we will drop the superscript p from now on). The relevant parameter controlling the asymptotic density of states is [35]

q = e^{-\lambda} \quad \text{with} \quad \lambda = \frac{4}{3} \frac{p^2}{n}, \qquad (1.3)

and the exact asymptotic density of states of the model (1.2) was computed in [35] in the limit

\lambda \ \text{fixed}, \quad n \to \infty. \qquad (1.4)

We will refer to this as the λ-scaling limit. We will be interested in the limit λ → 0, where the distribution of eigenvalues approaches a Gaussian distribution (pointwise), and hence we will refer to these models as "almost Gaussian" spin glass models.

The spin glass model (1.2) is quite similar to the SYK model. Apart from replacing Majorana fermions with Pauli matrices, the more critical difference is that in the λ-scaling limit (1.4) p is scaled with √n, leaving λ as a parameter, whereas in SYK, p is held fixed as n → ∞, while scaling the energies properly to obtain a solution of the model. However, the λ-scaling model with Majorana fermions was discussed in [18], where it was shown to have a low-energy limit λ → 0, E → 0 (dubbed the "double scaled SYK" model) in which the density of states is that of the Schwarzian theory. Hence, the models (1.2) can just as well be used to study the physics of AdS2. In this work, we discuss the full almost Gaussian model and use the double scaled limit to check our results.

The main results in the paper are:

• We motivate why random operator observables are relevant for black hole physics, i.e., not just random Hamiltonians. This is done in section 2, where we also survey existing results and state the new result on the 2-pt function.

• A new method of computing the distribution of eigenvalues of the Hamiltonian (1.2) in the λ-scaling limit.
The new method relies on the reduction in [35] of the spin glass Hamiltonian to chord diagrams, but then takes a different route in evaluating the latter. This is done in section 3. Section 4 is an analysis of the λ → 0 limit, which parallels Appendix B of [18] in our notation.

• The derivation in section 3 relies on an auxiliary Hilbert space and a Hamiltonian acting on it, which we denote by T. This new Hamiltonian is equivalent to the full Hamiltonian of the spin glass in the sense that whenever the unitary operator e^{iHt} appears, acting on the original Hilbert space, it can be replaced by e^{iTt} acting on the auxiliary Hilbert space. In section 5 we suggest that this is the analogue of the bulk Hamiltonian and show in what limit it reduces to the Schwarzian effective action in its Liouville form.

• In section 6 we compute the exact time-dependent 2-pt function of an additional random operator of length p′ ∼ √n. This can be reduced to another chord partition function in which one chord is marked. We use the technique developed in section 3 to evaluate it.

Motivation, Setup and Summary of results

We will analyse the spin glass Hamiltonian model (1.1)-(1.2). However, we will probe it using a random operator. The latter will be of a similar statistical type as the Hamiltonian, i.e., it will be defined by the same equations (1.1)-(1.2), but with

• a different length parameter p′ ≠ p, and
• a new set of independently drawn coefficients.

In subsection 2.1 we motivate this specific choice of operator. The rest of the section is an "executive summary": the setup of the model and known results in 2.2, and a summary of the new results in 2.3.

Motivation - random observables and factorization

2.1.1 Why random operator probes?

Since black holes share some properties with chaotic systems [36][37][38][39], they can be thought of as described by a suitable random Hamiltonian. In particular, for AdS black holes, we might want to think of some core of states in the spectrum as governed by a random Hamiltonian describing the near-horizon black hole physics, dressed by a "structured" non-random Hamiltonian describing excitations well separated from the horizon. In this picture, one needs to specify the statistical class of the random Hamiltonian. This is precisely what the SYK model achieves, as the relevant class for nearly-AdS2 spacetimes.

The next step is to probe the black hole (BH) using the available bulk probes, such as single trace operators or their analogues. The Hamiltonian in quantum mechanics, or the local energy-momentum tensor in higher dimensions, is one such operator. Probing with the full Hamiltonian does not provide any more information beyond the partition function, but the local energy-momentum operator does. In practice, it is another massless field for which we can put sources on the boundary. Just as the full Hamiltonian is indistinguishable from a random operator when acting on the BH states, we can expect that the local energy-momentum operator will also be effectively described by some random (local) operator acting on the Hilbert space of the states of the black hole. But the local energy-momentum tensor is just one of a tower of single trace operators with which we can probe the system. In N = 4 SYM we can use its primary tr(X²) to probe the black hole, or we can just as well use any other of the tr(Xⁿ) operators. If the former is a random operator on the states of the black hole, why should we not expect all single trace operators to be of a similar nature?
We would like to suggest that the relevant probes appearing in General Relativity are random operators on the BH states. The main issue would then be from which ensemble these operators are drawn. If we have some idea about the statistical ensemble of the Hamiltonian, we can try to guess what the ensemble for the other single trace operators is.

Another way to phrase the argument is that the SYK model is dual to AdS2 in appropriate large N and energy regimes. But there are other models which realize the same universality class (for example, the one discussed in this paper is based on different spin matrices). So there may be many ways of defining the statistics of the random Hamiltonian which give rise to the same physics; some may be similar to SYK and others may be different. Focusing on the computation of specific operators used to define a specific realization, such as the χ_i in the SYK model, certainly yields the maximal amount of information about the model, but it may not be universal enough across the different models. Rather, motivated by the fact that the local energy-momentum tensor is one "single trace operator" out of many, we would like to suggest that useful probes are random operators appropriately built out of the basic constituents of the theory, just as the Hamiltonian is. The statistical class of these random operators may be more universal across the different ways of building models (as we will see in our case).

Yet another argument is the following. In the SYK model, the Hamiltonian is a sum of finite-rank polynomials of the χ fields with random couplings. Viewing the χ's as the analogues of the single trace operators in higher dimensions (which is anyhow problematic since they live in SO(N) representations) implies that the Hamiltonian in the black hole regime can be written as a sum of polynomials of single trace operators. This seems to be a very strong assumption for the higher-dimensional AdS/CFT dualities. A weaker assumption is that both the local energy-momentum tensor and all other single trace operators can be written using some other operators which act on the BH states, and which are just used to define the statistical class of the observables and probes. These operators need not be asymptotic observables outside the black hole; rather, they just need to form a rich enough set to allow for the correct definition of the statistical class of the observables.

This is somewhat against the usual application of the AdS/CFT correspondence where, in this context, the SYK model is taken to be the microscopic theory which defines all of spacetime. In this approach, one is committed to all the operators defined in the model. However, in practice, if one is interested in the AdS2 part, one glues it to an external region in order to break conformal invariance (and the gluing might eventually vary if, for example, one thinks of an AdS2 near-horizon region of an object in higher dimensions). It is not clear to what extent the full SYK model provides an extension which has an adequate gravity dual outside the AdS2 region, and even if it does, it is not clear whether it is universal. The right probes on AdS2 are determined just as much by this outside-of-AdS2 region, since the probe must be defined on the boundary. This means the choice of the right probes in the AdS2 region, within a given model, might be ambiguous in general.

2.1.2 What random operator probes?
Having argued that random operators are suitable probes, with ensembles related to the one from which the Hamiltonian is drawn, in this subsection we would like to discuss another constraint on the ensemble from which probes are drawn, originating from requiring factorization of correlation functions. We will see that it again points us in the direction of almost Gaussian random operators, similar to H^{(p)}.

Within the AdS/CFT correspondence, correlation functions of single trace operators factorize at leading order when evaluated in the ground state or in any other state well described by a semiclassical background. This is usually taken to include black holes, although this assertion is on less solid footing there, as the detailed quantum state of the black hole may matter (and surely does over long enough time scales). So the extent to which correlation functions do not factorize will teach us about the role of the quantum state of the black hole, and may also teach us about deviations from the standard Einstein-Hilbert low-energy effective action.

On the field theory side of the AdS/CFT correspondence, factorization is a consequence of the large N limit when evaluated around the ground state. Around the black hole background, it implies a non-trivial constraint on the statistics of probe operators [40]. Consider a microcanonical ensemble with a small enough energy spread, and consider the 4-pt function

\mathrm{Tr}\left( M \, M \, M^{\dagger} M^{\dagger} \right), \qquad (2.1)

where the trace is over the states in this energy band. Since black holes are strongly mixing systems, one might have expected that M, when acting in this energy band, would be described by one of the ordinary random matrix ensembles. An example of this is the often-assumed strong form of ETH,

M_{ij} = \overline{M}(E)\, \delta_{ij} + e^{-S(E)/2} f(E, \omega)\, R_{ij}, \qquad (2.2)

where the most straightforward interpretation of this formula is as a statement about the statistics of the matrix elements. This relation actually comes about from a minimal set of assumptions: A) that only pairwise contractions of the operators matter (after all, we would like to obtain factorization), and B) that all the states in the microcanonical energy band are equivalent, and hence the statistics should have full unitary invariance in this energy band; this is also an assumption often made in statistical physics. However, under these circumstances correlation functions do not factorize properly. The ansatz in (2.2) is the same as drawing the operator M from a distribution with measure

d\mu(M) \propto e^{-N \, \mathrm{Tr}\left(M M^{\dagger}\right)} \, dM, \qquad (2.3)

where N is the number of states in the energy band. So we only need to compute a Gaussian integral. With this measure, in the large N limit, the 4-pt function (2.1) receives only one (planar) contribution. However, factorization implies that there should be two contractions. It seems difficult to remedy this within the ordinary ensembles (for example, by changing the measure to e^{-N V(M, M^{\dagger})} for a more general V).

Since there are restrictions to implementing factorization in the simplest ensembles, it is interesting to find additional examples in which correlators factorize. More precisely, we would like them to almost factorize; the deviation from exact factorization is then interpreted as bulk interactions. At the level of a single operator, the most naive indicator of factorization, neglecting for the moment the issue of time dependence, is that for a Hermitian operator M the average of the 4-pt function is saturated by the two factorized contractions of pairs of M's, up to a factor A set by the normalization of the operator, where E(·) denotes the statistical average over the ensemble from which the operator is drawn. The ensembles in [35] are precisely of this type.
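As a toy illustration of this statistical class, the following sketch builds a small instance of the ensemble (1.1)-(1.2) numerically. The explicit normalization (chosen so that m_2 = 1) and the tiny system size are our own choices for the illustration; at such small n the sampled moments only roughly track the n → ∞ chord-diagram values discussed below.

```python
# Toy-scale sample of the almost-Gaussian spin ensemble: sigma_J operators are
# Kronecker products of Pauli matrices on p of n sites, with Gaussian couplings.
import itertools
import math
import numpy as np

PAULI = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]

def sigma_J(n, sites, paulis):
    """Kronecker product with PAULI[a] on the chosen sites, identity elsewhere."""
    factors = [np.eye(2, dtype=complex)] * n
    for i, a in zip(sites, paulis):
        factors[i] = PAULI[a]
    out = np.eye(1, dtype=complex)
    for f in factors:
        out = np.kron(out, f)
    return out

def sample_H(n, p, rng):
    """One draw of H = N^(-1/2) sum_J alpha_J sigma_J with N = 3^p * C(n, p)."""
    H = np.zeros((2 ** n, 2 ** n), dtype=complex)
    for sites in itertools.combinations(range(n), p):
        for paulis in itertools.product(range(3), repeat=p):
            H += rng.normal() * sigma_J(n, sites, paulis)
    return H / math.sqrt(3 ** p * math.comb(n, p))

n, p = 6, 2
H = sample_H(n, p, np.random.default_rng(0))
q = math.exp(-4.0 * p * p / (3.0 * n))
for L in (2, 4):
    mL = np.trace(np.linalg.matrix_power(H, L)).real / 2 ** n
    print(f"m_{L} (one sample) = {mL:.3f}")
print(f"chord-diagram values as n -> inf: m_2 = 1, m_4 = 2 + q = {2 + q:.3f}")
```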
For any operator of the form (1.1)-(1.2), the distribution of eigenvalues approaches the Gaussian one in the limit λ → 0, so all operators with p/√n ≪ 1 will be approximately Gaussian. If the Hamiltonian has a specific (small) λ, then operators for all other values of (small) λ are in a qualitatively similar statistical class and approximately factorize. We will use them as our probes.

Setup of the model and summary of known results

The model discussed in [35] is defined in equations (1.1) to (1.4). One of the main results of that paper is the asymptotic distribution of eigenvalues in the limit

n \to \infty, \quad \lambda \ \text{fixed}. \qquad (2.5)

The resulting density v(E|q), given in (2.6), is supported on the interval |E| ≤ 2/\sqrt{1 - e^{-\lambda}} and vanishes outside this region. The proof proceeds by computing the moments in the following steps:

1) For the first step one needs to define chord diagrams. Consider L = 2n dots on a circle; a chord diagram is a pairing of these dots into n pairs. We draw a line connecting each pair of dots, i.e., a total of n lines. Denote a specific chord diagram by π. We then denote by k(π) the number of crossings of lines (when we draw the diagram such that each pair of lines intersects at most once). An example of a chord diagram is shown in Figure 1, with n = 8 and number of crossings k = 2. In the first step one shows that

m_L = \sum_{\pi} q^{k(\pi)}, \qquad (2.8)

where the sum is over all chord diagrams. The expression on the RHS is called the chord partition function, and q = e^{-λ} was defined before in (1.3) in terms of the parameters of the spin glass. For example, the contribution to the sum by the chord diagram shown in Figure 1 would be e^{-2λ}. Chord diagrams were also used in [23] for computing 1/N corrections in the SYK model. In section 3.1 we review this step of the proof in more detail, since that part of the proof will not change. Furthermore, we will also need to slightly tamper with it when computing the 2-pt function.

2) In step 2, one uses the Riordan and Touchard formulae [42,43] and the results of [44] to show that the moments (2.8) are the moments of the distribution (2.6), and further to give the explicit formula (2.9) for the moments.

Summary of new results

In this paper we discuss a new proof for the value of m_L and the energy eigenvalue distribution of the spin glass. We use this to compute the exact two-point function for random operators, in the limit λ fixed, n → ∞. Denoting the new random operator by M, it has the form (1.1)-(1.2) (with new randomly chosen coefficients, uncorrelated with those of the Hamiltonian, as mentioned at the beginning of section 2) but with a new length parameter p′ ∝ √n. More precisely, we show that the 2-pt function is given in closed form by (2.10), in terms of the additional parameter

\tilde{q} = e^{-\tilde{\lambda}} \quad \text{with} \quad \tilde{\lambda} = \frac{4}{3} \frac{p \, p'}{n},

and of the q-Pochhammer symbols (a, q)_∞ (see (A.2)). To prove this one evaluates the ensemble-averaged 2-pt function (2.11). We show that the relevant object computing this two-point function is a chord diagram in which one of the lines is marked, and intersections with this chord are assigned a different weight. An example of a marked chord diagram is given in Figure 2. More precisely:

• Given 2n+2 points on a circle, two specific points are connected. This is the "marked" chord. The thick line in Figure 2 connecting the red dots represents the marked chord.

• Between the special points at the ends of the marked chord there are k_1 regular points on one side, and k_2 regular points on the other side (k_1 + k_2 = 2n).

• These remaining 2n points are paired. These will be called "regular" chords.

• Intersections between regular chords are assigned weight q, and intersections between a regular chord and the marked chord are assigned weight \tilde{q}.
• The marked chord partition function is defined as a sum over pairings of the 2n regular points, with k_1 and k_2 fixed and with the weights as above, i.e., m̃_{k_1,k_2} = Σ_π q^{k_regular(π)} q̃^{k_marked(π)}, where k_regular (k_marked) is the number of regular-regular (regular-marked) intersections. For example, the 1-marked chord diagram in Figure 2 contributes q q̃ to m̃_{1,5}. • Similarly to [35], we show that the two point function reduces to this marked chord partition function, and we evaluate the right hand side to obtain (2.10). The evaluation of the various chord partition functions in this work relies on an auxiliary Hilbert space where there is a natural Hamiltonian whose action is equivalent, in a sense that will be made precise below, to that of the full Hamiltonian acting on the spin glass Hilbert space. We interpret this auxiliary structure as the bulk dual to the spin glass. Furthermore, we suggest how it reduces to the Schwarzian action in its Liouville form at low energies. A new derivation of the eigenvalue distribution Given a random Hamiltonian as in (1.1), the authors in [35] compute the asymptotic distribution of eigenvalues in the λ-limit (1.4) by evaluating the moments (3.1) and by finding the unique distribution compatible with them; E stands for an ensemble average. In section 3.1 we review how [35] reduces the moments (3.1) to evaluating the chord partition function (2.8). [35] then uses the results of [44], and also the formulae of Touchard and Riordan [42,43], to show that the moments (2.8) arise from the distribution given in (2.6), and to give the explicit formulae (2.9) for the moments. Our proof, in section 3.2, replaces this second step, and also generalizes it to other chord partition functions, such as the one that will appear in the exact 2-pt function in section 6. From spin glasses to chord diagrams Given the Hamiltonian (1.2), the computation of the moments proceeds by evaluating the ensemble average over the α_J's first. Any non-vanishing contribution requires at least two insertions of each α_J. Moreover, Lemma (4) in [35] shows that the dominant contribution, in the λ-scaling limit (1.4), is the one in which J_1, ..., J_L appear exactly in pairs, with higher multiplicities being subleading in the large n limit. This gives us the basic structure of chord diagrams, where a chord pairing is defined by having the same J on two different nodes, as in Figure 1. Summing over all the relevant values of J amounts to summing over all chord diagrams (i.e., all possible pairings of the J's), and then summing over all values of J (i.e., both the site and Pauli indices) for each chord. Given a chord diagram we therefore need to evaluate the weight associated with it, namely the ensemble-averaged trace of the corresponding product of Pauli strings, in which there are only L/2 independent J's and the pairing is determined by the chord diagram. The obstruction to immediate evaluation is that σ^a_i with the same site index i can appear in different J's. However, [35] shows that with probability 1, in the λ-scaling limit, each site can appear in at most two of the chords, enabling the evaluation of the weight. More precisely, define the intersection of J's by the intersection of their site indices, i.e., (3.4). [35] shows that, for given J_i and J_j, the size of the overlap is Poisson distributed, and that there are, with probability 1, no triple intersections, i.e., we can assume (3.5). This statement is summarized in lemma (9) there, and in the subsequent discussion. Given two sets a and b of integers drawn out of the set {1, 2, ..., n} (without repetition in each set), we can think of the overlap as built up in |a| independent trials, in each of which the overlap between the sets increases by 1 with probability |b|/n (in the limit n → ∞).
The overlap size is therefore Poisson distributed, with mean 3λ/4 = |a||b|/n. Recall that we scale the sizes of the sets with √n, so this mean remains finite in the limit n → ∞. The average size of an overlap with an additional index set, say c, is the latter times |c|/n → 0, so with probability 1 triple overlaps are empty. The interplay of chord intersections and overlaps of the index sets is the key to evaluating the weight of each chord diagram. As we sum over the J's of the chords, two non-intersecting chords contribute a trace of the schematic form 2^{-n} Tr(σ_{J_1} σ_{J_1} σ_{J_2} σ_{J_2}) = 1, whereas two intersecting chords give a factor proportional to 2^{-n} Tr(σ_{J_1} σ_{J_2} σ_{J_1} σ_{J_2}). If there is a non-trivial overlap, J_1 ∩ J_2 ≠ ∅, these factors differ. So there is some "penalty" that we pay for each intersection. More precisely, given a chord diagram (i.e., a pairing π), recall that k(π) is the number of pairwise chord intersections. Each chord intersection has a Poisson distributed overlap of sites, and each overlap is independent of the overlaps of the other chord intersections. Each overlapping site (for a given intersection) comes with a factor of −1/3 relative to the value 1 obtained for the ordering (aabb), which originates from an overlap in a pair of non-intersecting chords: averaging over the Pauli labels at a shared site, the two single-site Pauli matrices commute with probability 1/3 and anticommute with probability 2/3. Therefore, the size m of each overlap is Poisson distributed with expectation value 3λ/4 and comes with a weight (−1/3)^m. The expectation value of the weight for each chord intersection is therefore e^{-λ}, and the total weight associated with each chord diagram is e^{-λ k(π)}. Hence, one finally obtains (2.8). Evaluation of the chord partition function In this subsection we provide an alternative derivation of the chord partition function, reproducing the expression for v(E|q) in (2.6). The proof is rather compact, generalizes to more complicated chord partition functions, such as the ones discussed in section 6, and suggests a bulk interpretation that we develop in section 5. The evaluation of (2.8) is based on a "hopscotch" recursion relation satisfied by a partial, or open, chord partition function, defined as follows: • 2^{-n} Tr(H^L) involves L points in a chord diagram, as indicated in Figure 1. Choose one point, i.e., choose one of the H factors, to be the first, and begin moving clockwise in the chord diagram. Each time one reaches a new point and hops over it, we shall refer to it as "a step". As we go along, denote the number of such steps by i, i.e., the number of H factors that were hopped over. In step 1 we hop over the factor of H that we chose to be the first. The partial chord partition function v^{(i)}_l is the sum, over all partial chord diagrams of the first i points with l chords left open, of q^{k_p(π)}, where k_p(π) refers to the number of chord intersections to the left, in the past of our "hopscotch" process. It is convenient to think of v^{(i)} as a column vector with l as its index. Given this set-up, one can write down a recursion relation for v^{(i)}. At each step, one can either close a chord (as in Figure 3) or start a new one (as in Figure 4) at the point one is hopping over. If one starts a new line, l changes to l + 1 and the new line enters at the bottom. If one closes a line, it can be any of the l open chords, with height between 1 and l. If one closes the line at height p, it crosses (p − 1) lines on its way down. This crossing generates a weight q^{p−1} when evaluating its contribution to the partial chord partition function. Altogether, the vector of such partition functions satisfies the recursion relation (3.10), v^{(i+1)}_l = v^{(i)}_{l−1} + η_l v^{(i)}_{l+1}, where η_l ≡ 1 + q + . . . + q^l = (1 − q^{l+1})/(1 − q) (3.12), with initial condition v^{(0)}_l = δ_{l,0}.
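Before the recursion is recast as a transfer matrix below, here is a small numerical sketch (ours, not from the paper) that computes the chord partition function two ways: by brute-force enumeration of pairings with weight q^{k(π)}, and by iterating the hopscotch recursion just described. The closing weights η_l = (1 − q^{l+1})/(1 − q) are our reading of the construction; agreement of the two methods (e.g. m_4 = 2 + q, quoted later in the text) is the check.

```python
from itertools import combinations

def pairings(pts):
    """Yield all perfect matchings of a list of points."""
    if not pts:
        yield []
        return
    a, rest = pts[0], pts[1:]
    for i, b in enumerate(rest):
        for sub in pairings(rest[:i] + rest[i + 1:]):
            yield [(a, b)] + sub

def crossings(matching):
    """Number of crossing chord pairs for points labeled in circular order."""
    c = 0
    for (a, b), (x, y) in combinations(matching, 2):
        a, b = sorted((a, b)); x, y = sorted((x, y))
        # two chords cross iff exactly one endpoint of one lies inside the other
        if (a < x < b) != (a < y < b):
            c += 1
    return c

def m_bruteforce(L, q):
    return sum(q ** crossings(m) for m in pairings(list(range(L))))

def m_hopscotch(L, q):
    v = [0.0] * (L + 2)          # pad one slot so v[l+1] is always valid
    v[0] = 1.0                   # initial condition v^(0)_l = delta_{l,0}
    for _ in range(L):
        new = [0.0] * (L + 2)
        for l in range(L + 1):
            eta_l = (1 - q ** (l + 1)) / (1 - q)
            new[l] = (v[l - 1] if l > 0 else 0.0) + eta_l * v[l + 1]
        v = new
    return v[0]

q = 0.3
for L in (2, 4, 6):
    print(L, m_bruteforce(L, q), m_hopscotch(L, q))
# expected: m_2 = 1, m_4 = 2 + q, and a degree-3 polynomial in q for m_6
```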
The latter can be rewritten in terms of an (L+1)×(L+1) transfer matrix T^(L) propagating the partial chord partition function forward, with matrix elements (indices running from 0 to L, with l_1 (l_2) the row (column) index) T^(L)_{l_1 l_2} = δ_{l_1, l_2+1} + η_{l_1} δ_{l_1+1, l_2} (3.13), i.e., a matrix with 1's and η_l's on the diagonals below and above the main diagonal, respectively. To compute the chord partition function (2.8), define the vector |0⟩_L = (1, 0, . . . , 0)^T of length L + 1; then m_L = ⟨0|(T^(L))^L|0⟩_L. The initial condition v^{(0)}_l = δ_{l,0} dictates the use of the initial state |0⟩_L. Ensuring that our procedure counts only chord diagrams that close by the time we reach the L-th point, such that we are computing the usual chord partition function in which all lines are paired, determines the final state. Notice that we are computing the trace of H^L in the original 2^n dimensional Hilbert space using an auxiliary space based on partial chord diagrams. We shall develop a "bulk" interpretation of the latter in section 5. Next, given some fixed L, one can always consider a larger, L′-sized Hilbert space. This allows us to take L′ → ∞, keeping L fixed. In this infinite dimensional Hilbert space one can define the semi-infinite tridiagonal matrix T; it is the infinite dimensional extension of (3.13). This provides an auxiliary Hilbert space and a single matrix T in which one can evaluate all traces as m_L = ⟨0|T^L|0⟩ (3.18). The problem of computing the moments (2.8) thus reduces to the problem of computing the eigenvalues α of the operator T and expanding the vector |0⟩ in terms of the eigenvectors |α⟩, in an expression of the form m_L = ∫_{Spec(T)} dα ρ(α) |ψ_0(α)|² α^L (3.19), where Spec(T) is the set of eigenvalues, ρ(α) is its density, and ψ_0(α) ≡ ⟨0|α⟩ is the overlap of |0⟩ with the eigenvector |α⟩ of T. Fortunately, Spec(T) and the density are very easy to compute, and the overlap is given by specific q-Hermite polynomials, as we will see below. In the notation of the spin glass model, comparing the L dependence in the original moment (3.1) with α^L in equation (3.19) suggests the identification α = E, where E is the energy of the system, properly interpreted. The asymptotic distribution of the energies should then be identified as v(E|q) = ρ(E)|ψ_0(E)|² (3.21). A short example. It is worthwhile carrying out the procedure above in an explicit, low-L case and comparing the result with (2.9). For example, m_4(q) = 2 + q, which can also be obtained from the three chord diagrams in Figure 5. In our approach we start with v^{(0)}, act on it 4 times with T^(4) (or T), and project back onto v^{(0)}. Keeping track of the chord histories gives the corresponding partial chord partition functions. The symmetric form of the transfer matrix T. The matrix T in (3.13) is not Hermitian, but one can conjugate it to a symmetric version by defining a new matrix T̃ = P T P^{-1}, where P is a diagonal matrix with entries (P_0, P_1, P_2, . . .) satisfying a relation expressed through the q-Pochhammer symbol (a; q)_l (see (A.1)). T̃ has matrix elements √η_l on the two diagonals adjacent to the main one (3.25). Thus it is manifestly symmetric, and it has the same moments (3.18), since ⟨0|T̃^L|0⟩ = ⟨0|P T^L P^{-1}|0⟩ = ⟨0|T^L|0⟩ (P|0⟩ is proportional to |0⟩). We will switch between the two transfer matrix descriptions depending on which is more convenient at each stage. The spectrum of T Obtaining the spectrum of T is straightforward. The matrix T asymptotes, down the diagonal, to a matrix with 1 on the diagonal below the main one and η_∞ = 1/(1 − q) on the diagonal above it. We can think of the eigenvalue problem of T as a scattering problem, with the distance along the diagonal playing the role of position. In this interpretation, infinity is captured by the asymptotic form of the operator T far down the diagonal.
Hence, this is a scattering problem on the half line, with δT acting as a scatterer close to the origin. Indeed, up to an overall rescaling, by conjugating the matrix T_asymp and adding the identity matrix with an appropriate weight, we can bring it to the form with −2 on the main diagonal and 1 on the diagonals below and above it. It is then an approximation to the second derivative operator, making the asymptotic behaviour more familiar in the continuum limit. This interpretation is elaborated in section 5, where the connection between this eigenvalue problem and the Liouville equation is described. However, as far as the spectrum and its density are concerned, the details of the scatterer are not important, as both can be read off from the behaviour at infinity (this also assumes that there are no bound states near the origin; for the wave functions, or form factors, we will be more specific below). So the spectrum of T is the same as that of T_asymp, which is a Toeplitz tridiagonal matrix (i.e., with constant elements one diagonal above and below the main diagonal [45]), for which there is a simple formula for the eigenvalues; in this case it gives E(θ) = 2 cos(θ)/√(1 − q), with θ = sπ/(n + 1), as was found in [35]. This formula gives us both the spectrum of T and the density of states on it. In the limit n → ∞, θ covers the interval [0, π] with uniform density, i.e., inserting a complete set of energy eigenstates is simply done by the replacement (3.31). Eigensystem of the T matrix The previous asymptotic discussion suggests parametrising the eigenvalues of the matrix T as E(µ) ≡ 2µ/√(1 − q). Let v^(µ) be the corresponding eigenvector (it is proportional to the ψ(α) of section 3, although for now the normalization is different). This allows us to write (3.11), together with the recursion relation (3.10), as a three-term recurrence for the components of v^(µ). Just for this subsection, we will allow l = −1 and define v^(µ)_{−1} = 0. Using the full range of θ is dictated by the discussion of the spectrum in section 3.2.1. At this point it is more useful to switch to T̃, since we need to conjugate the form factor. The eigenvectors of the symmetric transfer matrix T̃ are just P v^(µ). In components, the eigenvectors are given by q-Hermite polynomials, with a normalization N(µ, q) fixed by the requirement that the states are delta function normalized in θ, which gives (see Appendix B) N(µ, q) = √((q; q)_∞) |(e^{2iθ}; q)_∞| / √(2π). With this one can easily write down the matrix element (recall that the density of states ρ(θ) is uniform). The moments of the distribution, eq. (3.19), can then be computed to be as in (3.38), where we have defined the distribution Ψ(θ, q) in (3.39). Below we show that this is the same distribution as the one given in (2.6). Matching to the result in [35]. Recall that [35] obtained the moments m_L(q) as the moments of the distribution v(E|q) given in (2.6), i.e., as in (3.40). Switching to angular variables, followed by a change of variables, one obtains that (3.40) matches our result (3.38). The q → 1 limit of the distribution Our interest in approximately factorized correlators suggests working in the regime λ → 0, or q → 1. The analysis is most easily done by arranging the Pochhammer symbols into Jacobi Theta functions and performing modular transformations. The results in this section are similar to those in Appendix B of [18] and in [24], after a suitable substitution that takes us between our model and the SYK model, in the same scaling as above. To study the λ → 0 limit of the distribution Ψ(θ, q) (3.39), it is convenient to rewrite it in terms of Jacobi Theta functions.
Using (A.4) and the modular transformation (A.5), we rewrite it as (4.2), and the λ → 0 limit becomes (4.3). The last equality follows since the exponential overcomes the hyperbolic cosine factor for m ≥ 1, given that θ ≤ π. Plugging the above λ → 0 expansion into (4.2), one gets (4.4), where in the last step we used θ ≥ 0. This determines the dominant contribution to the distribution (3.39) to be (4.5). Notice that this function is symmetric under E → −E, and vanishes at the edges E_max = −E_min = 2/√λ, which correspond to θ = 0 and θ = π, respectively, since E = 2 cos(θ)/√(1 − q). The distribution (4.5) has several interesting regimes: • The λ → 0 with E fixed regime. As highlighted in [35], pointwise this is the Gaussian limit of the distribution (4.5). • The other interesting behaviour is close to the edges. Setting ϕ = π − θ, we begin with the limit ϕ = π − θ ∝ λ (4.7) (where by this we also include ϕ/λ ≫ 1 fixed, λ → 0). In this case the distribution becomes (4.8). The quadratic term in the exponential can also be neglected in this regime, giving rise to the density of states of the Schwarzian theory (∝ sinh(2π √((E − E_min)/λ^{3/2}))) after recalling the behaviour of E near the edge. • Note that we can actually neglect the quadratic term in the exponential already at λ ≪ ϕ ≪ √λ, and thereby extend the Schwarzian regime. This points to another simplification of the spectrum, which actually covers the bulk of the spectrum at λ ≪ ϕ ≪ π − λ. In this range we can also expand the second sinh, and obtain that the distribution is just a Gaussian in (ϕ − π/2). The center of this range includes the Gaussian-in-energy distribution, and its edges overlap with the Schwarzian distribution. It would be interesting to find a symmetry argument for this entire range. The canonical ensemble in the q → 1 limit Similarly, one can analyze the canonical partition function in the limit q → 1, using the variable ϕ ≡ π − θ as before. We could treat most of the spectrum using the discussion in the third bullet above, leaving out only a very low temperature regime where ϕ ∼ λ. We will prefer, however, to split the discussion according to the first two bullets, i.e., into a high temperature phase and a Schwarzian phase, the latter splitting further into a low temperature and a very low temperature phase. Both of these are obtained from the Schwarzian density of states and go smoothly into each other; we make this division mainly for the sake of the discussion of the 2-pt function in section 6, for which the difference between these regimes is more meaningful. • High temperature phase (β ≪ λ^{-1/2}): localizing in the region |ϕ − π/2| ≪ 1 reduces the partition function to a Gaussian around E = 0. It is clear that the Gaussian cuts off the integral if ϕ deviates from π/2. To evaluate Z(β), we set x ≡ π/2 − ϕ. The limit above translates to x ≪ 1, in which case we approximate the cosine by x, to obtain Z(β) ∼ e^{β²/2}; the integral is supported at x = β√λ/2, with a width of order √λ. • Low temperature phase (λ^{-3/2} ≫ β ≫ λ^{-1/2}): generically one expects ϕ ≪ 1, but the thermodynamic behaviour of the system is sensitive to how small ϕ is compared to λ, due to the argument of the sinh factor in (4.8). Consider the regime λ ≪ ϕ ≪ 1, where the distribution is approximated by (4.12). Expanding the Boltzmann factor (the term −2ϕ²/λ in the exponent is negligible compared to −βϕ²/√λ, due to β√λ ≫ 1), the partition function reduces to (4.13). Notice that the integral is mainly supported near ϕ = π/(β√λ), with a width of λ^{1/4}/√β. Consistency with the assumption λ ≪ ϕ ≪ 1 requires β ≪ λ^{-3/2}.
This regime, along with the next one, is part of the conformal low energy limit of the theory. • Very low temperature phase (β ≫ λ^{-3/2}): consider the regime ϕ ≪ λ. After linearising both the sin and the sinh factors, the distribution (4.8) simplifies to (4.14). The Boltzmann factor in the partition function cuts off the integral around ϕ ∼ λ^{1/4}/√β. This is consistent with our regime ϕ ≪ λ, since β ≫ λ^{-3/2}. The partition function can then be evaluated as in (4.15), where, in evaluating the integral, we have replaced the upper limit by ∞. Relation to previous work. The low energy behaviour identified in (4.8) is the one discussed in Appendix B of [18] and in [24]. To make the comparison with [18] easier, notice that the density of states (3.41) can be written as in (4.16), where µ = cos θ and N is a normalization factor. This matches equation (81) in [18] upon identifying their parameters a, λ_s, J as a = µ, J = √λ e^{λ/8}, λ_s = λ/2 (4.17), whereas both energies are the same (our normalizations differ from [18] in that their distribution ρ_s(E) integrates to 2^{N/2}, whereas our v(E|q) integrates to 1). The second identification follows from the observation that our variances equal unity, as in [35], our normalisation being Tr H²/Tr I = 1 (see equation (80) in [18]). The third identification is due to the Majorana nature of the fermions in the SYK model. Thus the density of states in [35] is exactly the same as that of the double-scaled SYK model, up to these identifications. The further triple-scaled limit, isolating the Schwarzian action in SYK, corresponds to the low energy behaviour captured by the density of states (4.8) in our set-up. Bulk reconstruction In section 3 we presented a new derivation of the density of energies v(E|q) in the λ-scaling limit (1.4), keeping β finite: E[2^{-n} Tr e^{-βH}] = ∫ dE v(E|q) e^{-βE} (5.1), where E() on the left hand side is the average over the ensemble of Hamiltonians. The range of integration on the right hand side is the spectrum of the random Hamiltonian, and its "randomness" now hides in the 1/n corrections, which are neglected in this limit. Loosely, one can hope that for a specific realization of the Hamiltonian H one can drop the ensemble average and write 2^{-n} Tr e^{-βH} = ∫ dE v(E|q) e^{-βE} (5.2), with probability 1 (or 1 − O(1/n)) on the space of random Hamiltonians, in the large n limit. In this case one is dealing with a specific Hamiltonian on the left hand side. This single Hamiltonian realization corresponds to the boundary field theory Hamiltonian in the AdS/CFT correspondence, written in terms of the fundamental field theory objects, in our case the spin operators. The operator is random, and only in the n → ∞ limit does its spectrum converge to anything universal. In this section, we suggest that the operator T (or T̃) is the bulk Hamiltonian, i.e., the analogue of the bulk Hamiltonian for the near-AdS background, whose low energy limit is given by the Schwarzian action, but extended to the full model. Recall that the parameter E appearing on the right hand side of (5.1) and (5.2) can be reinterpreted as the energy of the field theory Hamiltonian, but it is also the eigenvalue of the operator T (or T̃), which acts on the (altogether different) Hilbert space of weights of open chord lines. Whereas the spectrum of H changes from realization to realization, the matrix T̃ is fixed. There is no contradiction, since we work in the n → ∞, fixed λ limit, where the spectrum of H is universal.
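As a cross-check of this identification, the following sketch (ours) compares the two sides numerically: moments and the partition function computed with a truncated symmetric transfer matrix T̃ (off-diagonals √η_l, as read from (3.25)), against integrals over the spectral data E(θ) = 2 cos θ/√(1 − q) with the measure Ψ(θ, q) = (q; q)_∞ |(e^{2iθ}; q)_∞|²/(2π), our reading of (3.39); the q-Pochhammer products are truncated numerically.

```python
import numpy as np

q, K = 0.5, 200          # q = e^{-lambda}; K = truncation of the chord Hilbert space

# Symmetric transfer matrix: off-diagonals sqrt(eta_l), eta_l = (1 - q^{l+1})/(1 - q)
off = np.sqrt((1 - q ** np.arange(1, K)) / (1 - q))
T = np.diag(off, 1) + np.diag(off, -1)
evals, evecs = np.linalg.eigh(T)
w = evecs[0, :] ** 2                      # |<0|alpha>|^2, the form factors

def pochhammer(a, q, nmax=200):
    # (a; q)_infinity, truncated at nmax factors
    ks = q ** np.arange(nmax)
    return np.prod(1 - np.multiply.outer(a, ks), axis=-1)

theta = np.linspace(1e-6, np.pi - 1e-6, 4001)
dth = theta[1] - theta[0]
psi = (pochhammer(np.array([q]), q)[0]
       * np.abs(pochhammer(np.exp(2j * theta), q)) ** 2 / (2 * np.pi))
E = 2 * np.cos(theta) / np.sqrt(1 - q)

for L in (0, 2, 4, 6):
    m_matrix = np.sum(w * evals ** L)              # <0| T~^L |0>
    m_spectral = np.sum(psi * E ** L) * dth        # integral over theta
    print(L, m_matrix, m_spectral)                 # m_4 should equal 2 + q

beta = 1.0
print(np.sum(w * np.exp(-beta * evals)),           # <0| e^{-beta T~} |0>
      np.sum(psi * np.exp(-beta * E)) * dth)       # spectral-side Z(beta)
```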
Furthermore, the operatorT can be used as the Hamiltonian not only for the partition function, but for a much broader set of computations. It should be clear that the insertion of any finite polynomial of H in expectation values involving density matrices of the form for any analytic weight function f (E), can be turned into the insertion of the same polynomial withT as its argument, following the procedure described in section 3. In other words, the insertion of e −itH in expectation values involving (5.3) can be exactly replaced by e −itT , while the density matrix itself is mapped into the density matrix (as an operator in the Hilbert space defined on the chord diagram side) Having two different Hamiltonians, acting on different Hilbert spaces but propagating the system in exactly the same way, supports the dual interpretation we suggest forT . This means that we can access a large set of weights on the energy eigenstates as long as the function is smoother than the energy spacing (actually smoother than 1 n for the entire energy band). This is not in contradiction with what we know about the bulk Hamiltonian (anything which extends the low energy effective action), since it is not clear that it should be able to capture states whose support on close by energy states is rapidly varying 12 . Phrased differently we regard E, when used as the eigenvalue ofT , as a parameter which scans over the allowed energy range only after taking the limit n → ∞. It is not the discrete spectrum of energies of the finite n system. It should be viewed as a coarse grained version of the latter, very much like the energy measured in gravity is a coarse grained version of the discrete set of energies of the field theory (when defined on a compact space). Going from the eigenvalues ofT to the eigenvalues of H at finite n is an interesting problem, and it is similar to seeing -in General Relativity -the discreteness in energies of a black hole. The above discussion, together with the behaviour of the partition function in the low temperature regime, suggests the low energy physics for q → 1 should be governed by the Schwarzian action (in the gravity dual), as in the SYK model. In the following, we derive this connection by matching the continuum limit of the equation determining the spectrum ofT with Liouville quantum mechanics 13 , which can be written as the Schwarzian action, as discussed in [46,47] 14 . 12 Unless, for example, one believes in the microstate program in its strongest form where one can choose a specific energy eigenstate in the most extreme case. 13 We would like to thank D. Bagrets for a discussion of this point. 14 See [48] for a 2d CFT perspective on this matter. To take the continuum limit, it is convenient to define the matrixT ≡ ST S −1 where S is a diagonal matrix with entries S ii = (−1) i . Notice that solving for the eigenvalues of theT matrix still resembles a scattering problem on the half line, with the index i of the vector measuring the distance from the origin, just like it did for theT , T matrices. The asymptotic form of theT matrix is In the continuum limit, the above matrix includes the second derivative operator. To make this more precise, define φ = log(q)i (5.6) Using the form of theT operator in (3.25), its continuum limit equals Notice the potential term comes from the expansion 1−q i+1 (3.12), which is accurate since i is large and q → 1, from below. 
The eigenvalue problem then reduces to a quantum mechanical eigenvalue problem of Liouville type. This is equivalent to the Liouville form of the Schwarzian action in equation (32) of [47], after a constant shift of φ. In [47], M was the scale M = N log N/(64J√π) (for the SYK model with quartic interactions). For us it is set by |log(q)|^{-2} ∼ λ^{-2}. The prescription in [46,47] (and in [48] for the 2D case) requires that, in the path integral, we sum over trajectories that begin and end in the strong coupling region φ → ∞. This is in qualitative agreement with our prescription, since we place the state v_0 as the initial and final state. Recall that v_0 = (1, 0, 0, 0, . . .), i.e., only the i = 0 term is turned on, which is where the term q^i is the largest. In terms of φ, this is where e^φ is largest, which is indeed the analogue of the Liouville strong coupling region. The models are of course not exactly the same, since the model in [46] captures only the low energy physics, while the T̃ matrix captures the full dynamics. This also gives an interpretation of the index i via its relation to φ: φ(t) measures where the AdS_2 space is glued to whatever non-universal UV we have (the leading effect being the Schwarzian action), i.e., φ(t) parametrizes the length of the AdS_2 throat. We see that in the full model the size of AdS_2 is actually quantized, giving rise to a minimal size AdS, which corresponds to the state v_0. It is worth reiterating that the density of states of the Hamiltonian H is different from the density of states of the matrix T̃, even in the large n limit. Rather, the density of states of the former is related to that of the latter by equation (3.21), which means that we have to use specific initial and final states for T̃ in order to compute the partition function. It is tempting to interpret this in Minkowski space as a computation with initial and final states at the past and future singularities of the black hole. 6 The two point function The exact 2-pt function As explained in section 2, we want to compute correlators of random operators M taken from the same universality class as the Hamiltonian (1.2). Hence, these are defined by (6.1), where J is now a string of p_m distinct sites and Pauli matrices. The sum runs over all such possible J's, and the m_J are independent Gaussian variables with zero mean and unit standard deviation (in particular they are also independent of the coefficients α_J in H). There are two relevant parameters that we will keep fixed in the limit n → ∞. The first is the analogue of λ (see (1.3)) for the random operator (6.1); the second is λ̃ ∝ pp′/n, which controls the crossings of the M chord with the H chords. The formalism developed below, based on the set-up in section 3, proceeds by evaluating, and then resumming, the term-by-term expansion of the two point function in powers of H, for any value of β and t. This formalism can be extended to compute any n-pt function [49]. The strategy is to reduce the computation to a relevant chord partition function, and then to evaluate it. The identification of the relevant partition function follows the discussion in section 3. The Gaussian integration over the random coefficients of the operators still pairs them; hence one can still think in terms of chord diagrams. The only difference is that the Gaussian integral over the m_J's pairs the two M insertions, whereas the integral over the α_J's pairs the H insertions. To evaluate the marked chord diagram, we need to modify the "hopscotch" procedure described in section 3 by inserting the matrix W(q̃) = Diag(1, q̃, q̃², . . .) (6.7) at the M insertions; it encodes the intersection of an H-line with an M-line, when the former hops over the latter.
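To illustrate the modified procedure, here is a small enumeration sketch (ours): it computes the marked chord partition function m̃_{k1,k2} by brute force, and compares it with the contraction ⟨0| T^{k2} W(q̃) T^{k1} |0⟩, where T is the transfer matrix of section 3 and W is inserted when hopping over the second endpoint of the marked chord. The placement conventions are our reading of the construction; the agreement of the two computations is the check.

```python
from itertools import combinations
import numpy as np

def pairings(pts):
    if not pts:
        yield []
        return
    a, rest = pts[0], pts[1:]
    for i, b in enumerate(rest):
        for sub in pairings(rest[:i] + rest[i + 1:]):
            yield [(a, b)] + sub

def cross(c1, c2):
    (a, b), (x, y) = sorted(c1), sorted(c2)
    return (a < x < b) != (a < y < b)

def marked_bruteforce(k1, k2, q, qt):
    marked = (0, k1 + 1)                 # marked chord endpoints on the circle
    pts = [p for p in range(k1 + k2 + 2) if p not in marked]
    return sum(
        q ** sum(cross(c1, c2) for c1, c2 in combinations(m, 2))
        * qt ** sum(cross(c, marked) for c in m)
        for m in pairings(pts))

def marked_transfer(k1, k2, q, qt):
    K = k1 + k2 + 2                      # more open-chord slots than ever needed
    eta = [(1 - q ** (l + 1)) / (1 - q) for l in range(K - 1)]
    T = np.diag(np.ones(K - 1), -1) + np.diag(eta, 1)
    W = np.diag(qt ** np.arange(K))      # weight q~ per chord open at the mark
    P = np.linalg.matrix_power
    return (P(T, k2) @ W @ P(T, k1))[0, 0]

q, qt = 0.4, 0.7
for k1, k2 in [(1, 5), (3, 3), (2, 4)]:
    print(k1, k2, marked_bruteforce(k1, k2, q, qt), marked_transfer(k1, k2, q, qt))
# at qt = 1 the mark is free and one recovers the ordinary moment m_{k1+k2}(q)
```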
In the next subsection we will evaluate this expression for a special case of q andq. But before we do that, we will perform a quick check on our results above. A check. Before evaluating (6.11) for a special case of q andq, one can perform a check by taking theq → 1 limit. There should be no cost for the H lines crossing the M lines in this limit. Hence, it must be that To check (6.11) is compatible with this behaviour, notice that nearq → 1, (q 2 ; q) ∞ → 0, due to the first term in the product. Hence (6.11) vanishes, unless θ 1 → θ 2 , since an additional zero in the denominator occurs then 15 . Hence, the integrand in (6.11) behaves like a delta function whose strength is given by 15 Another zero may appear in the denominator when θ1 + θ2 = π but this appears in a co-dimension 2 in the range of integration. 6.2 The q → 1 limit withq = q m The exact 2-pt function of M (6.11) holds for all ranges of time (which are held fixed in the n → ∞ limit). In the remainder of this section we will compute the formula in a specific case, which is the low energy regime where conformal symmetry is expected to appear, as discussed in section 4 and in Appendix B of [18]. We will work in the limit q,q → 1 since we want to work in the limit in which the correlators of each operator separately approximately factorize. However, more and more terms contribute in the Pochammer symbols in this limit, similar to what we had for the partition function, and hence it is important how we take this limit. Since in gravity non-factorization of correlation functions for different operators is governed by the same parameters (e.g. the same 1/N), then the rates of q → 1 andq → 1 should be related. A particularly simple case to analyze isq = q m with m an integer. This has technical advantages, but it is also physically interesting because it corresponds to p m = m p . (6.15) That is, if the Hamiltonian is made out of a sum of strings of p spin operators (with random coefficients), then the random operator M is made out of a string of m · p spin operators. This is reminiscent of the statement that, say for 4D, N = 4 SYM, the Hamiltonian is a descendant of Tr(X 2 ), yet we can probe the system with low energy fields, which correspond to single trace operators of the form Tr(X n ), n > 2, and their conformal descendants. As discussed in [18] and section 4, our model has a conformal low energy limit. Hence, conformal symmetry should assign a dimension one to the Hamiltonian. If the fundamental fields (in this case the spin operators) can be assigned a specific conformal dimension, and if this conformal dimension is additive in composite operators -as in the SYK model on both counts -then one expects the conformal dimension of M to be m. We will see how our exact formula matches this, up to the existence of mixing with operators of lower dimension when we work at finite temperature. Despite this, our exact formula always has an overlap with an operator of the right dimension. Before doing the computation we would like to recall an additional formula to which we will compare our result. We will actually be computing the "two sided correlator" This computation is slightly easier than the ordinary thermal correlator. We refer to this as the two sided correlator since, in an eternal black hole in AdS, it is the relevant correlator when there is one operator on each of the boundaries. 
For a particle of mass M in the BTZ black hole this correlator is given by (6.17) (see for example [37], in the eikonal approximation, with a shock wave there), where l is the AdS_3 radius, M is the mass of the particle, and Ml is the conformal dimension of the associated operator. This is what is expected from conformal invariance. In the single sided correlator the cosh is replaced by a sinh, so as to obtain the expected short distance behaviour 1/t^{2Ml} and the correct Euclidean time periodicity. The reduced formula When q̃ = q^m, an algebraic identity allows us to write the 2-pt function (6.11) in a reduced form. Notice that the finite product can be rewritten, within the integral, as (6.20). Taking the derivatives outside of the integral allows us to write the integrand in terms of ϑ functions (see (A.4)) depending only on q, as in (6.21). The q → 1 limit of the Jacobi Theta functions is evaluated as in (4.4), bringing the 2-pt function to the form (6.22). This is the exact 2-pt function for q̃ = q^m in the limit λ → 0. In the next subsections we study the function I(β, t, q), from which all the m > 1 correlators can be extracted, in the low temperature and very low temperature regimes (or long time, and very long time, regimes). Low and very low temperature regimes Since the integral I(β, t, q) localizes near the edges at low energies, we define φ_i = π − θ_i and expand the integrand near φ_i ∼ 0, obtaining (6.23). As explained in section 4.1, the low energy (and very low energy) regime satisfies β√λ ≫ 1. To study the behaviour of the Gaussian factors in the above integral, it is convenient to rescale the integration variables, ϕ_i ≡ φ_i/λ, together with the time and temperature parameters, β̃ ≡ λ^{3/2}β and t̃ ≡ λ^{3/2}t. In the low energy regime we may then approximate (6.23) by (6.25). This integral has two regimes, following the analogous discussion for the partition function in section 4.1: • The low energy, long time regime, characterised by β̃, t̃ ≪ 1, where the integral receives contributions from the range ϕ_i ≫ 1. • The very low temperature regime, or very long time scale regime, characterised by β̃ ≫ 1 or t̃ ≫ 1, where the integral receives contributions primarily from ϕ_i ≪ 1. Low temperature regime The low energy, long time regime β̃, t̃ ≪ 1 allows us to extend the range of integration to ∞, since the Gaussian in the integrand cuts off the integral well before the limits in (6.25). Notice also that the integral is supported at large values of ϕ_1, ϕ_2, allowing us to approximate three of the sinh functions by their larger exponentials; we then change variables to ϕ = (ϕ_1 + ϕ_2)/2, σ = (ϕ_2 − ϕ_1)/2. Due to the 1/sinh(2πσ) term, the σ integral receives contributions from finite σ, whereas its limits of integration are ±ϕ, much larger quantities. This means we can trade the σ limits for ±∞. Furthermore, we can also neglect the e^{-β̃σ²} term, and the σ² term relative to ϕ² in the (ϕ² − σ²) factor. After these approximations, our integral simplifies considerably. Since β̃ ≪ 1 we can in any case neglect the ϕ_s dependence in the numerator. The integral shows different behaviours depending on the scaling of t̃: • When t̃/√β̃ ≪ 1, the ϕ_s dependence in the denominator can be neglected and, to leading order in β̃, the result is (6.30), where in the second step we shifted and rescaled the integration variable. • In the opposite regime, due to the large t̃/β̃, the result (6.31) differs from (6.30) by an additional factor of e^{t̃²/β̃}. Very low temperature When β̃ = βλ^{3/2} ≫ 1, the angles ϕ_1 and ϕ_2 are localized to a range much smaller than 1.
This allows us to expand the sinh functions in (6.25) to obtain (6.32), where we traded the upper limit for ∞. This may have the following interpretation. The quantity (6.32) equals a ratio of thermal expectation values, where E() is the statistical average and β_1, β_2 are related to β, t. This means that (6.34) holds, where E_i measures the energy of the state above the ground state. We can interpret this as if the operator M acts as an underlying Gaussian random matrix which couples to the low energy states with form factors ϕ², i.e., consider a set of random vectors v_α, where the sum runs up to some energy higher than the scale set by the very low temperature, and the c_{i,α} are independent complex Gaussian random variables with mean zero and unit standard deviation. Take M to be a random Gaussian Hermitian matrix in terms of these variables, M = Σ_{α,β} |v_α⟩ M̃_{α,β} ⟨v_β| (6.36), where the M̃_{α,β} are independent complex Gaussian variables with mean zero and standard deviation 1. In this case (6.34) is satisfied. 6.4 The final correlator The evaluation of the exact 2-pt function (6.22) requires computing the action of the operator D_m(t, β) in (6.20) on I(β, t, q), and interpreting the result. It is easy to read off the results without actually having to worry about the details of D_m(t, β). • The low temperature regime corresponds to the conformal regime, where the fluctuations of the pseudo-Goldstone modes are still small. Assigning the Hamiltonian H conformal dimension 1, one would expect the operator M, with p_m = mp, to have dimension m. This is exactly what happens in our formulas. For m = 1, the operator D_1(t, β) reduces to the identity. Hence our result (6.30) is the correlator of an operator of dimension 1, i.e., ∼ 1/cosh²(πt/β). For m > 1 there is operator mixing, but we can extract the operator content from the correlator as follows. To isolate the conformal dimensions of the participating operators, first insert the operators on the same side, or equivalently take t = iβ/2 + t′; this turns the cosh into a sinh. Second, take the limit t′ ≪ β. In this case the leading contribution in D_m(t, β) acts with ∂_t^{2m−2}, turning the correlator into 1/t′^{2m}, which is the 2-pt function of an operator of dimension m. • For the very low temperature/long time regime we can compare (6.32) with equation (67) in [47]. Although their discussion is for the SYK model with quartic interactions, it is within the Liouville description of the Schwarzian action. Since our spin glass model reproduces the latter in this very low temperature regime, both results should be similar. The finite temperature 2-pt function of a pair of Majorana fermions in the SYK model at long times/low temperatures (in the conventions used in [47]) equals G(τ) ∼ −M² β^{1/2} √J sgn(τ)/(τ^{3/2}(β − τ)^{3/2}), for τ ≫ M ≡ N log N/(64√π J) (6.37). For a 2-pt function of higher dimension operators, the time dependence (at long time and low temperature) comes with the same power, except that the power of M in the prefactor increases. To match with (6.32), one needs to work with Lorentzian time, τ = it, and to shift the imaginary part of the time axis by t → t − iβ/2. Altogether one obtains the final result, where we analytically continued back to Lorentzian time in the last step.
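Finally, a small sampling sketch (ours) closes the loop back to the microscopic model: it builds explicit random p-local Pauli-string Hamiltonians at very small n and compares the sampled moment Tr H⁴/2^n with the chord-diagram prediction 2 + q. The identification λ = 4p²/(3n) is our reading of the overlap discussion in section 3.1 (mean overlap 3λ/4 = p²/n), and at such small n the finite-size corrections are visible, so we also print the exact finite-n prediction obtained from the hypergeometric overlap distribution.

```python
import numpy as np
from itertools import combinations, product
from math import comb

rng = np.random.default_rng(1)
n, p, samples = 8, 2, 10
lam = 4 * p ** 2 / (3 * n)          # our reading: 3*lambda/4 = p^2/n
q = np.exp(-lam)

paulis = [np.array([[0, 1], [1, 0]], complex),
          np.array([[0, -1j], [1j, 0]]),
          np.array([[1, 0], [0, -1]], complex)]

def pauli_string(sites, labels):
    out = np.array([[1]], dtype=complex)
    for s in range(n):
        op = paulis[labels[sites.index(s)]] if s in sites else np.eye(2)
        out = np.kron(out, op)
    return out

terms = [(s, l) for s in combinations(range(n), p)
                for l in product(range(3), repeat=p)]
norm = 1 / np.sqrt(len(terms))      # fixes E[Tr H^2 / 2^n] = 1

m4 = []
for _ in range(samples):
    H = sum(rng.normal() * norm * pauli_string(s, l) for s, l in terms)
    H2 = H @ H
    m4.append((np.trace(H2 @ H2) / 2 ** n).real)

# exact finite-n average of the crossing weight: overlap is hypergeometric,
# each shared site contributes -1/3 on average over Pauli labels
exact = sum(comb(p, m) * comb(n - p, p - m) / comb(n, p) * (-1 / 3) ** m
            for m in range(p + 1))

print("sampled m4            =", np.mean(m4), "+-", np.std(m4) / np.sqrt(samples))
print("finite-n prediction   =", 2 + exact)
print("chord (n->inf) 2 + q  =", 2 + q)
```

The sampled value should match the finite-n prediction within errors, while the chord value 2 + q is approached only as p²/n → 0.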
Compilation of low-energy constraints on 4-fermion operators in the SMEFT We compile information from low-energy observables sensitive to flavor-conserving 4-fermion operators with two or four leptons. Our analysis includes data from e+e- colliders, neutrino scattering on electron or nucleon targets, atomic parity violation, parity-violating electron scattering, and the decay of pions, neutrons, nuclei and tau leptons. We recast these data as tree-level constraints on 4-fermion operators in the Standard Model Effective Field Theory (SMEFT) where the SM Lagrangian is extended by dimension-6 operators. We allow all independent dimension-6 operators to be simultaneously present with an arbitrary flavor structure. The results are presented as a multi-dimensional likelihood function in the space of dimension-6 Wilson coefficients, which retains information about the correlations. In this form, the results can be readily used to place limits on masses and couplings in a large class of new physics theories. Introduction The ongoing exploration of the high-energy frontier at the LHC strongly suggests that the only fundamental degrees of freedom at the weak scale are the Standard Model (SM) ones. Moreover, their perturbative interactions are well described by the most general renormalizable SM Lagrangian invariant under the SU(3) × SU(2) × U(1) local symmetry. A large number of precision measurements has been performed in order to test the SM predictions. The motivation is that some unknown heavy particles may affect the coupling strength or induce new effective interactions between the SM particles. One framework designed to describe such effects in a systematic fashion goes under the name of the SM Effective Field Theory (SMEFT). In this approach, the SM particle content and symmetry structure is retained, but the usual renormalizability requirement is abandoned such that interaction terms with canonical dimensions D > 4 are allowed in the Lagrangian. These higherdimensional operators encode, in a model-independent way, the effects of new particles with masses above the weak scale. One can then analyze experimental searches once and for all within this framework. The output of such analysis, namely numerical values for the Wilson coefficients of higher-dimensional operators, can then be applied to any new physics model covered by the SMEFT. Significant progress has been recently achieved concerning the automation of this EFT matching [1][2][3][4][5][6]. The efficient SMEFT program should be compared with model-dependent studies where non-trivial hadronic effects, PDFs, radiative corrections, experimental errors, cuts, etc., have to be taken into account for each model. Assuming lepton number conservation, leading SMEFT contributions are expected to originate from dimension-6 operators [7,8]. There is a vigorous program to characterize the effects of the dimension-6 operators on precision observables and derive constraints on their Wilson coefficients in the SMEFT Lagrangian . Most of these analyses assume that the dimension-6 operators respect some flavor symmetry in order to reduce the number of independent parameters. On the other hand, Refs. [33,44] allowed for a completely general set of dimension-6 operators, demonstrating that the more general approach is feasible. This paper further pursues the approach of Refs. [33,44], providing new constraints on the SMEFT where all independent dimension-6 operators may be simultaneously present with an arbitrary flavor structure. 
We compile information from a plethora of low-energy flavor-conserving experiments sensitive to electroweak gauge boson interactions with fermions and to 4-fermion operators with 2 leptons and 2 quarks (LLQQ) and 4 leptons (LLLL). There are two main novelties compared to the existing literature. First, precision constraints on the LLQQ operators have not been attempted previously in the flavor-generic situation. Therefore our results are relevant to a larger class of UV completions where new physics couples with a different strength to the SM generations. Note that, in particular, all models addressing the recent B-meson anomalies (see e.g. [50][51][52][53][54]) must necessarily involve exotic particles with flavor non-universal couplings to quarks and leptons. Our analysis provides model-independent constraints that have to be satisfied by all such constructions. Second, we include in our analysis the low-energy flavor observables (nuclear, baryon and meson decays) recently summarized in Ref. [55]. At the parton level these processes are mediated by the quark transitions d(s) → uℓν ℓ , hence they can probe the LLQQ operators. We will show that for certain operators the sensitivity of these observables is excellent, such that new stringent constraints can be obtained. Moreover, the low-energy flavor observables offer a sensitive probe of the W boson couplings to right-handed quarks. Our analysis is performed at the leading order in the SMEFT. We ignore the effects of dimension-6 operators suppressed by a loop factor, except for the renormalization group running within a small subset of the LLQQ operators. Moreover all dimension-8 and higher operators are neglected, and only the linear contributions of the dimension-6 Wilson coefficients are taken into account. The corollary is that the likelihood we obtain for the SMEFT parameters is Gaussian. All in all, we provide simultaneous constraints on 61 linear combinations of the dimension-6 Wilson coefficients. In this paper we quote the central values, the 68% confidence level (CL) intervals, while the correlation matrix is provided in the attached Mathematica notebook [56]. That file also contains the full likelihood function in an electronic form, so that it can be more easily integrated into other analyses. The outline of the paper is the following. Section 2 introduces the theoretical framework and the necessary notation. Section 3 presents the experimental input of our analysis. Section 4 contains the results of our fit, in the general case and in some interesting limits. Finally Section 5 discusses the interplay with LHC searches, and Section 6 contains our conclusions. Formalism and notation 2.1 SMEFT with dimension-6 operators Our framework is that of the baryon-and lepton-number conserving SMEFT [7,8]. The Lagrangian is organized as an expansion in 1/Λ 2 , where Λ is interpreted as the mass scale of new particles in the UV completion of the effective theory. We truncate the expansion at O(Λ −2 ), which corresponds to retaining operators up to the canonical dimension D=6 and neglecting operators with D ≥ 8. [57,58] for examples of such sets. In order to connect the SMEFT to observables it is convenient to rewrite Eq. (2.1) using the mass eigenstates after electroweak symmetry breaking. Then the effects of dimension-6 operators show up as corrections to the SM couplings between fermion, gauge and Higgs fields, or as new interaction terms not present in the SM Lagrangian. 
The discussion and notation below follow closely those in Section II.2.1 of Ref. [59]. We define the mass eigenstates such that all kinetic and mass terms are diagonal and canonically normalized. We also redefine couplings such that, at tree level, the relation between the usual SM input observables G_F, α, m_Z and the Lagrangian parameters g_L, g_Y, v is the same as in the SM. See Ref. [59] for a complete definition of the conventions and the complete list of interaction terms with up to 4 fields. In the following we only highlight the parts of the mass eigenstate Lagrangian directly relevant for the analysis in this paper. One important effect from the point of view of precision measurements is the shift of the interaction strength of the weak bosons. We parametrize the interactions between the electroweak gauge bosons and fermions in the standard form of Ref. [59]. Here, g_L, g_Y are the gauge couplings of the SU(2)_L × U(1)_Y local symmetry, the electric coupling is e = g_L g_Y/√(g_L² + g_Y²), the sine of the weak mixing angle is s_θ = g_Y/√(g_L² + g_Y²), and I, J = 1, 2, 3 are the generation indices. For the fermions we use the 2-component spinor formalism and we follow the conventions of Ref. [60], unless otherwise noted (compared to [60], we use a different normalization of the antisymmetric products of σ matrices: σ^{µν} = (i/2)(σ^µ σ̄^ν − σ^ν σ̄^µ), σ̄^{µν} = (i/2)(σ̄^µ σ^ν − σ̄^ν σ^µ)). The SM fermions f_J, f^c_J are in the basis where the mass terms are diagonal, and the CKM matrix V then appears in the quark doublets as q_I = (u_I, V_{IJ} d_J). The effects of dimension-6 operators are parameterized by the vertex corrections δg, which in general can be flavor-violating. For flavor-diagonal interactions we will employ a shorter notation. (More generally, it is often convenient to parametrize the space of dimension-6 operators using the δg's and other independent parameters of the mass eigenstate Lagrangian that are in a 1-to-1 linear relation with the set of Wilson coefficients c_i [24]; one example of such a parametrization goes under the name of the Higgs basis and is defined in Ref. [59].) In this paper we focus on flavor-conserving observables that target flavor-diagonal Wilson coefficients. We will express the experimental constraints using a set of independent flavor-diagonal vertex corrections, which correspond to 24 linear combinations of dimension-6 Wilson coefficients, 3 of which are complex (those entering δg^{Wq}_R). We consider only CP-conserving observables, thus the imaginary parts enter at the quadratic level and are neglected. To simplify the notation we will omit Re in front of complex Wilson coefficients. In this paper we will also discuss constraints on flavor-diagonal 4-fermion operators in the SMEFT Lagrangian of Eq. (2.1). We work with the same set of 4-fermion operators as in Ref. [57], and employ a similar notation. (Table 1 lists the chirality-conserving and chirality-violating LLQQ operators, with I, J = 1, 2, 3; Table 2 lists the one-flavor, I = 1, 2, 3, and two-flavor, I < J = 1, 2, 3, four-lepton operators.) The main focus is on the flavor-conserving 2-lepton-2-quark dimension-6 operators (LLQQ) summarized in Table 1, defined in the flavor basis where the up-quark Yukawa matrix is diagonal. Overall, there are 10 × 3 × 3 = 90 such operators, of which 27 (the chirality-violating ones) are complex. In the latter case the corresponding Wilson coefficient is complex, and the Hermitian conjugate operator is included in Eq. (2.1).
[44], we also list in Table 2 Table 1, and Table 2. The observables discussed in this paper will not depend on all of them, and thus we will be able to constrain only a limited number of the combinations. In particular the operators involving the 3rd generation fermions are currently, with a few exceptions, poorly constrained by experiment. Nevertheless, the constraints we derive are robust, in the sense that they do not involve any strong assumptions about the unconstrained operators, other than the validity of the SMEFT description at the weak scale. We assume that our results are not invalidated by O 1 16π 2 Λ 2 corrections, which arise at one loop in the SMEFT and inevitably introduce dependence of our observables on other D=6 Wilson coefficients. We will also treat V as the unit matrix when it multiplies dimension-6 Wilson coefficients. This ignores all contributions to observables where the Wilson coefficients are multiplied by an off-diagonal CKM element. 4 Last, we will also particularize our results to more restrictive scenarios, such as the so-called flavor-universal SMEFT, where dimension-6 operators respect the U(3) 5 global flavor symmetry acting in the generation space on the SM fermion fields q, ℓ, u c , d c , e c . Weak interactions below the weak scale Precision experiments with a characteristic momentum transfer Q ≪ m Z can be conveniently described using the low-energy effective theory where the SM W and Z bosons are integrated out. In this framework, weak interactions between quark and leptons are mediated by a set of 4-fermion operators. Within the SM, these operators effectively appear due to the exchange of W and Z bosons at tree level or in loops, and their coefficients can be calculated by the standard matching procedure. Once the SM is extended by dimension-6 operators, these coefficients may be modified, either due to modified propagators and couplings of W and Z, or due to the presence of contact 4-fermion operators in the SMEFT Lagrangian. Below we define the low-energy operators that are relevant for the precision measurements we include in our analysis. We follow the PDG notation [61] (Section 10), and we present the matching between the coefficients of the low-energy operators and the parameters of the SMEFT. Charged-current (CC) interactions: qq ′ ℓν The low-energy CC interactions of leptons with the 1st generation quarks are described by the effective 4-fermion operators: To make contact with low-energy flavor observables, we defined the rescaled CKM matrix element V ud [55]. It is distinct from the actual V ud , i.e., the 11 element of the unitary matrix V that appears in the Lagrangian after rotating quarks to the mass eigenstate basis. The two are related by V ud =Ṽ ud (1 + δV ud ) where δV ud is chosen such as to impose the relationǭ de L = −ǫ de R in Eq. (2.4). 5 Let us note that in generalṼ ud is also different from the phenomenological value obtained within the SM, which we will denote by V PDG ud . Currently this value comes from superallowed nuclear beta decays [62] that depend on the vector couplings via the combinationǭ de L +ǫ de R . By settingǭ de L = −ǫ de R , this nonstandard effect has been conveniently absorbed into the definition ofṼ ud . However, the relevant transitions also depend, each in a different way, on the scalar coefficient ǫ de S . ThusṼ ud and V PDG ud only coincide if ǫ de S vanishes, whereas in general it is not possible to redefine away all new physics contributions throughṼ ud . 
For this reason we treat Ṽ_ud as a free parameter that is fit together with the EFT Wilson coefficients [55]. In principle the difference between Ṽ_ud and V^PDG_ud must be taken into account every time the latter is used to calculate any given SM prediction. In practice, this effect will be negligible in most cases, given the strong constraints on ε^{de}_S from the same nuclear decay data, cf. Eq. (3.17). At tree level, the low-energy parameters are related to the SMEFT parameters as in (2.5). As indicated earlier, at O(Λ^{-2}) we treat the CKM matrix as the unit matrix. In this limit, the effective parameters in Eq. (2.4) depend only on flavor-diagonal vertex corrections and 4-fermion operators. See Appendix B for more general expressions where the non-diagonal elements of V are retained. (The bar in the ǭ^{deJ}_L coefficient reminds the reader that this coefficient is not the usual ε^{deJ}_L, see e.g. Ref. [55], for which the shift of the new physics effects into Ṽ_ud is not carried out. The two are trivially related by V_ud(1 + ε^{deJ}_L) = Ṽ_ud(1 + ǭ^{deJ}_L).) Note also that the rescaled CKM matrix is no longer unitary; in particular, the first-row combination |Ṽ_ud|² + |V_us|² + |V_ub|² − 1 need not vanish. Although the extraction of the V_us element is also affected by dimension-6 operators, their contribution to this unitarity test is suppressed by V_us and can therefore be neglected in our approximation (V ≈ 1 at order Λ^{-2}). See Eq. (B.5) for the complete expression. Neutral-current (NC) neutrino interactions: qqνν The low-energy NC neutrino interactions with light quarks are described by the effective 4-fermion operators in (2.7). At tree level, the low-energy parameters are related to the SMEFT parameters as in (2.8). The experiments probing these couplings usually normalize the NC cross section using its CC counterpart. Thus, it is convenient to define the combinations of effective couplings in (2.9), where we took into account that SMEFT dimension-6 operators in general modify both NC and CC processes. Let us notice that additional (linear) effects in the normalizing CC process, due to ε^{de}_R and ε^{de_J}_{S,P,T}, can be neglected, because they are suppressed by the ratios m_u m_d/E² and m_{e_J}/E, respectively. The effect due to the possible difference between Ṽ_ud and V^PDG_ud can also be safely neglected here, given the limited precision of the neutrino scattering experiments included in our fit. Finally, the same holds for the δV_ud contribution that appears if the unitarity of the CKM matrix is used in the SM determination. Neutral-current charged-lepton interactions: qqℓℓ We parametrize the 4-fermion operators with 2 charged leptons and 2 light quarks as in (2.10), where we momentarily switch to the Dirac notation, with γ_5 ψ_L = −ψ_L and γ_5 ψ_R = +ψ_R. At tree level, the parameters g^{e_i q}_{XY} are related to the SMEFT parameters as in (2.11). We do not display the expressions for g^{e_i q}_{VV} here, because they will not be needed in the following. Four-lepton interactions: ℓℓℓℓ and ℓℓνν Although the main focus of this work is the LLQQ operators, in this section we provide a few expressions concerning 4-lepton operators that will be needed in our subsequent phenomenological analysis.
First, we parametrize the ν-e interaction in the effective theory below the weak scale as: Matching to the SMEFT one finds Last, we parameterize the parity-violating self-interaction of electrons in the effective theory below the weak scale as L ⊃ 1 2v 2 g ee AV [−(ēσ µ e)(ēσ µ e) + (e c σ µē c )(e c σ µē c )] , (2.14) with the following SMEFT expression Renormalization and scale running of the Wilson coefficients In general the Wilson coefficients display renormalization-scale dependence that is to be canceled in the observables by the opposite dependence in the quantum corrections to the matrix elements. Let us first discuss the QCD running, which can have a numerically significant impact due to the magnitude of the strong coupling constant α s . This effect is further enhanced by the large separation of scales of the experiments discussed in this work (from low-energy precision measurements to LHC collisions). Among the coefficients involved in our analysis, only the three chirality-violating ones, c lequ , c ledq , c lequ (i.e. ǫ dℓ S,P,T in the low-energy EFT), present a non-zero 1-loop QCD anomalous dimension, namely [63] where x refers to the SMEFT coefficients c = (c ledq , c lequ , c lequ ) if the scale µ is above the weak scale or to the low-energy EFT coefficients ǫ = (ǫ dℓ S , ǫ dℓ P , ǫ dℓ T ) below it. We find that higher-loop QCD corrections to the running are numerically significant, and we include them in our analysis. 7 On the other hand we neglect in this work the electromagnetic/weak running of the SMEFT Wilson coefficients, which is expected to have a much smaller numerical importance simply due to the smallness of the corresponding coupling constants. There is however one exception to this, namely the chirality-violating operators discussed above, for two reasons: (i) contrary to the QCD running, the QED/weak running involves mixing between these operators; (ii) pion decay makes possible to set bounds of order 10 −7 on the pseudoscalar coupling ǫ dℓ P (µ low ), which can give important bounds on scalar and tensor via mixing despite the smallness of α em . In order to take into account this effect, Eq. (2.16) has to be replaced by where we will use the 1-loop QED (electroweak) anomalous dimension, γ x = γ em(w) , to evolve the coefficients ǫ ( c) below (above) the weak scale [67][68][69][70]: where we neglect terms suppressed by Yukawa couplings [70,71]. Integrating numerically the coupled differential renormalization group equations we find . (2.20) These results use the QCD beta function and anomalous dimensions up to 3 loops, and we included the bottom and top quark thresholds effects, see Ref. [67] 3 Low-energy experiments Neutrino scattering Neutrino scattering experiments measure the ratio of neutral-and charged-current neutrino or anti-neutrino scattering cross sections on nuclei: . (3.1) At leading order and for isoscalar nucleus targets (equal number of protons and neutrons) one has the so-called Llewellyn-Smith relations [72]: where r is the ratio of ν toν charged-current cross sections on N that can be measured separately, and the effective couplings g ν i L/R are defined in Eq. (2.9). In some experiments the beam is a mixture of neutrinos and anti-neutrinos, and the following ratio is measured ν e data.-The CHARM experiment [73] made a measurement of electron-neutrino scattering cross sections: where the uncertainties quoted here and everywhere else in this work are 1-sigma (68%C.L.) errors. 
3 Low-energy experiments

Neutrino scattering

Neutrino scattering experiments measure the ratio of neutral- and charged-current neutrino or anti-neutrino scattering cross sections on nuclei, cf. Eq. (3.1). At leading order and for isoscalar nuclear targets (equal number of protons and neutrons) one has the so-called Llewellyn Smith relations [72], where $r$ is the ratio of $\nu$ to $\bar\nu$ charged-current cross sections on $N$ that can be measured separately, and the effective couplings $g^{\nu_i}_{L/R}$ are defined in Eq. (2.9). In some experiments the beam is a mixture of neutrinos and anti-neutrinos, and a corresponding combined ratio is measured.

$\nu_e$ data: The CHARM experiment [73] measured the ratio of electron-neutrino scattering cross sections; the uncertainties quoted here and everywhere else in this work are 1-sigma (68% C.L.) errors. To avoid dealing with asymmetric errors we approximate the result as $R_{\nu_e\bar\nu_e} = 0.41 \pm 0.14$, and we estimate the SM expectation as $R^{SM}_{\nu_e\bar\nu_e} = 0.33$. To our knowledge, this weakly constraining measurement is currently the best probe of the electron-neutrino neutral-current interactions.

$\nu_\mu$ data: For muon-neutrino scattering the experimental data are much more abundant and precise. We summarize the relevant results in Table 3. The observable $\kappa$ measured in CCFR probes the combination of couplings given in [76]; an additional small dependence on the difference of the up and down effective couplings appears when one takes into account that the target (in this case iron) is not exactly isoscalar. For the reasons explained in Ref. [61], in our fits we do not take into account the results of the NuTeV experiment. The observables in Table 3 constrain 3 independent combinations of the SMEFT coefficients. Rather than combining these results ourselves, we use the PDG combination [61], which also uses additional experimental input [78] from neutrino-induced coherent neutral pion production on nuclei [79, 80] and elastic neutrino-proton scattering [81, 82]. Although their precision is quite limited, their inclusion allows one to constrain all 4 muon-neutrino effective couplings to quarks [77]. The results of the latest PDG fit are given in [61]. The correlations are quoted to be small in Ref. [61] and in the following we neglect them. We symmetrize the uncertainty on $\theta_R$ by taking the larger of the two errors, so as to avoid dealing with asymmetric errors. The corresponding SM predictions are given in Table 4. To evaluate their dimension-6 EFT corrections in Eq. (2.8) we use $s^2_\theta = 0.23865$, which is the central value in the $\overline{\rm MS}$ scheme at low energies [61]. We neglect the error of a SM prediction when it is much smaller than the experimental uncertainty; otherwise we combine the two in quadrature.

We note that LLQQ (and 4-lepton) operators can also be probed via matter effects in neutrino oscillations, see e.g. [83, 84]. However, the resulting constraints are not available in the model-independent form where all 4-fermion operators can be present simultaneously. Moreover, neutrino oscillations probe linear combinations of lepton-flavor-diagonal operators and of the off-diagonal ones (which we marginalize over). For these reasons, we do not include the oscillation constraints in this paper.

Parity violation in atoms and in scattering

Atomic parity violation (APV) and parity-violating electron scattering experiments access the parity-violating effective couplings of electrons to quarks, $g^{eq}_{AV}$ and $g^{eq}_{VA}$. In particular, APV and elastic scattering on a target with $Z$ protons and $N$ neutrons probe its so-called weak charge, which up to small radiative corrections is given by [61, 77]

$Q_W(Z, N) \simeq -2\left[ Z\,(2 g^{eu}_{AV} + g^{ed}_{AV}) + N\,(g^{eu}_{AV} + 2 g^{ed}_{AV}) \right].$

The most precise determination is performed in $^{133}$Cs. Taking into account recent re-analyses [85] of the measured parity-violating transitions in cesium atoms [86], the latest edition of the PDG Review [61] quotes the measured weak charge, for which the SM prediction is $Q^{Cs}_{W,SM} = -73.25 \pm 0.02$ [61]. Other APV measurements, e.g. with thallium atoms, probe slightly different combinations of the $g^{eq}_{AV}$ couplings, although with larger errors. Instead, a very different linear combination of $g^{eu}_{AV}$ and $g^{ed}_{AV}$ is precisely probed by measurements of the weak charge of the proton, $Q^p_W = Q_W(1, 0)$, in scattering experiments with low-energy polarized electrons. The QWEAK experiment [87] finds a value for which the SM prediction is $Q^p_{W,SM} = 0.0708 \pm 0.0003$ [61].
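As a quick cross-check of the weak-charge formula above, the snippet below evaluates $Q_W$ for $^{133}$Cs ($Z = 55$, $N = 78$) and for the proton. The SM coupling values $g^{eu}_{AV} \approx -0.1887$ and $g^{ed}_{AV} \approx 0.3419$ are approximate PDG numbers, assumed here for illustration.

```python
def weak_charge(Z, N, g_eu_AV, g_ed_AV):
    """Tree-level weak charge from the parity-violating e-q couplings."""
    return -2.0 * (Z * (2.0 * g_eu_AV + g_ed_AV) + N * (g_eu_AV + 2.0 * g_ed_AV))

# Approximate SM values of the couplings (assumed, quoted from memory):
g_eu_AV, g_ed_AV = -0.1887, 0.3419
print(weak_charge(55, 78, g_eu_AV, g_ed_AV))  # 133Cs: ~ -73.3 (vs. -73.25)
print(weak_charge(1, 0, g_eu_AV, g_ed_AV))    # proton: ~ 0.071 (vs. 0.0708)
```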
In order to access the effective couplings $g^{eq}_{VA}$ one needs to resort to deep-inelastic scattering of polarized electrons. Currently the most precise such experiment is PVDIS [88], which studies electron scattering on deuterium targets. The experiment is sensitive to two linear combinations of effective couplings [88]. The measured values are reported in Ref. [88], where the SM predictions are $A^{PVDIS}_{1,SM} = -(87.7 \pm 0.7)\times 10^{-6}$ and $A^{PVDIS}_{2,SM} = -(158.9 \pm 1.0)\times 10^{-6}$ [88]. The PDG combines the results of the APV, QWEAK, and PVDIS experiments into correlated constraints on 3 linear combinations of $g^{eq}_{VA}$ and $g^{eq}_{AV}$ [61]. To disentangle $g^{eu}_{VA}$ and $g^{ed}_{VA}$ one needs more input from earlier (less precise) measurements of parity-violating scattering. We include two results provided by the SAMPLE collaboration [89], obtained from the scattering of polarized electrons on deuterons in the quasi-elastic kinematic regime at two different values of the beam energy. Combining the likelihood obtained from Eq. (3.12) with the SAMPLE results we find the constraints of Eq. (3.14). Here $\delta g^{eq}_{XY}$ are shifts of the effective couplings away from their SM values, whose dependence on the dimension-6 Wilson coefficients can be read off from Eq. (2.11).

There are also results concerning effective muon couplings to quarks. A CERN SPS experiment [90] measured a DIS asymmetry using polarized muon and anti-muon scattering on an isoscalar carbon target. The results can be recast as a measurement of the observable $b_{SPS}$, defined in terms of this asymmetry and the muon beam polarization fraction $\lambda$. Two measurements of $b_{SPS}$ at different beam energies and polarization fractions were carried out [90].

Low-energy flavor

The partonic process $d_j \to u_i \ell \nu_\ell$ underlies a plethora of (semi)leptonic hadron decays. Ref. [55] studied $d(s) \to u \ell \nu_\ell$ transitions, such as nuclear, baryon and meson decays, within the SMEFT framework and obtained bounds on 14 combinations of effective low-energy couplings between light quarks and leptons ($\epsilon^{d_I e_J}_i$). Ignoring the CKM mixing at $O(\Lambda^{-2})$, the effective couplings of strange quarks depend only on flavor-off-diagonal Wilson coefficients (see Appendix B). Marginalizing over them, we obtain the likelihood of Eq. (3.17) for 6 combinations of effective couplings together with the $\tilde V_{ud}$ CKM parameter. It is useful to recall the physics behind these bounds [55]. Roughly speaking, $\tilde V_{ud}$ and $\epsilon^{de}_{R,S,P,T}$ were obtained by comparing the total rates of various superallowed nuclear decays and $\pi \to e\nu_e$, as well as by using various differential distributions in $\pi \to e\nu\gamma$ and neutron decay. The comparison with $\Gamma(\pi \to \mu\nu_\mu)$ provides $\Delta^d_{LP}$, and combining the obtained $\tilde V_{ud}$ with $V_{us}$, extracted from (semi)leptonic kaon decays, makes it possible to extract $\Delta_{\rm CKM}$.

Quark pair production in $e^+e^-$ collisions

Electron-positron colliders operating at center-of-mass energies above or below the $Z$ mass provide complementary information about 4-fermion operators containing electrons. Unlike the low-energy experiments discussed above, they also probe flavor-conserving operators with strange, charm and bottom quarks. Typically, the experiments quote the total measured cross section $\sigma_q \equiv \sigma(e^+e^- \to q\bar q)$ and the forward-backward asymmetry, defined via the difference between the cross sections with the outgoing quark going forward and backward with respect to the electron beam direction in the center-of-mass frame.
In the presence of dimension-6 operators, at $O(\Lambda^{-2})$ these cross sections are modified as in Eqs. (3.18)-(3.19), where $\sqrt s$ is the center-of-mass energy of the $e^+e^-$ collision and $\hat g_{Zf} \equiv T^3_f - s^2_\theta Q_f$ (i.e., the SM values); for down-type quark production, $q = d_J$, the operators $O_{\ell equ}$ and $O_{\ell edq}$ do not enter at $O(\Lambda^{-2})$ because they do not interfere with the SM amplitudes due to their different chirality structure.

The LEP-2 experiments studied $e^+e^-$ collisions at energies above the $Z$-pole, ranging from $\sqrt s = 130$ GeV to $\sqrt s = 209$ GeV. The available data include the total cross section $\sigma(q\bar q) \equiv \sum_{q=u,d,s,c,b} \sigma_q$ [91], as well as the total cross section and forward-backward asymmetry for charm and for bottom quark production [92]. This amounts to 5 distinct observables, each measured at several values of $\sqrt s$. From Eq. (3.18) and Eq. (3.19), given the energy dependence, each of these observables should resolve 4 different combinations of dimension-6 Wilson coefficients (note, though, that two of these combinations involve only vertex corrections). In practice, the energy range scanned by LEP-2 is not large enough to efficiently disentangle these different combinations. Therefore, in our fit we also include earlier, less precise measurements of heavy quark production below the $Z$-pole. Specifically, we include the measurements by the VENUS [93] and TOPAZ [94] collaborations of $c\bar c$ and $b\bar b$ pair production at $\sqrt s = 58$ GeV (total cross sections and FB asymmetries).

Other measurements

To increase the power of our global analysis, in this section we combine the observables described above with those considered previously in Refs. [33, 44]. At this point there are more parameters than observables, hence more experimental input is needed. The SMEFT corrections to low-energy observables typically depend on linear combinations of 4-fermion Wilson coefficients and vertex corrections $\delta g$. The latter can be independently constrained by the so-called pole observables, where a single $W$ or $Z$ boson is on-shell. We use the set of pole observables described in Ref. [33]. As advertised in that reference, all diagonal $\delta g$ can be simultaneously constrained with very good precision (the observables in Ref. [33] do not constrain $\delta g^{Zt}_R$, which is however not needed in our analysis). Moreover, we use the low-energy and $e^+e^-$ collider observables probing 4-lepton operators. Our analysis closely resembles that of Ref. [44], with the following differences:

1. Instead of combining ourselves the results of different experiments measuring the scattering of muon neutrinos on electrons, we use the PDG combination for the low-energy $\nu_\mu$-$e$ couplings, with the correlation coefficient $\rho = -0.05$.

2. Instead of recasting the weak mixing angle measured in parity-violating electron scattering [95], we use the PDG result for the parity-violating effective self-coupling of electrons [61]: $g^{ee}_{AV} = 0.0190 \pm 0.0027$ (3.24).

3. To evaluate SMEFT corrections to $e^+e^-$ collider observables we use the electroweak couplings at the scale $m_Z$ (instead of 200 GeV).

4. We add the measurement of the $\tau$ polarization $P_\tau$ and its FB asymmetry $A_P$ in $e^+e^- \to \tau^+\tau^-$ production at $\sqrt s = 58$ GeV by the VENUS collaboration [96]. The analytic expressions for $P_\tau$ and $A_P$ as functions of the SMEFT parameters and $\sqrt s$ are easy to obtain but too long to be quoted here; instead, we give numerical expressions at $\sqrt s = 58$ GeV.

5. We include the constraints from the trident production $\nu_\mu \gamma^* \to \nu_\mu \mu^+\mu^-$ [97-99].
Dimension-6 operators modify the trident cross section at $O(\Lambda^{-2})$.

4 Global Fit

Scope

The main goal of this paper is to provide model-independent constraints on the flavor-diagonal 2-lepton-2-quark operators summarized in Table 1. Among the chirality-conserving ones, most of the observables considered in this paper probe the operators involving 1st generation leptons. There are 21 such operators, whose Wilson coefficients we list for easy reference; the specific numerical values can be found in the corresponding original references. We also use the set of pole observables described in Ref. [33] in order to independently constrain the vertex corrections $\delta g$. Finally, the likelihood in Eq. (3.17), summarizing the constraints from low-energy flavor observables, also gives us access to chirality-violating operators involving 1st and 2nd generation leptons and 1st generation quarks; there are 6 such operators. Scattering of muons and muon neutrinos on nucleons gives us access to chirality-conserving operators involving 2nd generation leptons. All Wilson coefficients should be understood as evaluated at the renormalization scale $\mu = m_Z$ unless otherwise stated. We will use the observables summarized in Section 3 to constrain as many as possible of the 34 Wilson coefficients in Eqs. (4.1)-(4.3). We will also present simultaneous constraints on these parameters, together with the vertex corrections and 4-lepton Wilson coefficients.

Flat directions

Not all linear combinations of the parameters in Eqs. (4.1)-(4.3) can be constrained by the observables we consider. Before venturing into a global fit, we need to count the independent constraints and determine the flat directions in the parameter space. In Table 4 we have the following probes of LLQQ operators:

• 1 combination of the parameters in Eq. (4.1) is constrained (poorly) via the only $\nu_e\nu_e qq$ measurement ($R_{\nu_e\bar\nu_e}$);
• 4 combinations in Eq. (4.2) are constrained via $\nu_\mu\nu_\mu qq$ measurements;
• 4 new combinations in Eq. (4.1) are constrained via PV low-energy $eeqq$ measurements ($g^{eq}_{VA/AV}$);
• 1 different combination in Eq. (4.2) is constrained (poorly) via PV low-energy $\mu\mu qq$ measurements ($b_{SPS}$), which also probe a second combination already constrained by $\nu_\mu\nu_\mu qq$ data.

The flat directions F1, F2, F3 arise because low-energy precision measurements do not probe the top quark couplings, which may be amended one day by an $e^+e^-$ collider operating above the $t\bar t$ threshold. F4 is due to insufficient information about the strange quark couplings, and it could be lifted by off-$Z$-pole measurements of the strange asymmetry. F5 is a consequence of the fact that the parity-conserving operator $(\bar e\gamma^\mu\gamma_5 e)\sum_q(\bar q\gamma_\mu\gamma_5 q)$ and the axial neutrino-quark interaction $(\bar\nu_L\gamma^\mu\nu_L)\sum_q(\bar q\gamma_\mu\gamma_5 q)$ are unconstrained by low-energy measurements and by $e^+e^-$ colliders. F6 and F7 are due to the scarcity of data on muon scattering on nucleons. Finally, F8 and F9 appear because, within our approximations, the low-energy flavor observables probe only one combination of light quark couplings to muons (through $\pi \to \mu\nu$). In order to isolate the flat directions we define the hatted combinations $\hat c$ of Eq. (4.5); the observables depend on the left-hand sides of Eqs. (4.5) only via the $\hat c$ and $\epsilon^{d\mu}_P$(2 GeV) combinations.

(Table 5 caption: See Table 4 and the main text for further details about the different experiments. The best constraint in each case is highlighted in blue, while 'x' signals that the operator is not probed at tree level by that experiment or combination.)
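The counting above has a simple linear-algebra reading: at $O(\Lambda^{-2})$ every observable shift is linear in the Wilson coefficients, $o = M c$, so the flat directions span the null space of the design matrix $M$. A toy sketch with a made-up 2x3 matrix (not the actual fit):

```python
import numpy as np

# Toy illustration: observables depend linearly on Wilson coefficients,
# o = M c, so directions of c with (near-)zero singular value are flat,
# i.e. unconstrained by the data. The matrix below is made up.
M = np.array([[1.0, 0.5, 0.0],
              [0.0, 1.0, 0.5]])
U, s, Vt = np.linalg.svd(M)  # full SVD: Vt is 3x3, s has 2 entries
print(s)                     # two nonzero singular values -> rank(M) = 2
print(Vt[2])                 # null-space row: the flat combination
```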
Moreover, the dependence on $[\hat c_{eq}]_{1111}$ appears only thanks to the loose $R_{\nu_e\bar\nu_e}$ constraint, and thus we know beforehand that there is no sensitivity to $[\hat c_{eq}]_{1111} \lesssim 1$.

Reconnaissance

We start by presenting the constraints in the case when only one of the LLQQ operators is present at a time, and all vertex corrections and 4-lepton operators vanish. We stress that this is just a warm-up exercise and not our main result. Indeed, one-by-one constraints are basis dependent and would be different if another basis of dimension-6 operators were used. Only the global likelihood encoding the correlated constraints on all Wilson coefficients in a given basis has a model-independent meaning. The main purpose of this exercise is to compare the sensitivity of various experiments to a few particular directions in the space of Wilson coefficients.

The one-by-one constraints on chirality-conserving LLQQ operators involving 1st generation quarks are shown in Table 5. One can see that atomic parity violation is the most sensitive probe for most of the operators with electrons and first generation quarks. The exception is $[O^{(3)}_{\ell q}]_{1111}$, which contributes to charged-current transitions and can be probed in $d \to u e \nu_e$ decays. We stress however that the less sensitive experiments will be absolutely crucial to probe more independent directions in the space of dimension-6 operators. For the operators involving the 2nd generation lepton doublet, muon-neutrino scattering is a fairly sensitive probe. Again, $[O^{(3)}_{\ell q}]_{2211}$ is very precisely probed by the low-energy flavor observables because it affects the charged current. The sensitivity of low-energy experiments to the operators involving right-handed muons is very poor. However, this is not a pressing problem, given that these directions are very well probed by the LHC [22], as will be discussed in Section 5. The $(ee)(qq)$ bounds shown in Table 5 are in excellent agreement with the 1-by-1 results of Ref. [22], whereas our $(\mu\mu)(qq)$ bounds are more stringent due to the inclusion of additional experimental input. The LEP-2 constraints on operators involving 2nd generation or bottom quarks are similar to those shown in Table 5.

We also give 1-by-1 constraints on the chirality-violating LLQQ operators from the low-energy flavor observables in Eq. (4.6). This exceptional sensitivity arises because these operators violate the approximate symmetries of the SM, leading potentially to a large enhancement of several decays of low-mass hadrons. (More specifically, they violate the approximate flavor symmetry $U(1)_\ell \times U(1)_e$ of the SM that suppresses the decay $\pi \to \ell\nu_\ell$ by a factor $m^2_\ell/\Lambda^2_{QCD}$; their bounds thus benefit from the large $\Lambda_{QCD}/m_\ell$ chiral enhancement. This does not apply, however, to the tensor operator $c^{(3)}_{\ell equ}$, whose tree-level contribution to this specific decay vanishes by simple Lorentz invariance considerations.) In particular, new physics generating the pseudo-scalar $(ee)(qq)$ operator is probed up to $\Lambda/g_* \sim 100$ TeV. Let us note that these decays dominate the $c^{(3)}_{\ell equ}$ bounds shown above, despite the fact that they probe them only via 1-loop QED mixing [67, 101]. For consistency with the rest of this work, these individual limits are obtained using $V = 1$ at order $\Lambda^{-2}$. Working instead with the full non-diagonal CKM matrix, the limits are slightly modified, but, more importantly, one can then set strong 1-by-1 limits on a long list of other (off-diagonal) operators.

Finally, for the sake of completeness we show the 1-by-1 bound on the $W$ coupling to right-handed 1st-generation quarks, $\delta g^{Wq_1}_R = -(3.9 \pm 2.9)\cdot 10^{-4}$ (4.7), which is completely dominated by its contribution to the CKM-unitarity test of Eq. (2.6).

All out

We now combine all the experimental observables summarized in Table 4 along with the pole observables discussed in Ref. [33], which represent a total of 264 experimental inputs.
These provide simultaneous constraints on 61 combinations of Wilson coefficients of dimension-6 operators in the SMEFT Lagrangian (21 vertex corrections $\delta g$, 25 LLQQ and 15 LLLL operators) and on the $\tilde V_{ud}$ SM parameter. Marginalizing over $\tilde V_{ud}$ we find the constraints of Eq. (4.8). The correlation matrix is available in the Mathematica notebook attached as supplemental material [56]. The complete Gaussian likelihood for the Wilson coefficients of dimension-6 SMEFT operators at the scale $\mu = m_Z$ can be reproduced from Eq. (4.8) and that correlation matrix. For the user's convenience, the likelihood is displayed in the notebook ready-made for cut and paste, and we also provide a translation to the Warsaw basis. That likelihood is relevant for constraining the masses and couplings of any new physics model whose leading effects at the weak scale can be approximated by tree-level contributions of vertex corrections and LLQQ and LLLL operators in the SMEFT.

The model-independent bounds on the vertex corrections are practically the same as the ones obtained from the pole observables alone in Ref. [33]. This is due to the fact that there are more 4-fermion operators than independent off-pole observables; hence the latter serve to bound 4-fermion Wilson coefficients but cannot further constrain $\delta g$. Nevertheless, there are non-zero correlations between the constraints on vertex corrections and 4-fermion operators that are captured by our analysis. It is worth stressing the CKM-unitarity test $\Delta_{\rm CKM}$ of Eq. (2.6), which actually provides stronger one-by-one limits on the vertex corrections $\delta g^{Wq_1}_L$ and $\delta g^{W\mu}_L$ than all pole observables combined. Furthermore, the low-energy flavor observables provide a percent-level bound on the $W$ boson coupling to right-handed light quarks $\delta g^{Wq_1}_R$ [55]. Recall that $\delta g^{Wq}_R$ are not probed by the pole observables at tree level and $O(\Lambda^{-2})$ in the SMEFT expansion; therefore the model-independent limit in Eq. (4.8) (from Ref. [55]) is a new result. It is weaker than the one in Eq. (4.7) because in the global fit the strong constraints from the CKM-unitarity test of Eq. (2.6) are diluted by marginalizing over less precisely probed dimension-6 parameters. Nevertheless, the constraint on $\delta g^{Wq_1}_R$ will typically be stronger in specific new physics scenarios, unless they predict that the particular linear combination on the r.h.s. of Eq. (2.6) approximately vanishes at the sub-per-mille level.

The bounds on LLLL operators involving only electrons and/or muons are also similar to the ones previously obtained in Ref. [44], with the exception of $[c_{\ell\ell}]_{2222}$, which is now bounded thanks to the inclusion of neutrino trident production data. For the $ee\tau\tau$ operators the bounds are much stronger thanks to the inclusion of the VENUS $\tau$ polarization data, which resolves the degeneracies present in the fit of Ref. [44]. The model-independent bounds on LLQQ operators in Eq. (4.8) are new.
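To use results distributed in this format, one rebuilds the Gaussian likelihood from the central values, 1-sigma errors, and correlation matrix. A minimal sketch with made-up numbers (not the actual fit results):

```python
import numpy as np

# Rebuild a multivariate Gaussian likelihood (as a chi^2) from central values,
# 1-sigma errors, and a correlation matrix. Toy 2-parameter example.
central = np.array([0.001, -0.002])
sigma   = np.array([0.003,  0.004])
rho     = np.array([[1.0, 0.3],
                    [0.3, 1.0]])
cov     = np.outer(sigma, sigma) * rho   # covariance = D rho D
cov_inv = np.linalg.inv(cov)

def chi2(c):
    d = np.asarray(c) - central
    return d @ cov_inv @ d

print(chi2([0.0, 0.0]))  # chi^2 at the SM point
```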
Previous global SMEFT analyses targeting these operators [9, 10, 40] were carried out assuming some simplifying flavor structure, such as the U(3)$^5$ symmetry [9], which greatly reduces the number of independent Wilson coefficients. On the other hand, previous analyses working in a flavor-general setup provided 1-by-1 bounds (see e.g. Refs. [15, 22]). Thus, global bounds applicable to a completely arbitrary flavor structure are obtained for the first time in this paper, and they represent our main result. They are relevant for a large class of new physics scenarios with or without approximate flavor symmetries. In particular, models addressing the various flavor anomalies necessarily do not respect the U(3)$^5$ symmetry, and therefore the global likelihood we obtained may provide new constraints on their parameters.

We find several poorly constrained directions in the space of LLQQ operators. As discussed earlier, $[\hat c_{eq}]_{1111}$ is currently constrained only by very imprecise measurements of electron neutrino scattering on nucleons, such that the experiments are insensitive to $[\hat c_{eq}]_{1111} \lesssim 1$. More surprisingly, another practically unconstrained direction emerges in our fit, which roughly corresponds to the linear combination $[\hat c_{ed} + 0.6\,\hat c_{\ell d}]_{1122}$. This can be traced to the fact that the LEP-2 collider scanned a fairly narrow range of $\sqrt s$ in $e^+e^-$ collisions; for this reason, not all the theoretically available combinations discussed in Section 4.2 are resolved in practice. Again, it should be noted that constraints in typical scenarios generating these LLQQ operators will be stronger, unless the operators accidentally align with the flat directions in our fit. We stress that the global likelihood provided in the supplemental material [56] retains the full information about the correlations.

Flavor-universal limit

The general likelihood presented in Section 4.4 can easily be restricted to a smaller subspace relevant for any particular scenario. We present here the results for the flavor-universal limit, where the dimension-6 operators are invariant under the global flavor symmetry U(3)$^5$. The symmetry implies that 1) all off-diagonal and chirality-violating operators as well as $\delta g^{Wq}_R$ are absent, and 2) the remaining operators do not carry flavor indices. The only subtlety concerns the $[c_{\ell\ell}]_{IJKL}$ coefficients, since two independent contractions of the flavor indices are allowed by the U(3)$^5$ symmetry. We follow the common practice of parametrizing them in terms of the two U(3)$^5$-symmetric operators. All in all, with the parametrization of the dimension-6 space used in this paper, the U(3)$^5$ symmetry corresponds to the pattern of Eq. (4.9), with all the remaining vertex corrections and 4-fermion Wilson coefficients vanishing. This setup corresponds to the SMEFT limit studied in the pioneering work of Ref. [9]. (Let us note that the more recent analysis of Ref. [40] corresponds to a more restricted scenario, since the two independent coefficients $c_{\ell\ell}$ and $c^{(3)}_{\ell\ell}$ are controlled by one single coefficient $C_{\ell\ell}$ in that work.) It turns out that the global likelihood constrains the entire restricted parameter set introduced in Eq. (4.9). Thus, unlike in the flavor-generic case, there is no need to define new variables $\hat c$ in order to factor out flat directions. Marginalizing over $\tilde V_{ud}$, we find the constraints of Eq. (4.11). The rows and columns of the correlation matrix correspond to the ordering of the parameters in Eq. (4.10) and Eq. (4.11).
The correlation matrix with more significant digits (necessary for practical applications) is given in the Mathematica notebook attached as supplemental material [56]. Thanks to the lifting of the exact and approximate flat directions, in the U(3)$^5$-symmetric limit typical constraints on the dimension-6 parameters are at the per-mille level. We note that the vertex corrections are constrained slightly better than when only the pole observables are used [33], thanks to the precise input from low-energy flavor measurements. Most of the LLQQ operators are constrained at the percent level. Also working in the flavor-universal limit, Ref. [41] obtained bounds on 10 additional SMEFT coefficients using Higgs data and $WW$ production at LEP-2. The only flavor-universal SMEFT coefficients unconstrained by these two fits are those that are either CP-violating, or contain only quarks, only gluons, or only Higgs fields.

Oblique parameters

In the literature, precision constraints on new physics are often quoted in the language of the oblique parameters $S$, $T$, $W$, $Y$ [11, 102]. These correspond to a further restriction of the pattern of the dimension-6 parameters in the U(3)$^5$-symmetric case [44, 103], cf. Eq. (4.14). The constraints on the oblique corrections are dominated by the pole observables and lepton-pair production at LEP-2. The new observables probing LLQQ operators do not affect these constraints significantly; in particular, the low-energy flavor observables do not probe the oblique corrections at all. Compared to the fit in Ref. [44], we only observe a small shift of the central values.

5 Comments on LHC reach

Four-fermion LLQQ operators can be probed via the $q\bar q \to \ell^+\ell^-$ processes at hadron colliders. Previously, several groups set bounds on their Wilson coefficients through a reanalysis within the SMEFT of various ATLAS and CMS exotic searches (see e.g. [22, 104, 105]). In this section we derive analogous bounds using the recently published measurements of the differential Drell-Yan cross sections in the dielectron and dimuon channels [106]. Our main goal here is to present a brief comparison between the sensitivity of the LHC run-1 and of the low-energy observables discussed in this paper.

Precision measurements in hadron collider environments are challenging; individual observables are typically measured with much worse accuracy than at lepton colliders or in very low-energy experiments. However, the effect of 4-fermion operators on scattering amplitudes grows with the collision energy $E$ as $\sim c_{4f} E^2/v^2$. As a consequence, the superior energy reach of the LHC compensates for the inferior precision in this case [22, 104]. This message was recently stressed in Ref. [107] in the context of the determination of the oblique parameters, which encode new physics corrections to the propagators of the electroweak gauge bosons. It turns out that the effect of the oblique parameters $W$ and $Y$ [11] on the high invariant-mass tail of $d\sigma(pp \to \ell^+\ell^-)/dm_{\ell\ell}$ also grows with $E$ (as opposed to that of the more familiar $S$ and $T$ parameters [102]). The current LHC constraints on $W$ and $Y$ are already competitive with those obtained from low-energy precision experiments, and will become more accurate with the full run-2 dataset at $\sqrt s \approx 13$-14 TeV [107]. In the SMEFT framework, $W$ and $Y$ correspond to a particular pattern of vertex corrections and 4-fermion operators [44, 103], cf. Eq. (4.13).
Therefore we expect that similar arguments apply, and that competitive bounds on the LLQQ operators can be extracted from ATLAS and CMS measurements of $d\sigma(pp \to \ell^+\ell^-)/dm_{\ell\ell}$. Below we present some quantitative illustrations of this message. In the situation where only one LLQQ operator is present at a time and all other dimension-6 operators are absent, the sensitivity of the LHC run-1 and of the low-energy observables is contrasted in Table 6. To estimate the LHC reach we use 3 bins in the range $m_{\ell\ell} \in [0.5$-$1.5]$ TeV of the ATLAS measurement of the differential $e^+e^-$ and $\mu^+\mu^-$ cross sections at the 8 TeV LHC (20.3 fb$^{-1}$) [106]. This is shown under the label LHC$_{1.5}$ in Table 6, where it is compared to the combined constraints using the low-energy input.

(Table 6, excerpt. Two of its rows read: 0 ± 2.9, 0 ± 3.7, 0 ± 1.4, 0 ± 2.9, 0 ± 3.7, 0 ± 1.4, and, labeled LHC$_{0.7}$: 0 ± 5.3, 0 ± 6.6, 0 ± 2.6, 0 ± 5.5, 0 ± 6.9, 0 ± 2.6. Caption: The LHC$_{1.5}$ constraints use the $m_{\ell\ell} \in [0.5$-$1.5]$ TeV bins of the measured differential $e^+e^-$ and $\mu^+\mu^-$ cross sections at the 8 TeV LHC [106]. We also separately show the constraints obtained when the $m_{\ell\ell} \in [0.5$-$1.0]$ TeV (LHC$_{1.0}$) and $m_{\ell\ell} \in [0.5$-$0.7]$ TeV (LHC$_{0.7}$) data ranges are used.)

For the chirality-conserving $(ee)(qq)$ operators the two are indeed similarly sensitive. For the chirality-conserving $(\mu\mu)(qq)$ operators the low-energy bounds are relatively weaker, especially for the operators that do not affect the muon neutrino couplings. With the exception of $[O^{(3)}_{\ell q}]_{2211}$, probed by the flavor observables, the LHC sensitivity is superior by at least an order of magnitude. Therefore, in these directions of the dimension-6 SMEFT parameter space the LHC is in completely uncharted territory. The situation is quite the opposite for the chirality-violating $(ee)(qq)$ and $(\mu\mu)(qq)$ operators. There, the light quark transitions offer a superior sensitivity with which the LHC cannot compete in most cases. The exception is the $[O^{(3)}_{\ell equ}]_{2211}$ operator, for which the LHC reach is comparable.

An important difference between the LHC and low-energy constraints should be emphasized. The latter are obtained in an energy regime where it is very plausible to assume the validity of the EFT. Here, by validity we mean that the SMEFT with dimension-6 operators adequately describes the physics of the underlying UV completion. First of all, if such a completion contains new states at $\sim 1$ TeV then clearly the LHC bounds in Table 6 cannot be applied and a model-dependent approach becomes necessary. This is however not the case for the SMEFT bounds derived from low-energy data in the previous section, which remain valid. On the other hand, even in the absence of such "light" states one should analyze the sensitivity to $O(\Lambda^{-4})$ terms. The precisely measured low-energy observables are dominated by the $O(\Lambda^{-2})$ contributions of dimension-6 operators, whereas the quadratic terms in the Wilson coefficients, formally $O(\Lambda^{-4})$, are negligible. In contrast, the one-by-one LHC constraints on 4-fermion operators in Table 6 have in general a similar sensitivity to the linear and quadratic terms. Notice that this problem becomes much more severe in a global fit, and that in the particular case of the chirality-violating operators there is no interference at all. This may undermine the SMEFT $1/\Lambda^2$ expansion for generic UV completions, and it is not clear whether dimension-8 and higher operators can be neglected in the analysis. As discussed in Ref.
[108], in such a case the EFT is still valid for strongly coupled UV completions, where the dimension-6 squared terms are parametrically enhanced with respect to the dimension-8 contributions by a large new-physics coupling. On the other hand, for weakly coupled UV completions one should use the weaker LHC bounds obtained by truncating the $\sqrt s$ range of the analyzed data at some $M_{\rm cut}$ above which the SMEFT is no longer valid. For illustration, in Table 6 we show the analogous LHC constraints with $M_{\rm cut} = 1$ TeV (LHC$_{1.0}$) and $M_{\rm cut} = 0.7$ TeV (LHC$_{0.7}$). Another practical consequence of the domination of the quadratic terms at the LHC is that the likelihood for the Wilson coefficients is not approximately Gaussian. That means it is not fully characterized by the central values, 1-sigma errors, and the correlation matrix, as is the case for the low-energy observables. This makes the presentation of global fit results more cumbersome. Last, let us notice that the dilepton production cross section is also sensitive to SMEFT coefficients that are flavor non-diagonal in the quark bilinear if we go beyond the $V = 1$ approximation at order $\Lambda^{-2}$. This was exploited in Ref. [55] to set bounds on the Wilson coefficients of chirality-violating $\ell\ell 21$ operators.

Conclusions

This paper compiles information from a number of experiments sensitive to flavor-conserving LLQQ operators. The main focus is on experiments probing physics well below the weak scale, such as neutrino scattering on nucleon targets, atomic parity violation, parity-violating electron scattering on nuclei, and so on. Information from $e^+e^-$ collisions at center-of-mass energies around the weak scale is also included. This is combined with previous analyses studying 4-lepton operators and the strength of the $Z$ and $W$ boson couplings to matter. The ensemble of data is interpreted as constraints on heavy new physics encoded in tree-level effects of dimension-6 operators in the SMEFT. The main strength of this analysis is that we allow all independent operators to be simultaneously present with an arbitrary flavor structure. Another novelty is the inclusion of low-energy flavor constraints from pion, neutron, and nuclear decays, recently summarized in Ref. [55]. The leading renormalization group running effects from low energies to the weak scale are taken into account.

We obtain simultaneous constraints on 61 linear combinations of Wilson coefficients in the SMEFT. The results are presented as a multi-dimensional likelihood function, which is provided in a Mathematica notebook attached as supplemental material [56]. The likelihood can easily be projected onto more restricted new physics scenarios. As an illustration, we provide constraints on the SMEFT operators in the U(3)$^5$-symmetric scenario, and on the oblique parameters $S$, $T$, $W$, $Y$. The likelihood can be used to place limits on masses and couplings in a large class of theories beyond the SM whenever the mapping between these theories and the SMEFT is known.

Finally, a brief comparison of the sensitivity of low-energy experiments to LLQQ operators with that of the LHC is provided. For many directions in the SMEFT parameter space, dilepton production at the LHC is exploring virgin territory not constrained by previous experiments. This is especially true for the chirality-conserving $2\mu 2q$ operators, where $q$ are light quarks, while for the chirality-conserving $2e2q$ operators the LHC and low-energy probes are similarly sensitive.
It would be beneficial to recast the LHC dilepton results in the model-independent form of a global likelihood for the SMEFT Wilson coefficients. We leave this task for future publications. The SMEFT constraints summarized in this paper should be improved in the near future. Measurements of the differential Drell-Yan production cross sections at the LHC run-2 will provide a more powerful probe of LLQQ operators, thanks to the increased center-of-mass energy of the collisions and higher luminosities. Progress is imminent on the low-energy front as well, e.g. thanks to more precise measurements of low-energy electron scattering in the Q-weak, MOLLER and P2 experiments. In this paper we have stressed the importance of probing new physics in multiple low- and high-energy experiments. The huge number of independent SMEFT operators requires a rich and diverse set of observables in order to lift flat directions in the global likelihood. In fact, several poorly or not-at-all constrained directions in the SMEFT parameter space persist, as is evident from Eq. (4.8). This is especially true for operators involving the second and third generation quarks or the third generation leptons, but some flat directions involve the first generation fermions. The existence of these unexplored directions could be an inspiration to design new experiments and observables.

A Translation to Warsaw Basis

In this paper we parametrize the relevant part of the space of dimension-6 operators using an independent set of vertex corrections $\delta g$ and Wilson coefficients of 4-fermion operators. The latter are directly inherited from the Warsaw basis, such that the translation is trivial. The former are related to the Wilson coefficients of dimension-6 operators in the Warsaw basis by the linear transformation of Eq. (A.1), where $I_3$ is the 3x3 identity matrix in generation space. Using Eq. (A.1) one can easily recast the results of this paper as a likelihood for the Wilson coefficients in the Warsaw basis. See Ref. [59] for the dictionary between $\delta g$ and the Wilson coefficients in the SILH basis.

B More general approach to low-energy flavor observables

The low-energy flavor observables discussed in Ref. [55] also precisely probe 4-fermion operators with a strange quark. In the SMEFT framework the corresponding observables receive contributions from flavor off-diagonal dimension-6 operators, and in this paper we marginalized our likelihood over them. We also approximated the CKM matrix as $V = 1$ when acting on $O(\Lambda^{-2})$ terms in the Lagrangian. For completeness, in this appendix we provide the formalism that allows one to take into account the constraints from strange observables and to retrieve the terms suppressed by off-diagonal elements of the CKM matrix.

First, the effective low-energy Lagrangian in Eq. (2.4) is generalized such that it also includes charged currents with the strange quark ($s \to u\ell\nu_\ell$). At tree level, the low-energy parameters are related to the SMEFT parameters in direct analogy to the expressions above. In addition to $\tilde V_{ud}$ we also introduce the rescaled CKM matrix element parameter $\tilde V_{us}$. Both are distinct from the elements of the unitary matrix $V$, to which they are related by $V_{ud} = \tilde V_{ud}(1 + \delta V_{ud})$ and $V_{us} = \tilde V_{us}(1 + \delta V_{us})$. As before, $\tilde V_{ud}$ may be affected by new physics contributing to $\epsilon^{de}_S$ and should be treated as a free parameter in the fit. Ref. [55] obtained the corresponding constraints on the low-energy parameters in the $\overline{\rm MS}$ scheme at $\mu = 2$ GeV. Here $\Delta^s_L = \bar\epsilon^{s\mu}_L - \bar\epsilon^{se}_L$ and $\Delta^d_{LP} \approx \bar\epsilon^{de}_L - \bar\epsilon^{d\mu}_L + 24\,\epsilon^{d\mu}_P$.
The associated correlation matrix is given in Ref. [55]. We note that some entries in this matrix are very close to one, so it is crucial to take it into account.
Millimeter Wave and Sub-Terahertz Spatial Statistical Channel Model for an Indoor Office Building

Millimeter-wave (mmWave) and sub-Terahertz (THz) frequencies are expected to play a vital role in 6G wireless systems and beyond due to the vast available bandwidth of many tens of GHz. This paper presents an indoor 3-D spatial statistical channel model for mmWave and sub-THz frequencies based on extensive radio propagation measurements at 28 and 140 GHz conducted in an indoor office environment from 2014 to 2020. Omnidirectional and directional path loss models and channel statistics such as the number of time clusters, cluster delays, and cluster powers were derived from over 15,000 measured power delay profiles. The resulting channel statistics show that the number of time clusters follows a Poisson distribution and the number of subpaths within each cluster follows a composite exponential distribution for both LOS and NLOS environments at 28 and 140 GHz. This paper proposes a unified indoor statistical channel model for mmWave and sub-Terahertz frequencies following the mathematical framework of the previous outdoor NYUSIM channel models. A corresponding indoor channel simulator is developed, which can recreate 3-D omnidirectional, directional, and multiple input multiple output (MIMO) channels for arbitrary mmWave and sub-THz carrier frequencies up to 150 GHz, signal bandwidths, and antenna beamwidths. The presented statistical channel model and simulator will guide future air-interface, beamforming, and transceiver designs for 6G and beyond.

I. INTRODUCTION

Mobile data traffic is increasing rapidly throughout the world and is predicted to reach 77 exabytes per month by 2022 [1]. A large proportion of the data traffic increase comes from emerging indoor wireless applications such as 8K ultra-high-definition streaming, wireless cognition, and centimeter-level position location, which will be enabled by millimeter-wave (mmWave) and sub-Terahertz (THz) wireless systems due to the vast bandwidths available in 6G and beyond [2], [3]. Severe outdoor-to-indoor (O2I) penetration loss of up to 60 dB at mmWave frequencies is beneficial for deploying indoor mmWave systems isolated from outdoor co-channel cellular systems [4]. The 60 GHz band has been well studied in the literature [5]-[8] and used in the standards IEEE 802.11ad/ay for wireless local area networks (WLAN) [9], [10]. However, only a few indoor channel measurement and modeling works at other emerging frequencies, or across a vast swath of spectrum such as 28, 73, and 142 GHz, have been published [11], [12]. Accurate channel models over mmWave and sub-THz frequencies are needed for the design and evaluation of 6G wireless communications and beyond [2].

MmWave and THz (i.e., 30 GHz-3 THz) signals have distinct propagation characteristics from sub-6 GHz [13]. MmWaves do not diffract well and are more sensitive to dynamic blockage by humans due to the short wavelength [14], [15]. Directional, steerable, high-gain antennas with beamforming techniques are required to compensate for the additional path loss within the first meter of propagation distance as the carrier frequency increases [16]. Thus, time-variant directional channel models are vital for efficient beam tracking and selection algorithms and for proper system design and deployment guidelines.
The remainder of the paper is organized as follows. Section II provides a brief review of existing work on channel modeling in indoor environments at mmWave and THz frequencies. Section III describes the 28 and 142 GHz measurement systems used in this work and the indoor office environment, as well as the step-by-step measurement procedure. Section IV presents the directional and omnidirectional path loss data and resulting models, showing that similar path loss exponents were observed at 28 and 142 GHz in the NLOS environment. Section V introduces the 3-D spatial statistical channel impulse response (CIR) model for indoor office scenarios, and Section VI provides empirical statistics and distribution fitting of channel parameters derived from the 28 and 140 GHz measurement datasets in both line-of-sight (LOS) and non-line-of-sight (NLOS) environments. Simulated secondary channel statistics (i.e., root mean square (RMS) delay spread (DS) and RMS angular spread (AS)) are generated from the NYUSIM indoor channel simulator and compared with the measured values in Section VII, yielding good agreement. Finally, concluding remarks in Section VIII show that the number of time clusters follows a Poisson distribution and the number of subpaths within each cluster follows a composite exponential distribution for both LOS and NLOS environments at 28 and 140 GHz, but that the total number of observed subpaths at 140 GHz is much smaller than at 28 GHz.

II. EXISTING WORKS ON INDOOR CHANNEL MODELS

Numerous indoor channel measurements and studies have focused on sub-6 GHz and 60 GHz [6], [8], [17]-[28]. Saleh and Valenzuela conducted propagation measurements in an office building using radar-like pulses of 10 ns width at 1.5 GHz and observed that multipath components (MPCs) arrive in clusters [17]. A cluster-based statistical channel model was proposed, in which the cluster arrival times and the subpath arrival times within each cluster were Poisson distributed, and the expected cluster power and subpath power were modeled as exponentially decaying functions of the cluster arrival time and the subpath arrival time within each cluster, respectively. This modeling approach has been used extensively in the past few decades. Rappaport conducted propagation measurements at 1.3 GHz in factories and showed that MPCs arrived independently rather than in clusters for factories and open-plan buildings, which contain reflecting objects spread throughout the workspace [19]. Other indoor channel measurement and modeling efforts at mmWave frequencies started in the early 1990s, a majority of which were conducted at 60 GHz [6], [8], [20]-[28].

Standards documents such as IEEE 802.11ad/ay and 3GPP TR 38.901 present statistical channel models up to 100 GHz for indoor scenarios such as home, office, shopping mall, and factory [9], [10], [29]. The IEEE 802.11ad/ay channel models adopted a double-directional CIR model for 60 GHz with dual polarizations, based on field measurements and complementary ray-tracing simulations, which provides detailed temporal and angular channel statistics for conference room, cubicle, and living room environments [9], [10]. 3GPP TR 38.901 proposed a unified geometry-based statistical channel model for indoor and outdoor scenarios for frequencies from 0.5 to 100 GHz, where different scenarios have different values of the large-scale parameters (i.e., DS, AS, Rician K-factor, and shadow fading) required in the channel generation procedure [29].
THz communication systems will most likely be deployed in indoor environments to support extremely high data rates of over 100 Gbps [2]. Considering the particular characteristics of THz signals, such as high free space path loss (FSPL), high partition loss, and dynamic shadowing loss due to human and vehicle blockage, a deep understanding of THz radio propagation channels is critical for 6G and beyond [30]-[36]. First of all, atmospheric (molecular) absorption induces non-negligible path loss at THz frequencies, especially at several absorption peaks such as 170 and 325 GHz, where it causes about 100 dB/km of attenuation [37]. Thus, a frequency-dependent atmospheric absorption term $e^{-k(f)d}$ was introduced into the Friis formula [30], [31], where $k(f)$ is the atmospheric attenuation factor, and $f$ and $d$ are the carrier frequency and transmission distance, respectively. A temporal-spatial stochastic channel model for 275-325 GHz was established based on ray-tracing channel simulations including the LOS path and first- and second-order reflected paths in an office room [32]. This stochastic channel model can generate the channel transfer function, power delay profile (PDP), and angular power spectrum (APS); the adopted ray tracer was calibrated using vector network analyzer (VNA)-based measurements at 275-325 GHz in an office [32]. A generic multi-ray CIR model based on ray tracing, consisting of LOS, reflected, diffracted, and scattered paths, was proposed and used in channel capacity analysis [33]; the reflection, diffraction, and scattering coefficients used in this multi-ray channel model were calibrated by measurements conducted at 0.06-1 THz. Most of the existing channel models for THz frequencies were built upon free space, reflection, and scattering measurements for various materials and were constructed as a superposition of LOS, reflected, and scattered paths in a ray-tracing manner. Most propagation measurements were short-distance, within a few meters, and confined to a single room [30]-[32].

This paper derives empirical channel statistics based on extensive radio propagation measurements at 28 and 140 GHz conducted on an entire floor of an office building, and proposes a 3GPP-like indoor spatial statistical channel model following the mathematical framework of the NYUSIM outdoor channel models [38], which can generate directional and omnidirectional wideband CIRs from 28 to 140 GHz.
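A minimal sketch of the absorption-corrected link budget described above, with the FSPL term plus the molecular-absorption term $k(f)\,d$ expressed in dB; the attenuation coefficient used below is a placeholder assumption, not a measured value:

```python
import math

def path_loss_db(f_hz, d_m, k_db_per_km=0.0):
    """Free-space path loss plus a molecular-absorption term k(f) in dB/km.
    k is frequency dependent; ~100 dB/km is cited near the absorption peaks."""
    c = 3e8
    fspl = 20 * math.log10(4 * math.pi * d_m * f_hz / c)
    return fspl + k_db_per_km * d_m / 1000.0

# 140 GHz over 50 m; the absorption value here is a placeholder assumption.
print(path_loss_db(140e9, 50, k_db_per_km=2.0))  # ~109.4 dB
```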
III. 28 GHZ AND 140 GHZ WIDEBAND INDOOR CHANNEL MEASUREMENTS

The 28 and 140 GHz measurement campaigns were conducted in the identical environment, the NYU WIRELESS research center on the 9th floor of 2 MetroTech Center in downtown Brooklyn, New York, in 2014 and 2019. A wideband sliding correlation-based channel sounder system was used in both measurement campaigns, providing a broad dynamic range of measurable path loss (152 dB at 28 GHz and 145 dB at 140 GHz) [11], [39]. A wideband pseudorandom noise (PN) sequence of length 2047 was generated at baseband, upconverted to center frequencies of 28 and 142 GHz, and transmitted through a directional, steerable horn antenna at the transmitter (TX). The receiver (RX) captured the RF signal via an identical steerable horn antenna and downconverted and demodulated the RF signal into its baseband I and Q signals [40]. The demodulated signal was then correlated with a local copy of the transmitted PN sequence clocked at a slightly lower rate, which allowed the received signal to "slide" past the slower sequence [40]. An average PDP over 20 instantaneous PDPs was sampled by a high-speed oscilloscope and recorded for further analysis. The TX and RX antennas were mechanically steered by two electrically controlled gimbals with sub-degree accuracy in the azimuth and elevation planes, and were switched between vertical- and horizontal-polarization modes by a 90-degree waveguide twist for co- and cross-polarization studies. The 28 and 140 GHz channel sounder specifications are summarized in Table II. Null-to-null RF bandwidths of 800 MHz and 1 GHz were adopted in the 28 and 140 GHz measurement campaigns, resulting in MPC time resolutions of 2.5 ns and 2 ns, respectively [11], [39].

Omnidirectional channel statistics are often preferred in channel models and channel simulations since arbitrary antenna patterns can be added [38]. Thus, omnidirectional PDPs should be recovered from measured directional PDPs by aligning the measured PDPs with absolute time delays [38], [41], [42]. However, the 28 GHz channel sounder did not have precise synchronization between TX and RX and could not provide absolute timing information for measured PDPs, since the PDP recording was triggered at the first MPC arrival and only had excess time delay information. A 3-D ray tracer, NYURay, was employed to provide the time of flight (i.e., absolute time delay) of the first arriving MPC in a measured PDP, as explained in Section III-B. The 140 GHz channel sounder was equipped with rubidium standard references at both the TX and RX sides for frequency/timing synchronization [40]; however, we used the absolute time delays obtained from the 3-D ray tracer to synthesize omnidirectional PDPs for both the 28 and 140 GHz data for processing consistency.

A. Measurement Environment and Procedure

The measurements were conducted in a typical indoor office environment (65.5 m x 35 m x 2.7 m) with offices, conference rooms, classrooms, long hallways, open-plan cubicles, and elevators, as shown in Fig. 1. Common obstructions are desks, chairs, cubicle partitions, glass doors, and walls made of drywall with metal studs.
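To illustrate the sliding-correlation principle described above, the sketch below builds a length-2047 m-sequence with an 11-stage LFSR and checks its two-valued periodic autocorrelation (peak of 2047 at zero lag, -1 elsewhere), which is what lets the sounder resolve MPCs to roughly one chip duration (2-2.5 ns here). The generator polynomial $x^{11} + x^2 + 1$ is a common primitive choice assumed for illustration; the actual hardware polynomial is not specified in the text.

```python
import numpy as np

# Maximal-length PN sequence of length 2^11 - 1 = 2047 from an 11-bit LFSR.
def m_sequence(taps=(11, 2), nbits=11):
    state = [1] * nbits
    seq = []
    for _ in range(2**nbits - 1):
        seq.append(state[-1])                       # output bit
        fb = state[taps[0] - 1] ^ state[taps[1] - 1]  # feedback
        state = [fb] + state[:-1]                   # shift register
    return np.array(seq)

pn = 2.0 * m_sequence() - 1.0   # map {0,1} -> {-1,+1}
# Periodic autocorrelation: sharp peak at zero lag, -1 at all other lags.
corr = np.array([np.dot(pn, np.roll(pn, k)) for k in range(5)])
print(corr)                     # [2047. -1. -1. -1. -1.]
```

The sharp, thumbtack-like autocorrelation is what the sliding correlator exploits: each multipath arrival shows up as a separate correlation peak, spaced by its excess delay.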
Five TX locations and 33 RX locations were selected in the 28 GHz measurement campaign in 2014. Overall, measurements were conducted at nine LOS location pairs (i.e., pairs of TX and RX locations) and 35 NLOS location pairs, where the 3-D TX-RX (T-R) separation distances ranged from 3.9 m to 45.9 m. The identical five TX locations and a subset of the RX locations were measured in the 140 GHz measurement campaign in 2019 and 2020, owing to the limited maximum transmit power, resulting in nine LOS location pairs and 13 NLOS location pairs; the T-R separation distances ranged from 3.9 m to 39.2 m. In Fig. 1, TX and RX locations measured at both 28 GHz and 140 GHz are denoted as stars and circles [11], [43].

(Fig. 1: TX and RX locations measured at both 28 GHz and 140 GHz are denoted as stars and circles with checkerboard texture, respectively. RX locations only measured at 28 GHz are denoted as solid circles. Each of the five TX locations is denoted in a different color, and the RX locations paired with a TX location are denoted in the same color [11].)

For each T-R location pair, eight unique antenna azimuth sweeps were measured to investigate the spatial statistics of arrival and departure: six RX antenna azimuth sweeps and two TX antenna azimuth sweeps were performed. During each sweep, the TX (RX) horn antenna was rotated in step increments of the antenna half-power beamwidth (HPBW) [11], so that the directional measurements could emulate channel measurements using omnidirectional antennas. The detailed description of each measurement sweep is listed in Table III.

TABLE III: Description of the eight azimuthal sweeps measured at each T-R location pair.
1) RX sweep: The TX and RX antennas were pointed directly towards each other on boresight in both the azimuth and elevation planes (for LOS or NLOS environments). The RX antenna was then swept in the azimuth plane in steps of HPBW, for a fixed TX antenna at the boresight azimuth and elevation angles.
2) RX sweep: With respect to the boresight angle in elevation, the RX antenna was uptilted by HPBW and then swept in the azimuth plane in steps of HPBW, for a fixed TX antenna at the boresight azimuth and elevation angles.
3) RX sweep: With respect to the boresight angle in elevation, the RX antenna was downtilted by HPBW and then swept in the azimuth plane in steps of HPBW, for a fixed TX antenna at the boresight azimuth and elevation angles.
4) RX sweep: With respect to the boresight angle in elevation, the TX antenna was uptilted by HPBW. The RX antenna was fixed at the boresight elevation angle and then swept in the azimuth plane in steps of HPBW.
5) RX sweep: With respect to the boresight angle in elevation, the TX antenna was downtilted by HPBW. The RX antenna was fixed at the boresight elevation angle and then swept in the azimuth plane in steps of HPBW.
6) TX sweep: The TX and RX antennas were pointed directly towards each other on boresight in both the azimuth and elevation planes. The TX antenna was then swept in the azimuth plane in steps of HPBW, for a fixed RX antenna at the boresight azimuth and elevation angles.
7) RX sweep: An RX sweep with the TX antenna set to the second-strongest AOD in the azimuth and elevation planes. The second-strongest AOD was determined by comparing the signal level from all AODs during Measurement 6, except for the angles corresponding to the main angle of arrival. The RX antenna was fixed at the boresight elevation angle and then swept in steps of HPBW in the azimuth plane.
8) TX sweep: The second TX sweep, with the TX antenna either uptilted or downtilted by HPBW after determining the elevation plane with the strongest received power from Measurements 4 and 5. The RX antenna was pointed towards the initial boresight azimuth and elevation angles, and the TX antenna was uptilted or downtilted by HPBW and then swept in steps of HPBW in the azimuth plane.

The equivalent omnidirectional received power can be synthesized by summing the received powers from all measured unique pointing angles obtained at antenna HPBW step increments in both planes [41]. The sweeping step was equal to the antenna HPBW (30° for 28 GHz and 8° for 140 GHz), which corresponded to 12 and 45 rotation steps over the complete azimuth plane, respectively. At each rotation step, an averaged PDP over 20 instantaneous PDPs with accurate excess timing information, with time resolutions of 2.5 and 2 ns, was recorded at 28 GHz and 140 GHz, respectively. Note that two antenna polarization configurations, vertical-to-vertical (V-V) and vertical-to-horizontal (V-H), were measured using the identical procedure described above, resulting in 16 measurement sweeps in total at each unique T-R location pair. This paper mainly focuses on the co-polarized (V-V) configuration to develop the omnidirectional and directional indoor channel models. For each T-R location pair, at most 96 (= 8 x 12) directional PDPs at 28 GHz and 360 (= 8 x 45) at 140 GHz were acquired with the V-V polarization configuration.
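The omnidirectional power synthesis described above amounts to summing the directional received powers in linear units. A minimal sketch with made-up directional powers:

```python
import numpy as np

# Synthesize the omnidirectional received power by summing, in linear units,
# the powers measured at all unique antenna pointing angles (per [41]).
# The directional powers below are made-up placeholders in dBm.
directional_dbm = np.array([-78.0, -85.5, -92.1, -81.3])
p_omni_mw = np.sum(10 ** (directional_dbm / 10.0))   # dBm -> mW, then sum
p_omni_dbm = 10 * np.log10(p_omni_mw)
print(round(p_omni_dbm, 2))   # ~ -75.8 dBm, dominated by the strongest beam
```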
B. Synthesizing Omnidirectional PDPs

A 3-D mmWave ray-tracing software, NYURay [44], was used to predict the possible propagating rays between the TX and RX and to provide the time of flight (i.e., the absolute time delay) of the first arriving MPC of a measured directional PDP. Since the horn antennas had beamwidths of 30° and 8° at 28 GHz and 140 GHz, the exact angles of departure and arrival of MPCs were unknown. Each measured directional PDP was assigned to the predicted ray closest to it in space, and the absolute time delay of the first arriving MPC of the PDP was set to the time of flight of the corresponding predicted ray. Directional PDPs were then aligned in the temporal domain and summed to generate an omnidirectional PDP. A measured PDP is "closest in space" to the predicted ray for which the antenna gain in the direction of the ray, when the antenna points in the direction of the measured PDP, is the highest among all predicted rays:

$\hat{p} = \arg\max_{p\in\mathcal{P}}\left[G_\phi(\Delta\phi_{AOD}) + G_\theta(\Delta\theta_{ZOD}) + G_\phi(\Delta\phi_{AOA}) + G_\theta(\Delta\theta_{ZOA})\right],$  (1)

where $\mathcal{P}$ is the set of predicted rays from NYURay, and $\Delta\phi_{AOD}$, $\Delta\theta_{ZOD}$, $\Delta\phi_{AOA}$, $\Delta\theta_{ZOA}$ denote the absolute differences of the azimuth angle of departure (AOD), zenith angle of departure (ZOD), azimuth angle of arrival (AOA), and zenith angle of arrival (ZOA) between the measured directional PDP and a predicted ray, respectively. $G_\phi$ and $G_\theta$ represent the antenna patterns (in dB) in the azimuth and elevation planes, respectively. The gain at the peak of the antenna main lobe is normalized to 0 dB; thus, the antenna gain is -3 dB when the angle difference is 1/2 HPBW from the peak of the main lobe.

MPCs recorded in different directional PDPs may have originated from the same predicted ray, in which case each measured MPC was an antenna-gain-weighted version of the true MPC. Thus, the directional PDPs assigned to the same predicted ray were summed in power to generate a partial omnidirectional PDP for MPC extraction, avoiding double counting. The direction of an extracted MPC was assumed to be the direction of the measured directional PDP closest to the predicted ray in space. An omnidirectional PDP was recovered, and MPCs were extracted, by applying this procedure to all directional PDPs measured at each T-R location pair. Due to mismatches between the measured PDPs and predicted rays at a few locations, we recovered omnidirectional PDPs for 37 of 44 location pairs at 28 GHz and 20 of 22 location pairs at 140 GHz.
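A simplified sketch of the closest-ray assignment of Eq. (1). The quadratic main-lobe gain model (exactly -3 dB at an offset of HPBW/2) and the example angles are assumptions for illustration; they are not NYURay internals.

```python
import numpy as np

# Score each predicted ray by the combined antenna gain (in dB) evaluated at
# the angular offsets between the measured pointing direction and the ray.
def gain_db(offset_deg, hpbw_deg):
    # Quadratic (Gaussian-like, in dB) main lobe: -3 dB at offset = HPBW/2.
    return -3.0 * (2.0 * offset_deg / hpbw_deg) ** 2

def closest_ray(pdp_angles, rays, hpbw=30.0):
    """pdp_angles, rays: tuples of (AOD_az, ZOD, AOA_az, ZOA) in degrees."""
    scores = [sum(gain_db(abs(p - r), hpbw) for p, r in zip(pdp_angles, ray))
              for ray in rays]
    return int(np.argmax(scores))

rays = [(10, 90, 200, 88), (40, 92, 170, 95)]    # hypothetical NYURay output
print(closest_ray((12, 90, 195, 90), rays))      # -> 0 (first ray is closest)
```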
RX sweep With respect to the boresight angle in elevation, the RX antenna was uptilted by HPBW and then swept in the azimuth plane in steps of HPBW, for a fixed TX antenna at the boresight azimuth and elevation angles. RX sweep With respect to the boresight angle in elevation, the RX antenna was downtilted by HPBW and then swept in the azimuth plane in steps of HPBW, for a fixed TX antenna at the boresight azimuth and elevation angles. RX sweep With respect to the boresight angle in elevation, the TX antenna was uptilted by HPBW.The RX antenna was fixed at the boresight elevation angle, and then swept in the azimuth plane in steps of HPBW. RX sweep With respect to the boresight angle in elevation, the TX antenna was downtilted by HPBW.The RX antenna was fixed at the boresight elevation angle, and then swept in the azimuth plane in steps of HPBW. TX sweep The TX and RX antennas were pointed directly towards each other on boresight in both the azimuth and elevation planes.The TX antenna was then swept in the azimuth plane in steps of HPBW, for a fixed RX antenna at the boresight azimuth and elevation angles. RX sweep This measurement was an RX sweep with the TX antenna set to the second strongest AOD in the azimuth and elevation plane.The second strongest AOD was determined by comparing the signal level from all the AODs during Measurement 6, except for the angles corresponding to the main angle of arrival.The RX antenna was fixed at the boresight elevation angle and then swept in steps of HPBW in the azimuth plane. TX sweep This measurement corresponds to the second TX sweep with TX antenna either uptilted or downtilted by HPBW after determining the elevation plane with the strongest received power from Measurement 4 and Measurement 5 during measurements.The RX antenna was pointed towards the initial boresight azimuth and elevation angles, and the TX was uptilted or downtilted by HPBW, and then swept in steps of HPBW in the azimuth plane. been proven to be superior for modeling path loss over many environments and frequencies [45].PL CI represents the path loss in dB scale, which is a function of distance and frequency: where n denotes the path loss exponent (PLE), and χ σ is the shadow fading (SF) that is commonly modeled as a lognormal random variable with zero mean and σ standard deviation in dB.d is the 3-D T-R separation distance.d 0 is the reference distance, and FSPL(f, d 0 ) = 20 log 10 (4πd 0 c/f ).The CI path loss model uses the FSPL at d 0 = 1 m as an anchor point and fits the measured path loss data with a straight line controlled by a single parameter n (PLE) obtained via the minimum mean square error (MMSE) method.Throughout this paper, LOS and NLOS locations are defined according to whether the TX and RX can see each other.Here, for the directional path loss modeling, we define the LOS direction for LOS locations as the direction when the TX and RX directional antennas are pointed directly to each other.The LOS direction can be calculated based on the relative position of the TX and RX -the LOS direction is along the line of bearing between the TX and RX.The NLOS-Best direction is only defined for NLOS locations, and represents the best pointing direction for which minimum path loss is measured, which can be found by thoroughly rotating TX and RX directional antennas in the 3D space.The NLOS direction is defined for both LOS and NLOS locations, and represent all pointing directions which received detectable powers other than the LOS and NLOS-Best directions [11], [46].Fig. 
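Because the CI model has the single parameter $n$, the MMSE fit reduces to closed-form least squares against the 1 m FSPL anchor. A sketch with made-up 142 GHz path loss samples:

```python
import numpy as np

# MMSE fit of the CI-model PLE: n minimizes
# sum_i (PL_i - FSPL(f, 1 m) - 10 n log10(d_i))^2, which has a closed form.
def fit_ple(d_m, pl_db, f_hz):
    c = 3e8
    fspl_1m = 20 * np.log10(4 * np.pi * 1.0 * f_hz / c)   # d0 = 1 m anchor
    x = 10 * np.log10(d_m)
    return np.sum((pl_db - fspl_1m) * x) / np.sum(x**2)

d  = np.array([4.0, 10.0, 22.0, 39.0])     # made-up T-R distances in meters
pl = np.array([82.0, 93.0, 103.0, 110.0])  # made-up path losses in dB
print(round(fit_ple(d, pl, 142e9), 2))     # fitted PLE, ~2.0 here
```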
Fig. 2 shows the directional CI path loss model using measured path loss data at 28 GHz and 142 GHz [11]. The path loss in the LOS direction is represented by green circles for the LOS locations, and the path loss in the NLOS-Best direction is represented by blue diamonds for the NLOS locations. Measurements pointed in other directions are denoted by red crosses as NLOS directions, for both LOS and NLOS locations. Comparing Fig. 2a and Fig. 2b, the LOS PLEs at 28 and 142 GHz are 1.7 and 2.1, respectively, a difference that may be due to the different antenna HPBWs (30° and 8°): wider beamwidths may capture more energy through reflection and scattering in the vicinity of the LOS direction, causing a PLE of less than 2. Furthermore, the NLOS-Best PLEs at 28 and 142 GHz are both about 3.0, suggesting that strong NLOS paths are available to provide a sufficient link margin and can be leveraged by intelligent reflecting surfaces [47].

B. Omnidirectional Path Loss Modeling

Even though the directional path loss model will be widely used in future wireless system deployment, the omnidirectional path loss model is fundamental and serves as a reference model in various standards documents [10], [29]. In Fig. 3, we present the omnidirectional path loss data and the fitted CI path loss model. The omnidirectional path loss is synthesized from the received powers from all directions measured in 3-D space [41]. Fig. 3 marks LOS and NLOS scenarios in green and blue, respectively. The LOS PLEs at both frequencies are lower than 2.0, where 28 GHz shows a surprisingly low PLE of 1.2, which can be attributed to the waveguide effect in some corridor measurement locations. Note that both 28 and 142 GHz have a comparable PLE of about 2.7 in the NLOS environment, indicating that signal power drops equally versus distance after the first meter in the mmWave band at 28 GHz and the sub-THz band at 140 GHz [11], [16].
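The omnidirectional path loss synthesis mentioned above can be approximated by combining the de-embedded directional received powers in the linear domain, as sketched below. This simplified version assumes the antenna gains have already been removed and ignores the ray-assignment step used earlier to avoid double counting overlapping beams, so it only approximates the full procedure.

import numpy as np

def omni_path_loss_db(dir_path_loss_db):
    """Combine directional path losses (dB, antenna gains removed) from all
    unique pointing directions in the linear power domain."""
    gains = 10.0 ** (-np.asarray(dir_path_loss_db) / 10.0)  # linear channel gains
    return -10.0 * np.log10(np.sum(gains))

# Three detectable directions at one NLOS location (illustrative values).
print(omni_path_loss_db([110.0, 118.0, 125.0]))  # omni PL < best directional PL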
V. 3-D SPATIAL STATISTICAL CHANNEL MODEL

For any wireless propagation channel, a received signal can be viewed as a superposition of multiple replicas of the transmitted signal with different delays and angles [48]. An extended S-V channel model [17] has commonly been used to represent the double-directional channel in 3-D space [8], [29]. MPCs were observed to arrive in clusters in the delay and angular domains in the 28 GHz and 140 GHz indoor channel measurements, agreeing with many early works [8], [17]. Current standards documents such as the 3GPP TR 38.901 channel model define a cluster as a group of MPCs closely spaced in the joint temporal-spatial domain, where each cluster represents a reflector or a scatterer in the environment [8], [9], [29].

We observed in the measurements that MPCs traveling close in time may arrive from very different directions, due to symmetric structures in the environment such as hallways [13], [40]. Conversely, MPCs arriving from a similar direction may have very different propagation times. A time cluster spatial lobe (TCSL) approach was therefore introduced to characterize the temporal and angular domains separately [38]. A time cluster (TC) comprises MPCs traveling close in time but arriving from potentially different directions. A spatial lobe (SL) represents a main direction of arrival or departure in which MPCs can arrive over hundreds of nanoseconds [38].

Both modeling methodologies are valid: the 3GPP model is more widely used, while the NYUSIM model using TCSL has a more straightforward and physically-based structure [49], [50]. A performance evaluation of spectrum efficiency, coverage, and hardware/signal-processing requirements for the 3GPP and NYUSIM channel models was provided in [51].

The cluster-based omnidirectional CIR can be written as

h_omni(t, Θ, Φ) = Σ_{n=1}^{N} Σ_{m=1}^{M_n} a_{m,n} e^{jφ_{m,n}} δ(t − τ_{m,n}) δ(Θ − Θ_{m,n}) δ(Φ − Φ_{m,n}),   (3)

where t is the absolute propagation time, Θ = (φ_AOD, θ_ZOD) is the AOD vector, and Φ = (φ_AOA, θ_ZOA) is the AOA vector. N and M_n denote the number of TCs and the number of subpaths within each TC, respectively. For the mth subpath in the nth TC, a_{m,n}, φ_{m,n}, τ_{m,n}, Θ_{m,n}, and Φ_{m,n} represent the magnitude, phase, absolute time delay, AOD vector, and AOA vector, respectively. Note that MPC and subpath are used interchangeably. The PDP and APS can be obtained by integrating the square of the CIR over the space and time domains, respectively.

The PDP and APS can be easily partitioned based on TCs and SLs, respectively (see Fig. 10 and Fig. 11 in [38]). The partition in the time domain is realized by defining a minimum inter-cluster time void interval (MTI). Two sequentially recorded MPCs belong to two distinct TCs if the difference of their excess time delays exceeds the MTI; these two MPCs are then considered the last MPC of the former TC and the first MPC of the latter TC, respectively. For example, 25 ns was used as the MTI for an outdoor urban microcell (UMi) environment [38], while 6 ns is used as the MTI in this paper for the indoor office (InO) environment, since the width of a typical hallway in the measured indoor office environment is about 1.8 m (i.e., ~6 ns propagation delay).

The partition in the space domain is realized by defining a spatial lobe threshold (SLT) [38]. The angular resolution of the measured APS depends on the antenna HPBW (30° and 8° for 28 GHz and 140 GHz, respectively). A linear interpolation of the directional received powers in the azimuth and elevation planes with 1° resolution was used to reconstruct the 3-D spatial distribution of the received power. A power segment is generated for every 1° direction in 3-D space, and neighboring power segments above the SLT form an SL. The SLT was set to -15 dB below the maximum directional power in the APS.
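A sketch of the MTI-based time-cluster partition just described: MPC excess delays are scanned in order, and a new TC starts whenever the gap between consecutive MPCs exceeds the MTI (6 ns for this indoor office environment). The function name and example delays are illustrative.

def partition_time_clusters(delays_ns, mti_ns=6.0):
    """Group sorted MPC excess delays (ns) into time clusters using the MTI rule."""
    delays = sorted(delays_ns)
    clusters, current = [], [delays[0]]
    for d in delays[1:]:
        if d - current[-1] > mti_ns:   # inter-MPC gap beyond MTI -> new TC
            clusters.append(current)
            current = [d]
        else:
            current.append(d)
    clusters.append(current)
    return clusters

# Example: 7 MPCs forming 3 TCs with a 6 ns MTI.
print(partition_time_clusters([0, 2, 3, 15, 17, 40, 41]))
# -> [[0, 2, 3], [15, 17], [40, 41]]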
As defined in [38], primary statistics such as the number of TCs and SLs, cluster delays, and cluster powers are used in the channel generation procedure given in Section VI. Secondary statistics such as the RMS DS and RMS AS are not required in the channel generation but are necessary for channel validation. The presented channel model will be validated in Section VII by showing that the simulated and measured secondary statistics yield good agreement.

VI. STATISTICS OF CHANNEL GENERATION PARAMETERS

As described in Section V, temporal and spatial channel parameters are extracted from the measured PDP and APS. The temporal parameters are the number of TCs (N) and of subpaths in a TC (M_n), the TC excess delay (τ_n) and intra-cluster subpath excess delay (ρ_{m,n}), and the TC power (P_n) and subpath power (Π_{m,n}). The spatial parameters are the number of SLs (L), the mean azimuth and elevation angles of an SL (φ and θ), and the azimuth and elevation angular offsets of a subpath (∆φ and ∆θ) with respect to the mean angle of the SL.

Since the 140 GHz measurement locations are a subset of the 28 GHz measurement locations, a 28 GHz common set was created out of the 28 GHz all set to enable a fair comparison with the 140 GHz dataset (referred to as the 140 GHz common set below). The 28 GHz common set and the 140 GHz common set have identical TX-RX location pairs. Table IV presents the channel parameters required for the channel generation procedure, and Table V provides statistics of channel parameters derived from the 28 GHz all set, 28 GHz common set, and 140 GHz common set, for LOS and NLOS scenarios. [Table IV residue; recoverable entries: Step 2 — number of cluster subpaths M_n, with M_n ~ DU(1, M_s) (outdoor) or M_n − 1 ~ (1 − β)δ + β·DE(μ_s) (indoor); Step 3 — cluster delay τ_n (ns); Step 4 — intra-cluster delay ρ_{m,n} ~ Exp(μ_ρ), m = 1, 2, ..., M_n, n = 1, 2, ..., N; Step 5 — cluster power P_n (mW), with shadowing Z_n ~ N(0, σ_Z); Step 6 — subpath power Π_{m,n} (mW), with shadowing U_{m,n} ~ N(0, σ_U); Step 7 — subpath phase φ ~ Uniform(0, 2π); Step 9 — spatial lobe mean angles φ_i, θ_i (°); Step 10 — subpath angle offsets ∆φ_i, ∆θ_i (°).]

The 28 GHz channel has about three more TCs than the 140 GHz channel in both the NLOS and LOS scenarios, which can be attributed to the higher partition loss at 140 GHz (e.g., 4-8 dB higher than at 28 GHz for different materials [43]). The channel sparsity at 140 GHz should be considered in channel estimation and beamforming algorithms for sub-THz frequencies. The NLOS scenario has about one more TC than the LOS scenario. Note that the Poisson distribution of the number of TCs for the indoor scenario differs from the uniform distribution used for the outdoor scenario, as given in Table IV [38].
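A sketch of the Poisson model for the number of TCs noted above (and detailed in Section VI-A-1): since N − 1 is fitted, the MLE of the Poisson parameter λ_c is simply the sample mean of N − 1, and generated values are shifted back by one. The measured counts below are invented for illustration.

import numpy as np

rng = np.random.default_rng(1)

def fit_lambda_c(num_tcs):
    # MLE for N - 1 ~ Poisson(lambda_c): the sample mean of N - 1.
    return float(np.mean(np.asarray(num_tcs) - 1))

def draw_num_tcs(lambda_c, size, rng=rng):
    # Shift by one so the generated number of TCs is at least one.
    return rng.poisson(lambda_c, size) + 1

measured_n = [2, 4, 3, 5, 1, 4, 3]         # illustrative per-location TC counts
lam = fit_lambda_c(measured_n)
print(lam, draw_num_tcs(lam, 5))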
2) Number of Cluster Subpaths: The number of cluster subpaths M_n is negatively correlated with the number of TCs, depending on the MTI: a larger MTI causes fewer TCs and more subpaths within each TC, and vice versa. Fig. 5 presents the empirical histograms for the three datasets, indicating that the number of cluster subpaths is close to exponentially distributed. The exponential distribution is continuous and starts from zero, while the number of subpaths is discrete and starts from one. Thus, a discrete exponential (DE) distribution is applied to fit the empirical histogram of M̄_n = M_n − 1.

Fig. 5a shows that about half of the measured TCs have only one subpath at 28 GHz, making a simple DE distribution unsuitable. We therefore propose a composite distribution with a δ-function at M̄_n = 0 and a DE distribution,

P(M̄_n = k) = (1 − β) δ(k) + β · DE(k; μ_s),

where μ_s is the mean of the DE distribution and β is the weight of the DE distribution in the composite distribution. By maximizing the joint probability mass function (PMF) of all data samples over β and μ_s simultaneously, the MLEs of μ_s and β are 5.3 and 0.7 for the 28 GHz NLOS all set, respectively. The identical composite distribution for the 28 GHz LOS all set gives μ_s = 3.7, β = 0.7, suggesting that the NLOS scenario forms larger clusters than the LOS scenario. The composite distribution yields good agreement with the empirical histograms of the 28 GHz LOS and NLOS scenarios. Moreover, the large TCs with more than 25 subpaths were mainly from locations in the corridor environment (e.g., TX4 and RX15). Fig. 5b shows that a simple DE distribution matches the empirical histograms of the 140 GHz LOS and NLOS common sets, since the optimal β_NLOS = 1. The mean value of M̄_n at 140 GHz for both LOS and NLOS scenarios is about 1, suggesting that the LOS and NLOS scenarios have similar cluster sizes, containing about two subpaths on average. The clusters at 140 GHz are much smaller than the clusters at 28 GHz, indicating that the 140 GHz channel is much sparser than the channel at 28 GHz. A detailed comparison of channel parameters between the 28 GHz and 140 GHz common sets can be found in Table V.
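One plausible reading of the composite subpath model above is sketched below: with probability 1 − β the cluster has a single subpath (M̄_n = 0), otherwise M̄_n is drawn from a discrete exponential. Realizing the DE by flooring continuous exponential draws is an assumption made for illustration; the paper fits β and μ_s jointly by MLE rather than sampling.

import numpy as np

rng = np.random.default_rng(7)

def draw_subpath_counts(beta, mu_s, size, rng=rng):
    """Sample M_n from M_n - 1 ~ (1-beta)*delta(0) + beta*DE(mu_s).
    The DE is realized by flooring Exp(mu_s) draws (one possible discretization)."""
    m_bar = np.zeros(size, dtype=int)
    use_de = rng.random(size) < beta                  # clusters using the DE branch
    m_bar[use_de] = np.floor(rng.exponential(mu_s, use_de.sum())).astype(int)
    return m_bar + 1                                  # at least one subpath per TC

# 28 GHz NLOS all-set values reported above: mu_s = 5.3, beta = 0.7.
samples = draw_subpath_counts(0.7, 5.3, 10000)
print(samples.mean(), (samples == 1).mean())  # mean size, fraction of single-subpath TCs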
3) Inter-cluster Excess Delay: The cluster excess delay τ_n is defined as the time difference between the first arriving subpath in the PDP and the first arriving subpath in a cluster. As given in Table IV, it can be generated as

τ_n = τ_{n−1} + ρ_{M_{n−1}, n−1} + ∆τ_n + MTI,

where τ_{n−1} is the cluster excess delay of the former cluster, ρ_{M_{n−1},n−1} is the intra-cluster excess delay of the last subpath in the former cluster, and ∆τ_n is the inter-cluster excess delay without the MTI (i.e., 6 ns). The empirical cumulative distribution functions (CDFs) of the inter-cluster delay for the LOS and NLOS scenarios of the 28 GHz all set are shown in Fig. 6. An exponential distribution with mean 10.9 ns fits the 28 GHz NLOS scenario, while a lognormal distribution with mean 2.1 ns and standard deviation 1.6 ns fits the 28 GHz LOS scenario well, since a few clusters with long cluster delays were observed in the LOS corridor environment.

Inter-cluster delays at 140 GHz can be well fitted using an exponential distribution for both LOS and NLOS scenarios, with mean values of 14.6 ns and 21.0 ns, respectively. The distributions for the 28 GHz and 140 GHz LOS scenarios are different (lognormal and exponential), likely due to the higher partition loss and the smaller measurable range at 140 GHz. In addition, clusters with large inter-cluster delays were mainly observed in the corridor environment due to the waveguide effect, indicating that the corridor scenario may be considered a distinct indoor scenario requiring more channel measurements for accurate characterization.

4) Intra-cluster Excess Delay: The intra-cluster excess delay is defined as the time difference between the first arriving subpath and the targeted arriving subpath within the same TC. As shown in Fig. 7, an exponential distribution shows good agreement with the empirical CDFs for the 28 GHz LOS and NLOS scenarios, where the mean intra-cluster excess delay is 3.4 ns and 22.7 ns for the 28 GHz LOS and NLOS all sets, respectively, suggesting that a larger intra-cluster delay is usually observed in the NLOS scenario.

5) Cluster Power and Subpath Power: Cluster power is defined as the sum of the subpath powers in the cluster. The cluster power normalized by the total received power in the PDP can be well modeled by an exponentially decaying function of the cluster excess delay with a lognormal shadowing term, as given in Table IV:

P̄_n = P̄_0 exp(−τ_n / Γ) · 10^{Z_n / 10},   Z_n ~ N(0, σ_Z),   n = 1, 2, ..., N,

where P̄_0 is the mean power in the first arriving TC, Γ is the cluster decay time constant, and Z_n is a lognormal (normal in dB scale) shadowing term for the cluster power with zero-dB mean and standard deviation σ_Z. The cluster powers P_n are then rescaled so that their sum equals the total omnidirectional received power P_r. The normalized cluster powers measured in the 28 GHz NLOS scenario are shown in Fig. 8, where P̄_0 is 0.68 and Γ is 23.6 ns, indicating that the expected first cluster occupies about 68% of the total received power and that the expected cluster power is less than 34% of the total received power when the cluster excess time delay is over 23.6 ns.

Similarly, the subpath power normalized by the cluster power can be modeled as an exponentially decaying function of the intra-cluster excess delay, as given in Table IV:

Π̄_{m,n} = Π̄_0 exp(−ρ_{m,n} / γ) · 10^{U_{m,n} / 10},   U_{m,n} ~ N(0, σ_U),   m = 1, 2, ..., M_n,

where Π̄_0 is the mean power of the first arriving subpath in a TC, γ is the subpath decay time constant, and U_{m,n} is a lognormal shadowing term for the subpath power with zero-dB mean and standard deviation σ_U. Fig. 9 shows that the first subpath in a cluster carries about 42% of the cluster power on average, suggesting a relatively large RMS intra-cluster DS; the expected subpath power is less than 21% of the cluster power when the intra-cluster excess time delay is over 9.2 ns.
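A sketch of the cluster-power construction above: mean powers decay exponentially in cluster excess delay, lognormal shadowing is applied, and the result is renormalized to the total received power. The shadowing value σ_Z = 3 dB and the delays are placeholders, not measured values; subpath powers follow the same construction with Π̄_0, γ, and σ_U.

import numpy as np

rng = np.random.default_rng(3)

def cluster_powers(tau_ns, p0, gamma_ns, sigma_z_db, total_mw, rng=rng):
    """Exponentially decaying mean power vs. cluster delay with lognormal shadowing,
    rescaled so the cluster powers sum to the total received power (mW)."""
    z = rng.normal(0.0, sigma_z_db, len(tau_ns))           # shadowing in dB
    raw = p0 * np.exp(-np.asarray(tau_ns) / gamma_ns) * 10 ** (z / 10)
    return raw / raw.sum() * total_mw

# 28 GHz NLOS values reported above: P0 = 0.68, Gamma = 23.6 ns.
tau = [0.0, 12.0, 30.0, 55.0]
print(cluster_powers(tau, p0=0.68, gamma_ns=23.6, sigma_z_db=3.0, total_mw=1.0))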
B. Spatial Channel Parameters

1) The Number of Spatial Lobes: An SL represents a main direction of arrival or departure. The angular resolution of the measured APS depends on the antenna HPBW, which was 30° and 8° in the 28 and 140 GHz measurements, respectively. A linear interpolation of the measured directional powers with 1° angular resolution in the azimuth and elevation planes was implemented to model the 3-D spatial distribution of the received power, and the SLT was set to -15 dB below the peak power. Measurement results show that there are at most two main directions of arrival in the azimuth plane, except for a few NLOS locations measured at 28 GHz where three main directions of arrival were observed, as shown in Fig. 10. Thus, a simple DU distribution is used to characterize the number of spatial lobes, as given in Table IV.

2) Mean Direction of Spatial Lobes: Each SL has a mean direction in the azimuth and elevation planes. A simple partition can be applied to generate the azimuth mean direction of an SL by equally dividing the azimuth plane into several sectors, each of which corresponds to an SL. The elevation mean direction of an SL is modeled as a normal random variable N(μ_l, σ_l), as given in Table IV, where θ_i is defined with respect to the horizontal plane. Considering that the TX height is usually higher than the RX height in a downlink (base station to mobile device) setting, μ_l of the ZOD is typically negative and μ_l of the ZOA is typically positive, representing that the ZOD and ZOA point below and above the horizon, respectively.

3) Subpath Angular Offset: For each spatial lobe, the RMS lobe AS is extracted from the partitioned AOA and AOD APS. A generated subpath is randomly assigned to one of the generated SLs, and the angles of this subpath (i.e., AOD, ZOD, AOA, and ZOA) are calculated by adding angular offsets to the mean angle of the SL. The angular offset follows a normal distribution with zero mean and a standard deviation equal to the median of the measured RMS lobe AS, as given in Table IV. This angular offset generation deviates from the 3GPP TR 38.901 channel model, in which the angular offsets of the 20 MPCs in a cluster are constant [29].
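A sketch of the subpath-angle generation just described: each subpath is assigned to a random SL and receives a normal angular offset around the lobe's mean direction, with a standard deviation set to the median measured RMS lobe AS. The lobe directions and the 5° lobe AS below are placeholders.

import numpy as np

rng = np.random.default_rng(5)

def subpath_angles(num_subpaths, lobe_means_deg, sigma_offset_deg, rng=rng):
    """Assign each subpath to a random spatial lobe and add a N(0, sigma) azimuth
    offset to the lobe's mean direction (wrapped to [0, 360))."""
    lobes = rng.integers(0, len(lobe_means_deg), num_subpaths)
    offsets = rng.normal(0.0, sigma_offset_deg, num_subpaths)
    return (np.asarray(lobe_means_deg)[lobes] + offsets) % 360.0

# Two AOA lobes at 40 and 220 degrees; placeholder lobe AS of 5 degrees.
print(subpath_angles(6, [40.0, 220.0], 5.0))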
C. Discussions

Each temporal and spatial channel parameter discussed above is generally well fitted by the same family of distribution at 28 and 140 GHz, but the parameter values for the two frequencies are quite different. The channel at 140 GHz has fewer time clusters and fewer subpaths within each cluster than the channel at 28 GHz. Greater partition loss and higher path loss in the first meter of propagation distance at 140 GHz cause a smaller signal propagation range (the difference in maximum measurable path loss between the two frequencies has been accounted for); thus, some RX locations that could receive signals at 28 GHz were in outage at 140 GHz.

VII. SIMULATION RESULTS

The statistical channel model presented in Section VI was implemented in an indoor channel simulator based on the NYUSIM outdoor channel simulator, to investigate the accuracy of the simulated temporal and spatial statistics by comparison with the measured statistics. Note that the parameters listed in Table V are primary statistics used in the channel parameter generation procedure. The metrics used in this section for channel validation are secondary statistics such as the RMS DS and RMS AS, which are not explicitly used in the channel generation, but for which the simulated and measured values should agree. 10,000 simulations were carried out for each of the four frequency scenarios presented in this work (i.e., 28 GHz LOS, 28 GHz NLOS, 140 GHz LOS, and 140 GHz NLOS) by generating 10,000 omnidirectional and directional PDPs and 3-D AOD and AOA APSs as sample functions of (3) using the NYUSIM indoor channel simulator.

A. Simulated RMS Delay Spreads

The RMS DS describes channel temporal dispersion and is a critical metric for validating a statistical channel model. Fig. 11 shows the simulated and measured omnidirectional RMS DS at 28 GHz and 140 GHz in the LOS and NLOS scenarios. As shown in Fig. 11, the empirical and simulated medians are 10.8 and 10.8 ns for the 28 GHz LOS scenario, 17.0 and 16.7 ns for the 28 GHz NLOS scenario, 3.0 and 2.6 ns for the 140 GHz LOS scenario, and 9.2 and 6.7 ns for the 140 GHz NLOS scenario. The simulated CDFs yield good agreement with the empirical CDFs for all four frequency scenarios. By applying the antenna pattern, the directional CIR based on (3) can be written as

h_dir(t, Θ, Φ) = Σ_{n=1}^{N} Σ_{m=1}^{M_n} a_{m,n} e^{jφ_{m,n}} g_TX(Θ − Θ_{m,n}) g_RX(Φ − Φ_{m,n}) δ(t − τ_{m,n}),

where g_TX(Θ) and g_RX(Φ) can be arbitrary 3-D TX and RX complex amplitude antenna patterns. The patterns of the horn antennas were used in the directional CIR simulations in NYUSIM for comparison with the measured directional RMS DS from the 28 GHz and 140 GHz measurements. The gain of a horn antenna can be calculated from the given antenna HPBW, as given by Eqs. (45)-(46) in [38]. For each omnidirectional channel realization (i.e., PDP), the simulated horn antenna was pointed in the direction of each generated MPC, yielding the same number of directional RMS DSs as the number of MPCs. The comparison between the measured and simulated directional RMS DS is shown in Fig. 12. The simulated TX and RX antennas have 15 dBi gain with 30° HPBW at 28 GHz and 27 dBi gain with 8° HPBW at 140 GHz, in both the azimuth and elevation planes. The measured directional RMS DSs in the LOS and NLOS scenarios are close at both 28 GHz and 140 GHz, and the median values of the measured and simulated directional RMS DS yield good agreement in the 28 GHz and 140 GHz LOS and NLOS scenarios.

B. Simulated RMS Angular Spreads

The omnidirectional azimuth and elevation AS describe the angular dispersion at a TX or RX over the entire 4π-steradian sphere, also termed the global AS. The AOA and AOD global AS were computed using the total (delay-integrated) received power over all measured azimuth/elevation pointing angles. The measured and simulated global AOA RMS AS were calculated using Appendix A-1,2 in [29]. Fig. 13 shows that the simulated and measured median global ASs match well at 140 GHz but not at 28 GHz, due to the difference between the measured and simulated statistics of spatial lobes and the limited number of data samples. The simulated number of spatial lobes was uniformly distributed, which cannot perfectly recreate the specific spatial-lobe statistics measured in this environment, but may generalize well to measurement data from various indoor office environments. The sharp increase of the cumulative probability from 0 to 0.6 within 10° at 140 GHz indicates that only one SL (one main direction of arrival) was observed at the RX, in which case the global RMS AS is close to the lobe RMS AS shown in Fig. 14.

The directional azimuth and elevation AS describe the degree of angular dispersion in a certain direction, which can be regarded as the lobe AS by the definition of SLs. A -15 dB SLT was applied to obtain SLs before calculating the lobe RMS AS. The simulated and measured AOA RMS lobe AS for the 28 and 140 GHz NLOS scenarios are compared in Fig. 14, where the median lobe AS of the measured and simulated channels show excellent agreement (within 0.5°). The measured lobe AS at 28 GHz is larger than the measured values at 140 GHz, which may be partly attributed to the difference in antenna HPBW (30° and 8° HPBW in the 28 GHz and 140 GHz measurements, respectively).
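For reference, the RMS DS validation metric used throughout this section can be computed from a PDP as the square root of the second central moment of the power-weighted delays, as sketched below with an invented PDP.

import numpy as np

def rms_delay_spread(delays_ns, powers_mw):
    """Power-weighted RMS delay spread (ns) of a PDP."""
    p = np.asarray(powers_mw, dtype=float)
    t = np.asarray(delays_ns, dtype=float)
    mean_t = np.sum(p * t) / np.sum(p)          # mean excess delay
    mean_t2 = np.sum(p * t ** 2) / np.sum(p)    # second moment
    return float(np.sqrt(mean_t2 - mean_t ** 2))

# Illustrative PDP: most power in the first cluster, a weak late cluster.
print(rms_delay_spread([0, 5, 40], [0.7, 0.25, 0.05]))  # a few ns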
VIII. CONCLUSION

This paper presented a 3-D spatial statistical channel model for mmWave and sub-THz frequencies in LOS and NLOS scenarios based on extensive measurements at 28 and 140 GHz in an indoor office building. The omnidirectional and directional CI path loss models derived from the measurements suggest that NLOS propagation at both frequencies experiences similar path loss over distance after removing the effect of the first meter of free-space propagation loss. The extracted channel statistics showed that the number of TCs and the number of subpaths within each TC decrease as frequency increases. The channel generation procedure was listed step by step in Table IV, and the values of the required parameters obtained from the 28 and 140 GHz LOS and NLOS measurements were given in Table V. The indoor channel simulator NYUSIM 3.0, based on the presented statistical model, was used to generate tens of thousands of PDP and APS samples. The simulated secondary channel statistics (i.e., omnidirectional and directional RMS DS, global and lobe RMS AS) yielded good agreement with the measured channel statistics. The empirical channel statistics and corresponding unified statistical channel models across mmWave and sub-THz frequencies will provide insights for future propagation measurement and modeling in this frequency range and support the analysis and design of 6G indoor wireless systems and beyond.

APPENDIX

The processed data used to generate and calibrate the omnidirectional channel models in this paper are given in Tables VI and VII.

TABLE VI: 28 GHz omnidirectional channel statistics with corresponding environment (Env.), TX IDs, RX IDs, T-R separation distance (T-R) in meters, path loss (PL) [11] in dB, the number of TCs (#TC), the number of SPs (#SP), and the omnidirectional RMS DS in ns.

Fig. 1: Floor plan of the 9th floor, 2 MetroTech Center [11]. TX and RX locations measured at both 28 GHz and 140 GHz are denoted as stars and circles with checkerboard texture, respectively. RX locations measured only at 28 GHz are denoted as solid circles. Each of the five TX locations is denoted in a different color, and the RX locations paired with a TX location are denoted in the same color.

Fig. 2: 28 GHz and 142 GHz indoor directional CI path loss models and scatter plots with TX antenna height of 2.5 m and RX antenna height of 1.5 m for V-V polarization. (a) 28 GHz indoor directional CI path loss model and data [11]. (b) 142 GHz indoor directional CI path loss model and data.

Fig. 3: 28 GHz and 142 GHz indoor omnidirectional CI path loss models and scatter plots with TX antenna height of 2.5 m and RX antenna height of 1.5 m for V-V polarization. (a) 28 GHz indoor omnidirectional CI path loss model and data [11]. (b) 142 GHz indoor omnidirectional CI path loss model and data.

Fig. 4: Histograms and Poisson distribution fittings of the number of TCs of the 28 GHz all set, 28 GHz common set, and 140 GHz common set in the (a) NLOS scenario and (b) LOS scenario.
A. Temporal Channel Parameters

1) The Number of Time Clusters: TCs are obtained by partitioning the measured PDPs based on the MTI. In Fig. 4, the empirical histograms of the number of TCs (N) of the three datasets (i.e., 28 GHz all set, 28 GHz common set, and 140 GHz common set) for the LOS and NLOS scenarios with a 6 ns MTI are shown to be well fitted by a Poisson distribution. Since the Poisson distribution starts from zero while the number of TCs is at least one, N̄ = N − 1 is used for the distribution fitting, and the maximum likelihood estimate (MLE) of the Poisson parameter λ_c is the sample mean of N̄. The simulated number of TCs drawn from the Poisson distribution is increased by one to obtain the actual number of TCs.

Fig. 5: Histograms and composite distribution fittings of the number of subpaths of (a) the 28 GHz all set and (b) the 140 GHz common set.

Fig. 10: Histograms of the number of AOA spatial lobes for the 28 GHz and 140 GHz common sets.

Fig. 11: Omnidirectional RMS DS for the 28 GHz and 140 GHz LOS and NLOS scenarios. Meas stands for measurement, and Sims stands for simulations.

Fig. 12: Directional RMS DS for the 28 GHz and 140 GHz LOS and NLOS scenarios. The simulated TX and RX antenna HPBWs for 28 GHz and 140 GHz in the azimuth and elevation planes are 30° and 8°, respectively.

Fig. 13: Simulated and measured RMS global AOA AS for the 28 and 140 GHz LOS and NLOS scenarios.

Fig. 14: Simulated and measured RMS lobe AOA AS for the 28 and 140 GHz LOS and NLOS scenarios.

TABLE I: ACRONYMS

TABLE II: SPECIFICATIONS FOR THE 28 GHZ AND 142 GHZ SLIDING CORRELATOR CHANNEL SOUNDING SYSTEMS. Table VI and Table VII in the Appendix give the TX-RX location pairs used in this paper.

TABLE III: TX/RX ANTENNA SWEEP DESCRIPTION

TABLE IV: INPUT PARAMETERS FOR CHANNEL COEFFICIENT GENERATION PROCEDURE

TABLE V: VALUES OF REQUIRED PARAMETERS IN THE CHANNEL GENERATION PROCEDURE DERIVED FROM THE 28 GHZ ALL SET, 28 GHZ COMMON SET, AND 140 GHZ COMMON SET FOR LOS AND NLOS SCENARIOS

TABLE VII: 140 GHz omnidirectional channel statistics with corresponding environment (Env.), TX IDs, RX IDs, T-R separation distance (T-R) in meters, path loss (PL) in dB, the number of TCs (#TC), the number of SPs (#SP), and the omnidirectional RMS DS in ns.
11,330.2
2021-03-31T00:00:00.000
[ "Engineering", "Physics" ]
Nonperturbative Evolution of Parton Quasi-Distributions

Using our formalism of parton virtuality distribution functions (VDFs), we establish a connection between the transverse momentum dependent distributions (TMDs) F(x, k⊥²) and the quasi-distributions Q(y, P_z) introduced recently by X. Ji for lattice QCD extraction of parton distributions f(x). We build models for PQDs from the VDF-based models for soft TMDs and analyze the P_z dependence of the resulting PQDs. We observe a strong nonperturbative evolution of PQDs for small and moderately large values of P_z, reflecting the transverse momentum dependence of TMDs. Thus, the study of PQDs on the lattice in the domain of strong nonperturbative effects opens a new perspective for investigation of the 3-dimensional hadron structure.

Introduction

The parton distribution functions (PDFs) f(x), being related to matrix elements of nonlocal operators near the light cone z² = 0, are notoriously difficult objects to calculate using lattice gauge theory, which is formulated in Euclidean space where light-like separations do not exist. Recently, X. Ji [1] proposed to use purely space-like separations z = (0, 0, 0, z₃) to overcome this problem. The parton quasi-distributions Q(y, p₃) introduced by X. Ji differ from the PDFs f(x), but tend to them in the p₃ → ∞ limit, displaying the usual perturbative evolution [2]-[5] with respect to p₃ for large p₃. Refs. [1], [6]-[17] discuss the properties of PQDs in the large-p₃ limit and their matching with scale-dependent PDFs f(x, μ). The results of lattice calculations of PQDs were reported in Refs. [18]-[24]. These results show a significant variation of PQDs with p₃; however, since the values of p₃ used in these calculations are not very large, the observed p₃ evolution does not have a perturbative form. The nonperturbative aspects of the p₃-evolution were studied in diquark spectator models [25,26,27] for parton distributions. The evolution patterns observed in these papers are in qualitative agreement with the lattice results, and the authors also discuss the p₃ → ∞ extrapolation of results obtained for moderately large p₃ values.

Our goal in the present paper is to study the nonperturbative evolution of parton quasi-distributions using the formalism of virtuality distribution functions proposed and developed in our recent papers [28,29], where it was applied to the transverse momentum dependent pion distribution amplitude and the exclusive γ*γ → π⁰ process. To this end, in Section 2 we extend the VDF formalism to the parton distribution functions and show how the basic VDF Φ(x, σ) is related to PDFs, TMDs, and PQDs. In particular, we show that PQDs are completely determined by TMDs through a rather simple transformation. Since the basic relations between the parton distributions are rather insensitive to the complications brought by spin, in Section 2 we refer to a simple scalar model. In Section 3 we discuss modifications related to quark spin and the gauge nature of gluons in quantum chromodynamics (QCD). In Section 4 we discuss VDF-based models for soft TMDs, and in Section 5 we present our results for the nonperturbative evolution of PQDs obtained in these models. The transition to perturbative evolution at large p₃ is discussed in Section 6. Our conclusions are given in Section 7.

Virtuality distribution functions

Historically, parton distributions [30] were introduced to describe inclusive deep inelastic scattering involving spin-1/2 quarks.
Since complications related to spin do not affect the very concept of parton distributions, we start with the simple example of a scalar theory. Information about the target is then accumulated in the generic matrix element ⟨p|φ(0)φ(z)|p⟩. Transforming to momentum space, we switch to a description in terms of χ(k, p), an analog of the Bethe-Salpeter amplitude [31]. A crucial observation is that the contribution of any (uncut) diagram to χ(k, p) may be written in the form of the representation (2.2) below. The reason is that for a general scalar handbag diagram d_i one can write an α-representation (see, e.g., [32]) in which M² = p², P(c.c.) is the relevant product of coupling constants, L is the number of loops of the diagram, and l is the number of its lines. For our purposes, the most important property of this representation is that A(α), B_s(α), B_u(α), C(α), and D(α) are positive (or better, non-negative) functions (sums of products) of the non-negative α_j-parameters of the diagram. Using it, we get the representation (2.2), with x and λ determined by these functions and a function F(x, λ; M²) specific to each diagram. Evidently, 0 ≤ λ ≤ ∞. The limits for x in the general case are −1 ≤ x ≤ 1, negative x appearing when B_u(α) ≠ 0, which happens for some nonplanar diagrams. Integrating over λ in Eq. (2.2) gives a Nakanishi-type representation (see, e.g., [33]) for this amplitude. We prefer, however, to use the representation involving both x and λ as integration variables. Note that no restrictions (such as being lightlike) are imposed on k and p in Eq. (2.2); in particular, p is the actual external momentum with p² = M². Basically, Eq. (2.2) expresses the obvious fact that, due to Lorentz invariance, the function χ(k, p) depends on k through (kp) and k², and may be treated as a double Fourier representation of χ(k, p) in both (kp) and k². Transforming Eq. (2.2) to the coordinate representation and changing λ = 1/σ gives Eq. (2.6). Defining the virtuality distribution function Φ(x, σ) via Eq. (2.7), we arrive at the VDF representation

⟨p|φ(0)φ(z)|p⟩ = ∫₀^∞ dσ ∫ dx Φ(x, σ; M²) e^{−ix(pz) − iσ(z² − iε)/4},   (2.8)

which reflects the fact that the matrix element ⟨p|φ(0)φ(z)|p⟩ depends on z through (pz) and z², and may be treated as a double Fourier representation with respect to these variables. On general grounds, one would expect such a Fourier representation to be valid for a very wide class of functions. The main non-trivial feature of the representations (2.2) and (2.8) lies in their specific limits of integration over x and λ (or σ). For an arbitrary function, one cannot insist on such limits. However, our matrix element is not an arbitrary function: it is given by a sum of handbag Feynman diagrams, and the limits on x and λ (or σ) are dictated by the properties of these diagrams, in particular by the positivity of the functions A, B, D determining x and λ. It should be emphasized that these functions are determined purely by the denominators of the propagators and are not affected by the numerators present in non-scalar theories. Thus, the VDF representation (2.8) is valid for any diagram and reflects very general features of quantum field theory. On these grounds, we will assume that it holds nonperturbatively. An important point is that Eq. (2.8) gives a covariant definition of x as the variable that is Fourier-conjugate to (pz); there is no need to assume that p² = 0 or z² = 0 to define x. The parameter σ, being conjugate to z², may be interpreted as a measure of parton virtuality, hence the name of the function. In particular, the VDF contains higher-twist contributions describing transverse momentum effects.
Collinear PDFs and TMDs

While the VDF representation holds for any z and p, nothing prevents us from considering special cases, such as a projection on the light cone z² = 0. This may be implemented, e.g., by choosing z that has the minus component only. Then one can parameterize the matrix element in terms of the twist-2 parton distribution f(x), which depends on the fraction x of the target momentum component p⁺ carried by the parton. The relation between the VDF Φ(x, σ) and the collinear twist-2 PDF f(x) is formally given by

f(x) = ∫₀^∞ Φ(x, σ) dσ.   (2.10)

Of course, this construction of f(x) works only if the z² → 0 limit is finite, e.g., in the super-renormalizable φ³ theory. In the renormalizable φ⁴ theory, the function Φ(x, σ) has a ~1/σ hard part, and the integral (2.10) is logarithmically divergent, reflecting the perturbative evolution of parton densities in such a theory.

Treating the target momentum p as purely longitudinal, p = (E, 0⊥, P), one can introduce the parton's transverse momentum. In light-front variables [we use the standard light-front convention for the scalar product (ab)], taking z that has z⁻ and z⊥ components only, i.e., projecting on the light front z⁺ = 0, we define the transverse momentum dependent distribution F(x, k⊥²) in the usual way, as a Fourier transform with respect to the remaining coordinates z⁻ and z⊥. Because of rotational invariance in the z⊥ plane, this TMD depends on k⊥² only, a fact already reflected in the notation. The TMD may be written in terms of the VDF via Eq. (2.12). Note that, having a covariantly defined VDF Φ(x, σ), one can use this representation to analytically continue F(x, k⊥²) into the region of negative and even complex values of k⊥². The integral of the TMD over transverse momentum up to a scale μ,

f(x, μ²) = π ∫₀^{μ²} F(x, k⊥²) dk⊥²,

may be interpreted as a scale-dependent parton distribution. Indeed, when the μ² → ∞ limit exists, we have f(x, ∞) = f(x). In a renormalizable theory, it makes sense to represent Φ(x, σ) as a sum of a soft part Φ_soft(x, σ), generating a nonperturbative evolution of f(x, μ²), and a ~1/σ hard tail. Namely, the lowest-order hard-tail term (2.16), with ∆(x) written in terms of the evolution kernel P(x/z) and the appropriate coupling constant a, generates the perturbative evolution of f(x, μ²). The theory of perturbative evolution (which includes the subtleties of the running coupling constant, higher-order corrections, scheme dependence, etc.) is well developed and is not of much interest for us in this paper. Our main subject in what follows is the nonperturbative evolution generated by the soft part Φ_soft(x, σ), in application to the parton quasi-distributions introduced recently by X. Ji [1].

Quasi-Distributions

The basic idea of Ref. [1] is to consider the equal-time bilocal operator corresponding to z = (0, 0, 0, z₃) [or, for brevity, z = z₃]. Using again the frame in which p = (E, 0⊥, P), and introducing quasi-distributions [1] through

Q(y, P) = (P/2π) ∫ dz₃ e^{iyPz₃} ⟨p|φ(0)φ(z₃)|p⟩,

we get a relation, Eq. (2.21), between PQDs and VDFs. For large P, Q(y, P → ∞) tends to the integral (2.10), producing f(y). This observation suggests that one may be able to extract the "light-cone" parton distribution f(y) from studies of the purely "space-like" function Q(y, P) for large P, which can be done on the lattice [1]. The nonperturbative evolution of Q_soft(y, P) with respect to P has an area-preserving property: the integral of Q_soft(y, P) over y does not depend on P. For the soft part, the integral over σ converges, and we may write the conversion formula

Q(y, P) = P ∫ dx ∫_{−∞}^{∞} dk₁ F(x, k₁² + (y − x)² P²).   (2.29)

Thus, the quasi-distribution Q(y, P) [both its soft and hard parts] is completely determined by the form of the TMD F(x, k⊥²).

Spinor quarks

In the spinor case, one deals with a matrix element of the ⟨p|ψ̄(0)γᵅψ(z)|p⟩ type.
It may be decomposed into pᵅ and zᵅ parts, Bᵅ(z, p) = pᵅ B_p(z, p) + zᵅ B_z(z, p), or similarly in the VDF representation. If we take z = (z⁻, z⊥) in the α = + component of Oᵅ, the purely higher-twist zᵅ part drops out and we can introduce the TMD F(x, k⊥²) that is related to the VDF Φ(x, σ) by the scalar formula (2.12). The easiest way to avoid the effects of the zᵅ contamination in the quasi-distributions is to take the time component of Bᵅ(z = z₃, p) and define the PQD from it (here we differ from the original definition of PQDs by X. Ji [1], who uses α = 3). The connection between Q(y, P) and Φ(x, σ) is then given by the same formula (2.21) as in the scalar case. As a result, we have the sum rules (2.25) and (2.28) corresponding to charge and momentum conservation. Furthermore, the quasi-distributions Q_i(y, P) are related to the TMDs F_i(x, k⊥²) by the scalar conversion formula (2.29).

Gauge fields

In QCD, one should take the operator involving a straight-line path-ordered exponential in the quark (fundamental) representation. As is well known, its Taylor expansion has the same structure as that for the original ψ̄(0)γᵅψ(z) operator, with the only change that one should use covariant derivatives D_ν = ∂_ν − igA_ν instead of the ordinary ∂_ν ones. Again, the zᵅ contamination is avoided if the quasi-distributions are defined through the time component of Oᵅ. Then we have the same relations between the VDFs and PQDs as in the scalar case.

Sum Rules

Converting Eq. (2.24) into the sum rule (2.25), we noted that in general it holds for the soft part only, because the hard part Φ_hard(x, σ) (2.17) is proportional to 1/σ and its σ-integral logarithmically diverges. However, the x-integral of Φ_hard(x, σ) vanishes (the zeroth x-moment of the evolution kernel P_qq(x/z) is proportional to the anomalous dimension of the vector current, which is zero due to vector current conservation). As a result, we have the valence quark sum rules involving the full PQDs and PDFs. Since the first x-moment of P_qq(x/z) is non-zero, Eq. (2.28) may only be used to derive the momentum sum rule involving the soft parts of the quark distributions.

To include gluons, one should consider the gluonic operator O^{αβ}_g(0, z; A), in which Ẽ is the straight-line path-ordered exponential in the gluon (adjoint) representation. The matrix element of O^{αβ}_g(0, z; A) contains the basic pᵅpᵝ structure that produces the twist-2 PDF, but it also has contaminating structures containing zᵅ, zᵝ, or gᵅᵝ. When one takes, as usual, α = β = + and z = (z⁻, z⊥), the z-structures and gᵅᵝ do not contribute to the matrix element of the operator O⁺⁺_g defining the gluon PDF. In the case of the quasi-distribution, the contaminating structures containing z₃ are avoided when we take α = 0, β = 0 (again, another definition of the gluon PQD, corresponding to α = 3, β = 3, was chosen in Ref. [1]). Still, there remains contamination from the gᵅᵝ structure, and the momentum sum rule for gluons is spoiled by the O(Λ²/P²) term brought in by the gᵅᵝ admixture.

Primordial TMDs

One may notice that the Oᵅ(0, z; A) operator involves a straight-line link from 0 to z rather than the stapled link usually used in the definitions of TMDs appearing in the description of Drell-Yan and semi-inclusive DIS processes. As is well known, the stapled links reflect initial- or final-state interactions inherent in these processes. The "straight-link" TMDs, in this sense, describe the structure of a hadron in its undisturbed or "primordial" state.
While it is unlikely that such a TMD can be measured in a scattering experiment, it is a well-defined QFT object, and one may hope that it can be measured on the lattice through its connection (2.29) to the quasi-distributions.

Models for the soft part

Let us now discuss some explicit models for the k⊥ dependence of soft TMDs F(x, k⊥²). In general, they are functions of two independent variables, x and k⊥². For simplicity, we consider factorized models in which the x-dependence and k⊥-dependence appear in separate factors. Since, with our definitions, the relations between VDFs and TMDs are the same in the scalar and spinor cases, we refer for brevity to scalar operators.

Gaussian model

It is popular to assume a Gaussian dependence on k⊥,

F_G(x, k⊥²) = f(x) e^{−k⊥²/Λ²} / (πΛ²).

Converting this model into the VDF form, we see that the integral involves both positive and negative σ, i.e., formally F_G(x, k⊥²) cannot be written in the VDF representation (2.12). This is a consequence of the fact that the analytic continuation of F_G(x, k⊥²) into the region of negative k⊥² increases exponentially. However, since we are interested in positive k⊥² only, in our modeling we simply use the conversion formula (2.29) for all k⊥² profiles for which it gives convergent results. For the Gaussian model we then have

Q_G(y, P) = (P / √π Λ) ∫₀¹ dx f(x) e^{−(x−y)² P²/Λ²}.

Simple non-Gaussian models

In the space of impact parameters z⊥, the Gaussian model gives an e^{−z⊥²Λ²/4} fall-off, and one may argue that this decrease is too fast for large z⊥. In particular, propagators D_c(z, m) of massive particles have an exponential e^{−m|z|} fall-off for spacelike intervals z². To build models for TMDs that more closely resemble the perturbative propagators in the deep spacelike region, we recall that the propagator of a scalar particle with mass m may be written in a form in which the mass term assures that the propagator falls off exponentially, ~e^{−|z|m}, at large spacelike distances. At small intervals z², however, the free-particle propagator has a 1/z² singularity, while we want the soft part of ⟨p|φ(0)φ(z)|p⟩ to be finite at z = 0. The simplest way out is to add a constant term (−4/Λ²) to z² in the VDF representation (2.8). So we take as a model for the VDF the expression obtained by this shift, whose coordinate-space form involves the modified Bessel function K₁. The sign of the Λ² term is fixed by the requirement that (4/Λ² − z²)⁻¹ should have no singularities for space-like z². This model corresponds to a TMD that is finite at k⊥ = 0, reflecting the exponential ~e^{−m|z⊥|} fall-off at large z⊥. To avoid two-parameter modeling, one may take m = 0, which corresponds to a TMD with a logarithmic singularity at small k⊥, reflecting a too-slow ~1/(1 + z⊥²Λ²/4) fall-off at large z⊥. The quasi-distribution then follows from the conversion formula (2.29).

Note that the Gaussian model and the m = 0 model have the same ~(1 − z⊥²Λ²/4) behavior at small z⊥, i.e., they correspond to the same value of the ⟨p|φ(0)∂²φ(0)|p⟩ matrix element, provided one takes the same value of Λ in both models. At large z⊥, however, the fall-off of the Gaussian model is too fast, while that of the m = 0 model is too slow. Thus, they look like two extreme cases of one-parameter models, and we will use them to illustrate the nonperturbative evolution of quasi-distributions, expecting that other models (e.g., the m ≠ 0 model) will produce results somewhere in between these two cases.
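To make the Gaussian-model formula above concrete, the sketch below evaluates Q_G(y, P) numerically for a toy PDF f(x) = 4(1 − x)³ on [0, 1], chosen to match the limiting (1 − y)³ shape discussed in the next section (the factor 4 makes f integrate to one). Both the toy f(x) and the grid parameters are illustrative assumptions, not the models fitted in the paper.

import numpy as np

def f_toy(x):
    # Toy PDF, normalized on [0, 1]: the integral of 4*(1-x)^3 equals 1.
    return 4.0 * (1.0 - x) ** 3

def quasi_dist_gaussian(y, p_over_lambda, num=2000):
    """Q_G(y, P) = (P / (sqrt(pi)*Lambda)) * int_0^1 dx f(x) exp(-(x-y)^2 P^2/Lambda^2),
    with Lambda = 1 so that P is measured in units of Lambda."""
    x = np.linspace(0.0, 1.0, num)
    integrand = f_toy(x) * np.exp(-((x - y) ** 2) * p_over_lambda ** 2)
    dx = x[1] - x[0]
    integral = np.sum((integrand[:-1] + integrand[1:]) * 0.5) * dx  # trapezoid rule
    return p_over_lambda / np.sqrt(np.pi) * integral

# Nonperturbative evolution: Q(y, P) approaches f(y) as P/Lambda grows.
for p in (1.0, 5.0, 10.0, 20.0):
    print(p, quasi_dist_gaussian(0.3, p), f_toy(0.3))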
Numerical results

The full −1 ≤ x ≤ 1 PDF-support segment is usually split into the positive-x "quark" region and the negative-x "antiquark" region. As we will see below, the PQDs Q(y, P) live on the whole −∞ < y < ∞ axis, even when they are generated from a TMD model that is non-zero for positive x only. Thus, to avoid confusion about what generates the PQD for negative y, it makes sense to separate the parts of PQDs coming from the positive-x and negative-x parts of TMDs.

As one can see from Figs. 2 and 4, the evolution patterns in our two models are very close to each other. They also resemble the pattern observed in actual lattice calculations [18]-[24] and in the diquark spectator model [25,26,27]. The quasi-distributions are wider for small P, with their support visibly extending beyond the 0 ≤ y ≤ 1 segment, and become narrower (and higher in their maxima) with increasing P. The approach to the limiting (1 − y)³ shape is not uniform, as illustrated in Figs. 3 and 5. For large y = 0.7, the ratio Q(y, P)/f(y) considerably exceeds 1 for small P, tending to the limiting value from above. For smaller y = 0.1 and y = 0.3, the ratio curves tend to 1 from below. One can see that P/Λ ≳ 10 is needed (or P of the order of several GeV) to get Q(y, P)/f(y) close to 1 for these y values.

Leading-order hard tail

The nonperturbative evolution of Q(y, P) essentially stops for P/Λ ≳ 20, and for larger values of P the dominant role is played by the perturbative evolution generated by the hard part. The simplest Φ ~ 1/σ hard-tail model (2.16) corresponds to a ~1/k⊥² TMD, which is singular at k⊥ = 0, while we want TMDs to be finite in this limit. The simplest regularization, 1/k⊥² → 1/(k⊥² + m²), corresponds to the change 1/σ → e^{−im²/σ}/σ in the hard part of the VDF. To proceed with the conversion formula, one needs the corresponding integral over σ, I(x, y, P); for large P², the quasi-distributions then evolve according to the perturbative evolution equation with respect to P². The pattern of the sub-asymptotic m²/P² dependence of the hard part may be illustrated by taking P(x/z) → 1.

Conclusions

In this paper, we applied the formalism of parton virtuality distributions to study the p₃-dependence of quasi-distributions Q(y, p₃). We established a simple relation between PQDs and TMDs that allows one to derive models for PQDs from models for TMDs. Our model results show a pronounced nonperturbative evolution of PQDs for small and moderately large values of p₃, reflecting the transverse momentum dependence of TMDs, i.e., the spatial structure of hadrons. Using two rather different models for the k⊥ dependence of TMDs, we obtained very similar patterns of the p₃ dependence of PQDs Q(y, p₃) for each particular y. This observation may be used for a guided extrapolation of moderate-p₃ lattice results to the p₃ → ∞ limit. The basic idea is to find analytic models for soft TMDs that successfully fit lattice PQDs for several values of p₃, and then take the p₃ → ∞ limit. A practical implementation of this program should be a subject of future studies. Summarizing, the study of PQDs on the lattice in the domain of strong nonperturbative effects opens a new perspective in investigations of the three-dimensional structure of hadrons.
5,280.8
2016-12-15T00:00:00.000
[ "Physics" ]
Effect of Substrate Grain Size on Structural and Corrosion Properties of Electrodeposited Nickel Layer Protected with Self-Assembled Film of Stearic Acid

In the present study, the impact of copper substrate grain size on the structure of the subsequently electrodeposited nickel film, and on its corrosion resistance in 3.5% NaCl medium, was evaluated before and after functionalization with stearic acid. Nickel layers were electrodeposited on two different copper sheets with average grain sizes of 12 and 25 µm, followed by deposition of a stearic acid film through self-assembly. X-ray diffraction analysis of the electrodeposited nickel films revealed that the deposition of the nickel film on the Cu substrate with small (12 µm) and large (25 µm) grains is predominantly governed by growth in the (220) and (111) planes, respectively. Both electrodeposited films initially exhibited a hydrophilic nature, with water contact angles of 56° and <10°, respectively. After functionalization with stearic acid, superhydrophobic films with contact angles of ~150° were obtained on both samples. In the 3.5% NaCl medium, the corrosion resistance of the nickel layer electrodeposited on the copper substrate with 25 µm grains was three times greater than that deposited on the copper substrate with 12 µm grains. After functionalization, the corrosion resistance of both films was greatly improved for both short and long immersion times in 3.5% NaCl medium.

Introduction

Hierarchical micro-/nanostructured materials with various shapes and properties are widely used in different applications, such as optical materials [1], low-adhesion surfaces [2], abrasion-resistant surfaces, anti-icing films [3][4][5][6], anticorrosion coatings [7][8][9], and the fabrication of superhydrophobic surfaces [10,11]. The superhydrophobicity phenomenon was first observed in lotus leaves in nature [12,13], and it is frequently used to describe surfaces with a water contact angle (CA) larger than 150° and a sliding angle of less than 10° [14,15]. The fabrication of hierarchical structures for tuning surface hydrophobicity has been demonstrated for various metals, including aluminum [16], zinc [17], and nickel [8], among which nickel is especially attractive because of its high hardness and corrosion resistance. However, studies directly correlating substrate grain size to the microstructure and physiochemical properties of electrodeposited films are scarce; hence, this study puts forward new insights into the fabrication of coatings with improved corrosion resistance.

Materials and Methods

Continuously cast, hot-rolled, and then soft-annealed pure copper sheets were cold-rolled at two different levels to form different grain sizes (12 and 25 µm average grain diameter; Samples D12 and D25). The ASTM E-112 standard [36] planimetric method was used to determine the average grain diameter. Samples were cut into pieces of 1 cm diameter and used as cathodes during electrodeposition. Prior to electrodeposition, the copper samples were ground using successive grades of SiC paper up to grade 3000 and then polished with alumina slurry, followed by rinsing in deionized water. The polished samples were electropolished at 20 mA·cm⁻² for 1 min in a solution containing 50 g·L⁻¹ Na₂CO₃ and 10 g·L⁻¹ KOH, then dipped in a 10 wt % HCl solution for 30 s and washed with deionized water. A digital-camera-assisted optical microscope was used to visualize the microstructure and the different grain sizes of the copper (D12 and D25) substrates.
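For context, a sketch of the planimetric (Jeffries) idea behind ASTM E-112 grain sizing: grains fully inside a test area count as one, grains cut by the border count as one half, giving the number of grains per unit area and hence an effective mean grain diameter. The constants in the grain-size-number relation below follow the commonly quoted form of ASTM E-112 but should be checked against the standard; the counts are invented for illustration.

import math

def grains_per_mm2(n_inside, n_boundary, area_mm2):
    # Jeffries planimetric count: interior grains + half of boundary-cut grains.
    return (n_inside + 0.5 * n_boundary) / area_mm2

def mean_grain_diameter_um(n_a_per_mm2):
    # Mean grain area = 1/N_A; take the square root as an effective diameter.
    return math.sqrt(1.0 / n_a_per_mm2) * 1000.0

def astm_grain_size_number(n_a_per_mm2):
    # Commonly quoted ASTM E-112 relation (N_A in grains/mm^2 at 1x magnification).
    return 3.321928 * math.log10(n_a_per_mm2) - 2.954

n_a = grains_per_mm2(n_inside=160, n_boundary=40, area_mm2=0.026)  # illustrative counts
print(mean_grain_diameter_um(n_a), astm_grain_size_number(n_a))    # ~12 um grains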
SEM (LEO 1450 VP, Zeiss, Oberkochen, Germany) and XRD (Explorer, GNR, Novara, Italy) were used to assess the microstructure and determine the crystalline structure of the electrodeposited layers. Cathodes (either D12 or D25) were immersed in an electrolyte containing 200 g·L⁻¹ NiCl₂·6H₂O, 30 g·L⁻¹ NH₄Cl, and 120 g·L⁻¹ H₃BO₃ for electrodeposition, where a nickel ingot of 99.5% purity (40 mm × 40 mm × 1 mm) was used as the anode. The electrolyte was constantly stirred, and its temperature was kept at 60 ± 1 °C. To produce a hierarchical micro-/nanostructured nickel film on the copper substrates, two current levels were employed in two steps during electrodeposition: a current density of 20 mA·cm⁻² for 8 min, followed by a current density of 50 mA·cm⁻² for 1 min. To modify sample hydrophobicity, the electrodeposited Ni films were functionalized with SA molecules for 10 min in a 6 mM·L⁻¹ solution of SA in ethanol. CA measurements were performed with a 4 µL water droplet at ambient temperature using an optical contact-angle meter (Adeeco static/dynamic, Tehran, Iran), and ImageJ software (Version 1.151) was used to analyze the CA results. For corrosion-resistance evaluation, electrochemical impedance spectroscopy (EIS) measurements were performed with a potential amplitude of ±10 mV over a frequency range of 100-0.01 Hz in a 3.5% NaCl electrolyte using a three-electrode setup (Autolab, PGSTAT 302N, Utrecht, The Netherlands). Potentiodynamic polarization (PDP) measurements were also performed from −150 mV in the cathodic region to ca. +200 mV in the anodic region, with a scan rate of 1 mV·s⁻¹. A Pt mesh and a saturated calomel electrode (SCE) were used in the electrochemical measurements as the counter and reference electrodes, respectively.

Surface Microstructure

Figure 1a,b shows the optical microstructures of D12 and D25, which were used as substrates, revealing their different grain sizes. Sample D12, compared to Sample D25, showed smaller grains, a larger number of grains, and a higher density of twins with different sizes and plane orientations. According to Hull et al. [37], samples with smaller grains and larger fractions of grain boundaries exhibit greater amounts of crystalline defects, such as subgrain boundaries and dislocations, compared to their larger-grained counterparts. Consequently, samples with a higher density of crystalline defects better accommodate nucleation and growth sites for electrochemical deposition [18]. A larger density of grain boundaries and high-energy lattice defects (e.g., dislocations, edges, and kinks) on substrates with smaller grain size provides the predominant sites for crystal growth, especially via the screw-dislocation growth mechanism [18]. Therefore, it could be expected that Ni electrodeposition on the D12 substrate would lead to the formation of an Ni layer with a different grain size than that electrodeposited on the D25 substrate. To assess whether the change of copper substrate grain size affected the crystalline structure of the electrodeposited film, we performed XRD measurements on the Ni electrodeposited on Samples D12 and D25. The thickness of the electrodeposited nickel films on both samples was measured with combined SEM cross-section and elemental line-profile analyses using energy-dispersive spectroscopy, as described elsewhere [8]. The obtained thickness of the electrodeposited Ni layers in this study was approximately 4-6 µm, consistent with our previous studies [37]. Therefore, the XRD patterns of the nickel films on the D12 and D25 samples (Figure 1c) revealed not only the crystalline structure of the electrodeposited nickel top film, but also the different plane orientations of the underlying crystalline copper substrate.
Therefore, the XRD patterns of the nickel films on the D12 and D25 samples (Figure 1c) revealed not only the crystalline structure of the electrodeposited nickel top film, but also different plane directions of the underlying crystalline copper substrate. The obtained film thickness for the electrodeposited Ni layers in this study was approximately 4-6 µm, consistent with our previous studies [37]. Therefore, the XRD patterns of the nickel films on the D12 and D25 samples (Figure 1c) revealed not only the crystalline structure of the electrodeposited nickel top film, but also different plane directions of the underlying crystalline copper substrate. Comparing XRD spectra in the above figure, some differences were observed between the structure of the Ni film deposited on Samples D12 and D25. The degree of preferred orientation of particular crystal plane of a polycrystalline nickel film can be determined using texture coefficient (TC) parameter for a specific (hkl) plane, as shown by the following equation [6]: Surface Microstructure where Ihkl(c) is the diffraction-peak intensity for the crystalline electrodeposited nickel film, Ihkl(s) is the diffraction peak intensity of the standard nickel powder (as the random state), and n is the number of the considered XRD peaks. By changing the copper substrate from D12 to D25, TC (111) and TC (200) increased from 1.04 to 1.24 and from 0.56 to 0.61, respectively, whereas TC (220) decreased from 1.39 to 1.15. In fact, when D12 was used as the substrate, the preferred growth of the electrodeposited nickel film was in the (220) direction, while with the D25 as the substrate, growth was preferential in the (111) direction. These observations revealed direct correlation between the crystalline structure of the electrodeposited nickel film and the substrate microstructure. Comparing XRD spectra in the above figure, some differences were observed between the structure of the Ni film deposited on Samples D12 and D25. The degree of preferred orientation of particular crystal plane of a polycrystalline nickel film can be determined using texture coefficient (TC) parameter for a specific (hkl) plane, as shown by the following equation [6]: where I hkl(c) is the diffraction-peak intensity for the crystalline electrodeposited nickel film, I hkl(s) is the diffraction peak intensity of the standard nickel powder (as the random state), and n is the number of the considered XRD peaks. By changing the copper substrate from D12 to D25, TC (111) and TC (200) increased from 1.04 to 1.24 and from 0.56 to 0.61, respectively, whereas TC (220) decreased from 1.39 to 1.15. In fact, when D12 was used as the substrate, the preferred growth of the electrodeposited nickel film was in the (220) direction, while with the D25 as the substrate, growth was preferential in the (111) direction. These observations revealed direct correlation between the crystalline structure of the electrodeposited nickel film and the substrate microstructure. SEM Investigations To visualize the effect of substrate-grain size on the micro-/nanostructure of the electrodeposited nickel film, SEM images were obtained on the nickel films deposited on Samples D12 and D25, as shown in Figure 2a,b, respectively. These SEM micrographs clearly show the hierarchical structure of the nickel crystals with their characteristic starlike structure on both substrates. 
SEM Investigations

To visualize the effect of substrate grain size on the micro-/nanostructure of the electrodeposited nickel film, SEM images were obtained of the nickel films deposited on Samples D12 and D25, as shown in Figure 2a,b, respectively. These SEM micrographs clearly show the hierarchical structure of the nickel crystals, with their characteristic starlike morphology on both substrates. As previously mentioned, Sample D12 provided more available nucleation and growth sites for the Ni film during electrodeposition than Sample D25. Therefore, in the course of electrodeposition, the fusion of neighboring fine Ni grains resulted in grain coarsening of the Ni film on the D12 copper substrate; as shown in Figure 2, the starlike features in the film deposited on Sample D12 were slightly larger than those in the film deposited on Sample D25. After functionalization with SA, the surface morphology of the electrodeposited films was unchanged (not shown here), as the thickness of the SA layer is far smaller than the features observed in the SEM micrographs.

Surface Hydrophobicity

Several factors, such as surface microstructure, surface energy, and surface-oxide growth, affect the interactions between an electrode (here, the electrodeposited Ni film) and an electrolyte. To evaluate the effect of substrate microstructure (i.e., grain size) on the wettability of the electrodeposited Ni films before and after functionalization, we performed static water CA measurements. As can be seen from the CA results in Figure 3a,b, the electrodeposited Ni films on Samples D12 and D25 were hydrophilic, with CA values of θ = 56° and θ < 10°, respectively. The lower CA of the electrodeposited Ni film on Sample D25 compared with that on Sample D12 can be explained by the Wenzel model [38], which correlates a decrease in CA with an increase in surface roughness. Nevertheless, since the CA measurements were performed in open laboratory air, the effect of adventitious hydrocarbons in increasing surface hydrophobicity cannot be neglected.
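The two wetting regimes invoked here and in the next paragraphs can be made concrete with a short sketch; the Young angle, roughness factor r, and wetted solid fraction below are illustrative assumptions only, not fitted values:

```python
import numpy as np

def wenzel_angle(theta_young_deg, r):
    """Apparent CA on a rough, fully wetted surface: cos(th_w) = r * cos(th_y)."""
    c = r * np.cos(np.radians(theta_young_deg))
    return np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))

def cassie_baxter_angle(theta_young_deg, f_solid):
    """Apparent CA with trapped air pockets: cos(th_cb) = f*(cos(th_y) + 1) - 1."""
    c = f_solid * (np.cos(np.radians(theta_young_deg)) + 1.0) - 1.0
    return np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))

# Hydrophilic bare Ni: roughness amplifies wetting (Wenzel regime)
print(wenzel_angle(70.0, r=2.0))          # well below 70 deg, more hydrophilic

# SA-functionalized film: a small wetted fraction yields superhydrophobicity
print(cassie_baxter_angle(105.0, 0.15))   # ~153 deg
```

The Cassie–Baxter expression is the quantitative form of the air-pocket argument developed below for the functionalized films.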
In contrast to the hydrophilic Ni films before functionalization, the functionalized Ni films exhibited a superhydrophobic nature (Figure 3c,d), with CAs close to 150°. As reported earlier [33], adsorption of mono- or multilayer SA molecules on a flat substrate can increase its CA to 100°–110°. If deposited as a single layer, a well-ordered all-trans monolayer of SA molecules exposes the hydrophobic terminal methyl group of SA to the water droplet, resulting in a high CA. If defects are introduced into the structure of the SA monolayer (known as gauche defects), the CA decreases, as backbone methylene groups are less hydrophobic than the terminal methyl group. In contrast, when multilayer SA is deposited on a flat substrate, the overall CA is determined by all the SA functional groups protruding into the air. Similarly, when a multilayer SA film is formed on rough surfaces, a range of surface hydrophobicities (i.e., contact-angle values) can be expected. The CA values observed on the functionalized electrodeposited Ni films (Figure 3c,d) were almost 50% higher than the values observed for SA mono-/multilayers adsorbed on flat surfaces. According to Cassie–Baxter theory [39], the presence of low-surface-energy substances (in this case, SA molecules) on hierarchical micro-/nanostructures results in the entrapment of air pockets between the water droplet and the surface. The formation of such air pockets can explain the superhydrophobicity obtained through functionalization of the electrodeposited Ni films on Samples D12 and D25.

To evaluate the long-term stability of the functionalized electrodeposited Ni films on D12 and D25, the samples were exposed to a 3.5 wt % NaCl solution for up to 5 days, and their CA was evaluated at different time intervals. As depicted in Figure 4, the CA of both samples gradually declined upon exposure to the 3.5 wt % NaCl solution (by almost 15° after 5 days).
Penetration of corrosive Cl⁻ ions through the SA layers and corrosion/oxidation of the underlying Ni caused an increased concentration of gauche defects in the structure of the adsorbed SA molecules, thus reducing the CA values. Oxidation and corrosion of the Ni film underneath the SA layer could also result in the further formation of defects in the structure of the SA film. Similar deterioration of the organization of aliphatic organic molecules as a consequence of oxidation and corrosion of the metallic substrate was demonstrated earlier [40-42] for self-assembled monolayers of octadecanethiol and octadecaneselenol on copper.

From the high CA values observed on the functionalized electrodeposited Ni films on Samples D12 and D25, it could be expected that the functionalized samples would exhibit better corrosion resistance than their nonfunctionalized counterparts when exposed to a corrosive medium. However, as we recently reported, CA alone cannot be considered a measure of the corrosion-protection efficiency of a hybrid substrate [8]. As demonstrated in Figure 4, the CAs of the functionalized and nonfunctionalized samples were very different; hence, the corrosion resistance of these samples in aqueous media was also expected to vary to a large extent. To assess the effect of the copper-substrate microstructure on the corrosion resistance of the electrodeposited Ni films before and after functionalization, we performed additional electrochemical measurements with potentiodynamic polarization (PDP) and electrochemical impedance spectroscopy (EIS).
Electrochemical Analysis and Corrosion-Resistance Assessment of Electrodeposited Layers

Results of the EIS measurements on the electrodeposited Ni films on Samples D12 and D25 before and after functionalization are provided in Figure 5a,b in Nyquist and Bode representations, respectively. As observed in Figure 5a, copper substrates with different grain sizes affected the overall corrosion resistance of the electrodeposited Ni films. Both before and after surface functionalization, the electrodeposited Ni films on Sample D25 exhibited better corrosion resistance than those on Sample D12. To quantitatively evaluate the EIS data, we fit the EIS spectra using the equivalent circuit models shown in Figure 6, where the models are superimposed on schematic representations of the sample-surface constituents; the corresponding fitting parameters are provided in Table 1. The equivalent circuit models were chosen on the basis of the different surface constituents present (e.g., micro-/nanostructured surface, oxide layer, SA molecules, and air pockets) and in accordance with our previous studies on similar systems [8]. The surfaces of the functionalized samples are dynamic systems and undergo changes after immersion in corrosive electrolytes. Therefore, different equivalent circuit models were used to fit the EIS results (Figure 5a,b). As schematically shown in Figure 6a-c, before sample functionalization the equivalent circuit models for fitting the EIS data consisted of a constant phase element (CPE) connected in parallel with a resistance element and in series with a Warburg short element (W_s). The CPE connected in parallel with the resistance element represents the surface micro-/nanostructure, and W_s represents the penetration of chloride ions into the surface grooves. For n = 1, the CPE has units of capacitance (F). The capacitance C and the CPE factors (Q and n) in Table 1 were calculated using Equations (2) and (3), where Q has units of Ω⁻¹·cm⁻²·sⁿ:

$$Z_{\mathrm{CPE}} = \frac{1}{Q(j\omega)^{n}} \tag{2}$$

$$C = Q^{1/n}\,R^{(1-n)/n} \tag{3}$$

where Q is the CPE constant, which nominally equals the pure capacitance of the system for n = 1; j² = −1; ω is the angular frequency (rad/s); and n ranges between 0 and 1. Equation (3) is the standard conversion of the CPE parameters and the parallel resistance R to an effective capacitance.
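A quick numerical sketch of the constant-phase element of Equations (2) and (3) follows; the Q, n, and R values are illustrative, not the fitted values of Table 1:

```python
import numpy as np

def z_cpe(freq_hz, Q, n):
    """CPE impedance, Z = 1 / (Q (j*omega)^n), with omega = 2*pi*f."""
    omega = 2 * np.pi * np.asarray(freq_hz)
    return 1.0 / (Q * (1j * omega) ** n)

Q, n, R = 2.0e-5, 0.85, 2.0e3        # CPE factor, exponent, parallel resistance
freqs = np.logspace(2, -2, 5)        # 100 Hz down to 0.01 Hz, as in the EIS scan

for f, z in zip(freqs, z_cpe(freqs, Q, n)):
    phase = np.degrees(np.angle(z))          # constant phase of -n*90 degrees
    print(f"{f:8.2f} Hz  |Z| = {abs(z):12.1f} ohm  phase = {phase:.1f} deg")

# Effective capacitance from Equation (3): C = Q^(1/n) * R^((1-n)/n)
C_eff = Q ** (1 / n) * R ** ((1 - n) / n)
print(f"effective capacitance: {C_eff:.2e} F")
```

The constant −n·90° phase is the signature of the CPE; for n = 1 it reduces to an ideal capacitor, as stated above.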
W_s is defined by Equation (4), where R_Ws is the short-range Warburg coefficient and T_Ws = d²/D (d is the effective diffusion thickness, and D is the effective diffusion coefficient of the ionic species):

$$Z_{W_s} = R_{W_s}\,\frac{\tanh\!\left[(j\omega T_{W_s})^{N}\right]}{(j\omega T_{W_s})^{N}} \tag{4}$$

The N values for the Warburg element in Table 1 were 0.27 and 0.24 for Samples D12 and D25, respectively. These values reflect the finite length of diffusion in the micro-/nanostructured coating with a transmissive boundary. Furthermore, the deviation from the 45° line in the complex plane suggests diffusion in two or three dimensions; such diffusion paths have also been observed in porous media [43]. After functionalization, the EIS spectrum of the D12 sample was best fit with an additional CPE-resistance combination, separating the micro-/nanosurface structure (including possible air gaps) from the nonuniform barrier properties of the Ni film. Nevertheless, the obtained n value was only 0.57, and this CPE-resistance combination could potentially be modeled with a Warburg element instead. According to Figure 5b and the results in Table 1, the impedance value obtained for the electrodeposited Ni film on Sample D25 was almost three times larger than that on Sample D12. The different ratio of fine and coarse features in the topography of these two surfaces is responsible for this difference in impedance. In this regard, the surface-roughness parameter r can be used to describe the fraction of the sample that comes into contact with the corrosive electrolyte:

$$r = \frac{\text{real surface area}}{\text{apparent surface area}} \tag{5}$$

Table 1. Electrochemical parameters obtained from fitting the EIS results in Figure 5a,b using the equivalent circuit models in Figure 6.

As discussed in Section 3.2.2, the surface of the electrodeposited nickel film on the D25 substrate appeared rougher than that formed on Sample D12, thus exposing a larger surface area to the electrolyte. Therefore, qualitatively, the r parameter for the functionalized film on Sample D25 was greater than that of the functionalized film on Sample D12, which explains the former's better corrosion resistance (by a factor of 1.2).

As described earlier, when the electrodeposited Ni films were functionalized with SA, air pockets formed between the sample surface and the electrolyte, limiting the access of the aggressive electrolyte to the sample surface. Consequently, electron transfer decreased [7], which in turn increased the corrosion resistance of the functionalized samples. Due to the different ratio of fine and coarse features in the microstructure of the films formed on substrates with different grain sizes, the number of air pockets formed on the functionalized Ni film on Sample D12 was smaller than that on Sample D25. Therefore, the surface/electrolyte properties of the functionalized film on Sample D25 were governed by the large fraction of air pockets, changing the equivalent circuit model that described its EIS data compared with that used for the functionalized film on Sample D12.
Figure 6. Equivalent circuit models superimposed on schematic representations of (a) Samples D12 and D25 before functionalization, (b) functionalized D12, and (c) functionalized D25 in solution. (a-c) Micro- and nanostructures are drawn schematically, not to scale.

To verify the reliability of the EIS data and the corresponding quantified parameters, we performed PDP measurements on the electrodeposited Ni films on Samples D12 and D25 before and after functionalization. The results of the PDP measurements after 30 min of immersion in 3.5 wt % NaCl solution are provided in Figure 5c. We estimated the corrosion current density (i_corr) and corrosion potential (E_corr) using the Tafel extrapolation method and report the corresponding values in Table 2, where the apparent surface area (A) was used in the calculations. The corrosion inhibition efficiency of the inhibitor (%η), calculated via Equation (6), is also provided:

$$\eta\,(\%) = \frac{i^{0}_{corr} - i_{corr}}{i^{0}_{corr}} \times 100 \tag{6}$$

where i⁰_corr and i_corr are the corrosion current densities of the films in the absence and presence of SA, respectively [44]. As is evident from the PDP results, the corrosion resistance of Sample D25 was 2.5 times higher than that of Sample D12. After functionalization, the corrosion resistance of both samples was comparable, with the functionalized Ni film on Sample D25 showing only slightly better corrosion resistance than that formed on Sample D12, owing to the larger fraction of air pockets on its surface; this result is consistent with the EIS results and CA measurements.
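A sketch of Tafel extrapolation and the efficiency of Equation (6) is given below on synthetic polarization data; the Tafel slope, E_corr, the `tafel_fit` helper, and the tenfold i_corr reduction are all assumptions for illustration, not the measured values of Table 2:

```python
import numpy as np

def tafel_fit(E, log_i, E_corr, window=(0.05, 0.15)):
    """Fit the linear Tafel region (window of overpotentials, in V, above
    E_corr) and extrapolate log|i| back to E_corr to estimate i_corr."""
    dE = E - E_corr
    mask = (dE > window[0]) & (dE < window[1])
    slope, intercept = np.polyfit(E[mask], log_i[mask], 1)
    return 10 ** (slope * E_corr + intercept)

# Synthetic anodic branch around E_corr = -0.25 V vs. SCE, 60 mV/decade slope
E = np.linspace(-0.25, -0.05, 200)
log_i = np.log10(1e-6) + (E + 0.25) / 0.060

i_corr_bare = tafel_fit(E, log_i, -0.25)   # recovers ~1e-6 A/cm^2
i_corr_func = 0.1 * i_corr_bare            # assumed tenfold reduction by SA

eta = (i_corr_bare - i_corr_func) / i_corr_bare * 100   # Equation (6)
print(f"inhibition efficiency: {eta:.0f}%")             # 90%
```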
Conclusions

We examined the impact of copper-substrate grain size on the micro-/nanostructure, hydrophobicity, and corrosion resistance of subsequently electrodeposited nickel films before and after functionalization with a self-assembled stearic acid film. Crystallography and topography analyses revealed that the copper substrate with the larger grain size promoted the formation of a hierarchical nanostructured nickel film with a preferred (111) growth direction and a contact angle lower than 10°, whereas the nickel film deposited on the substrate with the smaller grain size exhibited a more homogeneous structure, with a preferred (220) growth direction and a contact angle of about 56°. Deposition of a stearic acid film on the samples resulted in a drastic increase in surface hydrophobicity, with contact angles of ~150°. The contact angle of both samples slightly decreased upon exposure to a corrosive medium, but the functionalized films maintained their superhydrophobic properties even after 5 days of exposure to a 3.5% NaCl solution. The corrosion resistance of the electrodeposited nickel layer on the copper substrate with the larger grain size was comparatively better than that on the substrate with the smaller grain size. Nevertheless, the corrosion resistance of both samples dramatically increased upon surface functionalization with a self-assembled layer of stearic acid.
6,757.8
2020-04-28T00:00:00.000
[ "Materials Science" ]
YNU-HPCC at SemEval-2019 Task 8: Using A LSTM-Attention Model for Fact-Checking in Community Forums

We propose a system that uses a long short-term memory with attention mechanism (LSTM-Attention) model to complete the task. The LSTM-Attention model uses two LSTMs to extract the features of the question and answer pair. Each feature sequence is then composed using the attention mechanism, and the two resulting vectors are concatenated into one. Finally, the concatenated vector is used as input to an MLP, whose output layer uses the softmax function to classify the provided answers into three categories. This model is capable of extracting the features of the question and answer pair well. The results show that the proposed system outperforms the baseline algorithm.

Introduction

Many questions pertaining to various fields are posted to QA forums by users every day, where they collect answers. However, the answers do not always address the question asked; indeed, in some cases, the answer has nothing to do with the question. There are several reasons why this is the case. For example, the responder could have misunderstood the question and so provided a wrong answer. Most QA forums have little control over the quality of the answers posted. Moreover, in our dynamic world, an answer that was true in the past may be false now. Figure 1 presents an example from the Qatar Living forum. In this case, all three answers could be considered good since they formally answer the question. Nevertheless, a1 contains false information, whereas a2 and a3 are correct, as can be established from the official government website. In this study, we aim to solve the problem of detecting true factual information in online forums. Given a question requesting factual information, the goal is to classify the provided answers into the following categories. (i) Factual - True: the answer is true and can be proved by cross-referencing with an external resource. (ii) Factual - False: the answer gives a factual response, but it is either false, partially false, or the responder is uncertain about the response. (iii) Non-Factual: the answer does not provide factual information relevant to the question; it is either an opinion or advice that cannot be verified. Various approaches have been proposed for fact-checking in community forums (Mihaylova et al., 2018), such as long short-term memory (Gers et al., 2000). In this paper, we provide an LSTM-Attention model for fact-checking in community question answering forums. In our approach, we use pretrained word vectors for word embedding. The LSTM layer is used to extract features from the question and answer sentences. Finally, these features are processed by the attention mechanism (Vaswani et al., 2017), which focuses on extracting the information that is most relevant to the current output. The remainder of this paper is organized as follows. Section 2 describes the LSTM, the attention model, and their combination. Section 3 summarizes the comparative results of the proposed model against the baseline algorithm. Section 4 concludes the paper.

LSTM-Attention Model for Fact-Checking

Figure 2 shows the architecture of our model. First, a sentence is transformed into a feature matrix, which is then passed into the LSTM to extract salient features. A simple tokenizer is used to transform each sentence into an array of tokens, which constitutes the input to the model.
This is then mapped into a feature matrix, or sentence matrix, by an embedding layer. The n-gram features are extracted as the feature matrix passes through the LSTM, and the output of the LSTM is passed into the self-attention layer. This layer composes the useful features, and the final results are output by means of a linear decoder.

Embedding Layer

Vectors encoded using the one-hot method have large dimensions and are sparse. Suppose we encounter a 2,000-word dictionary in natural language processing (NLP). When the one-hot method is used for coding, each word is represented by a vector containing 2,000 integers; if the dictionary is larger, this method becomes very inefficient. The one-hot-vector method has many defects when used for word encoding: it has too much redundancy, and the dimension of the vector is too high. The vector has as many dimensions as there are words, which increases the computational complexity. Word embedding (Mikolov et al., 2013) transforms the original high-dimensional, redundant vector into a low-dimensional vector with strong information content: no matter how many words there are, the converted vector generally has only 256 to 1,024 dimensions. The embedding layer is the first layer of the model. Each sentence is regarded as a sequence of word tokens t1, t2, ..., tn, where n is the length of the token sequence.

Long Short-Term Memory

In theory, RNNs (Tsoi and Back, 1994) should be able to handle long-term dependencies, and the parameters can be chosen carefully to solve the most elementary form of this type of problem (Le et al., 2015). However, in practice, an RNN is not able to learn this knowledge successfully. The LSTM was therefore designed to solve the problem of long-term dependency; in practice, the LSTM excels at dealing with long-term dependency information without having to acquire it at great cost. An RNN has a chain of repeating neural network modules. In a standard RNN, the repeating module has a very simple structure. The LSTM has the same chain structure, but the structure of the repeating module is more complex than that of a single neural network layer. Figure 3 shows the detailed structure of an LSTM. The LSTM calculates the hidden state h_t and the cell state c_t using the following equations.

Gates:

$$f_t = \sigma(W_f[h_{t-1}, x_t] + b_f), \quad i_t = \sigma(W_i[h_{t-1}, x_t] + b_i), \quad o_t = \sigma(W_o[h_{t-1}, x_t] + b_o)$$

Input transformation:

$$\tilde{c}_t = \tanh(W_c[h_{t-1}, x_t] + b_c)$$

State update:

$$c_t = f_t \otimes c_{t-1} + i_t \otimes \tilde{c}_t, \qquad h_t = o_t \otimes \tanh(c_t)$$

Here, x_t is the input vector; c_t is the cell state vector; W and b are layer parameters; f_t, i_t, and o_t are gate vectors; and σ is the sigmoid function. Note that ⊗ denotes the Hadamard product. A bidirectional LSTM comprises a forward LSTM and a reverse LSTM. It captures contextual feature information better than a unidirectional LSTM and usually performs better, so we use it to process the sequences. Among the many hidden layers of deep neural networks, the earlier layers learn simple low-level features, and the later layers combine simple features to predict more complex things. Therefore, we use several hidden layers to make predictions more accurate.
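A minimal numpy sketch of one LSTM step implementing the equations above is shown below, with toy random weights rather than trained parameters; the dimensions match the 100-d embeddings and 200-d hidden state used later in Section 3:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x_t, h_prev, c_prev, W, b):
    """One LSTM step on the concatenated [h_{t-1}, x_t] vector."""
    hx = np.concatenate([h_prev, x_t])
    f_t = sigmoid(W["f"] @ hx + b["f"])       # forget gate
    i_t = sigmoid(W["i"] @ hx + b["i"])       # input gate
    o_t = sigmoid(W["o"] @ hx + b["o"])       # output gate
    c_tilde = np.tanh(W["c"] @ hx + b["c"])   # input transformation
    c_t = f_t * c_prev + i_t * c_tilde        # state update (elementwise products)
    h_t = o_t * np.tanh(c_t)                  # hidden state
    return h_t, c_t

d_in, d_h = 100, 200
rng = np.random.default_rng(0)
W = {k: rng.normal(0.0, 0.1, (d_h, d_h + d_in)) for k in "fioc"}
b = {k: np.zeros(d_h) for k in "fioc"}

h, c = np.zeros(d_h), np.zeros(d_h)
for x_t in rng.normal(size=(80, d_in)):       # one 80-token sentence
    h, c = lstm_step(x_t, h, c, W, b)
print(h.shape, c.shape)                       # (200,) (200,)
```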
Attention Mechanism

The concept of the attention mechanism came from the human visual attention mechanism (Butterworth and Cochran, 1980). When people perceive things visually, they usually do not observe a scene end-to-end; instead, they tend to observe specific parts according to their needs. When people find that a scene has something they want to observe in a certain part, they learn to pay attention to that part when similar scenes appear in the future. With an RNN or LSTM, information must be accumulated over several time steps to connect long-distance interdependent features; the longer the distance, the less likely such dependencies are to be captured effectively. In the attention calculation, the connection between any two words in a sentence is established directly in a single computation step. Thus, the distance between long-range dependent features is greatly shortened, which facilitates the effective use of these features, and it becomes much easier to capture long-distance interdependent features in sentences once attention is introduced. As shown in Figure 4, self-attention can be described as mapping a query and a set of key-value pairs to an output. The calculation of attention is divided into three main steps. First, the similarity between the query and each key is calculated to obtain a weight. Second, a softmax function (Jean et al., 2015) normalizes these weights. Finally, the weights and the corresponding values are combined in a weighted sum to obtain the final attention output. In current NLP research, the key and value are often the same, that is, key = value. Here, we use self-attention, in which key = value = query (Firat et al., 2016).

MLP Layer

This layer is a fully connected layer that multiplies the output of the previous layer by a weight matrix and adds a bias vector. The ReLU (Jarrett et al., 2009) activation function is also applied in this layer. The resulting vectors are then passed to the output layer.

Output Layer

This layer outputs the final classification result. It is a fully connected layer that uses softmax as its activation function.

Data Preprocessing

The organizers of the competition provided training data that included one question and a number of answers, each of which was to be classified into one of the categories (Factual - TRUE, Factual - FALSE, Non-Factual). We extracted the questions and corresponding answers and concatenated them into question-answer pairs. As all of the data was provided by the "Qatar Living" forum, the content primarily contained English text, and all non-English characters were ignored. We converted all letters to lower case to match the known tokens in the word2vec pretrained word vectors. We counted the sentence lengths of questions and answers; most were no more than 80 words, so we set the sentence length to 80 words. The word2vec pretrained vectors were used to initialize the weights of the embedding layer. word2vec is a popular unsupervised machine learning algorithm for obtaining word embedding vectors; we used 100-dimensional word vectors to initialize the embedding layer.

Implementation

We used Keras with the TensorFlow backend. The hyper-parameters were tuned on the train and dev sets using the scikit-learn grid search function, which iterates through all possible parameter combinations to identify the one that provides the best performance. The optimal parameters found were as follows: the LSTM layer count is 2, the dimension of the LSTM hidden layer (d) is 200, the dropout rate is 0.3, and training uses a batch size of 128 for 30 epochs. The results also revealed that the model using pretrained word2vec vectors and the Adam optimizer achieved the best performance.
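A hedged Keras sketch of the architecture described above is given below, using the stated hyper-parameters (80-token inputs, 100-d embeddings, two bidirectional LSTM layers of width 200, dropout 0.3, a ReLU MLP, and a 3-way softmax). The vocabulary size and specific layer choices, such as Keras' built-in Attention layer and the average-pooling composition step, are assumptions approximating the authors' implementation, not their exact code:

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

MAX_LEN, EMB_DIM, HIDDEN, N_CLASSES = 80, 100, 200, 3
VOCAB = 20000  # hypothetical vocabulary size

def encoder(name):
    """BiLSTM + self-attention encoder for one sentence of the pair."""
    inp = layers.Input(shape=(MAX_LEN,), name=f"{name}_tokens")
    x = layers.Embedding(VOCAB, EMB_DIM)(inp)  # initialized with word2vec in practice
    x = layers.Bidirectional(layers.LSTM(HIDDEN, return_sequences=True))(x)
    x = layers.Bidirectional(layers.LSTM(HIDDEN, return_sequences=True))(x)
    x = layers.Attention()([x, x])             # self-attention: query = key = value
    x = layers.GlobalAveragePooling1D()(x)     # compose sequence into one vector
    return inp, x

q_in, q_vec = encoder("question")
a_in, a_vec = encoder("answer")
z = layers.Concatenate()([q_vec, a_vec])       # question-answer pair vector
z = layers.Dropout(0.3)(z)
z = layers.Dense(HIDDEN, activation="relu")(z)              # MLP layer
out = layers.Dense(N_CLASSES, activation="softmax")(z)      # 3 factuality classes

model = Model([q_in, a_in], out)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```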
Evaluation Metrics

The system was scored based on Accuracy, macro-F1, and AvgRec, where the "Factual - True" instances were considered positive and the remaining instances negative.

Results and Discussion

To demonstrate the advantages of our system architecture, we ran 6-fold cross-validation on different sets of layers; the experimental results on the training and trial data are shown in Table 1. Our system achieved 0.548 accuracy on Subtask B. The evaluation results revealed that our proposed system showed considerable improvement over the average baseline, which we attribute to our LSTM with attention architecture. Our system can effectively extract features from a question and its answer; using these features, it can predict whether an answer is factual and whether the stated fact is true.

Conclusion

In this paper, we described our submission to SemEval-2019 Task 8, which involved fact-checking in community forums. The proposed LSTM-Attention model combines an LSTM with an attention mechanism: the LSTM extracts local information within both the answer and the question, and the attention mechanism resolves the issue of poor learning on long input sequences. The official results reveal that our system outperformed all baseline algorithms and ranked 9th on Subtask B. In future work, we will query a search engine to fetch relevant documents from the Internet to achieve an improved classification system.
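As a footnote to the evaluation described above, the three official scores can be reproduced with scikit-learn; the labels below are hypothetical predictions, and AvgRec is taken here to be the macro-averaged (mean per-class) recall, which is an assumption about the official scorer:

```python
from sklearn.metrics import accuracy_score, f1_score, recall_score

labels = ["factual_true", "factual_false", "non_factual"]
y_true = ["factual_true", "non_factual", "factual_false", "factual_true"]
y_pred = ["factual_true", "non_factual", "factual_true", "factual_false"]

print("Accuracy:", accuracy_score(y_true, y_pred))
print("macro-F1:", f1_score(y_true, y_pred, labels=labels, average="macro"))
print("AvgRec  :", recall_score(y_true, y_pred, labels=labels, average="macro"))
```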
2,697.8
2019-06-01T00:00:00.000
[ "Computer Science" ]
Serotype and Genotype (Multilocus Sequence Type) of Streptococcus suis Isolates from the United States Serve as Predictors of Pathotype

Streptococcus suis is a significant cause of mortality in piglets and growing pigs worldwide. The species contains pathogenic and commensal strains, with pathogenic strains causing meningitis, arthritis, endocarditis, polyserositis, and septicemia. Serotyping and multilocus sequence typing (MLST) are the primary methods used to differentiate strains, but information is limited for strains found in the United States.

Disease caused by Streptococcus suis is a significant economic and welfare concern in the swine industry. S. suis is a Gram-positive bacterium, and the species contains pathogenic and commensal strains. Pathogenic S. suis strains are associated with meningitis, arthritis, endocarditis, polyserositis, and septicemia in piglets and growing pigs (1,2), and S. suis strains isolated from neurological or systemic tissues (brain/meninges, joints, and heart) are commonly considered the primary pathogens (2-4). Commensal strains normally reside in the upper respiratory tract of pigs, with pigs commonly serving as carriers (1,5,6). S. suis can be an opportunistic pathogen associated with coinfections with other bacterial and viral pathogens (2,3). In addition, some S. suis strains have zoonotic potential, causing meningitis in humans (7).

A 1992 United States study investigated the serotype distribution of S. suis in porcine samples from Minnesota and reported the prevalence of serotypes 2 to 9 and 11, of which serotype 2 was the predominant serotype associated with neurological disease (3). A 1993 U.S. study identified serotypes 1 to 8 and 1/2 in naturally infected pigs, primarily from a single state, with serotype 2 being the predominant serotype, followed by serotypes 3, 4, 7, 8, 1, 5, 1/2, and 6 (13). A large U.S. study in 2009 investigated the serotype distribution of S. suis strains collected from 2003 to 2005 from 17 states, illustrating that the distribution of strains was similar to that in Canada (14). In both countries, serotypes 1/2, 2, 3, 7, and 8 were most prevalent in diseased pigs (14,15), which is dissimilar to the distribution in Europe, where serotype 2 occurs at a considerably higher percentage of isolates than in North America (16).

MLST is a nucleotide sequence-based technique for subtyping bacteria, and a standard MLST scheme has been developed for S. suis, with 1,161 registered sequence type (ST) profiles as of 28 February 2019 (17) (pubmlst.org). Global MLST studies of S. suis identified ST1, ST25, and ST28 as the most prevalent STs in swine (18-21). In North America, ST25 and ST28 are more common among strains recovered from diseased animals, while ST1 strains are more prevalent in Europe and Asia (18,20,22). However, these studies addressed MLST for serotype 2 strains and may not apply to the remaining serotypes.

Previously, studies have classified isolates into pathotypes based on clinical information and site of isolation (3,4). Our objective was to combine information on pathotype with serotype and ST to address the limited information on current S. suis strains circulating within the United States. In total, 208 porcine S. suis isolates from North America were characterized by serotyping and MLST to determine the population and distribution of S. suis in the United States.
Furthermore, the serotype and MLST data were used to investigate associations with the pathogenic and commensal pathotypes, with the goal of identifying pathogenic- and commensal-specific serotype and MLST patterns. Identifying the major disease-causing strains can promote the development of treatment and control plans. Our research seeks to identify pathogenic strains in order to track isolates in an outbreak, select strains for a vaccine, and develop effective treatment and control plans.

Selection of S. suis isolates. A total of 208 S. suis isolates were selected for the project. Most of the S. suis isolates were obtained from routine diagnostic cases submitted between April 2014 and July 2017 to the University of Minnesota Veterinary Diagnostic Laboratory (UMNVDL) or the Kansas State Veterinary Diagnostic Lab (KSVDL). Additional commensal isolates were collected from 9 different farms without systemic S. suis clinical disease. Isolates that met our pathotype criteria (defined below) were selected from as many states as possible (n = 20) to minimize sample bias and increase geographic diversity, representing the major regions of the U.S. swine industry. S. suis isolates were verified to the species level by matrix-assisted laser desorption ionization-time of flight mass spectrometry (MALDI-TOF MS) (Microflex device, Bruker Daltonics GmbH, Germany) (23).

Multiple isolates may be recovered from healthy pigs due to the native microflora of the upper respiratory tract, while a single isolate is generally responsible for systemic infections (24). To limit bias in isolating and selecting strains associated with clinical signs, a pathotype category system was developed for the S. suis isolates, similar to previously published methods (4,25). "Pathogenic" isolates were obtained from the brain/meninges, joint, heart, or liver and were reported as the primary cause of meningitis, arthritis, epicarditis, or septicemia in diagnostic reports by pathologists. "Possibly opportunistic" isolates were from lung samples submitted to the diagnostic laboratory from pigs without signs of neurological or systemic disease, and included two isolates from nasal samples from farms with a clinical outbreak of S. suis disease. "Commensal" isolates were from laryngeal, tonsil, or nasal samples retrieved from farms with no known history of, or current control methods for, S. suis disease.

Serotyping and MLST via whole-genome sequencing. Isolates were recultured for 24 to 48 h at 37°C on blood agar plates (tryptic soy agar [TSA] with 5% sheep blood) (Thermo Fisher Scientific, Waltham, MA, USA) and sent for serotyping to the bacterial serology laboratory at the Diagnostic Service of the Faculty of Veterinary Medicine of the Université de Montréal, Canada. Serotyping was done by the coagglutination test with reference antisera (26-29). Nontypeable samples (samples which failed to react with the serum panel, autoagglutinated, or reacted with several sera) were further serotyped by PCR (30), a technique that cannot differentiate serotype 2 from 1/2 or serotype 1 from 14. S. suis DNA was extracted using the protocol for cultured cells from the QIAamp DNA kit (Qiagen Inc., Germantown, MD, USA) and submitted to the University of Minnesota Genomics Center (UMGC, St. Paul, MN, USA) for library preparation using Nextera XT (Illumina, San Diego, CA), and next-generation sequencing was performed on a HiSeq 2500 instrument (Illumina) with 250-bp paired-end reads.
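Looping back to the pathotype criteria defined above, the rules can be sketched as a simple decision function; the field names and string values below are illustrative, not the study's actual data schema:

```python
# Sketch of the pathotype classification rules described in the Methods.

PATHOGENIC_SITES = {"brain", "meninges", "joint", "heart", "liver"}
CARRIAGE_SITES = {"larynx", "tonsil", "nasal"}

def classify_pathotype(site, primary_cause=False, systemic_signs=False,
                       herd_outbreak=False):
    """Return a pathotype label for one isolate.

    site          -- tissue of origin (lower-case string)
    primary_cause -- pathologist reported S. suis as the primary cause
    systemic_signs-- pig showed neurological or systemic disease
    herd_outbreak -- farm had a clinical outbreak of S. suis disease
    """
    if site in PATHOGENIC_SITES and primary_cause:
        return "pathogenic"
    if site == "lung" and not systemic_signs:
        return "possibly opportunistic"
    if site == "nasal" and herd_outbreak:
        return "possibly opportunistic"
    if site in CARRIAGE_SITES and not herd_outbreak:
        return "commensal"
    return "unclassified"

print(classify_pathotype("brain", primary_cause=True))   # pathogenic
print(classify_pathotype("tonsil"))                      # commensal
```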
Illumina sequencing reads for each isolate were processed using Trimmomatic (31) with an average quality cutoff of 20 (2.3 million average reads per sample). Strains were again confirmed as S. suis by having 96.6% to 100% nucleotide identity to the 1,662-bp S. suis-specific recombination/repair protein (recN) gene sequence (Streptococcus suis 05HAS68, GenBank accession number CP002007) using the S. suis serotyping pipeline (32). In silico MLST analysis was performed using the Short Read Sequence Typing for Bacterial Pathogens (SRST2) program (http://katholt.github.io/srst2), which maps reads to MLST references (33). The ST allele sequences and profiles were obtained from the S. suis MLST database (https://pubmlst.org/ssuis/) (34). Novel ST allele sequences were confirmed by PCR amplification and Sanger sequencing of the aroA, cpn60, dpr, gki, mutS, recA, or thrA genes (17). The primers used for amplification and sequencing of the mutS gene were mutS forward (5'-AAGCAGGCAGTCGGCGTGGT-3') and mutS reverse (5'-AGTACAAACTACCATGCTTC-3'), as described previously (35). STs were grouped into major clonal complexes (CCs) using the entire MLST database and the eBURST software (36). Groups were defined with the strict parameters for determining single-locus variants (match of 6 or more loci). The entire S. suis MLST database was displayed as a single eBURST diagram by setting the group definition to zero of seven shared alleles.

MLST clustering analysis. Alignments, sequence identity calculations, and construction of the MLST sequence identity heatmap for basic clustering analysis were performed with R software (v.3.4.3) (37) and R packages (38-41). The concatenated sequences of the seven MLST alleles were aligned with MUSCLE (v.3.8.31) (42), and sequence identities were calculated. The sequence identity scores were used to generate a heatmap based on Euclidean distances and neighbor-joining clustering.

Statistical analysis. Basic data transformation and plotting for statistical analyses were performed using R software and R packages (43-45). Ternary plots of subtypes and pathotypes were generated using the R package Ternary (v.1.0.2) (46). The pathotype boundaries were assigned and color-coded using 50% as a cutoff. Odds ratio (OR) analysis was used to test all pathotype-subtype combinations containing more than a single isolate, and 95% confidence intervals (CIs) were generated using Fisher's exact test. For each combination, a 2-by-2 table was created comparing that pathotype and subtype against all others. Similar 2-by-2 tables were generated for testing pathotype and serotype-ST combinations by chi-square and Fisher's exact tests. ORs greater than 1 with a minimum lower limit of 0.3 were considered biologically significant. The minimum lower limit of 0.3 was calculated as the average lower limit among the combinations, is specific to our data set, and was selected for the identification of biologically meaningful relationships. An infinite (Inf) OR for a pathotype-subtype combination refers to a subtype that occurred in only one pathotype. The associations within and between types were investigated using multiple correspondence analysis (MCA), with the FactoMineR (v.1.41) and factoextra (v.1.0.5) packages (47,48), by setting the serotype, ST, and pathotype as the three variables.

Data availability. The reads associated with the samples were deposited in the NCBI Sequence Read Archive under accession numbers SRR9123061 to SRR9123268 (see Table S1).
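Although the authors ran this screen in R, the 2-by-2 odds-ratio test can be sketched in Python; the subtype counts below are hypothetical (only the pathotype totals, 139 pathogenic versus 69 others, match the study), and the Woolf log-OR interval stands in for the exact interval used in the paper:

```python
import numpy as np
from scipy.stats import fisher_exact

# One pathotype-subtype combination tested against "all others":
#                 subtype   not subtype
table = np.array([[40,        99],        # pathogenic        (row total 139)
                  [ 4,        65]])       # all other pathotypes (row total 69)

odds_ratio, p_value = fisher_exact(table)

# Approximate (Woolf) 95% CI on the log odds ratio
log_or = np.log(odds_ratio)
se = np.sqrt((1.0 / table).sum())
ci = np.exp([log_or - 1.96 * se, log_or + 1.96 * se])

print(f"OR = {odds_ratio:.2f}, p = {p_value:.4f}, 95% CI = {ci.round(2)}")
# In the study, OR > 1 with a lower CI limit above 0.3 was treated as
# biologically significant.
```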
Serotype and ST distributions of S. suis in the United States. Characterization of S. suis isolates by serotyping and MLST. A total of 208 S. suis isolates were characterized, of which 203 were from the United States, 4 from Canada, and 1 from Mexico (Fig. 1). The clinical history and tissue of origin of the isolates were used to determine the pathotype, and the 208 isolates were classified as pathogenic (n = 139), possibly opportunistic (n = 47), and commensal (n = 22) (Table 1). The recN segment of S. suis was identified in the whole-genome sequences of all 208 strains (>99% coverage of the gene and 40× to 314× depth), indicating that the isolates were S. suis. In silico MLST analyses were performed on the WGS data, and the samples had an average depth of 155× across the seven loci. STs could not be determined for four isolates because one housekeeping gene necessary for MLST classification was not identified in these isolates (referred to as NF; see Table S1 in the supplemental material). Fifty-eight different STs were identified for the remaining 204 isolates, indicating high diversity among the isolates (see Table S3 in the supplemental material). Twenty of these STs were previously defined, while 38 were newly identified (961 to 969, 971 to 998, and 1001; n = 56). The predominant ST was ST28 (n = 52), followed by ST94 (n = 18) and ST1 and ST108 (n = 17 each).

Relationship between serotypes and STs. The distribution of STs by serotype illustrated the diversity of the S. suis strains (Fig. 2). Fifteen of the 20 serotypes identified contained multiple STs, with the number of different STs within a single serotype ranging from 2 to 8. The predominant serotype, 1/2, contained three STs (ST28 [n = 44], ST961 [n = 8], and ST982 [n = 1]). Serotypes 8, 14, 24, 28, and 29 each contained a single ST, namely, ST87, ST1, ST94, ST968, and ST972, respectively. However, serotypes 24, 28, 29, and 1or14 each contained only a single isolate.

Associations among pathotypes, serotypes, and STs by analysis of proportions and OR. Associations between pathotype and serotype. Proportions and OR analyses were used to investigate pathotype associations with serotype for serotypes (proportions) or serotype-pathotype combinations (OR analysis) that contained more than one isolate. Between 80% and 100% of serotype 1, 1/2, 2, 7, 14, and 23 isolates were classified as the pathogenic pathotype (Fig. 5A), and these associations were supported by OR analysis (Fig. 5B). In the ternary plot, serotypes 3, 5, and 9 demonstrated a moderate association with the pathogenic pathotype, with 56% to 63% of isolates classified as pathogenic; however, these associations were not supported by OR analysis. OR analysis supported associations of serotypes 10 and 12 with the possibly opportunistic pathotype, with 67% of isolates classified as possibly opportunistic in the ternary plot. Serotypes 21 and 31, with 67% to 80% of isolates classified as commensal in the ternary plot, were supported as commensal pathotypes by OR analysis.

FIG 3 Distribution of S. suis pathotypes by serotype. The stacked histogram illustrates the serotypes identified in this study, subdivided by pathotype (pathogenic, possibly opportunistic, and commensal). The x axis represents each serotype, while the y axis represents the frequency of each pathotype. Bar sections are labeled with their respective pathotypes. The categories 1or14 and NT (nontypeable) represent isolates with serotypes that could not be differentiated by coagglutination, PCR, or WGS.
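The proportion analysis behind the ternary plots can be sketched as a simple crosstab with a 50% majority cutoff; the records below are hypothetical examples, not the study data:

```python
import pandas as pd

df = pd.DataFrame({
    "serotype": ["1/2", "1/2", "1/2", "7", "7", "21", "21", "31"],
    "pathotype": ["pathogenic", "pathogenic", "possibly_opportunistic",
                  "pathogenic", "pathogenic", "commensal", "commensal",
                  "commensal"],
})

# Share of each pathotype within each serotype (rows sum to 1)
props = pd.crosstab(df["serotype"], df["pathotype"], normalize="index")
print(props.round(2))

# Assign each serotype to the pathotype holding a >50% majority, if any
majority = props.apply(lambda r: r.idxmax() if r.max() > 0.5 else "mixed", axis=1)
print(majority)
```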
Associations between pathotype and ST. Proportions and OR analysis were used to investigate pathotype associations with ST for STs (proportions) or ST-pathotype combinations (OR analysis) that contained more than one isolate. The ternary plot of the 58 STs (and the NF category) illustrated a clear differentiation by pathotype for all STs except ST87 and ST119 (approximately 50% pathogenic) (Fig. 6A). Twelve STs and the NF category contained over 75% of isolates classified as pathogenic, including ST1, ST13, ST25, ST28, ST29, ST94, ST108, ST117, ST225, ST373, ST961, and ST977, which demonstrated the same associations by OR (Fig. 6B). ST969 had an association with the possibly opportunistic pathotype, which was supported by OR. The commensal pathotype demonstrated a strong association with ST750 and ST821, which was supported by OR analysis.

Odds ratio and MCA of pathotypes, serotypes, and STs. Initially, OR was used to investigate the relationships between pathotype and serotype-ST combinations, but significant relationships were lacking for the combinations (OR data not shown). Then, MCA was performed to analyze the possible relationships among all serotypes, STs, and pathotypes (Fig. 7). The first and second dimensions of the analysis represent only 6% of the data. The ellipses represent 95% of isolates in each pathotype. All the subtypes demonstrating a strong association with the pathogenic pathotype by OR analysis (Fig. 5 and 6) fell within the overlapping 95% ellipses for multiple pathotypes by MCA (Fig. 7). Five serotypes and 13 STs in the commensal pathotype lacked overlapping ellipses. Serotypes 21 and 31 lacked any isolates with the pathogenic pathotype (Fig. 3), while ST750 and ST821 contained only isolates with the commensal pathotype (Fig. 4). The limited representation of the MCA data (6% variance) and the overlapping ellipses indicate a lack of relationship between serotype, ST, and pathotype, highlighting potential confounding factors for predicting pathogenic isolates based on serotyping and MLST together. Thus, the relationship between pathotype, serotype, and ST is lacking for the pathogenic and possibly opportunistic pathotypes.

[FIG 5 legend, fragment: minimum lower limit (OR, 0.3) and typical threshold (OR, 1) for identifying significant ORs. Error bars represent the 95% confidence intervals. Inf, infinite. Nontypeable (NT) represents isolates which could not be serotyped using coagglutination, PCR, or WGS.]

Associations between pathotype and MLST CC by analysis of proportions and OR. Identification of S. suis CCs. To investigate the population structure of our S. suis isolates by MLST, the STs were assigned to CCs defined by eBURST, using the entire S. suis MLST database and our 58 STs (Fig. 8 and Table S3). Using the stringent definition (six of seven shared alleles) for defining a CC, five CCs (CC1, CC28, CC94, CC104, and CC750) with a primary founder were identified from our set of STs. However, multiple STs (n = 30) did not form a CC or formed a CC without a primary founder (Table 2). The most diverse CC (CC94) contained isolates from 13 of the 28 STs assigned to a CC, compared with CC1, CC28, CC104, and CC750, which contained isolates from 4, 7, 1, and 3 STs, respectively.

Associations between pathotype and CC. Patterns between CC and pathotype were investigated by proportions and OR analysis. CC1, CC28, CC94, and CC104 were associated with the pathogenic pathotype, and the associations were supported by OR analysis (Fig. 9).
CC750 was associated with the commensal pathotype, with 83% of isolates classified as commensal, and the association was supported by OR analysis. The STs among the group of isolates lacking a CC did not associate with any pathotype. CC1 was divided into two groups and clustered with CC750 and isolates without a CC. The first cluster of CC1 contained a concentration of isolates of the pathogenic pathotype (n = 17/28), while the second cluster contained 4 pathogenic isolates, 6 possibly opportunistic isolates, and a single isolate of the commensal pathotype. Lacking a CC, the ST13 isolates (n = 5; serotype 1 [n = 4] and serotype 1or14 [n = 1]) clustered with CC1 isolates, demonstrating a possible genetic relatedness to isolates of CC1 and the pathogenic pathotype. Serotypes 1, 2, and 14 and ST1 and ST13 were also associated with isolates of the pathogenic pathotype by proportions and OR. Inversely, CC750 (n = 6) consisted of isolates with the commensal (n = 5) and possibly opportunistic (n = 1) pathotypes and was predominantly composed of isolates characterized as nontypeable (n = 5/6) and ST750 (n = 4/6). Interestingly, CC750 was closely related to the group of isolates lacking a CC (n = 31), which included isolates with the commensal pathotype (n = 12/31; multiple serotypes and novel STs), providing further evidence for the association between CC750 and the commensal pathotype.

DISCUSSION

S. suis is an important swine pathogen, often resulting in neurological and systemic disease caused by pathogenic strains. However, much is still unknown about the population structure of S. suis in the United States. In this study, we utilized serological and molecular typing techniques to investigate the serotype and ST distributions of U.S. isolates. Fourteen of the 20 S. suis serotypes identified in this study were recovered from pigs with clinical disease (n = 139). The predominant pathogenic serotypes identified in this study were 1/2 (n = 45), 7 (n = 19), and 2 (n = 14), which have been previously identified as predominant serotypes from diseased pigs in North America (14,15,49,50). While serotypes 2 and 3 are considered predominant pathogenic serotypes in North America, only 10.6% of the strains recovered from diseased pigs in our study belonged to these serotypes. Furthermore, the serotype distribution in our study differed from European studies, in which serotypes 2 and 9 are predominant (50,51). The higher prevalence of serotype 1/2 in North America could be due to a common evolutionary lineage with serotype 2. Genetic analysis by PCR-based serotyping of the cps loci demonstrated that serotypes 1/2 and 2 share the same genetic profile and cannot be differentiated by serotype-specific cps loci (11,12). Sequencing of the cpsK gene reveals a missense mutation permitting the differentiation of serotypes 2 and 1/2 (12), but a PCR protocol to differentiate these serotypes has not yet been implemented. In our study, the geographic distribution of S. suis covered 20 different states (Table S1), representing the major swine-producing states in the United States. Variability in the serotype distribution of S. suis has been reported within the same country, which is likely due to natural differences in geographic distribution (13). The geographic distribution of the S. suis serotypes in our study identified serotype 1/2 in 13 of the 20 states, with a concentration in 5 of the 20 states, possibly indicating a geographic distribution pattern of serotype 1/2 in the United States.
Serotype 1/2 is also a frequent serotype (3) found in Canada, although at lower levels than serotypes 2 and 3 (52). This prevalence of serotype 1/2 in Canada may contribute to the U.S. serotype distribution through the transport of pigs between the two countries (50). Transport of livestock has been associated with geographic invasion or the emergence of a pathogen in a novel geographic area (53-55). While most pigs transported to the United States head to harvest facilities, new breeding stock could be colonized with new S. suis strains, which could result in the spread of new strains to downstream swine farms. Whole-genome analysis of the U.S. and Canadian serotype 1/2 strains would further clarify the relationship between U.S. and Canadian 1/2 strains.

[Figure legend, fragment: The five CCs are indicated by black brackets, with the number of isolates in the CC. Blue brackets represent clusters of isolates without a CC. Nontypeable (NT) represents isolates which could not be serotyped using coagglutination, PCR, or WGS. #, group of isolates lacking a CC; +, ST13 not within a CC but closest to CC1; ~, ST979 not within a CC but closest to CC94.]

We anticipated identifying a large number of novel ST profiles due to the inclusion of commensal and possibly opportunistic samples, which are not generally subjected to subtyping by MLST. As a result of this study, 38 novel ST profiles were submitted to the S. suis MLST database. Of the 58 STs identified here, 24 STs were isolated from pigs with clinical disease, and the predominant STs were ST28 (n = 42), followed by ST1 (n = 17), ST94 (n = 14), and ST108 (n = 14). In a previous Canadian study in 2011, ST25 was the predominant ST found in Canada, while ST28 was the predominant ST found in the United States (22). Our results confirm ST28 as a predominant ST of the pathogenic pathotype, while ST25 represented only 1% of the strains recovered from diseased pigs (n = 2). The reason for this low percentage of ST25 isolates in the United States is unclear, and an updated ST analysis of S. suis strains from Canada is needed to confirm ST25 as the predominant ST in that country. Our ST distribution also differs from that of European and Asian countries, in which ST1 strains, largely characterized as serotype 2, are predominant in diseased pigs (50,56).

Proportions, OR, and clustering analyses illustrated potential relationships among pathotypes, serotypes, and STs. While multiple pathogenic serotypes and STs were identified in our study, this discussion focuses on serotypes and STs with more than four isolates in the pathogenic pathotype. Serotypes 1, 1/2, 2, 7, 14, and 23, as well as ST1, ST13, ST28, ST94, ST108, ST961, and ST977, were frequently identified among pathogenic strains. Based on our pathotype classifications, isolates characterized as pathogenic were linked to neurological or systemic disease, and our analyses provide evidence that these subtypes are potential indicators of virulence. As discussed previously, serotypes 2 and 1/2 are the predominant serotypes identified from diseased pigs in North America, supporting our observations of these serotypes as pathogenic strains by proportions, OR, and clustering analysis (14,15,49,50,52). Serotypes 1 and 7 are more prevalent in diseased pigs in some European countries than in North America, and pathogenic serotype 1 strains have been linked to the production of muramidase-released protein (MRP), extracellular-factor protein (EF), and suilysin (SLY).
Pathogenic serotype 1 strains have been characterized as producing both MRP and EF, with variable production of SLY (16,18). In one study (18), four of the six serotype 1 strains were MRP+ EF+ SLY+ and five of the six were either ST1 or ST13, indicating a correlation between serotype 1, ST1, ST13, and virulence. Interestingly, the serotype 1 isolates in the current study were either ST1 (n = 7/11) or ST13 (n = 4/11) and were associated with the pathogenic pathotype, supporting the previous study. Serotype 7 was the second-most common serotype identified in this study, and 19/23 isolates were characterized as the pathogenic pathotype. Virulence studies on serotype 7 strains demonstrating clinical disease in pigs are limited, but a previous in vivo study associated serotype 7 with septicemia and arthritis, with rare cases of meningitis (57). These findings support the classification of serotype 7 as pathogenic. This study demonstrates that ST appears to be a stronger predictor of pathotype than serotype. While experimental mouse models have demonstrated the virulence of serotype 2 ST1, ST25, and ST28 (22,56), our analyses also illustrated ST1, ST13, ST28, ST94, ST108, ST961, and ST977 (of various serotypes) as pathogenic. As mentioned previously, we hypothesize that Canadian and U.S. serotype 2 and serotype 1/2 strains share an evolutionary lineage. If so, the observed virulence of serotype 2 ST28 in previous studies may support the virulence of serotype 1/2 ST28, as predicted in our study. Whole-genome single nucleotide polymorphism (SNP)-based phylogenetic analysis of S. suis serotype 2 ST28 strains revealed a unique clade composed of virulent strains capable of inducing severe disease in a murine infection model (58). These strains differed in virulence from reference serotype 2 ST28 strains of low virulence. Recently, a study characterized pathogenic Australian serotype 1/2 ST1 strains by core genome single nucleotide polymorphisms and linked them by genetic similarity to pathogenic serotype 1/2 ST1 strains from the United Kingdom and Vietnam (59). Our clustering analysis indicates that ST1, ST13, ST94, ST108, ST961, and ST977 may also be pathogenic. It would be of interest to further investigate the virulence properties of serotype 1/2 ST28, as well as ST1, ST13, ST94, ST108, ST961, and ST977 strains isolated in the United States. In addition to strains in CC1, CC28, and CC104, serotype 9 strains belonging to CC16 (previously CC87) have been isolated from pigs with invasive disease (20). However, the low percentage of serotype 9 strains in our study is reasonable because serotype 9 is predominant in diseased pigs from the Netherlands (16). The serotype 9 strains in this study belong to multiple CCs or occur as singletons and did not demonstrate associations with pathotype. Serotype 9 isolates from diseased and healthy pigs in China were characterized into multiple STs and demonstrated high diversity among the isolates (60). The majority of these serotype 9 isolate STs occurred as singletons and did not form major clonal complexes. Inversely, the commensal pathotype was associated with S. suis serotypes 21 and 31 and ST750 and ST821 by proportions, OR, and cluster analysis. Studies on S. suis from North America have observed a prevalence of serotype 21 from healthy pigs (26,27). However, previous studies have identified a limited number of serotype 31 strains from pigs with typical clinical signs of S. suis disease (49,52,61,62).
The association between serotype 31 and pathotype remains unclear and requires further investigation. Associations among serotypes, STs, and pathotypes, although identified by individual analyses, were not evident in the MCA, indicating that serotype and ST considered jointly could not predict pathotype. We investigated additional approaches, such as chi-square and Fisher's exact tests, but these tests failed to identify significant relationships for either serotype or ST. In addition, we investigated associations between serotype-ST combinations and pathotype by chi-square and Fisher's exact tests and did not identify any significant associations. One possible explanation for this is the lack of discrimination due to the limited sample size within each subtype. Traditional chi-square and Fisher's exact tests work best on nonsparse data (few zero values) (63,64). These tests have been used to identify associations between S. suis subtypes and characteristics of pathogenicity. However, most studies involved a limited number of subtypes of interest, while our study focused on all serotypes and STs identified in our sample set. Due to the diversity of the S. suis strains in this study and the large number of subtypes evaluated, the division of our data by pathotype resulted in sparse data. Thus, sparse data limit our ability to conduct certain analyses using common approaches for S. suis. An OR formula was used to evaluate the statistical significance of subtype-pathotype associations, as well as the size of the possible effect, to limit the misidentification of associations due to sample size. For this reason, proportions were used for basic identification of relationships and OR analysis was used for further discrimination of strains. In summary, our study expands knowledge of the S. suis strains circulating in the United States between 2014 and 2017 by investigating serotype and ST distributions. We identified a diverse set of strains, predominantly serotypes 1/2, 3, and 7 and STs ST1, ST28, and ST94. Further investigation by pathotype classification (defined in this study) identified STs that could be differentiated as pathogenic or commensal pathotypes. The predominance of serotype 1/2 strains from clinically affected pigs in our study stresses the importance of expanding studies of virulence traits to other serotypes and STs of S. suis. These findings can be applied to improve the prevention and control of S. suis by selecting strains for diagnostics and vaccine development. SUPPLEMENTAL MATERIAL Supplemental material for this article may be found at https://doi.
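The sparse-data constraint described above is straightforward to illustrate. Below is a minimal sketch (Python, assuming scipy is available) of the kind of subtype-by-pathotype 2x2 analysis discussed here, combining a Haldane-Anscombe-corrected odds ratio with Fisher's exact test; the counts are illustrative placeholders, not the study's data.

```python
from scipy.stats import fisher_exact

def odds_ratio(a, b, c, d, correction=0.5):
    """2x2 odds ratio. a = subtype & pathogenic, b = subtype & other,
    c = remaining & pathogenic, d = remaining & other. A Haldane-Anscombe
    correction (add 0.5 to every cell) handles the zero cells typical of
    sparse subtype tables."""
    if 0 in (a, b, c, d):
        a, b, c, d = (x + correction for x in (a, b, c, d))
    return (a * d) / (b * c)

# Hypothetical counts for one ST against the pathogenic pathotype
table = [[17, 3], [25, 50]]
print("OR =", round(odds_ratio(17, 3, 25, 50), 2))
print("Fisher exact p =", round(fisher_exact(table)[1], 4))
```

On sparse tables the exact test retains validity while the corrected OR keeps the effect size finite, which is the division of labor the authors describe between proportions and OR analysis.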
6,612.2
2019-06-26T00:00:00.000
[ "Medicine", "Biology" ]
Tetranectin Binds to the Kringle 1-4 Form of Angiostatin and Modifies Its Functional Activity Tetranectin is a plasminogen kringle 4 domain-binding protein present in plasma and various tissue locations. Decreased plasma tetranectin or increased tetranectin in the stroma of cancers correlates with cancer progression and adverse prognosis. A possible mechanism through which tetranectin could influence cancer progression is by altering activities of plasminogen or the plasminogen fragment, angiostatin. Tetranectin was found to bind to the kringle 1-4 form of angiostatin (AST K1-4). In addition, tetranectin inhibited binding of plasminogen or AST K1-4 to extracellular matrix (ECM) deposited by endothelial cells. Finally, tetranectin partially counteracted the ability of AST K1-4 to inhibit proliferation of endothelial cells. This latter effect of tetranectin was specific for AST K1-4 since it did not counteract the antiproliferative activities of the kringle 1-3 form of angiostatin (AST K1-3) or endostatin. These findings suggest that tetranectin may modulate angiogenesis through interactions with AST. INTRODUCTION Tetranectin (TN) plays a role in skeletal formation during development since targeted deletion of the protein results in spinal deformity [1]. The function(s) of tetranectin in postnatal life have not been elucidated, although there is evidence for roles in tissue remodeling, coagulation, and cancer. Tetranectin was originally isolated as a plasminogen-binding protein that can enhance plasminogen activation in the presence of tissue plasminogen activator [2]. Tetranectin binds to plasminogen through a calcium-sensitive interaction of its C-terminal domain with the kringle 4 domain of plasminogen [3,4]. Tetranectin also has a distinct binding site in its N-terminus that mediates binding to complex sulfated carbohydrates (eg, heparin) [5]. The N-terminus of tetranectin could, therefore, mediate binding to extracellular matrix components. Plasma levels of tetranectin are approximately 100 nM in healthy adults [6]. However, these levels decline in patients with cancer and rheumatoid arthritis [6,7,8]. Tetranectin is also found in a mobilizable set of granules in neutrophils [9], in monocytes [10] and platelets [7], and in various tissue locations like cartilage and the extracellular matrix (ECM) of developing or regenerating muscle [11,12,13]. Tetranectin is implicated in the pathogenesis of cancer since decreased plasma levels of tetranectin correlate with cancer progression [6,14]. In the case of ovarian cancer, decreased plasma levels of tetranectin were a stronger predictor of adverse prognosis than cancer stage [15]. Furthermore, tetranectin is present in the stroma of various cancers (eg, breast, ovary, colon), whereas it is not present in the normal tissue from which the cancers arose [16,17]. Positive staining for tetranectin in cancer stroma has also been strongly correlated with cancer progression [15]. The mechanisms through which tetranectin may participate in cancer progression have not been elucidated. Tetranectin colocalizes with plasminogen in the invasive front of melanoma lesions [18], although how tetranectin affects binding or local activation of plasminogen in cancer stroma has not been determined. This paper explores the hypothesis that tetranectin may interact with angiostatin. Angiostatin is formed in cancer tissues by proteolytic degradation of plasminogen. The predominant form of angiostatin produced in cancer tissues is AST K1-4 [19,20,21].
AST K1-4 inhibits cancer progression and metastasis by inhibiting cancer-related angiogenesis. We demonstrate that tetranectin binds to AST K1-4 and reduces its ability to bind to ECM of endothelial cells or to inhibit endothelial cell growth. Reagents. Human plasminogen and an antibody directed against the K1-3 domain of plasminogen were purchased from Enzyme Research Labs (South Bend, Ind). Rabbit antihuman tetranectin (with or without horseradish peroxidase attached) was obtained from DAKO Corp (Carpinteria, Calif). Recombinant angiostatins and endostatin. Recombinant human angiostatins containing kringle domains 1-3 and 1-4 (AST K1-3 and AST K1-4), and recombinant human endostatin, were graciously provided by Drs Nicolas MacDonald and Kim Lee Sim (EntreMed, Inc, Rockville, Md). AST K1-4 was produced in Chinese hamster ovary cells and purified as described in [22]. Endostatin and AST K1-3 were produced in Pichia pastoris [23,24]. Native AST K1-4 derived from human plasma was purchased from Angiogenesis Research Industries (Chicago, Ill). The recombinant and native angiostatins had similar endothelial cell growth inhibitory properties (data not shown). The native preparation was used in the endothelial growth inhibition assays (see below). Enzyme-linked immunosorbent assay (ELISA) for binding of angiostatin or plasminogen to tetranectin. Binding of angiostatin or plasminogen to tetranectin was assessed by coating plates initially with wild-type or mutant forms of tetranectin. Tetranectin was diluted to a final concentration of 6.8 µg/mL (100 nM) in coating buffer (bicarbonate buffer at pH 9.6), added to 96-well microtitre plates (Costar, Corning Inc, Corning, NY), and incubated overnight at 4°C. The wells were washed and then incubated with either plasminogen (22.5 µg/mL) or angiostatin (50 µg/mL) at room temperature for 1 hour. Bound plasminogen or angiostatin was detected by addition of a 1:1000 dilution of antibody directed against the kringle 1-3 domain of plasminogen (Enzyme Research Laboratories, South Bend, Ind) for 1 hour at room temperature. Preliminary studies demonstrated that this antibody recognized angiostatin and plasminogen to a similar extent. A secondary antibody (HRP-labeled donkey anti-mouse IgG; Jackson Research Labs, West Grove, Pa) was then added at 1:40,000 dilution for 1 hour at room temperature. Binding was detected using a TMB peroxidase EIA substrate kit (BioRad, Hercules, Calif) and 1 N H2SO4. OD450 readings were measured using a Titertek Multiscan reader. In all experiments, background binding of plasminogen and angiostatin was tested by including additional wells coated with 2.5% BSA only. Note that initial experiments were attempted in which angiostatin or plasminogen was coated onto ELISA plates followed by addition of tetranectin. However, it was found that background binding of tetranectin to BSA-coated plates was too high to reliably assess binding by this method. Recombinant tetranectins. Recombinant wild-type human tetranectin was produced in E coli, refolded and purified as described in [10]. Mutant tetranectins were generated by site-directed mutagenesis as described in [3,4]. Assay of binding of plasminogen or angiostatin to ECM of endothelial cells. ECM was prepared from human umbilical vein endothelial cells (HUVECs) grown for 2 days postconfluence as described in [25]. HUVECs were obtained from Clonetics Products, a division of BioWhittaker (San Diego, Calif) and cultured as outlined in the manufacturer's instructions.
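A quick arithmetic check on the ELISA coating conditions above: the stated 6.8 µg/mL ≈ 100 nM implies a molecular weight of about 68 kDa, consistent with the homotrimeric form of tetranectin rather than a value taken from the paper. A minimal sketch:

```python
# Back-calculate the molecular weight implied by the coating conditions
# (6.8 µg/mL stated to equal 100 nM); a sanity check, not a value from the paper.
mass_g_per_L = 6.8e-3        # 6.8 µg/mL expressed in g/L
molar_mol_per_L = 100e-9     # 100 nM expressed in mol/L

mw = mass_g_per_L / molar_mol_per_L
print(f"Implied MW = {mw / 1000:.0f} kDa")  # 68 kDa, consistent with trimeric tetranectin
```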
The subendothelial matrix was recovered by removing cells with 0.5% Triton X-100 in PBS (pH 7.4), followed by incubation with 25 mM NH4OH to remove cytoskeletal elements, and then washed with PBS supplemented with 0.05% Tween-20. The adherent ECM was incubated with 1% BSA in PBS to saturate nonspecific protein binding sites. AST (at 0.5, 0.25, and 0.125 µM) was preincubated with TN (at 100 nM) for 30 minutes at 37°C and then added to designated wells of a 96-well plate. Bound AST was detected as described above. Assay of endothelial cell proliferation. HUVECs were seeded overnight in minimum essential medium containing 2.5% fetal bovine serum (FBS) in a 96-well plate (5000 cells/well) at 37°C. The following day, fresh medium supplemented with basic fibroblast growth factor (bFGF; 10 ng/mL) was added. In addition, tetranectin, angiostatin, or endostatin or combinations of these proteins were added to triplicate wells. The cells were incubated for 72 hours, with fresh medium and test substances (bFGF, angiostatin, endostatin, tetranectin) replenished as indicated at 24 hours and 48 hours. Cells were then harvested at 72 hours and counted by hemocytometer. Plasminogen and angiostatin bind to tetranectin. As expected, plasminogen bound to recombinant wild-type tetranectin (Figure 1). Since the form of angiostatin composed of kringle domains 1-4 of plasminogen (AST K1-4) contains kringle 4, we tested its binding to tetranectin in parallel. AST K1-4 also bound significantly to tetranectin (Figure 1). The mechanism of binding of tetranectin to plasminogen has been determined through the use of tetranectin mutants [3]. Binding is calcium-sensitive (ie, reduced by increasing concentrations of calcium) and is mediated by the C-terminal domain of tetranectin. A mutant form of tetranectin in which lysine 148 was replaced with alanine (TN K148A) binds to plasminogen markedly less than wild-type tetranectin [4]. In contrast, substitution of threonine 149 with tyrosine (TN T149Y) resulted in increased binding to kringle 4 [4]. Plasminogen and AST K1-4 bound significantly less to TN K148A and significantly more to TN T149Y than to wild-type TN (Table 1 and Figure 1). [Figure 1 legend: ELISA plates were coated with recombinant wild-type human tetranectin (100 nM) or BSA, and then treated with plasminogen (22.5 µg/mL) and AST K1-4 (50 µg/mL). Results shown are mean ± SEM of 5 experiments (each experiment done in triplicate). Binding of plasminogen and AST K1-4 to tetranectin was significantly greater than binding to BSA-coated plates (P < .01). Binding of plasminogen to TN was significantly greater than binding of AST K1-4 (P < .05).] As shown in Figure 1, AST K1-4 bound to wild-type TN significantly less than plasminogen. However, binding of AST K1-4 to the TN T149Y form was equivalent to binding of plasminogen. Increased plasminogen and angiostatin binding of TN T149Y could result from the greater affinity of this mutant for kringle 4. TN T149Y is also distinguished from wild-type tetranectin in that it binds to the kringle 2 domain of plasminogen [4], which could be involved in binding to AST K1-4. This is likely to be the case since AST K1-3 showed significant binding to TN T149Y, whereas binding of wild-type TN to AST K1-3 was not significantly greater than background binding to BSA (data not shown).
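A rough sense of how much AST would be TN-bound during the preincubation step above (AST at 0.125-0.5 µM with 100 nM TN) can be had from simple 1:1 equilibrium binding. The Kd below is an assumed illustrative value, since the paper reports no affinity constant; this is a sketch of the arithmetic, not the authors' analysis.

```python
import math

def complex_conc(A, T, Kd):
    """Equilibrium concentration of a 1:1 A-T complex from total
    concentrations A and T and dissociation constant Kd (same units),
    via the standard quadratic solution."""
    s = A + T + Kd
    return (s - math.sqrt(s * s - 4.0 * A * T)) / 2.0

Kd_nM = 50.0                           # assumed value, for illustration only
for ast_nM in (500.0, 250.0, 125.0):   # AST K1-4 doses used in the ECM assay
    bound = complex_conc(ast_nM, 100.0, Kd_nM)
    print(f"AST {ast_nM:.0f} nM: {100 * bound / ast_nM:.0f}% bound to TN")
```

Under this assumption only a minority of AST is complexed at the highest dose, which is one way to rationalize why tetranectin reduced, rather than abolished, AST binding to the ECM.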
Nonetheless, binding of AST K1-3 to TN T149Y was markedly less than that of AST K1-4 or plasminogen, indicating that the increased affinity of TN T149Y for kringle 4 accounts for most of the increased binding of this mutant to AST K1-4. Angiostatin (AST K1-4) binds to ECM of endothelial cells and tetranectin inhibits this binding. Our next goal was to determine if tetranectin alters functional activities of angiostatin. Plasminogen binds to ECM of endothelial cells [25,26]. We wanted to determine if angiostatin also binds to ECM of endothelial cells and to determine the effect of tetranectin on this binding. ECM of HUVECs was prepared as described in [25]. As shown in Figure 2, plasminogen did bind to this matrix, and this binding was significantly inhibited by pre-incubation of plasminogen with a physiological concentration (100 nM) of wild-type tetranectin. As shown in Figure 3, AST K1-4 also bound to the ECM, and binding was again significantly reduced by tetranectin. Tetranectin modulates the ability of angiostatin to inhibit endothelial cell proliferation. AST K1-4 significantly inhibited the bFGF-stimulated growth of HUVECs, as expected (Figure 4). Tetranectin alone did not significantly alter proliferation in the presence (Figure 4) or absence (data not shown) of bFGF. However, when HUVECs were treated with both tetranectin and AST K1-4, proliferation was significantly greater than that with AST K1-4 alone. Note that tetranectin did not completely reverse the antiproliferative action of AST K1-4, since there were still significantly fewer cells in cultures treated with the combination of tetranectin and AST K1-4 than in control cultures. In the experiments shown in Figure 4, there was a trend (not statistically significant) toward increased proliferation of endothelial cells in response to tetranectin alone. It was possible, therefore, that the ability of tetranectin to counteract the antiproliferative activity of AST K1-4 resulted from independent effects of tetranectin on endothelial cell proliferation, rather than from its interaction with AST K1-4. To study this further, we tested the activity of additional concentrations of tetranectin alone, or tetranectin in combination with endostatin or AST K1-3 (Table 2). In these experiments, a 150 nM concentration of tetranectin alone modestly but significantly increased endothelial cell proliferation. However, a further increase of the concentration of tetranectin to 375 nM resulted in loss of this enhancing activity (ie, endothelial cell counts in cultures treated with 375 nM tetranectin were 38 ± 3 × 10^3, as compared to 43 ± 1.4 × 10^3 in control; n = 6; P < .06). Both endostatin and AST K1-3 inhibited the growth of endothelial cells, as expected (Table 2). However, tetranectin (150 nM) did not lessen the antiproliferative effect of either AST K1-3 or endostatin. These results suggest that the ability of tetranectin to counteract the antiproliferative activity of AST K1-4 is not the result of an independent effect of tetranectin on the endothelial cells.
These results also indicate that interactions of AST K1-4 with tetranectin could be modulated through introduction of discrete modifications of tetranectin's binding site for plasminogen. It is of note that although tetranectin bound significantly to AST K1-4, binding to AST K1-4 was significantly less than binding to plasminogen. This finding was unexpected since AST K1-4 contains the principal binding site for tetranectin (ie, kringle 4). It may be that the conformation of kringle 4 in AST K1-4 differs sufficiently from its conformation in plasminogen to affect binding of tetranectin. This binding difference may be significant in some physiological situations. However, the other results presented in this paper indicate strongly that binding of tetranectin to AST K1-4 is sufficient to affect other activities of AST K1-4. We demonstrate that AST K1-4, like plasminogen, binds to the ECM of endothelial cells. This finding is novel and of interest since it could relate to localization of angiostatin in vivo. Of more relevance to the aims of this paper, we also found that tetranectin significantly reduced binding of AST K1-4 to ECM of endothelial cells. The ability of tetranectin to inhibit binding of AST K1-4 to ECM suggests that it could promote angiogenesis in vivo. We therefore tested whether tetranectin affects the antiangiogenic activity of AST K1-4. Physiological concentrations of wild-type tetranectin significantly counteracted the effect of AST K1-4 on endothelial cell proliferation. Tetranectin did not have a similar interaction with AST K1-3 or endostatin (Table 2), indicating that its ability to counteract the antiangiogenic effects of AST K1-4 is dependent on binding to the kringle 4 domain and not on some other direct interaction with endothelial cells. Tetranectin alone had a variably enhancing effect on endothelial cell growth at some concentrations. However, this effect was not dose-related and is unlikely to account for the ability of tetranectin to counteract antiproliferative effects of AST K1-4, based upon results shown in Table 2. As noted, extensive data derived from the study of clinical samples suggest that increased tetranectin in the stroma of cancer tissues is associated with an adverse prognosis in various cancers. Our findings suggest that tetranectin may promote tumor progression by favoring angiogenesis. Cancer-associated angiogenesis has been quantitated by enumeration of the density of microvessels in tumor stroma. Increased microvessel density is associated with adverse prognosis in many cancers [27]. Future studies could address whether increased microvessel density is associated with stromal tetranectin reactivity. The ability of tetranectin to modify other functional properties of angiostatin should also be examined. One immediate implication of our findings is that AST K1-4 and AST K1-3 may have different activities in vivo based on differential binding to tetranectin. This might account for the increased elimination half-life of AST K1-4 compared with AST K1-3 in vivo, or the fact that a similar inhibition of cancer metastases was obtained with lower effective in vivo exposure to AST K1-3 than AST K1-4 [22]. Angiostatin may inhibit angiogenesis in inflammatory states [28] or after vascular injury [29]. Recent studies demonstrate that biologically active angiostatin is produced by neutrophils [30], and that angiostatin inhibits neutrophil migration and inflammation-induced angiogenesis [31].
Of interest, prior studies demonstrated that tetranectin is contained in a subset of neutrophils, from which it can be released after cell stimulation with various agonists [9]. Hence, interactions of tetranectin and plasminogen or angiostatin may also be involved in inflammatory processes. Whereas angiostatin produced in cancer tissues appears most often to be AST K1-4 [19,21], neutrophils produce AST K1-3 [30]. Therefore, the participation of tetranectin in angiogenesis may vary in different physiological or pathological states depending on which form of angiostatin is produced. In summary, we demonstrate that tetranectin binds to the form of angiostatin commonly produced in cancer tissues, characterize the mechanism of binding using mutant forms of tetranectin, and show that tetranectin inhibits important functional properties of angiostatin. These findings provide insight into the mechanisms through which tetranectin participates in cancer progression. Furthermore, these findings have implications for therapeutic use of different forms of angiostatin.
3,882.2
2004-06-30T00:00:00.000
[ "Biology", "Chemistry" ]
Immunostimulatory Properties of the Emerging Pathogen Stenotrophomonas maltophilia ABSTRACT Stenotrophomonas maltophilia is a multiple-antibiotic-resistant opportunistic pathogen that is being isolated with increasing frequency from patients with health-care-associated infections and especially from patients with cystic fibrosis (CF). While clinicians feel compelled to treat infections involving this organism, its potential for virulence is not well established. We evaluated the immunostimulatory properties and overall virulence of clinical isolates of S. maltophilia using the well-characterized opportunistic pathogen Pseudomonas aeruginosa PAO1 as a control. The properties of CF isolates were examined specifically to see if they have a common phenotype. The immunostimulatory properties of S. maltophilia were studied in vitro by stimulating airway epithelial and macrophage cell lines. A neonatal mouse model of pneumonia was used to determine the rates of pneumonia, bacteremia, and mortality, as well as the inflammatory response elicited by S. maltophilia infection. Respiratory and nonrespiratory S. maltophilia isolates were highly immunostimulatory and elicited significant interleukin-8 expression by airway epithelial cells, as well as tumor necrosis factor alpha (TNF-α) expression by macrophages. TNF-α signaling appears to be important in the pathogenesis of S. maltophilia infection, as less than 20% of TNFR1 null mice (compared with 100% of wild-type mice) developed pneumonia and bacteremia following intranasal inoculation. The S. maltophilia isolates were weakly invasive, and low-level bacteremia with no mortality was observed. Despite the lack of invasiveness of S. maltophilia, the immunostimulatory properties of this organism and its induction of TNF-α expression specifically indicate that it is likely to contribute significantly to airway inflammation. There has been a notable increase in the prevalence of Stenotrophomonas maltophilia isolated from clinical specimens over the past several years, as documented by the SENTRY Antimicrobial Surveillance Program (18). This organism is often isolated as a nosocomial pathogen in hospitalized patients (7), as well as in cystic fibrosis (CF) (12), burn (36), human immunodeficiency virus-infected, and other immunosuppressed patients (2,15). Although rarely associated with septic shock, S. maltophilia commonly causes persistent bacteremia and is frequently associated with respiratory tract and catheter-related infections. An analysis of 139 isolates from 105 non-CF patients established that S. maltophilia was a cause of infection in the central nervous system, bone, bloodstream, and urinary tract, as well as the respiratory tract (37). Many case reports have demonstrated the potential of S. maltophilia to cause invasive infection as an opportunistic pathogen in immunocompromised patients (24) or when it is inadvertently introduced into a normally sterile site (20). S. maltophilia has been isolated from 10% of CF patients in the United States (Cystic Fibrosis Foundation registry data) (14) and from up to 25% of CF patients in Europe (12,33). Epidemiological studies have suggested that, unlike Burkholderia cenocepacia complex and Pseudomonas aeruginosa infections, the presence of S. maltophilia in CF patients is not associated with a worse clinical outcome (14,34). However, the contribution of this organism to chronic airway inflammation and its ability to persist within biofilms in vivo have not been well studied.
Many CF clinicians feel compelled to treat S. maltophilia, a difficult task considering its innate resistance to β-lactam and aminoglycoside antibiotics and rapid development of resistance to fluoroquinolones. When S. maltophilia is isolated from normally sterile sites, eradication is similarly challenging. S. maltophilia is of considerable general interest, as a PubMed search for 2006 yielded 165 articles covering diverse aspects of S. maltophilia biology, such as mechanisms of antimicrobial resistance, rapid identification, and descriptions of clinical illnesses. A prototypic strain has recently been sequenced, and annotation of the genome is in progress (www.sanger.ac.uk/Projects/S_maltophilia/). One recent clinical study of 89 S. maltophilia respiratory isolates indicated that the vast majority of these organisms were colonizers and not associated with a significant respiratory infection (26). The molecular mechanisms responsible for the virulence or lack of virulence of S. maltophilia have not been fully characterized. Although S. maltophilia has the high G+C content (63 to 70%) of the pseudomonads, it lacks the prodigious metabolic capabilities of these organisms. S. maltophilia strains are obligate aerobes, and most, but not all, strains require methionine or cysteine for growth (2). As might be expected for a respiratory pathogen, the organisms can form biofilms (5). Like P. aeruginosa, S. maltophilia expresses a homologue of algC, the gene encoding phosphoglucomutase, a key enzyme in the synthesis of extracellular polysaccharides (22). S. maltophilia expresses flagella, is motile (3), produces an extracellular protease (39), and synthesizes diverse lipopolysaccharide (LPS) structures with at least 31 different O antigens (40). While a single study has suggested that S. maltophilia LPS is less immunogenic than the LPS of Escherichia coli (41), the contribution of LPS to S. maltophilia virulence has not been well characterized. It is not clear if S. maltophilia isolates from CF patients have unique properties, as is the case for P. aeruginosa isolates. Faced with an increasing number of infections with S. maltophilia and limited data regarding the potential of this organism for virulence, we surveyed selected properties of 24 S. maltophilia clinical isolates obtained from the Columbia University Medical Center. We examined strains from diverse clinical settings, including CF and non-CF respiratory specimens, as well as nonrespiratory (blood, skin, and soft tissue) specimens, and evaluated their immunogenic potential in established in vitro and in vivo assay systems by comparing them to the well-characterized laboratory strain P. aeruginosa PAO1. MATERIALS AND METHODS Bacterial strains. Twenty-four nonclonal clinical isolates of S. maltophilia were obtained from different patients over a 3-month period at the Columbia University Medical Center and the CF Referral Center in New York, NY. S. maltophilia was isolated from the respiratory tracts of CF patients (CF isolates) (n = 10) and non-CF patients (non-CF isolates) (n = 7), as well as from patients with blood, skin, and soft tissue infections (n = 7). Bacteria were isolated, identified as S. maltophilia by biochemical characteristics and antibiotic resistance analysis, and grown in Luria-Bertani (LB) broth, and aliquots were frozen in LB-glycerol at −80°C. For each experiment bacteria were grown from frozen stocks on LB agar. P. aeruginosa PAO1 and a lasI rhlI mutant (26) were used as controls. Cell culture and reagents.
1HAEo− (human airway epithelial) and 16HBE (human bronchoepithelial) cells were grown as described previously (6). RAW cells were grown in RPMI medium with 10% fetal calf serum (Invitrogen). Unless indicated otherwise, reagents were purchased from Sigma. All media used were supplemented with 100 U/ml penicillin, 100 µg/ml streptomycin, 50 µg/ml gentamicin, and 4 µg/ml amphotericin B. Motility. The motility of S. maltophilia was determined by examining its ability to diffuse in soft agar plates. PAO1 and a Fla− mutant (10) were used as positive and negative controls. Biofilm assay. Bacteria were grown overnight with agitation, the optical densities at 600 nm were standardized to 2, and the cultures were diluted 1:100 in LB broth. Aliquots (100 µl) were added to a 96-well plate, which was incubated for 18 h at 37°C (25). Growth was monitored by determining the optical density at 600 nm, and after two or three washes with water, crystal violet was added for 15 min, which was followed by three rinses with water and then addition of 95% ethanol. The material was then transferred to a fresh 96-well plate, and the absorbance at 540 nm was determined. Each sample was tested in triplicate. IL-8 and TNF-α detection. The level of interleukin-8 (IL-8) was determined by an enzyme-linked immunosorbent assay (ELISA) (R&D Systems) following exposure of 1HAEo− cells to 10^8 CFU of bacteria (29). The level of tumor necrosis factor alpha (TNF-α) was determined by an ELISA (DuoSet; R&D Systems) following exposure of RAW cells to 500 ng/ml of lipid A for 4 h. The cell viability was >75%, as assessed by using trypan blue. Each data point was determined in sextuplicate, and the data were normalized to the protein content. LPS purification and lipid A isolation. Large-scale LPS preparations were extracted using a hot phenol-water extraction method (38). LPS was treated with RNase A, DNase I, and proteinase K (11) and then extracted to remove contaminating proteins. Small-scale LPS preparations were isolated as described previously (9). Lipid A was isolated after hydrolysis in 1% sodium dodecyl sulfate at pH 4.5 (1). Samples were resuspended in 500 µl of water, frozen, and lyophilized. For RAW cell stimulation, samples were standardized by weight. Mass spectrometry. Negative-ion matrix-assisted laser desorption ionization-time of flight (MALDI-TOF) experiments were performed as described previously, with the following modifications (9). Lyophilized lipid A was dissolved in 10 µl of a 5-chloro-2-mercaptobenzothiazole MALDI matrix in chloroform-methanol (1:1, vol/vol) and then applied (1 µl) onto a sample plate. All MALDI-TOF experiments were performed using a Bruker Autoflex II MALDI-TOF mass spectrometer (MS) (Bruker Daltonics Inc., Billerica, MA). Each spectrum was an average of 300 shots. ES tuning mixture (Agilent, Palo Alto, CA) was used to calibrate the MALDI-TOF MS. Mouse model of infection. Groups of 6 to 10 7- to 10-day-old C57BL/6 or C57BL/6-Tnfrsf1a tm1Imx (TNFR1 null; Jackson Laboratories) mice were intranasally inoculated with 10^8 CFU of bacteria in 10 µl of phosphate-buffered saline. Sixteen hours later the rates of pneumonia (defined as recovery of >10^3 CFU per lung), bacteremia (measured by determining the presence of bacteria in the spleen), and mortality were determined (35). For neutrophil detection, lung cell suspensions were analyzed for double expression of CD45 and Ly6C by flow cytometry (13).
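For readers reproducing the crystal violet readout described in the biofilm assay above, a minimal analysis sketch follows. Normalizing the A540 stain signal to each well's OD600 growth is a common convention rather than something the paper states, and the readings are hypothetical.

```python
def biofilm_index(a540_wells, od600_wells):
    """Mean crystal violet signal (A540) per unit of planktonic growth
    (OD600) across replicate wells for one isolate."""
    ratios = [a / od for a, od in zip(a540_wells, od600_wells)]
    return sum(ratios) / len(ratios)

# Hypothetical triplicate readings for one isolate versus the PAO1 control
print("isolate:", round(biofilm_index([1.20, 1.35, 1.28], [0.95, 1.01, 0.98]), 2))
print("PAO1:", round(biofilm_index([0.80, 0.75, 0.82], [1.02, 0.99, 1.00]), 2))
```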
For lung TNF-α mRNA quantification, real-time PCR was performed using primers 5′-ATGAGCACAGAAAGCATGATC-3′ and 5′-TACAGGCTTGTCACTCGAATT-3′. Actin was used as a control for standardization. The studies were performed in accordance with the guidelines of the Institutional Animal Care and Use Committee. Serum sensitivity assay. Bacteria were grown to an optical density of 0.5, washed in Hanks' balanced salt solution, and incubated with 60% serum, 60% heat-inactivated serum, or Hanks' balanced salt solution alone at 37°C with agitation for 90 min, and then they were plated on LB medium. Phagocytosis assay. Phagocytosis by RAW cells was determined by incubating cells with 10^8 CFU of bacteria for 30 min. Cells were washed, treated with 600 µg/ml of gentamicin (which killed all the S. maltophilia strains used) for 1 h, washed again, trypsinized, and plated on LB medium. Invasion assay and measurement of transepithelial resistance. Bacteria (~1 × 10^7 CFU) were added to the apical surface of polarized 16HBE cells on Transwell-Clear cell culture inserts (Corning-Costar) with 3-µm pores. After 4 h the number of organisms in the basolateral medium was determined. Transepithelial resistance was measured using a Millicell-ERS (Millipore). Triplicate wells were used for both assays. Statistical analysis. Data obtained in the mouse experiments were analyzed using a Mann-Whitney nonparametric test. Categorical variable proportions were compared using Fisher's exact test. S. maltophilia interactions with airway epithelial cells and macrophages. The virulence properties typically associated with airway pathogens include motility, the ability to form biofilms, and activation of chemokine expression (16,30). The majority (80%) of the non-CF S. maltophilia isolates were as motile as P. aeruginosa PAO1, while only 30% of the CF strains were motile (data not shown). Whereas all S. maltophilia strains grew similarly on plastic, biofilm production was highly variable (Fig. 1A). Most CF isolates (6/10) formed biofilms that were appreciably more dense (increased staining with crystal violet) than a PAO1 biofilm. However, 4/10 CF strains did not produce a detectable biofilm and instead behaved like the negative control, JP2, a lasI rhlI mutant of PAO1 (27). All of the non-CF respiratory tract isolates, as well as the isolates from other clinical sites, synthesized at least as much biofilm as PAO1 synthesized, and the majority appeared to produce more extracellular material than PAO1 produced. We then assessed the ability of S. maltophilia to stimulate the expression of IL-8, the major polymorphonuclear leukocyte (PMN) chemokine produced by airway epithelial cells and a common marker of airway inflammation (Fig. 1B). The immunostimulatory capabilities of CF isolates were highly variable and more variable than those of S. maltophilia strains from other sources. In a comparison with P. aeruginosa PAO1, some (4/10) of the CF strains were considerably less immunostimulatory, whereas the non-CF isolates did not differ from PAO1. In addition to airway epithelial cells, alveolar macrophages play an important role in the induction of inflammation by secreting TNF-α in response to bacterial stimulation. LPS is a potent inducer of TNF-α production through its lipid A moiety. The ability of the lipid A moiety of S.
maltophilia LPS isolated from 12 clinical strains (7 CF isolates, 4 respiratory non-CF isolates, and 1 blood isolate) to induce TNF-α expression in RAW cells, a murine macrophage cell line, was tested (Fig. 1C). All of the S. maltophilia lipid A moieties were significantly more potent for stimulating TNF-α production by RAW cells than P. aeruginosa PAO1 lipid A was. The CF isolates exhibited a range of immunostimulation activities, but even the least stimulatory isolate elicited more TNF-α production than did PAO1. Similarly, the non-CF isolates induced six- to eightfold more cytokine production than PAO1 induced. Properties of S. maltophilia lipid A. To better understand the immunogenicity of the S. maltophilia strains, particularly compared with that of PAO1, the lipid A moieties of 12 strains were analyzed by MS. MS analysis of S. maltophilia lipid A indicated that there was a high degree of overall heterogeneity in the ion species for all isolates, potentially due to increased fatty acid variability. The results of a detailed analysis of two randomly selected respiratory isolates (CF1 and CF2) and one randomly selected blood isolate (N3) are shown in Fig. 2. A dominant [M-H]− ion cluster at a mass-to-charge ratio (m/z) between 1613 and 1670 was observed for all S. maltophilia isolates (Fig. 2A to C). MALDI-TOF MS for all three isolates revealed a heterogeneous mixture of species (m/z 1613, 1627, 1641, 1655, 1669, and 1683), suggesting that fatty acids differing by one carbon (Δm/z, 14) were added to the lipid A structure. Additionally, lipid A isolated from blood isolate N3 carried an additional phosphate group (+80 m/z), with peaks at m/z 1749 and 1763 (Fig. 2C). Compared to the increased heterogeneity observed in lipid A preparations isolated from the individual S. maltophilia clinical isolates, the lipid A from the laboratory-adapted wild-type P. aeruginosa isolate, PAO1, exhibited markedly less heterogeneity, with penta- and hexa-acylated ion species at m/z 1447 and 1616, respectively (Fig. 2D). S. maltophilia does not invade across epithelial monolayers. After initial colonization of the airways, bacterial pathogens must cross the epithelial barrier to cause bacteremia. Five S. maltophilia strains (two CF isolates, two respiratory non-CF isolates, and one blood isolate) were randomly selected, and their invasiveness was determined in vitro by comparing their abilities to cross intact airway epithelial cell monolayers with tight junctions. S. maltophilia caused a reduction in the transepithelial resistance of the monolayer (from 3,000 Ω to 1,500 Ω). This level of disruption of the tight junctions enabled at most 1 × 10^2 to 3 × 10^2 CFU of the S. maltophilia strains to cross the epithelial barrier, values which are 2 orders of magnitude lower than the number of PAO1 cells that were able to invade. P. aeruginosa PAO1 destroyed the integrity of the tight junctions by the end of the 4-h incubation. This reduced the resistance across the cells to that of the transwell membrane alone (200 Ω), which enabled >10^4 CFU to get across the monolayer (Table 1). S. maltophilia virulence in a mouse model of respiratory tract infection. The net effects of the immunostimulatory and invasive properties of five S. maltophilia isolates were tested in a mouse model of infection and compared directly to the effects of P. aeruginosa PAO1 (Fig. 3A). The majority of the S. maltophilia isolates caused pneumonia in more than 50% of the mice inoculated.
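The lipid A peak spacings quoted above can be checked directly: successive [M-H]− ions differ by 14 Da (one CH2 in an acyl chain), and the N3-specific ions sit 80 Da above the main series (one phosphate). A minimal verification using only the m/z values reported in the text:

```python
# [M-H]- ion series reported for all isolates, plus the N3-specific ions
peaks = [1613, 1627, 1641, 1655, 1669, 1683]
n3_extra = [1749, 1763]

# Spacings within the main series: one CH2 per step
print([b - a for a, b in zip(peaks, peaks[1:])])   # [14, 14, 14, 14, 14]
# N3 ions relative to the two heaviest main-series ions: one phosphate
print([1749 - 1669, 1763 - 1683])                  # [80, 80]
```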
The importance of biofilm production in the ability to colonize and cause airway infection has been demonstrated for P. aeruginosa (32). Consistent with this, isolate CF2, a CF isolate, caused pneumonia in only 10% of the mice, which correlated with its inability to form a biofilm (Fig. 1A). The other CF and non-CF isolates all formed dense biofilms (Fig. 1A), which likely contributed to the development of pneumonia (Fig. 3A). Although a relatively high percentage of mice developed bacteremia, the bacterial counts in the spleens were in general very low (Table 2), in contrast to the results obtained with PAO1, which was recovered at a density of >10^3 CFU. The low levels of bacteremia did not lead to mortality, nor did any of the mice appear to be moribund at 16 h postinoculation. The nature of the immune response in the mouse lungs to each of the S. maltophilia isolates was evaluated by flow cytometry (Fig. 3B). The percentage of PMNs recruited to the lungs as a function of the total number of leukocytes was determined. Once again, there was a wide range of responses, but each of the isolates induced significant PMN recruitment, and the number of PMNs was, in general, equivalent to or even greater than the number of PMNs elicited by the PAO1 control strain. S. maltophilia virulence in a TNFR1 null mouse. A major difference between the immune responses induced by the S. maltophilia strains and the immune responses elicited by P. aeruginosa PAO1 was the amount of TNF-α produced. To assess the importance of TNF-α signaling for clearance of S. maltophilia and for defense against invasive infection, we compared the responses of wild-type C57BL/6 and TNFR1 null mice to the most virulent isolate, isolate N3 (Fig. 3A and Table 2). N3 caused significantly less pneumonia and bacteremia in the TNFR1 null animals (100% in the wild type, compared to 20% pneumonia and 25% bacteremia in the TNFR1 null mice; P < 0.001 for both), and the bacterial counts in the spleens of bacteremic mice were significantly lower (P < 0.01). [FIG. 3 legend: S. maltophilia lung infection. (A) Percentages of C57BL/6 mice (wild type) and TNFR1 null mice that developed pneumonia or bacteremia or died. Two asterisks indicate a P value of <0.01 for a comparison with wild-type mice inoculated with N3. (B) Percentages of PMNs among the total leukocytes in the lungs of wild-type and TNFR1 null mice. Each symbol represents an individual mouse, and the lines indicate the medians for the groups. One asterisk indicates a P value of <0.05 and two asterisks indicate a P value of <0.01 for a comparison with control mice inoculated with phosphate-buffered saline (C). (C) Lung TNF-α mRNA expression in TNFR1 null mice as determined by real-time PCR and standardized to actin. Each symbol represents an individual mouse, and the lines indicate the medians for the groups. The asterisk indicates a P value of <0.05 for a comparison with control mice inoculated with phosphate-buffered saline.] Similarly, TNFR1 null mice showed increased bacterial clearance during P. aeruginosa infection (31). The TNF-α expression in the lungs of TNFR1 null mice infected with S. maltophilia N3 was determined as a control (Fig. 3C). Whereas high levels of TNF-α were induced, the TNF-α could not contribute effectively to the pathogenesis of infection in the absence of TNFR1, the only TNF receptor expressed by airway epithelial cells.
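The categorical comparisons just quoted follow the Fisher's exact approach named in the methods. A minimal sketch, assuming 10 mice per arm (the study used 6 to 10) to reproduce the wild-type versus TNFR1 null pneumonia comparison:

```python
from scipy.stats import fisher_exact

wild_type = [10, 0]    # (pneumonia, no pneumonia): 100% attack rate
tnfr1_null = [2, 8]    # 20% attack rate

_, p = fisher_exact([wild_type, tnfr1_null])
print(f"two-sided p = {p:.4f}")   # ~0.0007, consistent with the reported P < 0.001
```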
These results suggest that TNF-α-dependent signaling is a major cause of the pathology attributed to the organisms. Flow cytometry analysis of the cellular infiltrates in the wild-type and TNFR1 null mice demonstrated that the numbers of recruited PMNs were similar, indicating that chemokine signaling remained intact (Fig. 3B). Phagocytosis and killing assays. In addition to weak invasiveness, the low levels of bacteremia after intranasal infection with high levels of S. maltophilia may be attributed to specific systemic host clearance mechanisms. We compared the serum sensitivities and the rates of phagocytosis and killing by RAW cells of the different isolates. While S. maltophilia isolates CF1, CF2, N1, and N2 were sensitive to serum, the N3 strain, like PAO1, was resistant. These results are consistent with the higher bacterial counts in spleens of mice infected with the N3 isolate (Table 2). However, all of the S. maltophilia strains tested were readily phagocytosed (range, 6 to 33%), even more efficiently than the PAO1 control (5%), indicating that control of bacterial replication in the blood prevents sepsis and mortality. DISCUSSION S. maltophilia has the properties expected of an opportunistic pathogen. These organisms have intrinsic antimicrobial resistance and cause infections that result in increased morbidity, but not usually in mortality, in patients with impaired host defenses. The clinical isolates evaluated in the present study were generally capable of biofilm formation and were highly immunostimulatory, features that are very important in the initial colonization of the airways and development of pneumonia. As might be predicted from the accumulating clinical reports, most of the isolates that we tested were not particularly virulent, and none caused death in a neonatal mouse model of respiratory tract infection using a high inoculum. The properties of S. maltophilia strains isolated from patients with respiratory and nonrespiratory infections did not differ significantly. While CF isolates were heterogeneous, there were no marked differences between these isolates and other respiratory (non-CF) isolates that would suggest a "CF phenotype" analogous to the mucoid strains of P. aeruginosa. However, we found that only 30% of the CF isolates were motile, compared to 80% of the non-CF isolates. While motility is very important in the pathogenesis of pneumonia (10), one feature of P. aeruginosa adaptation to the CF lung is its loss of motility (20). Flagella are highly immunostimulatory (28) and can function as ligands for nonopsonic phagocytosis (21). Thus, it is thought that decreased expression of flagella may protect the bacteria from the host immune response. Loss of motility, likely due to attenuated expression of flagella, appears to be a common mechanism of adaptation to the CF airways for both P. aeruginosa and S. maltophilia. All the S. maltophilia isolates tested were highly immunostimulatory. Overall, the S. maltophilia strains induced as much IL-8 expression as P. aeruginosa induced. IL-8 (or KC in mice) is a chemokine that recruits PMNs into the lungs. In a murine model of pneumonia, S. maltophilia and P. aeruginosa similarly induced significant PMN recruitment into the lungs. However, S. maltophilia induced substantially more TNF-α expression than the P. aeruginosa control induced. This TNF-α response may be associated with the high degree of lipid A heterogeneity detected in S. maltophilia (8,23).
TNF-α is a potent proinflammatory cytokine that induces neutrophil and macrophage activation. Airway inflammation, the hallmark of pneumonia, is required to clear the bacteria, but the accumulation of activated neutrophils and macrophages and their products is detrimental to normal lung function. TNF-α signaling contributes significantly to the pathophysiology of S. maltophilia pneumonia, as reflected by the minimal disease observed in the TNFR1 null mice. These findings are consistent with clinical studies that showed deterioration of lung function after prolonged exposure to S. maltophilia in CF patients (19). S. maltophilia strain N3, a blood isolate, was somewhat more virulent than the other strains in the mouse model of pneumonia. Mass spectrometry analysis of the lipid A of this strain, however, revealed larger peaks (m/z >1,700) representing modifications not yet characterized. These modifications may explain why this isolate is more virulent, as it has been demonstrated in other gram-negative bacteria that modifications in lipid A play a role in increased virulence and immunostimulatory responses (23). Further studies that include analysis of the O antigen and LPS core modifications are required to confirm this hypothesis. S. maltophilia binds to airway epithelial cells as efficiently as P. aeruginosa does, and it aggregates along the cell junctions (4), but it is poorly invasive. The sequenced S. maltophilia genome has only a few regions with even low homology (20 to 30%) to any of the P. aeruginosa type III secretion genes. Type III secretion systems mediate bacterial interactions with host cytoskeletal components in many gram-negative pathogens and, in P. aeruginosa, correlate highly with invasive infection (17). Thus, the potential lack of type III secretion genes in S. maltophilia may contribute to its limited invasive capabilities; in terms of invasive potential, S. maltophilia differs substantially from even the laboratory strain P. aeruginosa PAO1. Moreover, during pulmonary infection, the few organisms that cross the epithelial barrier are readily cleared, if not by lytic effects of serum then by phagocytosis, and they do not produce concentrations in the blood that are high enough to cause sepsis. S. maltophilia, like P. aeruginosa, has the potential to contribute to the inflammatory process that compromises respiratory function in CF and in hospital-acquired pneumonias. Since few CF patients have an S. maltophilia infection without a concomitant P. aeruginosa infection (34), it is difficult to sort out the relative contribution of each organism to ongoing lung damage. However, our data suggest that targeting S. maltophilia with antimicrobial therapy and perhaps even anti-inflammatory therapy may decrease overall levels of inflammation that contribute to pathology.
5,789
2007-01-12T00:00:00.000
[ "Biology", "Medicine" ]
Molecular Docking and Pharmacological Investigations of Rivastigmine-Fluoxetine and Coumarin-Tacrine Hybrids against Acetylcholinesterase The present AChE inhibitors have been successful in the treatment of Alzheimer's disease; however, they suffer from serious side effects. In this view, the present study sought to identify compounds with an appreciable pharmacological profile targeting AChE. Analogues of the Rivastigmine-Fluoxetine hybrid synthesized by Toda et al., 2003 (dataset 1), and the Coumarin-Tacrine hybrids synthesized by Qi Sun et al. (dataset 2) formed the test compounds for the present pharmacological evaluation. The p-chlorophenyl-substituted Rivastigmine-Fluoxetine hybrid compound (26d) from dataset 1 and the −OCH3-substituted Coumarin-Tacrine hybrid (1h) from dataset 2 demonstrated superior pharmacological profiles. 26d showed a pharmacological profile superior to all compounds in either dataset owing to its better electrostatic interactions and hydrogen-bonding patterns. To identify compounds with a still better pharmacological profile than 26d and 1h, virtual screening was performed. The best-docked compound (PubChem CID: 68874404) showed better affinity than its parent 26d, but showed a poor ADME profile and Ames toxicity. CHEMBL2391475 (PubChem CID: 71699632), similar to 1h, had reduced affinity in comparison to its parent compound 1h. From our extensive analysis involving binding affinity, ADMET property predictions, and pharmacophoric mappings, we report the p-chlorophenyl-substituted rivastigmine-fluoxetine hybrid (26d) to be a potential candidate for AChE inhibition, which in addition may overcome the narrow therapeutic window of present AChE inhibitors in the clinical treatment of Alzheimer's disease. Abbreviations: AD - Alzheimer's Disease; AChE - Acetylcholinesterase; OPLS - Optimized Potentials for Liquid Simulations; PDB - Protein Data Bank. Patients with AD present with impairment in memory, decision making, orientation to physical surroundings, and language. The cholinergic hypothesis of the pathogenesis holds that dysregulation of the cholinergic system forms the major pathological feature of AD [4]. Biopsies of the cerebral cortex have shown that the cholinergic neurons which provide extensive innervation of the cerebral cortex selectively degenerate, which affects cognitive functions, especially memory [5]. Given the immense role of the cholinergic system in AD, several pharmacological strategies have been aimed at correcting the cognitive deficits by manipulating cholinergic neurotransmission. The most powerful strategy developed was the cholinesterase inhibitors (ChEIs), which selectively block acetylcholinesterase (AChE), the enzyme that terminates synaptic transmission by hydrolyzing acetylcholine and making it unavailable for neural transmission in the cortex; this deficit is manifested as the cognitive dysfunction observed in AD. Since the introduction of the first cholinesterase inhibitor in 1997, most clinicians would consider treatment with cholinergic drugs like donepezil, galantamine and rivastigmine, which form the first-line pharmacotherapy for mild to moderate Alzheimer's disease [6,7]. Various clinical trials of these inhibitors have shown that, on the whole, their effects were modest and were associated with frequent adverse reactions and a lack of substrate specificity [8].
In addition, some drugs like donepezil delay disease worsening but nevertheless produce acute symptoms like headache, constipation, confusion and dizziness. In some patients, regular doses of donepezil, galantamine and rivastigmine have been positively associated with acute insomnia and anorexia [9]. Considering the side effects of the present compounds, the treatment strategy for AD has thus shifted toward an ethnopharmacological approach, which promises high activity with minimal side effects. In traditional practices of medicine, numerous plants have been used to treat cognitive disorders, including neurodegenerative diseases such as Alzheimer's disease (AD). There are numerous drugs available in Western medicine that have been directly isolated from plants, or are derived from templates of compounds from plant sources. Therefore, in view of the above, the present study focuses on computer-based pharmacological profiling, evaluation and identification of high-affinity compounds from the dataset of rivastigmine and fluoxetine hybrid compounds synthesized by N. Toda et al. Preparation of protein and compounds. The crystal structure of the AChE receptor was retrieved from the Protein Data Bank (PDB) with PDB ID: 1ACJ [12] (Figure 2). The X-ray diffraction structure of the AChE receptor had a resolution of 2.80 Å and an R value of 0.195. Unit cell parameters were: lengths [Å] a = 113.70, b = 113.70, c = 138.10; angles [°] α = 90.00, β = 90.00, γ = 120.00. The structure was downloaded in pdb format and was further prepared for the docking process. The protein was prepared using the PrepWiz module of the Schrödinger suite. In the preparation procedure, the protein was first preprocessed by assigning bond orders and hydrogens, creating zero-order bonds to metals and adding disulphide bonds. The missing side chains and loops were filled using the Prime module of Schrödinger. Further, all water molecules beyond 5 Å from hetero groups were deleted. Once the protein structure was preprocessed, H-bonds were assigned, followed by energy minimization with the OPLS 2005 force field [13]. The final structure obtained was saved in .pdb format for further studies. All the ligands were optimized through the OPLS 2005 force field algorithm embedded in the LigPrep module of the Schrödinger suite, 2013 (Schrödinger, LLC, New York, NY) [14]. The ionization states of the ligands were retained as in the original state, and the ligands were desalted. The structures thus optimized were saved in sdf format for docking procedures. Structure similarity search. The compound with the superior pharmacological profile among the two datasets was further used as a query molecule in pursuit of identifying a still better drug-like compound than any molecule mentioned in the datasets. The similarity search used a binary-fingerprint-based Tanimoto similarity equation to retrieve compounds at a similarity threshold of 95% against NCBI's PubChem Compound database [15]. Molecular docking of compounds. The molecular docking program Molegro Virtual Docker (MVD), which incorporates the highly efficient piecewise linear potential (PLP) and MolDock scoring functions, provided a flexible docking platform [16,17]. All the ligands were docked at the active site of AChE. Docking parameters were set to a grid resolution of 0.20 Å, a maximum of 1500 iterations and a maximum population size of 50. Energy minimization and hydrogen bonds were optimized after the docking. Simplex evolution was set at a maximum of 300 steps with a neighborhood distance factor of 1.
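The fingerprint-based similarity screen just described can be sketched as follows, using RDKit Morgan fingerprints as a stand-in for PubChem's own binary fingerprints (the study's actual screen ran against PubChem). The SMILES strings are placeholders, with rivastigmine itself as an example query rather than the 26d/1h structures:

```python
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem

# Placeholder query: rivastigmine, standing in for the study's query compound
query = Chem.MolFromSmiles("CCN(C)C(=O)Oc1cccc(C(C)N(C)C)c1")
library = {
    "aspirin": "CC(=O)Oc1ccccc1C(=O)O",
    "rivastigmine": "CCN(C)C(=O)Oc1cccc(C(C)N(C)C)c1",
}

fp_query = AllChem.GetMorganFingerprintAsBitVect(query, 2, nBits=2048)
for name, smi in library.items():
    fp = AllChem.GetMorganFingerprintAsBitVect(Chem.MolFromSmiles(smi), 2, nBits=2048)
    sim = DataStructs.TanimotoSimilarity(fp_query, fp)
    if sim >= 0.95:   # the 95% threshold used in the study
        print(name, round(sim, 2))
```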
Binding affinities and interactions of the ligands with the protein were evaluated on the basis of internal electrostatic (ES) interactions, internal hydrogen-bond interactions, and sp2-sp2 torsions. The post-dock energy of each ligand-receptor complex was minimized using Nelder-Mead simplex minimization (with a non-grid force field and H-bond directionality) [18]. On the basis of the rerank score, the best-interacting compound was selected from each dataset. Bioactivity and ADMET profiling of compounds All compounds were screened for druggability with Lipinski filters; a minimal sketch of such a rule-of-five check is given at the end of this section. The biological activity of the ligands was predicted using the Molinspiration web server (© Molinspiration Cheminformatics 2014). The complete ADMET properties were calculated using admetSAR [19,20]. Pharmacophoric mapping Pharmacophoric mapping, which covers ligand interaction patterns, hydrogen-bond interactions, and hydrophobic interactions, was evaluated using Accelrys Discovery Studio 3.5 DS Visualizer [21]. Compound 1h (Figure 1b) from dataset 2 demonstrated the highest binding affinity in that dataset. In particular, compound 26d, a hybrid molecule combining the motifs of rivastigmine and fluoxetine with a p-chlorophenyl functional modification, showed higher affinity than the compounds in either group. From close perusal of the structural details of 26d, it may be assumed that the large substituent (R = p-chlorophenyl) contributes to its better activity (IC50 > 1000) and highest affinity (rerank score = -168.933). From dataset 2, compound 1h, a coumarin-tacrine hybrid, demonstrated the highest binding affinity against AChE. However, our binding-affinity observations did not correlate with the activity estimated by the original authors: 1q is described by the authors as the most active compound (Ki = 91.1), whereas by our observation it is 1h that showed the highest binding affinity (rerank score = -166.33). The discrepancy is an important subject for further investigation. Taking all compounds from datasets 1 and 2 into consideration, 26d (Figure 2) unarguably demonstrated the highest binding affinity and, in addition, showed optimal in vitro activity. In a further step, in pursuit of a molecule endowed with a pharmacological profile superior to compound 26d from dataset 1 and compound 1h from dataset 2, virtual screening was performed against the PubChem database (taking compound 61 as the query). A total of 14 compounds structurally similar to compound 26d were retrieved, while 18 structurally similar compounds were retrieved for parent compound 1h. All compounds akin to 26d and 1h retrieved in this way were docked against the AChE structure. The compound with PubChem CID 68874404 (Figure 1c) showed the best binding affinity among the 14 compounds similar to parent compound 26d, while CHEMBL2391475 (PubChem CID: 71699632) (Figure 1d) demonstrated the best affinity among the 18 compounds retrieved for parent compound 1h (Table 3, supplementary material). It is worth noting that although PubChem CID 68874404 showed slightly higher affinity for AChE than its parent compound 26d, the predicted activity scores (Table 4, supplementary material) show markedly lower enzyme-inhibition activity. In addition, its ADMET profile was comparatively poor relative to parent compound 26d (Table 5, supplementary material). The most important drawback of PubChem CID 68874404, however, was that it was predicted to be Ames toxic.
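The Lipinski screen mentioned above can be reproduced with standard cheminformatics descriptors. Below is a minimal sketch using RDKit; the SMILES is a hypothetical stand-in, and the one-violation tolerance is a common convention rather than a rule stated in this paper.

```python
# Sketch of a Lipinski rule-of-five druggability filter (RDKit).
from rdkit import Chem
from rdkit.Chem import Descriptors, Lipinski

def passes_lipinski(smiles):
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        return False
    violations = sum([
        Descriptors.MolWt(mol) > 500,     # molecular weight
        Descriptors.MolLogP(mol) > 5,     # lipophilicity (cLogP)
        Lipinski.NumHDonors(mol) > 5,     # H-bond donors
        Lipinski.NumHAcceptors(mol) > 10, # H-bond acceptors
    ])
    return violations <= 1  # one violation is commonly tolerated

print(passes_lipinski("CCN(C)C(=O)Oc1cccc(C(C)N(C)C)c1"))  # placeholder
```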
Therefore, it can be presumed that although it has a good affinity profile, it should not be a candidate drug, owing to its toxicity. In the case of CHEMBL2391475, the affinity score declined 1.09-fold relative to its parent compound 1h (Table 3, supplementary material), and its predicted enzyme-inhibition activity was considerably lower. In further analysis, our aim was to reveal the rationale behind the superior pharmacological profile of 26d. In terms of binding affinity, the appreciable binding can be attributed to its excellent interaction profile, especially its electrostatic and H-bonding interactions (Table 3). As shown in Table 6 (supplementary material), the interaction profile of 26d was appreciably better than that of compound 1h from dataset 2 and its similar compound CHEMBL2391475 (PubChem CID: 71699632). Notably, although the 26d-similar compound PubChem CID 68874404 shows a good interaction profile, it nevertheless suffers, as mentioned above, from poor ADME properties and Ames toxicity. Owing to its optimal affinity, high enzyme-inhibition activity, and non-toxicity, 26d was further analyzed for pharmacophoric mappings, shown comprehensively in Figure 3. Supplementary material notes: * compound with highest binding affinity; + activity tested in mouse brain.
Membership Inference Attacks on Sequence-to-Sequence Models: Is My Data In Your Machine Translation System? Data privacy is an important issue for "machine learning as a service" providers. We focus on the problem of membership inference attacks: given a data sample and black-box access to a model's API, determine whether the sample existed in the model's training data. Our contribution is an investigation of this problem in the context of sequence-to-sequence models, which are important in applications such as machine translation and video captioning. We define the membership inference problem for sequence generation, provide an open dataset based on state-of-the-art machine translation models, and report initial results on whether these models leak private information against several kinds of membership inference attacks. Motivation There are many situations where private entities are worried about the privacy of their data. For example, many companies provide black-box training services where users are able to upload their data and have customized models built for them, without requiring machine learning expertise. A common concern in these "machine learning as a service" offerings is that the uploaded data be visible only to the client that owns it. Currently, these entities are in the position of having to trust that service providers abide by the terms of their agreements. While trust is an important component in relationships of all kinds, it has its limitations. In particular, it falls short of a well-known security maxim, originating in a Russian proverb that translates as: Trust, but verify. Ideally, customers would be able to verify that their data is handled only as agreed. This problem has been formalized as the membership inference problem, first introduced by Shokri et al. (2017) and defined as: "Given a machine learning model and a record, determine whether this record was used as part of the model's training dataset or not." The problem can be tackled in an adversarial framework: the attacker is interested in answering this question with high accuracy, while the defender would like this question to be unanswerable (see Figure 1). Since then, researchers have proposed many ways to attack and defend the privacy of various types of models. However, the work so far has focused only on standard classification problems, where the output space of the model is a fixed set of labels. In this paper, we propose to investigate membership inference for sequence generation problems, where the output space can be viewed as a chained sequence of classifications. Prime examples of sequence generation include machine translation and text summarization: in these problems, the output is a sequence of words whose length is undetermined a priori. Other examples include speech synthesis and video caption generation. Sequence generation problems are more complex than classification problems, and it is unclear whether the methods and results developed for membership inference in classification problems will transfer. For example, one might imagine that while a flat classification model might leak private information when the output is a single label, a recurrent sequence generation model might obfuscate this leakage when labels are generated successively with complex dependencies.
We focus on machine translation (MT) as the example sequence generation problem. Recent advances in neural sequence-to-sequence models have improved the quality of MT systems significantly, and many commercial service providers are deploying these models via public APIs. We pose the main question in the following form: given black-box access to an MT model, is it possible to determine whether a particular sentence pair was in the training set for that model? In the following, we define membership inference for sequence generation problems (§2) and contrast it with prior work on classification (§3). Next we present a novel dataset (§4) based on state-of-the-art MT models. Finally, we propose several attack methods (§5) and present a series of experiments evaluating their ability to answer the membership inference question (§6). Our conclusion is that simple one-off attacks based on shadow models, which proved successful in classification problems, are not successful on sequence generation problems; this is a result that favors the defender. Nevertheless, we describe the specific conditions under which sequence-to-sequence models still leak private information, and discuss the possibility of more powerful attacks (§7). Problem Definition We now define the membership inference attack problem for sequence-to-sequence models in detail. Following tradition in the security research literature, we introduce three characters: Alice (the service provider) builds a sequence-to-sequence model based on an undisclosed dataset A_train and provides a public API. For MT, this API takes a foreign sentence f as input and returns an English translation ê. Bob (the attacker) is interested in discerning whether a data sample was included in Alice's training data A_train by exploiting Alice's API. This sample is called a "probe" and consists of a foreign sentence f and its reference English translation e. Together with the API's output ê, Bob has to make a binary decision using a membership inference classifier g(·), whose goal is to predict g(f, e, ê) ∈ {in, out}, i.e., whether the probe was part of A_train. We term in-probes those probes whose true class is in, and out-probes those whose true class is out. Importantly, note that Bob has access not only to f but also to e in the probe. Intuitively, if ê is equivalent to e, then Bob may believe that the probe was contained in A_train; however, it may also be possible that Alice's model generalizes well to new samples and translates this probe correctly. The challenge for Bob is to make this distinction; the challenge for Alice is to prevent Bob from doing so. Carol (the neutral third party) is in charge of setting up the experiment between Alice and Bob. She decides which data samples should be used as in-probes and out-probes and evaluates Bob's classification accuracy. Carol is introduced only to clarify the exposition and to set up a fair experiment for research purposes. In practical scenarios, Carol does not exist: Bob decides his own probes, and Alice decides her own A_train. Detailed Specification In order to be precise about how Carol sets up the experiment, we will explain in terms of machine translation, but note that the problem definition applies to any sequence-to-sequence problem. A training set for MT consists of a set of sentence pairs {(f_i^(d), e_i^(d))}. We use a label d ∈ {1, 2, ...} to indicate the domain (the subcorpus or data source), and an index i ∈ {1, 2, ..., I(d)} to indicate the sample id in the domain (subcorpus), where I(d) is the number of sentences in the subcorpus with label d. For example, e_i^(d) with d = 1 and i = 1 might refer to the first sentence in the Europarl subcorpus, while e_i^(d) with d = 2 and i = 1 might refer to the first sentence in the CommonCrawl subcorpus. The distinction among subcorpora is not necessary in the abstract problem definition, but is important in practice, when differences in data distribution may reveal signals about membership.
Without loss of generality, in this section assume that Carol has a finite number of samples from two subcorpora d ∈ {1, 2}. First, she creates an out-probe of k samples from subcorpus 1:

A_out_probe = {(f_i^(1), e_i^(1)) : i = 1, ..., k}.

Then Carol creates the data for Alice to train Alice's MT model, using the remaining samples from subcorpora 1 and 2:

A_train = {(f_i^(1), e_i^(1)) : i = k+1, ..., I(1)} ∪ {(f_i^(2), e_i^(2)) : i = 1, ..., I(2)}.

Importantly, the two sets are totally disjoint, i.e., A_out_probe ∩ A_train = ∅. By definition, out-probes are sentence pairs that are not in Alice's training data. Finally, Carol creates the in-probe of k samples by drawing from A_train, i.e., A_in_probe ⊂ A_train, which is defined to be samples that are included in training. Note that both A_in_probe and A_out_probe are sentence pairs that come from the same subcorpus; the only difference is that the former is included in A_train while the latter is not. There are several ways in which Bob's data can be created. For this work, we will assume that Bob also has some data to train MT models, in order to mimic Alice and design his attacks. This data could either be disjoint from A_train, or contain parts of A_train. We choose the latter, which assumes that there might be some public data that is accessible to both Alice and Bob. This scenario slightly favors Bob. In the case of MT, parallel data can be hard to come by, and datasets like Europarl are widely accessible to anyone, so presumably both Alice and Bob would use them. However, we expect that Alice has an in-house dataset (e.g., crawled data) which Bob does not have access to. Thus, Carol creates data for Bob:

B_all = A_train \ (A_in_probe ∪ {all samples from subcorpus 2}).

Note that this dataset is like A_train but with two exceptions: all samples from subcorpus 2 and all samples from A_in_probe are discarded. One can view subcorpus 2 as Alice's own in-house corpus, which Bob has no knowledge of or access to, and subcorpus 1 as the shared corpus where membership inference attacks are performed. To summarize, Carol gives A_train to Alice, who uses it in whatever way she chooses to build a sequence-to-sequence model M[A_train, Θ]. The model is trained on A_train with hyperparameters Θ (e.g., neural network architecture) known only to Alice. In parallel, Carol gives B_all to Bob, who uses it to design various attack strategies, resulting in a classifier g(·) (see Section 5). When it is time for evaluation, Carol provides both probes A_in_probe and A_out_probe to Bob in randomized order and asks Bob to classify each sample as in or out. For each probe (f_i, e_i), Bob is allowed to make one call to Alice's API to obtain ê_i. As an additional evaluation, Carol creates a third probe based on a new subcorpus 3. We call this the "out-of-domain (OOD) probe":

A_ood = {(f_i^(3), e_i^(3)) : i = 1, ..., k}.

Both A_out_probe and A_ood should be classified as out by Bob's classifier. However, it is known that sequence-to-sequence models behave very differently on data from domains/genres significantly different from the training data (Koehn and Knowles, 2017). The goal of having two out-probes is to quantify the difficulty or ease of membership inference in different situations.
Summary and Alternative Definitions Figure 2 summarizes the problem definition. The probes A_out_probe and A_ood are by construction outside of Alice's training data A_train, while the probe A_in_probe is included. Bob's goal is to produce a classifier that can make this distinction. He has at his disposal a smaller dataset B_all, which he can use in whatever way he desires. There are k samples each for A_in_probe, A_out_probe, and A_ood. Alice's training data A_train excludes A_out_probe and subcorpus 3, while including A_in_probe. Bob's data B_all is a subset of Alice's data, excluding A_in_probe and subcorpus 2. There are alternative definitions of this membership inference problem. For example, one can allow Bob to make multiple API calls to Alice's model for each probe. This enlarges the repository of potential attack strategies for Bob. Or, one could evaluate Bob's accuracy not on a per-sample basis, but at a coarser granularity, where Bob can aggregate inferences over multiple samples. There is also a distinction between white-box and black-box attacks: we focus on the black-box case, where Bob has no access to the internal parameters of Alice's model and can only guess at likely model architectures. In the white-box case, Bob would have access to Alice's model internals, so different attacks would be possible (e.g., backpropagation of gradients). In these respects, our problem definition makes the problem more challenging for Bob the attacker. Finally, note that Bob is not necessarily always the "bad guy". Some examples of who Alice and Bob might be in MT are: (1) Organizations (Bob) that provide bitext data under license restrictions might be interested to determine whether their licenses are being complied with in published models (Alice). (2) The organizers (Bob) of an annual bakeoff, e.g., WMT, might wish to confirm that the participants (Alice) are following the rules of not training on test data. (3) "MT as a service" providers may support customized engines if users upload their own bitext training data. The provider promises that the user-supplied data will not be used in the customized engines of other users, and can play both Alice and Bob, attacking its own model to provide guarantees to the user. If it is possible to construct a successful membership inference mechanism, then many "good guys" would be able to provide the aforementioned fairness (1, 2) and privacy (3) guarantees. Related Work Shokri et al. (2017) introduced the problem of membership inference attacks on machine learning models. They showed that with shadow models trained on either realistic or synthetic datasets, Bob can build classifiers that discriminate A_in_probe and A_out_probe with high accuracy. They focus on classification problems such as CIFAR image recognition and demonstrate successful attacks on both convolutional neural net models and the models provided by Amazon ML. Why do these attacks work? The main information exploited by Bob's classifier is the output distribution of class labels returned by Alice's API. The prediction uncertainty differs for data samples inside and outside the model training data, and this can be exploited.
Shokri et al. (2017) propose defense strategies for Alice, such as restricting the prediction vector to the top-k classes, coarsening the values of the output probabilities, and increasing the entropy of the prediction vector. The crucial difference between their work and ours, besides our focus on sequence generation problems, is the availability of this kind of output distribution provided by Alice. While it is common to provide the whole distribution of output probabilities in classification problems, this is not possible in sequence generation problems, because the output space of sequences is exponential in the output length. At most, sequence models can provide a score for the output prediction ê_i, for example with a beam search procedure, but this is only one number and it is not normalized. We do experiment with having Bob exploit this score (Table 3), but it appears far inferior to the use of the whole distribution available in classification problems. Subsequent work on membership inference has focused on different angles of the problem. Salem et al. (2018) investigated the effect of training the shadow model on datasets that match or do not match the distribution of A_train, and compared training a single shadow model as opposed to many. Truex et al. (2018) present a comprehensive evaluation of different model types, training data, and attack strategies. Borrowing ideas from adversarial learning and minimax games, Hayes et al. (2017) propose attack methods based on generative adversarial networks, while Nasr et al. (2018) provide adversarial regularization techniques for the defender. Nasr et al. (2019) extend the analysis to white-box attacks and a federated learning setting. Pyrgelis et al. (2018) provide an empirical study on location data. Veale et al. (2018) discuss membership inference and the related model inversion problem in the context of data protection laws like GDPR. Shokri et al. (2017) note a synergistic connection between the goals of learning and the goals of privacy in the case of membership inference: the goal of learning is to generalize to data outside the training set (e.g., so that A_out_probe and A_ood are translated well), while the goal of privacy is to prevent leaking information about data in the training set. The common enemy of both goals is overfitting. Yeom et al. (2017) analyze how overfitting by Alice increases the risk of privacy leakage; Long et al. (2018) showed that even a well-generalized model holds such risks in classification problems, implying that overfitting by Alice is a sufficient but not necessary condition for privacy leakage. A large body of work exists in differential privacy (Dwork, 2008; Machanavajjhala et al., 2017). Differential privacy provides guarantees that a model trained on some dataset A_train will produce statistically similar predictions as a model trained on another dataset that differs in exactly one sample. This is one way in which Alice can defend her model (Rahman et al., 2018), but note that differential privacy is a stronger notion and often involves a cost in Alice's model accuracy. Membership inference assumes that the content of the data is known to Bob and is only concerned with whether it was used. Differential privacy also protects the content of the data (i.e., the actual words in (f_i, e_i) should not be inferable). Song and Shmatikov (2019) explored the membership inference problem for natural language text, including word prediction and dialog generation. They assume that the attacker has access to a probability distribution, or a sequence of distributions over the vocabulary, for the generated word or sequence. This differs from our work, where the attacker gets only the output sequence, which we believe is a more realistic setting.
Data: Subcorpora and Splits Based on the problem definition in Section 2, we construct a dataset to investigate the possibility of the membership inference attack on MT models. We make this dataset available to the public to encourage further research. There are various considerations to ensure the benchmark is fair for both Alice and Bob: we need a dataset that is large and diverse, to ensure that Alice can train state-of-the-art MT models and that Bob can test on probes from different domains. We used corpora from the Conference on Machine Translation (WMT18) (Bojar et al., 2018). We chose the German-English language pair because it has a reasonably large amount of training data, and previous work demonstrates high BLEU scores. We now describe how Carol prepares the data for Alice and Bob. First, Carol selects 4 subcorpora for the training data of Alice, namely CommonCrawl, Europarl v7, News Commentary v13, and Rapid 2016. A subset of these 4 subcorpora is also available to Bob (subcorpus 1 in Section 2.1). In addition, Carol gives ParaCrawl to Alice but not Bob (subcorpus 2 in §2.1). We can think of it as in-house data that the service provider holds. For all these subcorpora, Carol first performs basic preprocessing: (a) tokenization of both the German and English sides using the Moses tokenizer, (b) de-duplication of sentence pairs so that only unique pairs are present, and (c) random shuffling of all sentences prior to splitting into probes and MT training data. Figure 3 illustrates how Carol splits the subcorpora for Alice and Bob. For each subcorpus, Carol splits the data to create the probes A_in_probe and A_out_probe, and the sets A_train and B_all. Carol sets k = 5,000, meaning each probe set per subcorpus has 5,000 samples. For each subcorpus, Carol selects 5,000 samples to create A_out_probe. She then uses the rest as A_train and selects 5,000 samples from it as A_in_probe. She excludes A_in_probe and ParaCrawl from A_train to create the dataset for Bob, B_all. (We prepared two different pairs of A_in_probe and A_out_probe; thus B_all has 10k fewer samples than A_train, not 5k fewer. For the experiments we used only one pair and kept the other for future use.) In addition, Carol has 4 other domains to create the out-of-domain probe set A_ood, namely EMEA and Subtitles 18 (Tiedemann, 2012), Koran (Tanzil), and TED (Duh, 2018). These subcorpora are equivalent to subcorpus 3 in Section 2.1. The size of A_ood is 5,000 per subcorpus, the same as A_in_probe and A_out_probe. The number of samples for each set is summarized in Table 1. Alice MT Architecture Alice uses her dataset A_train (consisting of the 4 subcorpora and ParaCrawl) to train her own MT model. Since ParaCrawl is noisy, Alice first applied dual conditional cross-entropy filtering (Junczys-Dowmunt, 2018), retaining the top 4.5 million lines. Alice then trained a joint BPE subword model on the data. Evaluation Protocol To evaluate membership inference attacks on Alice's MT models, we use the following procedure. First, Bob asks Alice to translate f. Alice returns her result ê to Bob. Bob also has access to the reference e and uses his classifier g(f, e, ê) to infer whether (f, e) was in Alice's training data. The classification is reported to Carol, who computes "attack accuracy". Given a probe set P containing a list of (f, e, ê, l), where l is the label (in or out), this accuracy is defined as:

accuracy(P) = |{(f, e, ê, l) ∈ P : g(f, e, ê) = l}| / |P|.

If the accuracy is 50%, then the binary classification is the same as random, and Alice is safe. An accuracy slightly above 50% can be considered a potential breach of privacy.
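The evaluation protocol above is straightforward to operationalize. Below is a minimal Python sketch of Carol's accuracy computation; the function and variable names are illustrative, not taken from the paper's released code.

```python
# Sketch of Carol's evaluation: attack accuracy over a probe set.
# Each probe is a tuple (f, e, e_hat, label) with label in {"in", "out"};
# g is Bob's membership inference classifier g(f, e, e_hat) -> "in"/"out".
def attack_accuracy(probes, g):
    correct = sum(1 for f, e, e_hat, label in probes
                  if g(f, e, e_hat) == label)
    return correct / len(probes)

# 50% means Bob does no better than chance; values noticeably above 50%
# indicate a potential privacy leak.
```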
Shadow Model Framework Bob's initial approach to the attack is to use "shadow models", similar to Shokri et al. (2017). The idea is that Bob creates MT models with his own data to mimic (shadow) the behavior of Alice's MT model, then trains a membership inference classifier on these shadow models. To do so, Bob splits his data B_all into his own versions of an in-probe, an out-probe, and a training set in multiple ways to train MT models. He then translates these probe sentences with his own shadow MT models, and uses the resulting (f, e, ê) with its in or out label to train a binary classifier g(f, e, ê). If Bob's shadow models are sufficiently similar to Alice's in behavior, this attack can work. Bob first selects 10 sets of 5,000 sentences per subcorpus in B_all. He then chooses 2 sets, uses one as in-probe and the other as out-probe, and combines the in-probe and the rest (B_all minus the 10 sets) as a training set. We use the notations B^1+_in_probe, B^1+_out_probe, and B^1+_train for the first group of in-probe, out-probe, and training set. Bob then swaps the in-probe and out-probe to create another group, notated B^1-_in_probe, B^1-_out_probe, and B^1-_train. With 10 sets of 5,000 sentences, Bob can create 10 different groups of in-probe, out-probe, and training set. Figure 4 illustrates the data splits. For each group of data, Bob first trains a shadow MT model using the training set. He then uses this model to translate the sentences in the in-probe and out-probe sets. Bob now has a list of (f, e, ê) from different shadow models, and he knows for each sample whether it was in or out of the training data of the MT model used to translate that sentence. Bob MT Architecture Bob's model is a 4-layer Transformer with untied embeddings, model/embedding size 512, 8 attention heads, 1,024 hidden states in the feed-forward layers, and a word-based batch size of 4,096. The model is optimized with Adam (Kingma and Ba, 2015), regularized with label smoothing (0.1), and trained until perplexity on newstest2016 (Bojar et al., 2016) had not improved for sixteen consecutive checkpoints, computed every 4,000 batches. Bob has BPE subword models with a vocabulary size of 30k for each language. The mean BLEU score of the ten shadow models on newstest2018 is 38.6±0.2 (compared to 42.6 for Alice). Membership Inference Classifier Bob extracts features from (f, e, ê) for a binary classifier. He uses modified 1-4 gram precisions and a smoothed sentence-level BLEU score (Lin and Och, 2004) as features. Bob's intuition is that if an unusually large number of n-grams in ê match e, it could be a sign that this pair was in the training data and Alice memorized it. Bob calculates n-gram precision by counting the number of n-grams in the translation that appear in the reference sentence. In a later investigation Bob also considers the MT model score as an extra feature. Bob tried different types of classifiers, namely Perceptron (P), Decision Tree (DT), Naïve Bayes (NB), Nearest Neighbors (NN), and Multi-layer Perceptron (MLP). DT uses GINI impurity as the splitting metric, with a maximum depth of 5. Our NB uses a Gaussian distribution. For NN we set the number of neighbors to 5 and used the Minkowski distance. For MLP, we set the size of the hidden layer to 100, the activation function to ReLU, and the L2 regularization term α to 0.0001. Pseudocode 1 summarizes the procedure for constructing a membership inference classifier g(·) using Bob's dataset B_all; a sketch of the feature extraction and classifier training is also given below. For training the binary classifiers, Bob uses models from data splits 1 to 3 for training, 4 for validation, and 5 for his own internal testing. Note that the final evaluation of the attack is done by Carol, using the translations of A_in_probe and A_out_probe produced by Alice's MT model.
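To make the feature set concrete, here is a minimal sketch of Bob's pipeline under stated assumptions: NLTK for the smoothed sentence-level BLEU, scikit-learn for one of the five classifier types, and hand-rolled n-gram precision. This mirrors the features described above; it is not the authors' released implementation.

```python
# Sketch of Bob's feature extraction and classifier training.
from collections import Counter
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction
from sklearn.tree import DecisionTreeClassifier

def ngram_precision(hyp, ref, n):
    # Fraction of n-grams in the translation that also appear in the reference.
    hyp_ngrams = Counter(zip(*[hyp[i:] for i in range(n)]))
    ref_ngrams = Counter(zip(*[ref[i:] for i in range(n)]))
    overlap = sum((hyp_ngrams & ref_ngrams).values())  # clipped counts
    return overlap / max(sum(hyp_ngrams.values()), 1)

def features(e, e_hat):
    ref, hyp = e.split(), e_hat.split()
    precs = [ngram_precision(hyp, ref, n) for n in range(1, 5)]  # 1-4 grams
    bleu = sentence_bleu([ref], hyp,
                         smoothing_function=SmoothingFunction().method1)
    return precs + [bleu]

# X: feature rows from shadow-model translations; y: 1 for in, 0 for out.
clf = DecisionTreeClassifier(max_depth=5)  # GINI impurity is the default
# clf.fit(X, y); predictions on Alice probes: clf.predict(X_alice)
```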
Attack Results We now present a series of results based on the shadow model attack method described in Section 5. In Section 6.1 we will observe that Bob has difficulty attacking Alice under our definition of membership inference. In Sections 6.2 and 6.3 we will see that Alice nevertheless does leak some private information under more nuanced conditions. Section 6.4 describes the possibility of attacks beyond sentence-level membership. Section 6.5 explores attacks using external resources. Table 2 shows the accuracy of the membership inference classifiers. There are 5 different types of classifiers, as described in Section 5.3. The numbers in the Alice column show the attack accuracy on the Alice probes A_in_probe and A_out_probe; these are the main results. The numbers in the Bob columns show the results on the Bob classifiers' train, validation, and test sets, as described in Section 5.3. Main Result The attack accuracy on the Alice model is around 50%, meaning that the attack is not successful and the binary classification is almost the same as a random choice. The accuracy is around 50% for Bob:valid as well, meaning that Bob also has difficulty attacking his own simulated probes; therefore the poor performance on A_in_probe and A_out_probe is not due to mismatches between Alice's model and Bob's model. The accuracy is around 50% for Bob:train too, which reveals that the classifier g(·) is underfitting. This suggests that the current features do not provide enough information to distinguish in-probe and out-probe sentences. Figure 5 shows the confusion matrices of the classifier output on the Alice probes. We see that for all classifiers, whatever prediction they make is incorrect half of the time. Table 3 shows the result when the MT model score is added as an extra feature for classification. The result indicates that this extra information does not improve the attack accuracy. In summary, these results suggest that Bob is not able to reveal membership information at the sentence/sample level. This result is in contrast to previous work on membership inference in classification problems, which demonstrated high accuracy with Bob's shadow model attack. Additionally, note that while accuracies are close to 50%, the numbers for Bob:test tend to be slightly higher than Alice's for some classifiers. This may reflect the fact that Bob:test is a matched condition using the same shadow MT architecture, while the Alice probes are from a mismatched condition using an unknown MT architecture. It is important to compare both numbers in the experiments: accuracy on the Alice probes is the real evaluation, and accuracy on Bob:test is a diagnostic. Out-of-Domain Subcorpora Carol prepared out-of-domain (OOD) subcorpora, A_ood, that are separate from A_train and B_all. The membership inference accuracy for each subcorpus is shown in Table 4. The accuracy for OOD subcorpora is much higher than that for the original in-domain subcorpora. For example, the accuracy with the Decision Tree was 50.3% and 51.1% for ParaCrawl and CommonCrawl (in-domain), but 67.2% and 94.1% for EMEA and Koran (out-of-domain). This suggests that for OOD data Bob has a better chance of inferring membership.
In Table 4 we can see that the Perceptron has an accuracy of 50% for all in-domain subcorpora and 100% for all OOD subcorpora. Note that the OOD subcorpora only have out-probes; by definition, none of the samples from the OOD subcorpora are in the training data. We get such accuracy because our Perceptron always predicts out, as we can see in Figure 5. We believe this behavior is caused by applying a Perceptron to inseparable data, and this particular model happened to be trained to act this way. To confirm this we trained variations of Perceptrons by shuffling the training data, and observed that the resulting models had different output ratios of in and out, in some cases always predicting in for both in-domain and OOD subcorpora. Figure 6 shows the distribution of sentence-level BLEU scores per subcorpus. BLEU scores tend to be lower for the OOD subcorpora, and the classifier may exploit this information to distinguish membership better. But note that EMEA (out-of-domain) and CommonCrawl (in-domain) have similar BLEU yet vastly different membership accuracies, so the classifier may also be exploiting n-gram match distributions. Overall, these results suggest that Bob's accuracy depends on the specific type of probe being tested. If there is a wide distribution of domains, there is a higher chance that Bob may be able to reveal membership information. Note that in the actual scenario Bob has no way of knowing what is OOD for Alice, so there is no signal exploitable by Bob; this section is meant as an error analysis describing how membership inference classifiers behave differently when the probe is OOD. Out-of-Vocabulary Words We also focused on samples containing words that never appear in the training data of the MT model used for translation, i.e., out-of-vocabulary (OOV) words. For this analysis, we focus only on vocabulary that does not exist in the training data of Bob's shadow MT models, rather than Alice's, since Bob does not have access to her vocabulary. By definition there are only out-probes in the OOV subsets. For Bob's shadow models, 7.4%, 3.2%, and 1.9% of samples in the probe sets had one or more OOV words in the source, the reference, or both sentences, respectively. Table 5 shows the membership inference accuracy on the OOV subsets from Bob's test set, which is generally very high (>70%). This implies that sentences with OOV words are translated idiosyncratically compared to ones without OOV words, and the classifier can exploit this. Alternative Evaluation: Grouping Probes Section 6.1 showed that it is generally difficult for Bob to determine membership under the strict definition of one sentence per probe. What if we loosen the problem, letting the probe be a group of sentences?
We create probes of 500 sentences each to investigate this question. Table 6 shows the accuracy on probe groups. We can see that the accuracy is much higher than 50%, not only for Bob's training set but also for his validation and test sets. However, for Alice, we found that the classifiers were almost always predicting in, bringing the accuracy down to around 50%. This is due to the fact that the classifiers were trained on shadow models that have lower BLEU scores than Alice's model. This suggests that we need to incorporate information about the Alice/Bob MT performance difference. One way to adjust for the difference is to directly manipulate the input feature values. We adjusted the feature values, compensating by the difference in mean BLEU scores, and the accuracy on the Alice probes increased to 60% for P and DT, as shown in the "adjusted" column of Table 6. If a classifier took advantage of absolute feature values in its decision, the adjustment may give improvements; if not, improvements are less likely. Before the adjustment, all classifiers predicted everything to be in for the Alice probes. Classifiers like NB and MLP apparently did not change how often they predict in even after the normalization, whereas classifiers like P and DT did. In a real scenario this BLEU difference can be reasonably estimated by Bob, since he can use Alice's translation API to calculate a BLEU score on a held-out set and compare it with his shadow models. Another possible approach to the problem of classifiers always predicting in is to consider the relative size of the classifier output score. We can rank the samples by classifier output score and declare the top N% to be in and the rest to be out; a minimal sketch of this decision rule is given below. Figure 7 shows how the accuracy changes as the in percentage is varied. We can see that the accuracy can be much higher than the original result, especially if Bob can adjust the threshold based on his knowledge of the in percentage in the probe. This is the first strong general result for Bob, suggesting that membership inference attacks are possible if probes are defined as groups of sentences. (We can imagine an alternative definition of this group-level membership inference where Bob's goal is to predict the percentage of overlap with respect to Alice's training data. This assumes that model trainers make corpus-level decisions about what data to train on. Reformulating the binary problem as a regression problem may be useful for some purposes.) Importantly, note that the classifier threshold adjustment is performed only for the classifiers in this section, and is not relevant for the classifiers in Sections 6.1 to 6.3.
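The ranking-based decision described above is simple to state in code. Here is a minimal sketch; the function name and the assumption that higher classifier scores mean "more likely in" are illustrative.

```python
# Sketch of the ranking-based decision: sort probe groups by classifier
# score and call the top N% "in", the rest "out".
def rank_threshold(scores, in_fraction):
    # indices sorted by descending classifier score
    order = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    cutoff = int(len(scores) * in_fraction)
    labels = ["out"] * len(scores)
    for i in order[:cutoff]:
        labels[i] = "in"
    return labels

# e.g., rank_threshold([0.9, 0.2, 0.6], in_fraction=1/3) -> ["in","out","out"]
```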
Attacks using External Resources Our results in Section 6.1 demonstrate the difficulty of general membership inference attacks. One natural question is whether attacks can be improved with even stronger features or classifiers, in particular by exploiting external resources beyond the dataset Carol provided to Bob. We tried two different approaches: one using a Quality Estimation model trained on additional data, and another using a neural sequence model with a pre-trained language model. Quality Estimation (QE) is the task of predicting the quality of a translation at the sentence or word level. One may imagine that a QE model could produce useful features for teasing apart in and out, because in translations may have detectably higher quality. To train this model, we used the external dataset from the WMT shared task on QE (Specia et al., 2018). Note that for our language pair, German to English, the shared task only had a labeled dataset for an SMT system. Our models are NMT, so the estimation quality may not be optimally matched, but we believe this is the best data available at this time. We applied the Predictor-Estimator (Kim et al., 2017) implemented in the open-source QE framework OpenKiwi (Kepler et al., 2019). It consists of a predictor, which predicts each token of the target sentence given the target context and the source, and an estimator, which takes features produced by the predictor to estimate the labels; both are made of LSTMs. We employed this model because it is one of the best models seen in the shared tasks, and it does not require alignment information. The model metrics on the WMT18 dev set, namely Pearson's correlation, Mean Average Error, and Root Mean Squared Error for sentence-level scores, are 0.6238, 0.1276, and 0.1745, respectively. We used the sentence score estimated by the QE model as an extra feature for the classifiers described in Section 6.1. The results are shown in Table 7. We can see that this extra feature did not have any significant influence on the accuracy. In a more detailed analysis, we find that the reason is that our in and out probes both contain a range of translations from low to high quality, and our QE model may not be sufficiently fine-grained to tease apart any potential differences. In fact, this may be difficult even for a human estimator. Another approach to exploiting external resources is to use a language model pre-trained on a large amount of text. In particular, we used BERT (Devlin et al., 2019), which has shown competitive results on many NLP tasks. We used BERT directly as a classifier, and followed a fine-tuning setup similar to paraphrase detection: in our case the inputs are the English translation and reference sentences, and the output is the binary membership label. This setup is similar to the classifiers we described in Section 5.3, except that rather than training a Perceptron or Decision Tree on manually defined features, we directly apply a sequence encoder to the raw sentences. We fine-tuned the BERT Base Cased English model with Bob:train. The results are shown in Table 7.
Similar to the previous results, the accuracy is 50%, so the attack using BERT as a classifier was not successful. Detailed examination of the BERT classifier probabilities shows that they are scattered around 0.5 in all cases, and generally quite random for both Bob and Alice probes. This result is similar to the other, simpler classifiers in Section 6.1. In summary, these results show that even with external resources and more complex classifiers, the sentence-level attack is still very difficult for Bob. We believe this attests to the inherent difficulty of the sentence-level membership inference problem. Discussion and Conclusions We formalized the problem of membership inference attacks on sequence generation tasks, and used machine translation as an example to investigate the feasibility of a privacy attack. Our results in Sections 6.1 and 6.5 show that Alice is generally safe and it is difficult for Bob to infer sentence-level membership. In contrast to attacks on standard classification problems (Shokri et al., 2017), sequence generation problems may be harder to attack because the input and output spaces are far larger and more complex, making it difficult to determine the quality of the model output or how confident the model is. Also, the output distribution of class labels is an effective feature for the attacker in standard classification problems, but is difficult to exploit in the sequence case. However, this does not mean that Alice has no risk of leaking private information. Our analyses in Sections 6.2 and 6.3 show that Bob's accuracy on out-of-domain and out-of-vocabulary data is above chance, suggesting that attacks may be feasible in conditions where unseen words and domains cause the model to behave differently. Further, Section 6.4 shows that for a looser definition of membership attack on groups of sentences, the attacker can win at a level above chance. Our attack approach was a simple one, using shadow models to mimic the target model. Bob could attempt more complex strategies, for example by using the translation API multiple times per sentence. Bob could manipulate a sentence, for example by dropping or adding words, and observe how the translation changes. We might also use the metrics proposed by Carlini et al. (2018) as features for Bob; they show how recurrent models might unintentionally memorize rare sequences in the training data, and propose a method to detect this. Bob could also add "watermark sentences" with distinguishable characteristics to influence the Alice model, making the attack easier. To guard against these attacks, Alice's protection strategies may include random subsampling of the training data or additional regularization terms. Finally, we note some important caveats when interpreting our conclusions. The translation quality of the Alice and Bob MT models turned out to be similar in terms of BLEU. This situation favors Bob, but in practice Bob is not guaranteed to be able to create shadow models of the same standard, nor to verify how well they perform compared to the Alice model. We stress that when interpreting the results, one must evaluate both on Bob's test set and on the Alice probes side by side, as in Tables 2, 3, and 7, to account for the fact that Bob's attack on his own shadow model translations is likely an optimistic upper bound on the real attack accuracy against Alice's model.
We believe our dataset and analysis are a good starting point for research in these privacy questions. While we focused on MT, the formulation is applicable to other kinds of sequence generation models, such as text summarization and video captioning; these will be interesting as future work.

Figure 2: Illustration of data splits for Alice and Bob. There are k samples each for A_in_probe, A_out_probe, and A_ood. Alice's training data A_train excludes A_out_probe and subcorpus 3, while including A_in_probe. Bob's data B_all is a subset of Alice's data, excluding A_in_probe and subcorpus 2.
Figure 3: Illustration of the actual MT data splits. A_train does not contain A_out_probe, and B_all is a subset of A_train with A_in_probe and ParaCrawl excluded.
Figure 4: Illustration of how Bob splits B_all for each shadow model. Blue boxes are the in-probe B_in_probe and training data B_train, where the small box is the in-probe and the small and large boxes combined are the training data. The green box indicates the out-probe B_out_probe. Bob uses models from splits 1 to 3 as the training, 4 as the validation, and 5 as the test sets for his attack.
Figure 5: Confusion matrices of the attacks on the Alice model, per classifier type.
Figure 7: How the attack accuracy on the Alice set changes when probe groups are sorted by Perceptron output score and the threshold for classifying them as in is varied.
Table 1: Number of sentences per set and subcorpus. For each subcorpus, A_train includes A_in_probe and does not include A_out_probe. B_all is a subset of A_train, excluding A_in_probe and ParaCrawl. A_ood is for evaluation only, and only Carol has access to it.
Table 2: The Alice column shows the accuracy of the attack on the Alice probes A_in_probe and A_out_probe. The Bob columns show the accuracy on the classifiers' train, validation, and test sets. Note that, following the evaluation protocol explained in Section 4.3, only Carol the evaluator can observe the accuracy of the attacks on the Alice model.
Table 3: Membership inference accuracy when the MT model score is added as an extra classifier feature.
Table 4: Membership inference accuracy per subcorpus. The right 4 columns are results for out-of-domain subcorpora. Note that ParaCrawl is out-of-domain for Bob and his classifier, but in-domain for Alice and her MT model.
Figure 6: Distribution of sentence-level BLEU per subcorpus for A_in_probe (blue boxes), A_out_probe (green, left five boxes), and A_ood (green, right four boxes).
Table 6: Attack accuracy on probe groups. In addition to the original Alice set, we have an adjusted set where the feature values are adjusted by subtracting the mean BLEU difference between the Alice and Bob models.
Table 7: Membership inference accuracies for classifiers with the Quality Estimation sentence score as an extra feature, and for a BERT classifier.
Technological Developments in the Intelligent Transportation System (ITS) The aim of this research is to survey the technological developments in the intelligent transportation system; knowing these developments adds to the repertoire of research and deepens similar research. The method used in this research is a literature review: reading many journals that can form the basis for this research makes it possible to develop the problems that have been researched. The problem raised in this research is to identify technological developments in the smart transportation system and to present one example of a system that could be developed for smart transportation. This research surveys technologies that can be developed in smart transportation systems and provides examples of research that can be developed on such systems. One proposed technique can produce a credible indicator of visibility to motorists. The analytical results, corroborated by wide field measurements, confirm the superiority of the proposed system when compared to other visibility estimation methods such as conventional DCP and WIE; the results show about a 25% increase in accuracy over the other techniques considered. In addition, the proposed DCP is approximately 26% faster than conventional DCP. The promising results obtained support the integration of the proposed techniques in real-life scenarios [12]. This is a fast-moving research area, driven in part by rapid changes in cyber-physical systems. It should be acknowledged that existing vehicle communication systems are vulnerable to privacy breaches that require addressing. The tactical challenge is that many vehicle communication applications and services take advantage of basic safety messages that contain vehicle identity, location, and other personal data. A popular way of dealing with this privacy issue is to use a pseudonym change scheme to protect the identity and location of the vehicle. However, many such schemes suffer as costs grow and certificate management becomes more difficult with the number of pseudonyms generated and stored, raising doubts about the economic viability of the approach. A decentralized blockchain-based solution for pseudonym management can overcome this limitation. This scheme consists of pseudonym distribution and random exchange, which allows the reuse of existing pseudonyms by different vehicles. The results reported indicate that the proposed scheme can reuse existing pseudonyms and achieve a better level of anonymity at a lower cost than existing schemes [13]. An Intelligent Transport System (ITS) can improve safety and smooth traffic flow simultaneously. This may increase the number of trips within the Colombo Municipal Council (CMC) area, whose economy will improve through better access. This reflects the influence of business and residential areas that have negative social and environmental impacts; these are factors for the Colombo Municipal Council (CMC) to consider for trip information [14]. ITS is basically a combination of developments in computing, information technology, and telecommunications, combined with expertise in the automotive and transportation sectors. ITS's main developing technologies are taken from mainstream developments in these sectors.
Therefore, ITS can be defined as the application of computing, information, and communication technology to the real-time management of vehicles and networks involving the movement of people and goods [15]. Systems and techniques to improve traffic prediction accuracy include a system of one or more computers operable to: receive a request related to a traffic prediction; compare the first prediction error, for a first traffic prediction model (a moving average), with the second prediction error, for a second traffic prediction model (a historical average) calculated on a historical data set selected from previously recorded traffic data according to the day and time associated with the request; select the first or second model by comparing the prediction errors; and provide the output for use in traffic prediction, where the output comes from applying the first traffic prediction model when the first prediction error is less than the second prediction error, and from applying the second traffic prediction model when the first prediction error is not less than the second [16]. A minimal sketch of this selection rule is given below. ITS is a real-time, efficient, and comprehensive transportation management system. Speed, vehicle type, and traffic volume are the main dynamic parameters of an intelligent traffic detection information acquisition system; accurate knowledge of these parameters allows traffic signal scheduling to be controlled effectively, thereby ensuring efficient circulation of the entire traffic system. At the same time, timely and accurate understanding of speed, traffic volume, and vehicle traffic information improves long-term road traffic prediction and planning from basic data, and offers strong guidance. As part of ITS, the traffic detection technology currently in common use at home and abroad is designed and used independently for different test objects. If the system design can be integrated to detect each object, unifying communication engineering, information theory, traffic detection technology, and data processing technology, it not only reduces the complexity of traffic data detection, but also saves the construction costs incurred by building a separate detection system for each object, and is more conducive to increasing the application effect of the smart transportation system. Guided by this idea, a comprehensive traffic data detection system based on laser scanning data is proposed, which can concurrently run speed detection, traffic detection, and vehicle recognition modules, process the detected data, and build a database, realizing diversified data acquisition and processing functions [17].
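The error-based model selection described in [16] reduces to a few lines of logic. Below is a minimal sketch; the function names, the use of absolute error, and the assumption that recent observations and historical slot data are already available are all illustrative, not taken from the patented system itself.

```python
# Sketch of the error-based selection between a moving-average predictor
# and a historical-average predictor, per the rule described in [16].
def moving_average(recent):
    # average of recently observed traffic values
    return sum(recent) / len(recent)

def historical_average(history_for_slot):
    # average of past values recorded for the same day-of-week and time slot
    return sum(history_for_slot) / len(history_for_slot)

def select_prediction(recent, history_for_slot, actual_last):
    ma_pred = moving_average(recent)
    ha_pred = historical_average(history_for_slot)
    ma_error = abs(ma_pred - actual_last)
    ha_error = abs(ha_pred - actual_last)
    # Use the moving average only when its error is strictly smaller;
    # otherwise fall back to the historical average, as described above.
    return ma_pred if ma_error < ha_error else ha_pred

print(select_prediction(recent=[120, 135, 128],
                        history_for_slot=[110, 140, 125, 130],
                        actual_last=129))
```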
Two-wheeled self-balancing vehicles, commonly called Segways, are a growing research topic, and the Line Tracer is a robot that can drive on its own by reading predefined lines and lanes and balancing automatically based on changes and shifts in its balance point. This research aims to extend earlier work by combining a two-wheeled balancing robot with a Line Tracer, producing a two-wheeled robot that can run on its own. The Segway is expected to yield a design that can run automatically, carrying goods without human assistance. In previous research, a two-wheeled Lego robot was successfully created using Model Predictive Control with an inverted pendulum. Another study used Q-Learning so that a robot can learn to solve a route-finding problem on its own after some time learning the same path. This research will be conducted by making a Segway design that uses a robot as the object of research. The Lego NXT is used as the Segway model and is given a color sensor driven by a PID controller for the testing process. The Lego NXT robot is the model and test platform for the control algorithm under study. The PID control algorithm is programmed in the high-level C language and has good performance for tracking stability; a sketch of such a PID loop is given below. A color sensor is a device that can distinguish colors; here the color sensor is used to make the robot recognize the color of a colored track. The suitability of the control algorithm, the color sensor, and the robot's performance will be investigated in the framework of a Segway robot that runs on a provided track automatically, without a driver, by adjusting the performance of the two motors, one per wheel, so that the motors can take turns balancing and shifting position [18]. For predictive information, drivers implicitly project future conditions based on historical (previously experienced) and current traffic information. Therefore, short-term prediction of traffic conditions is required for traffic management and tourist information systems [19].
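To illustrate the steering loop described above, here is a minimal sketch of a PID controller for a color-sensor line tracker. The gains, the target reflectance value, and the motor interface are illustrative assumptions, not the study's actual NXT program (which is written in C); the structure, however, matches a standard PID edge-following loop.

```python
# Sketch of a PID steering loop for a two-wheeled line-tracking robot.
KP, KI, KD = 0.8, 0.01, 0.3  # assumed gains, tuned per robot
TARGET = 50                  # assumed reflected-light value on the line edge
BASE_SPEED = 40              # assumed base motor power

integral, prev_error = 0.0, 0.0

def control_step(light_value, dt):
    """One PID step: returns (left_power, right_power)."""
    global integral, prev_error
    error = TARGET - light_value
    integral += error * dt                 # accumulated error term
    derivative = (error - prev_error) / dt # rate-of-change term
    prev_error = error
    turn = KP * error + KI * integral + KD * derivative
    # Driving the two motors in opposite directions around the base
    # speed steers the robot back toward the line edge.
    return BASE_SPEED + turn, BASE_SPEED - turn

print(control_step(light_value=42, dt=0.02))
```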
Result: in the third stage, this research produces the data and system proposals that answer the research problems raised; with this stage the research is complete.
III. RESULT AND DISCUSSION
For system development, the innovation proposed for the Intelligent Transportation System is to improve its resilience, for example by carrying out maintenance on the system every time the vehicle is turned on. This prevents tampering with the data in the system, through which a rider's whereabouts can be determined; if such tampering occurred it would be very dangerous for the rider and could lead to unwanted events, so periodic system maintenance is recommended. How the system works can be seen in the following flowchart.
Fig 2. Flowchart
The flowchart above describes a smart transportation system that can operate without a driver, using Autosteer, Auto Lane Change, Automatic Emergency Steering and Side Collision Warning, as well as an Autopark feature. When the driver enters the vehicle, the system checks the driver's data: if the data is registered, the vehicle engine starts; if the driver's data is not registered, the vehicle will not start (a minimal sketch of this start-up check is given at the end of this section). Once the engine is running, the system displays traffic conditions, such as the locations of congestion points, searches for alternative routes, and can even guide the vehicle directly without the driver having to find a route. Intelligent transportation systems have developed rapidly in recent years and have become one of the effective ways of addressing traffic problems. Advanced information technology, computer technology, data-communication and transmission technology, electronic recognition, and automatic control technology have injected new vitality into the further development of intelligent transportation systems; within them, traffic-information collection and detection technology, and in particular the automatic detection, recognition, and tracking of moving vehicles, is the most important part. To avoid undesirable events when operating smart transportation, the system must undergo regular periodic maintenance.
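A minimal sketch of the start-up check in the flowchart. The registry contents and function names are hypothetical; the sketch only mirrors the branch "registered driver, start engine and show traffic information; otherwise refuse to start".

```python
REGISTERED_DRIVERS = {"D-001", "D-042"}  # hypothetical driver registry

def show_traffic_information():
    # Placeholder for the congestion map / alternative-route display.
    print("Displaying congestion points and alternative routes...")

def start_vehicle(driver_id: str) -> bool:
    """Start the engine only for a registered driver, as in Fig. 2."""
    if driver_id not in REGISTERED_DRIVERS:
        print("Driver not registered: vehicle will not start.")
        return False
    print("Engine started.")
    show_traffic_information()
    return True

start_vehicle("D-001")   # starts
start_vehicle("D-999")   # refused
```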
IV. CONCLUSION
With rapid socio-economic development, a comprehensive modern traffic network has become an important index for evaluating the level of social development. China's traffic has begun to grow explosively, which brings great convenience to people's daily life and travel but, with the growth of traffic, also brings a great deal of trouble and inconvenience. This paper explores automatic detection and recognition technology for vehicles moving in an intelligent transportation system, analyses and compares the advantages and disadvantages of existing vehicle-detection technologies and testing equipment, and, in combination with the actual state of current technological development, proposes a new way of detecting traffic: the analysis and processing of laser-scanning data to obtain traffic-information data. Current traffic-detection technologies and equipment, at home and abroad, are generally designed and used independently for different test objects. Against this status quo, the integrated use of communication engineering, information theory, and traffic-detection technology in intelligent transportation systems is designed to achieve an integrated multi-object acquisition system: a traffic-information acquisition platform based on laser-scanning data that runs the vehicle-speed detection module, traffic-flow statistics, and automatic vehicle-type recognition concurrently, processes the detected data, and creates a database at the same time, realizing multifunctional data acquisition and processing. Compared with traditional detection systems built separately for each detection object, this not only reduces the complexity of traffic-data detection but also saves the construction cost of deploying a different system per object, and is therefore more conducive to increasing the application effectiveness of a smart transportation system.
An analytical approach to engineer multistability in the oscillatory response of a pulse-driven ReRAM
A nonlinear system exhibiting a unique asymptotic behaviour, while being continuously subject to a stimulus from a certain class, is said to suffer from fading memory. This interesting phenomenon was first uncovered in a non-volatile tantalum oxide-based memristor from Hewlett Packard Labs back in 2016, through a deep numerical investigation of a predictive mathematical description, known as the Strachan model, later corroborated by experimental validation. It was then found that fading memory is ubiquitous in non-volatile resistance switching memories. A nonlinear system may, however, also exhibit a local form of fading memory, in case, under an excitation from a given family, it may approach one of a number of distinct attractors, depending upon the initial condition. A recent bifurcation study of the Strachan model revealed how, under specific train stimuli composed of two square pulses of opposite polarity per cycle, the simplest form of local fading memory affects the transient dynamics of the aforementioned Resistive Random Access Memory cell, which would then asymptotically act as a bistable oscillator. In this manuscript we propose an analytical methodology, based on the application of analysis tools from Nonlinear System Theory to the Strachan model, to craft the properties of a generalised pulse train stimulus in such a way as to induce the emergence of complex local fading memory effects in the nano-device, which would consequently display an interesting tuneable multistable oscillatory response around desired resistance states. The last part of the manuscript discusses a case study, shedding light on a potential application of the local history erase effects, induced in the device via pulse train stimulation, for compensating the unwanted yet unavoidable drifts in its resistance state under power-off conditions. The properties of the stimulus may be crafted in such a way as to induce monostability or different forms of multistability (see section "Application of the theory to endow the ReRAM cell with three, four, or five oscillatory behaviours") in the device oscillatory behaviour upon request. The capability of the ReRAM cell to act as a monostable or multistable oscillator under suitable periodic pulse train stimulation may be leveraged to develop novel forms of data detection and computation in memory for artificial intelligence applications in the years to come. For example, as revealed in section "Compensating for the drift in the resistance of crosspoint devices under power off conditions", the regular application of a suitable generalised pulse train voltage signal across each crosspoint nanodevice may make it possible to correct an unwanted drift in its resistance state under power-off conditions. Finally, the conclusions, summarising the most significant results of this research study, are drawn in section "Conclusions".
Memristor Model
The Strachan model [4] falls in the class of first-order extended voltage-controlled memristors [13], defined via the DAE set [14]

ẋ = g(x, v),  (1)
i = G(x, v) · v,  (2)

where the ODE (1), referred to as the state equation, dictates the rate of change of the memory state x of the one-port as an input voltage signal v falls between its terminals. The algebraic constraint (2), known as the state- and input-dependent Ohm law, defines how state and voltage affect the flow of the output current signal i through the device stack. In (1) ((2)), g(x, v) (G(x, v)) represents the state evolution (memductance) function. Let us assume x to be constrained to lie at all times within a closed set D ≜ [x_L, x_U]. In the Strachan model the state evolution function reads as

g(x, v) = step(v) · g_SET(x, v) + step(−v) · g_RESET(x, v),  (3)

where step(·) is the Heaviside function, while the SET g_SET(x, v) and RESET g_RESET(x, v) components of g(x, v), referred to as the SET and RESET state evolution functions, and governing the evolution of the device memory state under positive and negative input voltages, respectively, are in turn defined via Eqs. (4) and (5), in which p = i · v denotes the power dissipated in the memristor as a voltage signal is applied between its terminals, entering each expression through a factor of the form exp(p/σ_p). The formula for the memductance function in the Strachan model assumes the form of Eq. (6): for any given voltage v, the higher the memory state x, the larger the memductance. [Eqs. (4)-(6) and Table 1 appear here in the original manuscript; Table 1 lists the Strachan model parameters fitted to a Ta₂O₅₋ₓ physical sample, with the lower x_L and upper x_U bounds of the state existence domain D respectively equal to 0 and 1.] In the remainder of this paper, the ReRAM cell model, consisting of the ODE (1) with state evolution function (3), where the SET and RESET components are respectively expressed by Eqs. (4) and (5), and of the algebraic relation (2) with memductance function (6), is referred to for simplicity as the Strachan DAE set. Table 1 reports the values assigned to the parameters in Eqs. (4), (5), and (6) so as to allow the resulting model to reproduce experimental data, extracted from a Ta₂O₅₋ₓ physical sample, to within a preliminarily specified degree of accuracy [3].
Theoretical tools
This section introduces the theoretical concepts applied in the research study discussed later on.
The time average state dynamic route technique
When a voltage signal v_S falls across the ReRAM cell, as illustrated in Fig. 1a, the time average x̄ of its memory state x, referred to as the time average state for short, evolves with time according to

x̄(t) = (1/T) ∫_{t−T}^{t} x(τ) dτ.  (7)

Taking the time derivative of both sides of Eq. (7) gives

dx̄/dt = [x(t) − x(t−T)]/T = (1/T) ∫_{t−T}^{t} g(x(τ), v_S(τ)) dτ,  (8)

where the last step follows from the integration of the state equation (1) for v = v_S. In principle Eq. (8) could be employed to explore the response of the device to any periodic stimulus, but some numerical integration method would then be necessary for determining its solutions. However, as explained shortly, for stimuli composed of rectangular pulses of suitable widths and heights, the study of the behaviour of the periodically-forced device may be considerably simplified, and some analytical developments, following from a simplification of Eq. (8), are possible. Let us first consider the excitation scenario analysed in the bifurcation study by Pershin and Slipko [8].
With reference to Fig. 1b, the train voltage stimulus v_S applied across the memristor features a first τ₊-long SET pulse of positive polarity and amplitude V₊ and a second τ₋-long RESET pulse of negative polarity and amplitude V₋ over each cycle of length T = τ₊ + τ₋. Equation (8) then reduces to

dx̄/dt = (1/T) [ ∫_SET g_SET(x, V₊) dτ + ∫_RESET g_RESET(x, V₋) dτ ],  (9)

where the first (second) integral runs over the τ₊-long (τ₋-long) part of each cycle. Assuming the positive (negative) SET (RESET) pulse induces a relatively small increase (decrease) in the device memory state over the first (second) part of each cycle, it is possible to substitute the state x with its time average x̄ in each integrand without introducing a large error in the resulting approximation. Equation (9) then boils down to

dx̄/dt = ẋ̄|SET + ẋ̄|RESET,  (10)

with ẋ̄|SET = τ̄₊ · g_SET(x̄, V₊) (11) and ẋ̄|RESET = τ̄₋ · g_RESET(x̄, V₋) (12), where τ̄₊ ≜ τ₊/T and τ̄₋ ≜ τ₋/T.
[Fig. 1 caption, panel (c): the generalised train features P SET pulses followed by one RESET pulse per cycle. The RESET pulse of height V₋ and width τ₋ follows the series of SET pulses. The ith SET pulse is V₊,ᵢ high and τ₊,ᵢ wide, with i ∈ {1, …, P}. The ordering of the positive pulses from the lowest to the highest in each input cycle follows the convention adopted in the systematic methodology to engineer multistability in the steady-state oscillatory response of the ReRAM cell to a generalised train stimulus (refer to section "A systematic methodology to craft the pulse stimulus for enabling the ReRAM cell to support multiple oscillations around prescribed resistance levels"). However, this has no effect on the simulations; in fact, to facilitate their convergence, in the numerical investigations the SET pulses were listed from the narrowest to the widest before being applied in this order, one after the other, across the device.]
The ẋ̄ versus x̄ locus, defined by the right-hand side of Eq. (10) (refer to Fig. 2c), is the so-called time average state dynamic route (TA-SDR) resulting from the earlier arbitrarily specified pulse train stimulation of the nanodevice [16]. A state dynamic route (SDR), namely the ẋ versus x locus derivable from the state Eq. (1) for a given DC value V assigned to the voltage v, governs the time evolution of the memory state of a first-order memristor under the specified bias stimulus. In this regard, it is worth observing that a number of research studies have recently reported laboratory measurements of SDRs acquired from memristive nanodevices, establishing an important communication channel between theoreticians and experimenters; the interested reader is invited to consult the works of Messaris [17], Maldonado [18], and Marrone [19] for details. A TA-SDR can then be interpreted as an extension of the SDR, enabling the investigation of the response of the same device to a particular AC periodic square pulse train. On condition that the choice of the pulse train parameters does not jeopardise the accuracy of the approximation inherent in Eq. (10), the analysis of this new graphic tool enables the determination of the number, mean values, and stability properties of all the admissible asymptotic oscillations in the memory state of the periodically-forced device. Moreover, the predictive capability of the TA-SDR technique may be verified by means of another, more rigorous system-theoretic methodology, described shortly in section "The state change per cycle map tool".
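Since the closed-form expressions for g_SET and g_RESET are not reproduced above, the following sketch illustrates the TA-SDR construction of Eq. (10) with stand-in state evolution functions: a Gaussian bell in x for the SET branch (consistent with the bell shape discussed later) and a monotone RESET branch. All functional forms and parameter values are illustrative assumptions, not the fitted Strachan-model expressions.

```python
import numpy as np

# Stand-in state evolution functions (NOT the fitted Strachan expressions):
# a Gaussian bell in x for SET, a monotonically growing magnitude for RESET.
def g_set(x, v_pos, centre=0.3, width=0.05, gain=1.0):
    return gain * v_pos * np.exp(-((x - centre) ** 2) / (2 * width ** 2))

def g_reset(x, v_neg, gain=1.0):
    return gain * v_neg * x          # v_neg < 0, so this term is negative

def ta_sdr(x_bar, v_pos, v_neg, tau_pos, tau_neg):
    """Right-hand side of the TA-SE (10): dx̄/dt as a function of x̄."""
    T = tau_pos + tau_neg
    return (tau_pos / T) * g_set(x_bar, v_pos) + (tau_neg / T) * g_reset(x_bar, v_neg)

# Locate TA-SE equilibria as sign changes of the TA-SDR along the state axis.
x = np.linspace(0.0, 1.0, 2001)
rate = ta_sdr(x, v_pos=0.46, v_neg=-0.4, tau_pos=1e-6, tau_neg=1e-6)
crossings = x[:-1][np.sign(rate[:-1]) != np.sign(rate[1:])]
print("TA-SE equilibria (stand-in model):", crossings)
```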
The state change per cycle map tool
The State Change Per Cycle Map (SCPCM) analysis tool [20] is inspired by the Poincaré map technique [12], a powerful method from Nonlinear Dynamics Theory, which facilitates the study of an nth-order non-autonomous periodically-forced continuous-time system, equivalent to an (n+1)th-order autonomous continuous-time system where time assumes the role of a state variable, through the analysis of a simpler n-dimensional discrete-time one. For the Strachan model, featuring order n = 1, and continuously driven by an input signal v forced to follow a generic periodic voltage stimulus v_S, the Poincaré map assumes the one-dimensional form

x_k = P(x_{k−1}),

where k ∈ ℕ_{>0}, and x_k stands for the sample x(k·T) of the solution of the ODE (1), with g(x, v) expressed by formula (3), at the end of the kth input cycle. For k = 1 the map reduces to x₁ = P(x₀), where x₀ denotes the ODE initial condition x(0), and x₁ is the state sample x(T) at the end of the first input cycle. The Poincaré map accurately provides the sequence of values x₀, x₁, …, also referred to as return points, extracted from the time series of the ODE solution at regular T-long time intervals from the initial instant t = 0 of the simulation. The one-dimensional discrete-time system is said to admit a fixed point x* if it maps such a point into itself, which is mathematically formulated via the equality x* = P(x*). A fixed point of the map corresponds to a steady-state oscillatory solution of the original non-autonomous continuous-time system. Moreover, the periodic attractor of the non-autonomous ODE (1) is asymptotically stable if and only if the fixed point of the map is also asymptotically stable, which implies that the inequality |P′(x_k)| at x_k = x* < 1 holds true. Figure 3a sketches qualitatively how the graph of the map may look for an exemplary case study where, similarly to what is assumed in Fig. 2, the map admits three fixed points, of which the outer ones (inner one) are stable (is unstable). A few coloured zig-zag trajectories, known as cobweb plots [12,20] in Nonlinear Dynamics Theory, are also displayed to show the discrete-time evolution of the map from distinct initial conditions toward one of the two LAS fixed points. In our study a map of this kind can be extracted from the Strachan DAE set, when the input voltage v is enforced to follow a given periodic voltage stimulus v_S, e.g. in the form of a rectangular pulse train, by recording samples of the memristor state x at regular T-long time intervals from the beginning of each of a large ensemble of simulations, differing in the initial conditions, and then plotting, for each of the resulting time series, the kth sample x_k = x(k·T) versus the (k−1)th one x_{k−1} = x((k−1)·T), with k ∈ ℕ_{>0}. For k = 1 the SCPCM reduces to Δx(1; 0) = x₁ − x₀ = P(x₀) − x₀, providing the change in the memory state over the first input cycle.
[Fig. 3 caption, panel (b): the Δx(k; k−1) = x_k − x_{k−1} versus x_{k−1} locus, illustrating the SCPCM of the ReRAM cell subject to the periodic stimulus inducing the state motion whose Poincaré map is shown in plot (a).]
More generally, the SCPCM provides the net change Δx(k; k−1) in the memory state x over the kth input cycle as a function of its value x_{k−1} at time t = (k−1)·T, i.e. either at the end of the (k−1)th input cycle, if k > 1, or at the beginning of the simulation, if k = 1. The SCPCM admits a graphical visualisation on the Δx(k; k−1) versus x_{k−1} plane, as sketched qualitatively in Fig. 3b.
This locus corresponds to the P(x_{k−1}) versus x_{k−1} graph in plot (a) of the same figure. For any initial condition x₀ from a set of values uniformly distributed across the state existence domain D, the net change Δx(k; k−1) in the state x over the time interval [(k−1)·T, k·T] may be marked on this plane at the abscissa corresponding to the state value x_{k−1} at t = (k−1)·T, for each k ∈ ℕ_{>0}. A suitable interpolation method can then be employed to derive the curve which best fits the sequences of return points collected for all the selected initial conditions. Arrows pointing east (west) are then superimposed along the graph of a SCPCM in the upper (lower) half of the Δx(k; k−1) versus x_{k−1} plane to indicate a progressive step-wise increase (decrease) in the discrete-time evolution of the Poincaré return point when Δx(k; k−1) is positive (negative). For each k value in ℕ_{>0}, the kth return point x_k of the Poincaré map for a given initial condition x₀ may be obtained by adding the abscissa x_{k−1}, representing either the (k−1)th return point, if k > 1, or the initial condition, if k = 1, to the ordinate Δx(k; k−1) of the point of intersection between the graph of the SCPCM and the vertical line passing through the point (x_{k−1}, 0). A fixed point x* of the Poincaré map corresponds to the state value at which the SCPCM crosses the x_{k−1} axis, as Δx(k; k−1) = 0 therein. The stability of a fixed point of the map may be inferred by monitoring the direction of the arrows in its neighbourhood: arrows pointing toward (away from) a fixed point on both its left and right sides provide clear evidence for its asymptotic stability (instability). Alternatively, the same information can be retrieved by inspecting the slope of the graph of the SCPCM at the fixed point: the fixed point is asymptotically stable (unstable) if and only if the slope of the Δx(k; k−1) versus x_{k−1} locus is negative (positive) at its location.
Remark 1 The vector field of the original non-autonomous ODE (1) maps a given state value into some other one over a T-long time span, irrespective of the number of input cycles elapsed since the beginning of the simulation. Therefore, a more efficient strategy to compute a SCPCM, in comparison to the method described earlier, envisages testing the response of the ReRAM cell to a predefined periodic stimulus across a T-long time span only, for each initial condition x₀ from a set of values uniformly distributed across the state existence domain D. Specifically, in each iteration step the state value x₁ ≜ x(T) at the end of a T-long simulation, and the state change Δx(1; 0) relative to the initial condition, would be recorded, identifying a particular point on the plane spanned by x₀ and Δx(1; 0) on the horizontal and vertical axes, respectively. Interpolating the data through some best-fit curve, and renaming the label on the horizontal (vertical) axis as x_{k−1} (Δx(k; k−1)), finally results in the graph of the SCPCM of interest.
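A sketch of the efficient SCPCM computation described in Remark 1, continuing the earlier sketch (it reuses the stand-in g_set and g_reset defined there; the fitted Strachan expressions are in the original manuscript). For each initial condition the state equation is integrated over a single input period, and the per-cycle state change is recorded.

```python
from scipy.integrate import solve_ivp

def v_s(t, v_pos, v_neg, tau_pos, tau_neg):
    """Two-pulse-per-cycle train: SET pulse first, RESET pulse second."""
    return v_pos if (t % (tau_pos + tau_neg)) < tau_pos else v_neg

def state_rate(t, x, v_pos, v_neg, tau_pos, tau_neg):
    v = v_s(t, v_pos, v_neg, tau_pos, tau_neg)
    return g_set(x, v) if v > 0 else g_reset(x, v)

def scpcm(v_pos, v_neg, tau_pos, tau_neg, n_ic=50):
    """One-period integrations from a grid of initial conditions (Remark 1)."""
    T = tau_pos + tau_neg
    x0s = np.linspace(0.0, 1.0, n_ic)
    dxs = []
    for x0 in x0s:
        sol = solve_ivp(state_rate, (0.0, T), [x0], max_step=T / 200,
                        args=(v_pos, v_neg, tau_pos, tau_neg))
        dxs.append(sol.y[0, -1] - x0)       # Δx(1; 0) for this x0
    return x0s, np.asarray(dxs)

x0s, dxs = scpcm(v_pos=0.46, v_neg=-0.4, tau_pos=1e-6, tau_neg=1e-6)
# Fixed points of the Poincaré map sit where Δx changes sign.
print(x0s[:-1][np.sign(dxs[:-1]) != np.sign(dxs[1:])])
```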
All in all, the SCPCM technique enables the exploration of the response of a first-order nonlinear dynamic system to any periodic stimulus. It thus extends the applicability scope of the TA-SDR tool, which is employable solely in those case studies where a periodic train of rectangular pulses stimulates a system of this kind. Furthermore, following the steps described in Remark 1, it should be possible to acquire a SCPCM experimentally, provided access to the device state were possible; the measurement of a TA-SDR, on the other hand, seems to pose harder challenges. Moreover, as elucidated in this section, no approximation is involved in the derivation of a SCPCM, which, as a result, may be used to verify the predictions of the TA-SDR investigation technique. Despite its weak points, however, the latter method allows the derivation of an analytical approach to engineer multistability in the oscillatory response of the nano-device to a generalised periodic pulse train voltage stimulus, as described in section "An analytical methodology to operate the pulse-driven ReRAM cell as a multimodal device with initial condition-dependent oscillatory behaviour". In addition, upon availability of a reliable model for the ReRAM cell, and for any given rectangular pulse train stimulus, the computation of the SCPCM takes a much longer time than the determination of the respective TA-SDR: while the latter task simply requires plotting the right-hand side of the TA-SE (10), adapted to the excitation signal of interest, against the time average state, the former requires the numerical integration of the state equation over one input cycle for an adequate number of initial conditions, as explained in Remark 1.
Insights into the model
Before introducing the analytical framework allowing the induction of mono- or multistability in the oscillatory response of the ReRAM cell to a generalised pulse train voltage stimulus, this section discusses numerical investigations which shed light on the properties of the Strachan model equations as well as on its response to a square wave excitation signal from the class illustrated in Fig. 1b.
Dynamic route map
The Dynamic Route Map (DRM) of the first-order ReRAM cell under focus is a family of SDRs, each of which corresponds to the plot of the state evolution function (3) against the state for a particular DC value V assigned to the voltage v. When V is negative (positive), the resulting g(x, V) versus x locus is referred to as a RESET SDR (SET SDR). A number of RESET (SET) SDRs, obtained by sweeping |V| in 0.2 V-long steps from 0.2 V to 1 V, are shown in plots (a), (c), (e), (g), and (i) ((b), (d), (f), (h), and (j)) of Fig. 4. As may be inferred by inspecting the graphs in the left (right) column of this figure, the choice of the negative (positive) DC value V has no significant (a strong) impact on the shape of the resulting RESET (SET) ẋ versus x locus. We may thus conclude that, toward the development of a strategy to massage the SET ẋ̄|SET and RESET ẋ̄|RESET components of the TA-SE (10) in such a way as to enable a desired number of intersections between their graphs, the fine control of the position of the gaussian bell-shaped g(x, V) versus x locus across the horizontal axis, through smooth changes in the positive DC voltage V, is worth exploiting.
Numerical investigation of the ReRAM response to the basic pulse train stimulus
Under a two-pulse-per-cycle train, the TA-SE (10) may admit a different number of equilibria for given pulse widths τ₊ and τ₋, depending upon the selection of the RESET V₋ and SET V₊ pulse heights. This is clearly illustrated in Fig. 5a.
Plot (a) of Fig. 5 visualises through a three-dimensional surface each admissible equilibrium x̄_eq = x̄_eq(V₊, V₋) which the TA-SE (10) admits for r = 1, endowing the pulse train with a 50% duty cycle, when V₋ and V₊ are in turn chosen as the abscissa and ordinate of any point of the coloured map in plot (b) of the same figure. The dark blue domain from plot (a) contains the only globally asymptotically stable (GAS) equilibrium x̄_eq which the TA-SE features upon selecting (V₋, V₊) anywhere within the green region from plot (b). On the other hand, the bottom and top violet domains (the cyan domain) from plot (a) include (includes) the leftmost and rightmost LAS equilibria x̄_eq,1 and x̄_eq,3 (the unstable equilibrium x̄_eq,2) of the TA-SE corresponding to any choice of the input parameter pair within the red domain from plot (b). For the sake of completeness, the white area in the coloured map of plot (b) contains input parameter pairs for which there exists no state value at which ẋ̄|SET = −ẋ̄|RESET. In a scenario of this kind, if ẋ̄ is found to be strictly negative (strictly positive), the memory state of the periodically-forced ReRAM cell progressively decreases (increases) toward the lower (upper) bound x_L (x_U) of its existence domain D. As the operation of the device around its fully-RESET or fully-SET state is not recommendable, the selection of input parameter pairs belonging to the white region in the map of plot (b) is avoided in the exemplary excitation case studies to follow.
Monostability
Taking V₋ and V₊ in turn as the abscissa and ordinate of the point (−0.4 V, +0.46 V), indicated by a black cross marker and belonging to the green region in the map of Fig. 5b, the TA-SDR analysis predicts a monostable oscillatory behaviour for the periodically-forced ReRAM cell, as may be inferred from Fig. 6a, showing the |ẋ̄|SET| and |ẋ̄|RESET| versus x̄ loci for the specified (V₋, V₊) pair. The memory state x is expected to experience a steady-state oscillation around the TA-SE equilibrium x̄_eq = 0.308. With reference to plot (a) of Fig. 5, the left vertical black dashed line crosses the blue domain of the three-dimensional surface in a single point, specifically (V₋, V₊, x̄_eq) = (−0.4 V, +0.46 V, 0.308), indicated by a green filled circle. Choosing sufficiently small values for the RESET τ₋ and SET τ₊ pulse widths is instrumental in preventing the error in the approximation inherent in Eq. (10) from jeopardising the accuracy of its predictions. Fixing both τ₋ and τ₊ to 1 µs, the SCPCM of the ReRAM cell, shown in Fig. 6b, confirms the conclusions drawn via the TA-SDR analysis from plot (a) of the same figure.
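The kind of check reported in Fig. 6c can be sketched by integrating the state equation under the pulse train from two initial conditions and observing convergence to a single oscillation. The snippet below continues the previous sketches (it reuses state_rate and solve_ivp, and the stand-in model), so the numbers it prints are illustrative only.

```python
# Integrate the stand-in model over many cycles from two initial conditions;
# in the monostable regime both trajectories settle onto the same oscillation.
T = 2e-6                                   # tau_pos = tau_neg = 1 µs
for x0 in (0.15, 0.85):
    sol = solve_ivp(state_rate, (0.0, 400 * T), [x0], max_step=T / 200,
                    args=(0.46, -0.4, 1e-6, 1e-6))
    print(f"x0 = {x0:.2f} -> mean of final samples: {sol.y[0][-2000:].mean():.3f}")
```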
Figure 6c shows the progressive approach of the solution x of the state Eq. (1) toward the only possible asymptotic periodic waveform, revolving approximately around x̄_eq = 0.308, from either of two initial conditions, one lying well below the minimum and the other well above the maximum of the steady-state oscillation. Plot (d) of the same figure visualises both the periodic pulse train voltage stimulus v_S (in blue) and the steady-state oscillation x_ss of the memristor state (in green), together with its mean value x̄_ss, its prediction, namely the TA-SE equilibrium x̄_eq, as well as the map fixed point x*, corresponding to the minimum value the state assumes over each cycle.
[Fig. 5 caption: (a) equilibria which the TA-SE (10), associated with a train voltage stimulus featuring two pulses of opposite polarity per cycle, may admit when the SET τ₊ and RESET τ₋ pulse widths are identical, as a function of the RESET V₋ and SET V₊ pulse heights, swept across the ranges [−2, 0] V and [0, 1.2] V, respectively. The dark blue surface includes all the GAS equilibria of the TA-SE in the monostable oscillatory operating mode of the ReRAM cell; the cyan (magenta) surface contains all the unstable (all the LAS) equilibria of the TA-SE in the bistable oscillatory operating mode. (b) Projection of the surface from (a) onto the V₊ versus V₋ plane: choosing the pulse heights of the 50% duty-cycle train according to the coordinates of any point in the green (red) region, the TA-SE features a single GAS equilibrium (two LAS equilibria) for r = 1. The black cross marker (black plus sign) identifies the input parameter pair (V₋, V₊) inducing the particular monostable (bistable) oscillatory response illustrated in Fig. 6 (Fig. 7).]
Bistability
Setting the RESET V₋ and SET V₊ pulse heights to −0.6 V and +0.54 V, respectively, which identifies a point, indicated by a black plus sign, lying within the red region in the map of Fig. 5b, the TA-SDR analysis predicts the coexistence of two LAS steady-state oscillatory solutions for the memory state x of the periodically-excited ReRAM cell, as may be inferred from Fig. 7a, revealing the existence of a triplet of crossings between the loci of the moduli of the SET and RESET TA-SE components. The abscissa of each of the two outer intersections, x̄_eq,1 = 0.106 and x̄_eq,3 = 0.370 (of the inner intersection, x̄_eq,2 = 0.237), denotes a LAS (an unstable) equilibrium for Eq. (10).
The right vertical black dashed line in Fig. 5a intersects the bottom and top violet domains in the green-filled points (V₋, V₊, x̄_eq,1) = (−0.6 V, +0.54 V, 0.106) and (V₋, V₊, x̄_eq,3) = (−0.6 V, +0.54 V, 0.370), respectively, and the cyan domain in the red-filled point (V₋, V₊, x̄_eq,2) = (−0.6 V, +0.54 V, 0.237).
[Fig. 6 caption: (a) TA-SDR of the ReRAM cell under the application of a two-pulse-per-cycle pulse train voltage stimulus v_S, with the SET V₊ and RESET V₋ pulse heights set in turn to +0.46 V and −0.4 V, and for r = 1, irrespective of the choice of its SET τ₊ and RESET τ₋ pulse widths; scaling the widths of the two pulses per cycle by the same factor does not affect the TA-SDR prediction. The only GAS equilibrium x̄_eq of the TA-SE lies at 0.308, the abscissa of the black-filled circle (a marker indicating the zero of the RESET component at x = 0 is omitted from the graph to avoid clutter). (b) SCPCM of the ReRAM cell subject to a particular pulse train voltage stimulus v_S from the class considered in (a) (refer to the blue signal of period T = τ₊ + τ₋ = 2 µs in plot (d)); the Poincaré map from which it is extracted features a GAS fixed point x* (black-filled circle). Differently from the TA-SDR, scaling the widths of the two pulses per cycle by the same factor may affect the SCPCM. (c) Brown (green) trace: progressive approach of the solution x of the Strachan DAE set, when v follows the particular excitation voltage signal v_S employed for the derivation of the SCPCM, from the initial condition x₀ = 0.15 (x₀ = 0.85) toward a unique steady-state oscillation. (d) Green trace: steady-state time series x_ss of the memristor state x, extracted from the solution of the same colour in plot (c); horizontal lines mark the locations of the map fixed point x*, of the TA-SE equilibrium x̄_eq, and of the time average x̄_ss of the steady-state time series. As the RESET pulse follows the SET pulse over each cycle of the input train, x_ss attains its minimum value at the end of any period; therefore x* directly reveals the minimum of x_ss across one input cycle.]
[Fig. 7 caption, fragment: in each of the two cases the choice of the initial condition ensures that no transients appear in the device response. The time average x̄₁ (x̄₃) of the solution x₁ (x₃), as well as the corresponding LAS TA-SE equilibrium x̄_eq,1 (x̄_eq,3) and LAS map fixed point x*₁ (x*₃), are also marked in plot (e) ((f)).]
Setting the RESET τ₋ and SET τ₊ pulse widths to a relatively small value, specifically 40 ps, as shown in plot (b) of Fig. 7, visualising the time waveform of the resulting train voltage stimulus v_S, allows the change in the memory state over each cycle to be limited, which endows the TA-SDR graphic tool with predictive capability. In fact, as may be inferred from plot (c) of the same figure, the SCPCM of the ReRAM cell, subject to the stimulus from plot (b), validates the conclusions drawn through the analysis of the TA-SE (10).
The cyan (violet) trace in Fig. 7d depicts the transient behaviour of a solution of the ODE (1) as it approaches the LAS oscillatory waveform revolving approximately around the leftmost (rightmost) TA-SE equilibrium x̄_eq,1 (x̄_eq,3). Due to the slow/fast dynamical effects emerging in the nanodevice, it takes a rather long (rather short) time for the first (second) solution to attain the steady state. However, an ad hoc choice of the initial condition may allow the asymptotic behaviour of the state of the periodically-driven ReRAM cell to be retrieved without the need to wait for transients to vanish. In fact, plot (e) ((f)) of the same figure illustrates the transient-free solution x₁ (x₃) of the ODE (1), initiated from the leftmost (rightmost) map fixed point x*₁ (x*₃), corresponding to the minimum value the state assumes over each cycle, together with its mean value x̄₁ (x̄₃) and the respective approximation x̄_eq,1 (x̄_eq,3).
On the crossings between one scaled SET SDR and one scaled RESET SDR
In general, a single gaussian bell-shaped SET SDR may cross a single RESET SDR nowhere, which is of no practical interest, as elucidated in section "Numerical investigation of the ReRAM response to the basic pulse train stimulus"; in one point, endowing the resulting TA-SE with a GAS equilibrium x̄_eq; or in three locations, specifically x̄_eq,1, x̄_eq,2, and x̄_eq,3, the outer of which denote LAS equilibria for the corresponding TA-SE. For a fixed choice of the negative pulse height V₋, this depends upon the amplitude V₊ of the positive pulse as well as upon the SET-to-RESET pulse width ratio r, as may be inferred from the coloured map shown in Fig. 8a, which was derived by means of a numerical procedure for V₋ = −0.5 V, and depicts through a white, a green, and a red hue the regions of the r versus V₊ plane where, according to the TA-SDR analysis, the ReRAM cell is expected to admit no, a monostable, and a bistable oscillatory behaviour at steady state, respectively. Fig. 8b (Fig. 8c) illustrates the loci of the SET and RESET TA-SE components for a choice of the input parameter pair (V₊, r), specifically (+0.50 V, 1 × 10⁸) ((+0.75 V, 1 × 10⁻³⁰)) (see the black cross marker (black plus sign) within the green (red) domain in plot (a) of the same figure), which is expected to trigger a monostable (bistable) oscillatory response in the ReRAM cell when V₋ is fixed to −0.5 V.
An analytical methodology to operate the pulse-driven ReRAM cell as a multimodal device with initial condition-dependent oscillatory behaviour
Although an in-depth numerical investigation may allow the exploration of the response of the ReRAM cell to a rectangular pulse train stimulus, the availability of an analytical strategy to craft the excitation signal so as to endow the memory state of the ReRAM cell with a prescribed number of steady-state oscillatory solutions, revolving around predefined levels, is of greater interest for circuit designers. In order to address this point, this section first presents a thorough analytical investigation of the Strachan model, and then employs its findings to propose a systematic approach to engineer multistability in the oscillatory response of the ReRAM cell to a generalised pulse train voltage stimulus from the class defined in the section to follow.
Adaptation of the TA-SE to a generalised pulse train stimulus
Here the periodic voltage source v_S in the test circuit of Fig. 1a is assumed to emit a generalised pulse train from the class illustrated in Fig. 1c.
In each cycle the generalised train is composed of a tunable number P ∈ ℕ_{>0} of positive SET pulses followed by a single RESET pulse. Let the ith SET pulse feature a height V₊,ᵢ and a width τ₊,ᵢ, with i ∈ {1, 2, …, P}. The height and width of the RESET pulse are indicated as V₋ and τ₋, respectively. The input period is thus computable as T = τ₊,₁ + τ₊,₂ + … + τ₊,P + τ₋. Under this hypothesis, Eq. (8) may be expanded as

dx̄/dt = (1/T) [ Σᵢ₌₁ᴾ ∫ᵢ g_SET(x, V₊,ᵢ) dτ + ∫_RESET g_RESET(x, V₋) dτ ],  (13)

where the ith integral runs over the τ₊,ᵢ-long application of the ith SET pulse and the last one over the τ₋-long RESET phase. Let us further assume each pulse in the train of Fig. 1c induces a negligible change in the device memory state. This allows the state x appearing in each integrand function of Eq. (13) to be approximated with its time average x̄, allowing the derivation of the TA-SE of the ReRAM cell subject to the generalised train voltage stimulus. In its approximate formula, still provided by Eq. (10), the RESET component keeps the expression reported in (12), while the SET component reads as

ẋ̄|SET = Σᵢ₌₁ᴾ τ̄₊,ᵢ · g_SET(x̄, V₊,ᵢ),  (14)

with τ̄₊,ᵢ ≜ τ₊,ᵢ/T.
Extraction of key geometrical features from a gaussian bell-shaped SET state evolution function
The proposed strategy for endowing the oscillatory response of the ReRAM cell to a generalised pulse train with multistability envisages an ad hoc choice of the stimulus parameters (V₊,₁, τ₊,₁, V₊,₂, τ₊,₂, …, V₊,P, τ₊,P, V₋, τ₋) of the P + 1 pulses per cycle, for P ∈ ℕ_{>0}, so as to shape the locus of the SET TA-SE component ẋ̄|SET in such a way as to let it intersect the locus of the RESET TA-SE component ẋ̄|RESET, while keeping above (below) it to the left (right) of the crossing, in as many locations as specified in the design requirements. In fact, it is fundamental to devise an ad hoc linear combination of scaled gaussian bell-shaped SET state evolution functions for massaging the SET TA-SE component according to the design specifications. The derivation of a few key geometrical properties of a generic gaussian bell-shaped SET SDR, i.e. the g_SET(x, v) versus x locus resulting from assigning an arbitrary positive DC value V₊ to the voltage v (recall the graphs along the right column of Fig. 4), is instrumental to accomplishing this goal.
State value at the peak of a SET SDR
This section derives an exact closed-form expression for the state value at which a generic gaussian bell-shaped SET SDR attains its peak level. Employing Eqs. (1), (3), and (4), the rate of change ẋ of the memory state x under the application of a positive bias voltage V₊ across the device may be cast as Eq. (15), with G(x, V₊) expressing the dependence of the device conductance upon its state under the prescribed positive DC stimulus, according to Eq. (6). The abscissa of the maximum of the ẋ versus x locus for v = V₊ may be analytically computed by employing Eq. (15) and finding the state value at which ∂g_SET(x, V₊)/∂x vanishes. As the exponential function on the right-hand side of Eq. (15) is a monotonically increasing function of its argument, it is sufficient to find the state value at which ∂α(x, V₊)/∂x vanishes, with α(x, V₊) as defined in Eq. (16). After some algebraic calculation, the formula for x_max(V₊) is found to take the closed form (17), in which an auxiliary function γ(V₊) appears, defined in Eq. (18). Figure 9a shows the x_max versus V₊ locus extracted from formula (17) (red trace) together with its numerical approximation (blue trace).
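Since the closed-form expressions (15)-(18) are not reproduced here, the following sketch finds the peak abscissa x_max(V₊) numerically for the stand-in Gaussian-bell SET branch used in the earlier sketches; with the true fitted g_SET the same argmax procedure would apply, and the peak location would shift with V₊.

```python
# Numerical counterpart of x_max(V+): locate the peak of g_set(., V+) on a
# dense state grid. With the fixed-centre stand-in bell this trivially
# returns the centre; with the fitted Strachan g_SET it would move with V+.
def x_max_numeric(v_pos, n=20001):
    grid = np.linspace(0.0, 1.0, n)
    return grid[np.argmax(g_set(grid, v_pos))]

for v in (0.3, 0.5, 0.7):
    print(f"V+ = {v:.1f} V -> x_max ≈ {x_max_numeric(v):.3f}")
```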
Positive DC voltage for programming the abscissa of the peak of a SET SDR
This section derives an approximate analytical formula for the positive bias voltage V₊ to be assigned to the voltage v in order for the respective ẋ = g_SET(x, v) versus x locus to feature its peak level at a preliminarily specified state value x_max. The function in Eq. (17) may not be inverted analytically, which explains the search for a suitable approximate formula. The solid blue trace in Fig. 9b shows the dependence of the function γ(V₊) upon V₊, as it descends from the exact closed-form expression (18). Let us approximate the expression for γ(V₊) with a quadratic polynomial γ̃(V₊, V₊,₁, V₊,₂) of the form (19), whose coefficients a₀(V₊,₁, V₊,₂) and a₁(V₊,₁, V₊,₂) are functions of two voltage parameters, specifically V₊,₁ and V₊,₂, allowed to range between 0 V and 1 V, and shortly subject to an optimisation procedure. Importantly, a₀ and a₁ are strictly positive- and negative-valued, respectively, since, as may be evinced by inspecting the blue trace in Fig. 9b, the original function γ(V₊) features a positive polarity for V₊ = 0 V and admits a downward concavity. Replacing γ(V₊) with γ̃(V₊, V₊,₁, V₊,₂) in formula (17) delivers an approximate analytical formula x̃_max for the abscissa of the peak of the gaussian bell-shaped SET SDR, reported as Eq. (20). Inserting the second-order polynomial (19) in place of γ̃(V₊, V₊,₁, V₊,₂) into this equation yields a biquadratic equation in V₊.
[Fig. 9 caption: (a) x_max versus V₊: the exact analytical solution descending from formula (17) is illustrated in red; the numerical solution, depicted in blue, saturates abruptly to the unitary value at the first positive DC voltage V₊, specifically 0.957 V, where x_max exceeds the upper bound x_U of the state existence domain D, keeping unchanged for any larger V₊ value. (b) Blue trace: graph of γ as a function of V₊, according to the exact analytical formula (18); red trace: approximation of the γ versus V₊ locus via the analytical function γ̃(V₊, V₊,₁, V₊,₂) from Eq. (19) for (V₊,₁, V₊,₂) = (0.662, 0.923) V. (c) Positive value V₊ to be assigned to the DC voltage V in order for the abscissa of the peak of the resulting SET SDR to lie at a pre-specified location x_max. The blue curve shows the V₊ versus x_max locus determined numerically from the blue-coloured numerical solution in (a) by exchanging the data series reported along the horizontal and vertical axes; at x_max = 1 the blue trace abruptly turns into a vertical segment stretching from V₊ = 0.957 V to V₊ = 1 V. The red curve is the Ṽ₊ versus x_max locus, extracted from the analytical formula (22), proposed to approximate the inverse of the function (17), for V₊,₁ = V₊,₁^(opt) and V₊,₂ = V₊,₂^(opt). (d) Blue trace: graphical illustration of the exact analytical formula (17) for x_max; red trace: x̃_max versus V₊ locus, obtained from the approximate closed-form expression (20). (e) Peak value g_SET,max(V₊) of a SET SDR as a function of the positive DC voltage V₊ across the ReRAM cell: the red trace shows the exact analytical solution derived from the closed-form expression (27), while the blue trace depicts its numerical counterpart. (f) Impact of the positive DC voltage V₊ on the width w_k(V₊) of the respective SET SDR, measured as the distance between the state values x₊,k and x₋,k at which g_SET(x, V₊) appears scaled down by a factor k relative to its peak value g_SET(x_max, V₊), for each k value from the set {1.5, 2, 3}. The exact analytical solution descending from formula (35) (the numerical solution) is illustrated through a dashed (solid) trace with red (blue), magenta (black), and green (brown) hue for the first, second, and third k value from the triplet. When 1.5, 2, and 3 is assigned to k, the numerical solution deviates from the corresponding analytical one as soon as V descends below +0.184, +0.211, +0.237 V (increases above +0.937, +0.932, and +0.925 V), since then x₋ (x₊) descends below (rises above) the lower (upper) bound x_L (x_U) of the state existence domain D.]
The biquadratic equation can be solved for V₊, resulting in an approximate analytical formula Ṽ₊(x_max) featuring the form reported in Eq. (22), in which the positive (negative) sign in front of the first (second) square root descends from the polarity of V₊ (from the monotonic increase of V₊ with x_max, as inferable from the graphs along the right column of Fig. 4), and where the functions a₀(V₊,₁, V₊,₂) and a₁(V₊,₁, V₊,₂) are in turn defined via two further closed-form expressions. Let us define the error e between x_max and its approximation x̃_max(Ṽ₊(x_max), V₊,₁, V₊,₂), in which the second addend calls for the use of the approximate formula Ṽ₊(x_max, V₊,₁, V₊,₂) for V₊(x_max), as given in Eq. (22), within the closed-form expression for x_max(V₊) reported in Eq. (17), with γ(·) denoting the function in (18). For each voltage parameter pair (V₊,₁, V₊,₂), we first computed the maximum of the squared error e² as x_max was swept across D, and then found the minimum of the resulting list of numbers. According to this optimisation procedure, assuming V₊,₂ > V₊,₁, the best voltage parameter pair (V₊,₁^(opt), V₊,₂^(opt)) was found to equal (0.662, 0.923) V, which delivered the lowest possible maximum squared error, amounting to 5.635 × 10⁻⁸ (see the rightmost red-filled circle in Fig. 10). The complementary hypothesis V₊,₂ < V₊,₁ results in an optimal voltage parameter pair (V₊,₁^(opt), V₊,₂^(opt)) = (0.923, 0.662) V, whereby once again the maximum squared error descends to its global minimum (see the leftmost red-filled circle in Fig. 10). Using these optimal values for V₊,₁ and V₊,₂, the formulas for γ̃(V₊, V₊,₁, V₊,₂) and for Ṽ₊(x_max, V₊,₁, V₊,₂) are respectively plotted as red traces in Fig. 9b and c.
In the latter plot the blue curve illustrates the numerical approximation of the inverse of the function in Eq. (17). Figure 9d depicts the approximate analytical formula (20), for V₊,₁ = V₊,₁^(opt) and V₊,₂ = V₊,₂^(opt) (red trace), together with the exact analytical closed-form expression for x_max from (17) (blue trace).
[Fig. 10 caption: surface of the maximum squared error max over x_max ∈ D of e²(x_max, V₊,₁, V₊,₂) as a function of the voltage parameters V₊,₁ and V₊,₂ under optimisation. At each of the points (V₊,₁, V₊,₂) = (0.662, 0.923) V and (V₊,₁, V₊,₂) = (0.923, 0.662) V, marked as red circles and symmetrically located relative to the plane V₊,₂ = V₊,₁, the surface assumes the minimum possible value, specifically 5.635 × 10⁻⁸. Without loss of generality, in the remainder of this paper V₊,₂ is assumed to be larger than V₊,₁; as a result the optimal parameter pair is chosen as (V₊,₁^(opt), V₊,₂^(opt)) = (0.662, 0.923) V.]
Peak value of a SET SDR
The maximum value attained by the ẋ versus x locus for a given positive bias level V₊ assigned to the voltage v may be easily derived by inserting the analytical closed-form expression (17), with γ(V₊) expressed via Eq. (18), in place of x_max on the right-hand side of (15), which employs (16) for α(x, V₊). Algebraic manipulations allow the maximum g_SET,max(V₊) of the SET state evolution function g_SET(x, V₊) to be expressed, for a given choice of V₊, in the closed form (27), with a companion definition (28). Figure 9e shows the g_SET,max(V₊) versus V₊ locus, as extracted from the exact analytical formula (27) (red trace) and by means of a numerical procedure (blue trace).
Width of the Gaussian bell-shaped SET state evolution function
For any positive DC value V₊ assigned to the voltage v, the SET state evolution function g_SET(x, V₊) features a gaussian shape on the ẋ versus x plane. Let x₋(V₊) and x₊(V₊) denote two state values lying to the left and to the right of the abscissa of the maximum x_max(V₊) of the gaussian function, respectively. Assume x₋,k(V₊) and x₊,k(V₊) to hold the same distance from x_max(V₊), and the common value of the SET state evolution function at each of these points to appear scaled down by a factor k relative to the maximum level g_SET,max(V₊), a condition expressed in mathematical terms as

g_SET(x₋,k, V₊) = g_SET(x₊,k, V₊) = g_SET,max(V₊)/k.  (29)

For a given choice of k ∈ ℝ, let us now define the kth-scale width w_k(V₊) of the gaussian function as the distance between x₊,k(V₊) and x₋,k(V₊), i.e.

w_k(V₊) = x₊,k(V₊) − x₋,k(V₊).  (30)

Using Eqs. (15) and (27), the condition (29) at either state value x∓,k(V₊) ∈ {x₋,k(V₊), x₊,k(V₊)} can be recast as Eq. (31). Employing Eqs. (16) and (28), introducing a suitable auxiliary definition (32), and recalling the earlier specified formula (18) for γ(V₊), the constraint (31) reduces to a second-order polynomial (33), which can be easily solved for x∓,k(V₊), yielding (34). Inserting (34) into Eq. (30), the kth-scale width w_k(V₊) of the gaussian function g_SET(x, V₊) is then computable via (35), which, as indicated, is surprisingly found to be independent of V₊.
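The independence of w_k from V₊ can be checked numerically; the sketch below does so for the stand-in bell of the earlier sketches (where the property holds by construction, since the bell width is fixed) and illustrates the general recipe that would apply to the fitted g_SET: find the two state values at which the function drops to 1/k of its peak and take their distance.

```python
# Numerical k-th-scale width: distance between the two states at which
# g_set(., V+) falls to 1/k of its peak value.
def width_k(v_pos, k, n=200001):
    grid = np.linspace(0.0, 1.0, n)
    vals = g_set(grid, v_pos)
    above = grid[vals >= vals.max() / k]
    return above[-1] - above[0]

for v in (0.4, 0.6, 0.8):
    print([round(width_k(v, k), 4) for k in (1.5, 2.0, 3.0)])
# For a Gaussian bell of standard deviation s, w_k = 2*s*sqrt(2*ln(k)),
# independently of V+.
```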
Fig. 9f depicts the kth-scale width w_k(V₊) of the gaussian bell-shaped SET state evolution function g_SET(x, V₊) against V₊ for the first, second, and third k value from the set {1.5, 2, 3}, as computed through the exact analytical formula (35), delivering in turn the constant values 0.076, 0.100, and 0.126 (red, magenta, and green dashed traces, respectively), as well as by numerical means (blue, black, and brown solid traces, respectively).
A systematic methodology to craft the pulse stimulus for enabling the ReRAM cell to support multiple oscillations around prescribed resistance levels
Having acquired key geometrical properties of a gaussian bell-shaped SET state evolution function g_SET(x, V), and particularly the positive value V₊ to be assigned to the DC voltage V to program its peak level at a prescribed state value x_max, as reported in section "Positive DC voltage for programming the abscissa of the peak of a SET SDR", and its kth-scale width w_k, as described in section "Width of the Gaussian bell-shaped SET state evolution function", we are now in a position to present a systematic technique for choosing the 2·P + 1 parameters of a generalised pulse train from the class illustrated in Fig. 1c, specifically V₊,₁, V₊,₂, …, V₊,P, τ₊,₁, τ₊,₂, …, τ₊,P, and τ₋, with V₊,₁ < V₊,₂ < … < V₊,P, so as to induce the coexistence of a predefined number of asymptotic oscillatory solutions for the memory state of the ReRAM cell around prescribed levels (P ∈ ℕ_{>0}), for a given RESET pulse height V₋ chosen beforehand.
Our methodology envisages endowing the TA-SE (10) with as many stable equilibria as the number P of positive pulses in each cycle of the pulse train voltage signal falling across the ReRAM cell. In particular, the proposed systematic procedure is calibrated so as to ensure that the graph of the leftmost scaled gaussian bell-shaped state evolution function τ̄₊,₁ · g_SET(x̄, V₊,₁), corresponding to the first positive input pulse, which features the smallest height V₊,₁, creates one and only one stable TA-SE equilibrium x̄_eq,1 with the graph of the modulus of the scaled RESET state evolution function τ̄₋ · |g_RESET(x̄, V₋)|, appearing on the right-hand side of Eq. (12). It also guarantees that the graph of the scaled gaussian bell-shaped state evolution function τ̄₊,ⱼ · g_SET(x̄, V₊,ⱼ), corresponding to the jth positive input pulse, which exhibits height V₊,ⱼ, forms a pair of TA-SE equilibria, specifically x̄_eq,2·j−2 and x̄_eq,2·j−1, featuring an unstable and a stable nature, respectively, with the graph of τ̄₋ · |g_RESET(x̄, V₋)|, for j ∈ {2, …, P}. The target of the methodology is to massage the aforementioned parameters of the generalised pulse train so as to endow the resulting TA-SE (10) with 2·P − 1 equilibria, alternately stable and unstable along the state axis. According to the TA-SDR analysis, the stable ones, endowed with odd labels and referred to as x̄_eq,1, x̄_eq,3, …, x̄_eq,2·P−1, are expected to denote the mean values of the P admissible stable oscillatory solutions x_1,ss(t), x_3,ss(t), …, and x_{2·P−1},ss(t) for the memory state x of the periodically-forced ReRAM cell.
In order for the ith scaled SET state evolution function, with i ∈ {1, 2, …, P}, to dominate over the other P − 1 terms in the sum composing the SET TA-SE component ẋ̄|SET from Eq. (14) locally, around the respective maximum (a necessary critical measure to ensure that the existence of TA-SE equilibria in the region around x_max,i is determined mainly by the interaction between the τ̄₊,ᵢ · g_SET(x̄, V₊,ᵢ) and the τ̄₋ · |g_RESET(x̄, V₋)| versus x̄ loci), the P stable equilibria to be provided as a design specification at the input of the systematic procedure need to hold a suitable distance, of at least one kth-scale width w_k, one from any adjacent other. Moreover, for each i value from the set {1, 2, …, P}, the abscissa x_max,i of the maximum of the gaussian SET state evolution function g_SET(x, V), sampled at the DC voltage V₊,ᵢ, which corresponds to the height of the ith positive pulse within each period of the train stimulus, is placed to the left of the prescribed (2·i − 1)th stable TA-SE equilibrium x̄_eq,2·i−1, at an appropriate distance, amounting to one quarter of the kth-scale bell width w_k, from its location. This step ensures that, for each i ∈ {1, 2, …, P}, the SET (RESET) forces win over the RESET (SET) ones to the left (right) of the (2·i − 1)th TA-SE equilibrium x̄_eq,2·i−1, which, as a result, acquires a stable nature, as explained in section "The time average state dynamic route technique". Having computed the state value at which the peak of the ith gaussian bell should appear, via x_max,i = x̄_eq,2·i−1 − w_k/4, for i ∈ {1, 2, …, P}, the approximate closed-form expression (22), with V₊,₁ = V₊,₁^(opt) and V₊,₂ = V₊,₂^(opt), is then employed to compute the positive value V₊,ᵢ to be assigned to the DC voltage V in the expression for the SET state evolution function g_SET(x, V) appearing in the ith addend of the sum on the right-hand side of Eq. (14), i.e. the height of the ith positive pulse over each cycle of the train excitation signal v_S. P algebraic equations are then written down to enforce the TA-SE (10) to feature equilibria at x̄_eq,1, x̄_eq,3, …, and x̄_eq,2·P−1. More specifically, these constraints, imposing an equality between the moduli of the SET ẋ̄|SET and RESET ẋ̄|RESET TA-SE components at each of the stable equilibria which Eq. (10) is expected to admit, read as

Σⱼ₌₁ᴾ r₊,ⱼ · g_SET(x̄_eq,2·i−1, V₊,ⱼ) = |g_RESET(x̄_eq,2·i−1, V₋)|,  i ∈ {1, …, P},  (36)-(38)

where r₊,₁ ≜ τ₊,₁/τ₋, r₊,₂ ≜ τ₊,₂/τ₋, …, and r₊,P ≜ τ₊,P/τ₋ express the first, second, …, and Pth SET-to-RESET pulse width ratios, respectively. This set of P equations is then solved for the unknowns r₊,₁, r₊,₂, …, and r₊,P. The last unknown parameter, specifically the RESET pulse width τ₋, which automatically fixes all the SET pulse widths τ₊,₁, τ₊,₂, …, and τ₊,P, is finally chosen small enough to guarantee the accuracy of the predictions drawn from the TA-SDR analysis, as verifiable through the investigation of the SCPCM of the periodically-forced memristive system as well as via numerical simulations.
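The whole procedure can be condensed into a short sketch: place each bell peak a quarter-width to the left of its prescribed stable equilibrium, map peak positions to pulse heights, and solve the linear system (36)-(38) for the pulse-width ratios. The sketch below uses stand-in functions (a bell whose centre is assumed to coincide with V₊, standing in for the inverse formula (22), and a monotone RESET magnitude), so the resulting numbers are illustrative only.

```python
import numpy as np

S = 0.04                                   # stand-in bell std (assumption)
def g_set2(x, v_pos):
    """Stand-in SET branch whose peak centre shifts with V+ (centre = V+)."""
    return v_pos * np.exp(-((x - v_pos) ** 2) / (2 * S ** 2))

def g_reset_mag(x, v_neg=-0.5):
    """Stand-in modulus of the RESET branch, growing monotonically with x."""
    return abs(v_neg) * x

def design_pulse_train(stable_eqs, k=3.0):
    """Choose SET pulse heights and SET-to-RESET width ratios so that the
    TA-SE has a stable equilibrium at each prescribed level (Eqs. (36)-(38))."""
    w_k = 2.0 * S * np.sqrt(2.0 * np.log(k))          # k-th-scale bell width
    peaks = [xe - w_k / 4.0 for xe in stable_eqs]     # bell peaks left of targets
    v_plus = peaks                                    # centre-to-V+ map (stand-in)
    # Linear system: sum_j r_j * g_set2(xe_i, V+_j) = |g_reset(xe_i)| for all i.
    A = np.array([[g_set2(xe, vp) for vp in v_plus] for xe in stable_eqs])
    b = np.array([g_reset_mag(xe) for xe in stable_eqs])
    return v_plus, np.linalg.solve(A, b)

v_plus, ratios = design_pulse_train([0.28, 0.45, 0.62])
print("SET pulse heights:", np.round(v_plus, 3))
print("SET-to-RESET width ratios:", ratios)
```

In the paper's actual procedure the heights come from the approximate inverse formula (22) with the optimal pair (V₊,₁^(opt), V₊,₂^(opt)), and the widths follow from the ratios once τ₋ is fixed small enough for the TA-SE approximation to hold.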
Remark 2 Even though an adequate distance between adjacent TA-SE equilibria is preliminarily observed in their prescription, as specified above, for an arbitrary choice of the negative input pulse height V₋ the leftmost scaled gaussian bell-shaped τ̄₊,₁ · g_SET(x̄, V₊,₁) versus x̄ locus, resulting from the theoretic methodology presented in this section, may cross the graph of τ̄₋ · |g_RESET(x̄, V₋)| as a function of x̄ a couple of additional times, for some choices of the height V₊,₁ of the first positive pulse and of the first SET-to-RESET pulse width ratio r₊,₁. However, ad hoc control measures can be set in place to ensure that the interaction between the first scaled SET SDR and the modulus of the only scaled RESET SDR forms one and only one GAS equilibrium at the prescribed location x̄_eq,1 for Eq. (10). For example, with reference to the numerical study discussed in section "On the crossings between one scaled SET SDR and one scaled RESET SDR", and referring to the simple case where a two-pulse-per-cycle pulse train stimulus with V₋ = −0.5 V falls across the ReRAM cell, the resulting TA-SE admits one and only one GAS equilibrium x̄_eq, irrespective of the pulse width ratio r, if V₊ is set to a value lower than the abscissa V̂₊ = 0.494 V of the cusp in Fig. 8a. The state value x_max at which the g_SET(x, V₊) versus x locus features a peak for V₊ = V̂₊ is 0.266. This directly sets the maximum value which may be prescribed for the TA-SE equilibrium x̄_eq, in order to ensure its GAS property irrespective of r, to x_max + w_k/4, which equals 0.285, 0.291, and 0.297 for the first, second, and third k value from the set {1.5, 2, 3}.
[Fig. 11 caption: illustrations elucidating how to choose the design parameter k for a case study where the ReRAM cell is requested to act as a bistable oscillator under the application of a three-pulse-per-cycle pulse train voltage stimulus between its terminals. Let the ith positive pulse in the input sequence over each cycle have amplitude V₊,ᵢ and width τ₊,ᵢ, for i ∈ {1, 2}. The negative pulse, following the two positive ones in each input cycle, is assumed to feature a fixed amplitude V₋ of −0.5 V, while its width τ₋ is to be determined. It is further required that the left LAS TA-SE equilibrium x̄_eq,1 be located at 0.280, while the right LAS TA-SE equilibrium x̄_eq,3 should be apart from the left one by one bell width w_k; when k is set to 1.5, 2, and 3, x̄_eq,3 is expected to lie at 0.356, 0.380, and 0.406, respectively. (a) For k = 1.5 the design methodology first employs the approximate analytical formula (22), with x_max set to x_max,1 = x̄_eq,1 − w₁.₅/4 (x_max,2 = x̄_eq,3 − w₁.₅/4), and V₊,₁ = V₊,₁^(opt), V₊,₂ = V₊,₂^(opt), to fix the amplitude V₊,₁ (V₊,₂) of the first (second) SET pulse to 0.483 V (0.550 V); it then specifies the values 0.815 and 2.687 × 10⁻⁵ for r₊,₁ and r₊,₂, respectively, by solving the linear system of equations (36)-(37). Regardless of the choice of the RESET pulse width τ₋, which automatically fixes the SET pulse widths τ₊,₁ and τ₊,₂, the TA-SE is found to admit the triplet of equilibria (x̄_eq,1, x̄_eq,2, x̄_eq,3) = (0.132, 0.28, 0.356); clearly, the design specifications are not satisfied here. (b) For k = 2, applying the proposed methodology delivers first the SET pulse heights V₊,₁ = 0.478 V and V₊,₂ = 0.564 V, and then the SET-to-RESET pulse width ratios r₊,₁ = 10.866 and r₊,₂ = 8.974 × 10⁻⁷; the TA-SE equilibria are then found to lie at x̄_eq,1 = 0.251, x̄_eq,2 = 0.28, and x̄_eq,3 = 0.38. Also in this case the systematic procedure introduced in this paper fails to fulfil the design tasks. (c) Recurring to the proposed design methodology with k = 3, the pulse train voltage stimulus is crafted as specified by the parameters V₊,₁ = 0.472 V, V₊,₂ = 0.580 V, r₊,₁ = 54.759, and r₊,₂ = 1.715 × 10⁻⁸; the TA-SE here admits the equilibria x̄_eq,1 = 0.280, x̄_eq,2 = 0.309, and x̄_eq,3 = 0.406. Therefore, choosing k = 3, the combination of the two gaussian bells and the red curve, increasing monotonically with the time average state, endows the TA-SE with two LAS equilibria at the desired locations, meeting the design requirements.]
54.759, and r +,2 = 1.715 × 10 −8 .The TA-SE admits here the equilibria xeq,1 = 0.280 , xeq,2 = 0.309 , and xeq,3 = 0.406 .Therefore, choosing k = 3 , the combination between the two gaussian bells and the red curve, increasing monotonically with the time average state, allows to endow the TA-SE with two LAS equilibria at the desired locations, meeting the design requirements.Graphs revealing the instrumental role of the TA-SDR analysis tool to guide the circuit designer toward an appropriate choice for the parameter k for a case study, where a pulse train voltage stimulus, composed of one negative and three positive pulses per cycle, is expected to induce tristability in the oscillatory response of the ReRAM cell.Let V +,i ( τ +,i ) indicate the pulse amplitude (width) of the i th SET pulse, for i ∈ {1, 2, 3} .The pulse amplitude V − of the RESET pulse is fixed to −0.5 V , while its width τ − is an unknown variable.The leftmost LAS TA-SE equilibrium xeq,1 should lie at 0.275.The jth equilibrium xeq,j should appear to the right of the (j − 1) th equilibrium xeq,j−1 by as much as one bell width w k , for j ∈ {2, 3} .For k equal to 1.5, 2, and 3, the inner (rightmost) LAS TA-SE equilibrium xeq,3 ( xeq,5 ) is expected to lie at 0.351 (0.428), 0.375 (0.475), and 0.401 (0.527), respectively.(a) Choosing k = 1.5 , the proposed systematic design procedure first specifies the values 0.478 V , 0.546 V , and 0.606 V for the SET pulse amplitudes V +,1 , V +,2 , and V +,3 , respectively, via the approximate analytical formula (22), for V +,1 = V (opt) +,1 , and V +,2 = V (opt) +,2 , and fixing x max in turn to x max,1 = xeq,1 − w 1.5 /4 , x max,2 = xeq,3 − w 1.5 /4 , and x max,3 = xeq,5 − w 1.5 /4 .It then solves the system of linear Eqs.(36)-(38) with P = 3 for r +,1 , r +,2 , and r +,3 , in turn found to equal 13.228, 2.375 × 10 −5 , and 5.399 × 10 −12 .Irrespective of the choice for τ − , which directly sets values for τ +,i , with i ∈ {1, 2, 3} , the intersections between the loci of the moduli of the SET and RESET TA-SE components, identifying the equilibria xeq,1 , xeq,2 , and xeq,3 , the outer (the inner) of which are LAS (is unstable), for Eq. ( 10), are found to lie at 0.275, 0.351, and 0.428, respectively.As the TA-SDR analysis predicts bistability in the memristor steady-state oscillatory behaviour, assigning 1.5 to k is not an appropriate design choice.(b) For k = 2 , out of the proposed design procedure, the input parameters V +,1 , V +,2 , V +,3 , r +,1 , r +,2 , and r +,3 , are respectively set to 0.473 V , 0.560 V , 0.636 V , 31.913, 1.578 × 10 −6 , and 2.016 × 10 −16 .Correspondingly, the TA-SE admits the five equilibria xeq,1 = 0.275 , xeq,2 = 0.319 , and xeq,3 = 0.370 , xeq,4 = 0.375 , and xeq,5 = 0.475 , of which those labelled with odd numbers are LAS.Here the systematic parameter tuning procedure meets the design specifications.However the robustness of the design is questionable, given the non-ideal proximity between the TA-SE equilibria xeq,3 and xeq,4 . (c) With k = 3 , the application of the design procedure allows to choose the input parameters V +,1 = 0.467 V , V +,2 = 0.576 V , V +,3 = 0.668 V , r +,1 = 1.115 × for V + = V+ is 0.266.This directly sets the maximum value, which may be prescribed for the TA-SE equilibrium xeq , so as to ensure its GAS property, irrespective of r, to x max + w k /4 , that equals 0.285, 0.291, and 0.297, for the first, second, and third k value from the set {1.5, 2, 3} .In principle, as is the case for the examples illustrated in Figs. 
13, 14, and 15, assuming a P-pulse-per-cycle pulse train voltage stimulus were let fall across the ReRAM cell, it is also possible to set the first TA-SE equilibrium xeq,1 to a value larger than this upper bound, but then, after solving the system of linear Eqs. ( 36)-(38), it would be necessary to check that the selection of values for the first pulse height and for the first SET-to-RESET pulse width ratio would fall in the monostability green region of the coloured r versus V + map, with r = r +,1 , under the specified value for V − .Finally, it is worth pointing out that a suitable change in the value, assigned to V − , may allow to move the abscissa V+ of the cusp, indicating the left bound of the red bistability domain, to the right, relative to its location along the horizontal axis in the coloured map of Fig. 8a.With reference to the proposed methodology, this would result in a corresponding increase in the maximum value, which may be prescribed for the stable equilibrium xeq,1 , that the leftmost scaled gaussian bell-shaped SET SDR would form with the graph of the modulus of the scaled RESET state evolution function over the time average state, irrespective of the first SET-to-RESET pulse width ratio r +,1 . Remark 3 The selection of the real-valued parameter k is a critical design choice.In order to gain insights into this important aspect, Figs.11 and 12 illustrate two examples, where the methodological approach, presented in this section, is applied for different k values in the attempt to endow the TA-SE with two or three prescribed equilibria, respectively.In each of the two figures, plots (a), (b), and (c) show the loci of the moduli of the SET and RESET components of the corresponding TA-SE for the first, second, and third k value in the set {1.5, 2, 3} , revealing how only assigning the largest value in this triplet to the parameter under discussion allows to satisfy the design specifications (see the respective captions for more detail). Importantly, under a proper selection for k, constraining the SET and RESET TA-SE components to comply with the set of P constraints (36)-( 38), together with the imposition of a minimum distance between adjacent stable equilibria, prescribed for the TA-SE, as well as with a sufficient leftward shift of each SET SDR relative to the respective stable TA-SE equilibrium, the scaled gaussian bells, resulting from the application of the proposed algorithm, gracefully pass over the locus of the modulus of the scaled RESET state evolution function versus the time average state in the regions of the respective peaks only, as may be inferred from either of Figs.11c and 12c, which refer to a particular case study for P = 2 and for P = 3 , respectively, and where, as a result, the blue- coloured single-valued curve, illustrating the TA-SE component, is found to oscillate around the graph of the modulus of the RESET TA-SE component as a function of the time average state, creating 2 • P − 1 equilibria, of which P stable, as prescribed, for the TA-SE.In each of the case studies, illustrating the application of the theory in section 6, keeping such a value for k, which implies a minimum distance between adjacent prescribed stable TA-SE equilibria of w 3 = 0.126 , and a spacing between each prescribed stable TA-SE equilibrium and the abscissa of the peak of the respective gaussian bell of w 3 /4 = 0.031 , proves to be a suitable choice to accomplish a robust design. 
Discussion

The first part of this section applies the rigorous system-theoretic methodology, presented in section "A systematic methodology to craft the pulse stimulus for enabling the ReRAM cell to support multiple oscillations around prescribed resistance levels", to the Strachan model 4 to determine the heights and widths of all the pulses appearing cyclically across the ReRAM cell, so as to endow it with three, four, or five coexisting oscillatory operating modes around prescribed resistance levels. The second part of this section shows an interesting potential application, where the local fading memory effects, emerging across the nonvolatile resistance switching memory under periodic pulse train stimulation, could be leveraged to counteract certain non-idealities which may be responsible for the corruption of the synaptic weights stored in a crossbar array.

Application of the theory to endow the ReRAM cell with three, four, or five oscillatory behaviours

The first, second, and third examples, illustrated in turn in Figs. 13, 14, and 15, result from the application of the theoretical method, presented in section "A systematic methodology to craft the pulse stimulus for enabling the ReRAM cell to support multiple oscillations around prescribed resistance levels", to the Strachan model 4 for the specification of suitable values for the 2·P − 1 tuneable parameters of a generalised pulse train voltage stimulus, belonging to the class visualised in Fig. 1c, namely V_+,1, V_+,2, ..., V_+,P, τ_+,1, τ_+,2, ..., τ_+,P, τ_−, with V_+,1 < V_+,2 < ... < V_+,P, when V_− is preliminarily set to −0.5 V, so as to induce the coexistence of P stable asymptotic oscillations with prescribed mean values x_eq,1, x_eq,3, ..., x_eq,2·P−1 in the memory state of the periodically-forced ReRAM cell, with P set to 3, 4, and 5, respectively.

Remark 4 The theoretical framework presented in this manuscript illustrates the support which nonlinear system theory can provide to experimenters and circuit design engineers. The experimental validation of the theory is the aim of our future research efforts. There are several challenges to tackle in order to achieve this goal.
1. Memristor devices available today can have limited endurance, and their electrical behaviour may be subject to subtle drifts under operation, requiring much care and numerous repetitions to acquire convincing results.
2. Intrinsic variability in memristors requires the collection of significant statistics on device-to-device and cycle-to-cycle variability effects for the provision of convincing experimental results.
3. The input pulse sequences required in our programming schemes are rather complex, calling for finely-programmable high-frequency pulse generators to support the experimental validation activities. This requires adapting existing measurement routines available in house, or acquiring new experimental setups.
In regard to the third challenge from the above list, it might finally turn out to be less problematic than it seems, as explained next. The application of the rigorous system-theoretic methodology presented in this section to the Strachan model results in the specification of input pulse widths which decrease at an exponential rate as their heights increase. While this issue does not undermine the significance of the theoretical work, which is applicable mutatis mutandis to any other memristor model, it originates here because the Strachan mathematical description 4 was not optimised for the regions of the state-voltage space where the ReRAM cell undergoes local fading memory effects, supporting multistable oscillatory operating modes. In fact, in these regions (refer to the order of magnitude of the bell peak value in any of plots (f), (h), and (l) of Fig. 4, extracted from the DRM upon assigning the first, second, and third positive value V_+ from the set {0.6 V, 0.8 V, 1.0 V} to the DC voltage V) the Strachan model may overestimate the speed of the oxygen vacancies as they move across the longitudinal extension of the nanodevice during a SET resistance switching process. This issue points to the necessity of retuning the Strachan model so as to reproduce more accurately the behaviour of the nanodevice in the domain of the state-voltage space where it is subject to local fading memory effects. Importantly, research investigations applying the proposed systematic methodology to a recent reformulation of the Strachan model 21, which employs ad hoc functions to limit to some extent the maximum admissible velocity attainable by the ions under positive voltages, and which was introduced to resolve some numerical issues the original DAE set may suffer from, resulted in a dramatic increase in the minimum pulse width, by several orders of magnitude relative to the case where no upper bound was enforced on the rate of change of the memory state, in various scenarios where the amplitudes assigned to the SET pulses were found to trigger local fading memory effects in the ReRAM cell. This provides proof-of-concept evidence that the application of our theory to a properly-optimised variant of the Strachan model might lead to the specification of widths and heights, for the pulses composing the train stimulus cyclically, that would be programmable in the control settings of existing physical AC voltage waveform generators.

(a) Decomposition of the TA-SDR into its SET (blue trace) and RESET (red trace) contributions, plotted together on the |ẋ| versus x plane to visualise each possible equilibrium x_eq of equation (10), where ẋ_SET = −ẋ_RESET, for a case study where the proposed methodology from section 5.3 set the values of the parameters V_+,1, V_+,2, V_+,3, r_+,1, r_+,2, and r_+,3 of a four-pulse-per-cycle pulse train voltage stimulus, with V_− preliminarily chosen as −0.5 V, to +0.490 V, +0.649 V, +0.778 V, 4.594, 1.489 × 10⁻¹⁸, and 1.361 × 10⁻⁴⁷, respectively, so as to endow the TA-SE with the three stable equilibria x_eq,1 = 0.3, x_eq,3 = 0.5, and x_eq,5 = 0.7, which in turn place the maxima of the first, second, and third Gaussian bells at x_max,1 = 0.269, x_max,2 = 0.469, and x_max,3 = 0.669. The TA-SE equilibria are found to lie at x_eq,1 = 0. …
of CMOS circuitry, promise to overcome the performance limitations of traditional technical systems, opening a wide spectrum of opportunities for electronics in the post-Moore era. Due to the strong nonlinearity characterising the operating principles of these nanodevices, recurring to powerful concepts from Nonlinear Circuit and System Theory 2 is a necessary step for drawing a full picture of their dynamics. In fact, the common approach of electrical engineers, linearising the model of a nonlinear device before commencing the investigation of its dynamics, is insufficient to explore their global behaviour. As an example of the significant impact that this theory may have on the progress of memristor research, this paper reveals how the application of some of its powerful techniques to a predictive model 4 of a Ta2O5−x Resistive Random Access Memory cell from Hewlett Packard Labs may allow the development of a systematic strategy, supported by a rigorous analytical framework, to craft a generalised rectangular pulse train voltage stimulus, composed of P ∈ N>0 positive SET pulses and of a single negative RESET pulse, so as to endow the memory state of the nanodevice with P coexisting oscillatory solutions, revolving around mean values prescribed as design specifications, and observable at steady state for all initial conditions drawn from their basins of attraction. The availability of an algorithm which, by evaluating analytical formulas and solving a linear system of equations, automatically tailors the properties of a generalised pulse train stimulus for triggering a monostable (multistable) periodic response in a Resistive Random Access Memory cell, inducing the emergence of global 3 (local 9,10) fading memory effects across its physical medium, and forcing it to oscillate around a specific resistance level for any initial condition from the state existence domain (from a certain basin of attraction), may inspire the development and circuit implementation of novel in-memory sensing and computing paradigms in the years to come. As an example of a potential application of the theory, the local fading memory effects, emerging in the ReRAM cell according to the Strachan model, have been leveraged to propose a novel scheme to compensate for the unavoidable drift in the resistance of a crosspoint nanodevice under power-off conditions. A theoretical approach similar to the one presented in this paper for the Strachan model may be developed to investigate the response of the mathematical description of any other non-volatile 22 or volatile 23 resistance switching memory to periodic pulse train stimuli.

to follow the voltage stimulus v_S from (c) at all times, and for each initial condition x_0 from the set {x_0,1, x_0,2, x_0,3, x_0,4, x_0,5, x_0,6, x_0,7, x_0,8, x_0,9, x_0,10} = {0.15, 0.34, 0.35, 0.485, 0.49, 0.62, 0.625, 0.745, 0.75, 0.9}. When initiated from either initial condition in the first, second, third, fourth, and fifth pair, the memristor state x converges progressively toward the steady-state waveforms x_ss,1, x_ss,3, x_ss,5, x_ss,7, and x_ss,9, respectively, as illustrated in plots (f), (g), (h), (i), and (l), which further visualise in turn the mean values x̄_1,ss, x̄_3,ss, x̄_5,ss, x̄_7,ss, and x̄_9,ss of the asymptotic oscillations, together with the corresponding stable map fixed points x*_1, x*_3, x*_5, x*_7, and x*_9.
Figure 1. (a) Circuit employed to investigate the response of the Ta2O5−x ReRAM cell 15 to periodic square pulse-based voltage excitation signals. (b) Time course of a two-pulse-per-cycle train voltage stimulus. (c) Time waveform of a generalised pulse train voltage stimulus v_S, including P positive SET pulses and one negative RESET pulse per cycle. The RESET pulse of height V_− and width τ_− follows the series of SET pulses. The ith SET pulse is V_+,i high and τ_+,i wide, with i ∈ {1, ..., P}. The ordering of the positive pulses from the lowest to the highest in each input cycle follows the convention adopted in the systematic methodology to engineer multistability in the steady-state oscillatory response of the ReRAM cell to a generalised train stimulus (refer to section "A systematic methodology to craft the pulse stimulus for enabling the ReRAM cell to support multiple oscillations around prescribed resistance levels"). However, this ordering has no effect on the simulations. In fact, to facilitate their convergence, in the numerical investigations the SET pulses were listed from narrowest to widest before being applied in this order, one after the other, across the device.

With τ̄_+ = τ_+/T and τ̄_− = τ_−/T, this ODE, referred to as the time average state equation (TA-SE) 8, governs the time evolution of the time average state x̄ of the memristor when a voltage source, generating a specific square pulse train voltage stimulus v_S, belonging to the class illustrated in Fig. 1b and characterised by the parameter quartet (V_+, τ_+, V_−, τ_−), is connected between its terminals, as shown in plot (a) of the same figure. Equations (11) and (12) are respectively referred to as the SET and RESET TA-SE components. The blue (red) trace in Fig. 2a qualitatively illustrates a ẋ_SET (ẋ_RESET) versus x̄ locus of the ReRAM cell subject to an arbitrary pulse train voltage stimulus. The SET (RESET) resistance switching process tends to increase (decrease) the time average state over each input cycle, as indicated by the arrows on the first (latter) single-valued curve. Plotting the RESET TA-SE component in modulus, as depicted in plot (b) of the same figure, allows a clear visualisation of each point at which the SET and RESET forces balance out. A point of this kind denotes an equilibrium x̄ = x̄_eq for the TA-SE, since |ẋ_SET|_{x̄=x̄_eq} = |ẋ_RESET|_{x̄=x̄_eq} implies ẋ̄ = 0. A TA-SE equilibrium is asymptotically stable if and only if |ẋ_SET| > (<) |ẋ_RESET| locally to the left (right) of its location, and unstable otherwise. A blue filled circle (red hollow circle) is employed to indicate the location of a stable (an unstable) TA-SE equilibrium. The ẋ̄ versus x̄ locus, derivable by summing the ordinates of the vertically-aligned points sitting along the SET and RESET traces from plot (a) for each x̄, as dictated by Eq. (10), is the TA-SDR shown in plot (c) of the same figure.
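As a small illustration of the balance-and-slope criterion just described, the sketch below locates TA-SE equilibria on a state grid and classifies their stability from the sign pattern of |ẋ_SET| − |ẋ_RESET| around each crossing; the two component curves are assumed toy inputs rather than model outputs.

```python
import numpy as np

def ta_se_equilibria(x, xdot_set_mod, xdot_reset_mod):
    """Find TA-SE equilibria as zero crossings of f = |x_dot_SET| - |x_dot_RESET|
    sampled on a grid x. A crossing is asymptotically stable when the SET
    modulus dominates just to its left and the RESET modulus just to its right."""
    f = xdot_set_mod - xdot_reset_mod
    equilibria = []
    for i in range(len(x) - 1):
        if f[i] * f[i + 1] < 0:                  # sign change: one crossing
            # linear interpolation for the crossing abscissa
            x_eq = x[i] - f[i] * (x[i + 1] - x[i]) / (f[i + 1] - f[i])
            equilibria.append((x_eq, f[i] > 0))  # True => stable
    return equilibria

# Toy usage: an assumed Gaussian SET bell versus a linear RESET modulus.
x = np.linspace(0.0, 1.0, 2001)
set_mod = 5.0 * np.exp(-(x - 0.25) ** 2 / 0.002)
reset_mod = 4.0 * x
for x_eq, stable in ta_se_equilibria(x, set_mod, reset_mod):
    print(f"x_eq = {x_eq:.3f}, {'LAS' if stable else 'unstable'}")
```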
Figure 2. (a) Blue (red) trace: SET ẋ_SET (RESET ẋ_RESET) component of the TA-SE (10) of the ReRAM cell under an arbitrary pulse train stimulation. (b) Moduli of the SET and RESET TA-SE components; their intersections identify the TA-SE equilibria. (c) TA-SDR of the ReRAM cell subject to the arbitrarily chosen pulse train stimulus. Arrows pointing east (west) are superimposed along any TA-SDR branch visiting the upper (lower) half plane, indicating a progressive increase (decrease) in the time average state x̄ when ẋ̄ is positive (negative). An equilibrium for the TA-SE exists at the abscissa x̄ = x̄_eq of any point at which the TA-SDR crosses the horizontal axis, as ẋ̄ = 0 therein. The equilibrium is asymptotically stable (unstable), as indicated via a black filled (red hollow) circle, if and only if the slope ∂ẋ̄/∂x̄ of the ẋ̄ versus x̄ locus is negative (positive) at its location. According to the TA-SDR analysis, the ReRAM cell is expected to act as a bistable oscillator under the given periodic excitation.

In such a bistable case, a periodic stimulus endows the memory state of the ReRAM cell with two locally asymptotically stable (LAS) steady-state oscillatory solutions. A black filled (red hollow) circle is employed to indicate the location of a stable (an unstable) fixed point of the map. For each k value in N>0, the SCPCM expresses the net change Δx_k;k−1 = x_k − x_{k−1} = P(x_{k−1}) − x_{k−1} which the ODE solution x undergoes over the time interval [(k−1)·T, k·T].

Figure 3. (a) Exemplary illustration of a one-dimensional discrete-time system x_k = P(x_{k−1}), referred to as a Poincaré map, which admits three intersections with the identity map x_k = P_I(x_{k−1}) = x_{k−1}, representing its fixed points, specifically x*_1, x*_2, and x*_3, of which the outer ones are stable and the inner one is unstable. A few coloured zig-zag trajectories, known as cob-web plots 12,20 in Nonlinear Dynamics Theory, are also displayed to show the discrete-time evolution of the map from distinct initial conditions toward one of the two LAS fixed points. In our study a map of this kind can be extracted from the Strachan DAE set, when the input voltage v is enforced to follow a given periodic voltage stimulus v_S, e.g. in the form of a rectangular pulse train, by recording samples of the memristor state x at regular T-long time intervals from the beginning of each of a large ensemble of simulations differing in their initial conditions, and then plotting, for each of the resulting time series, the kth sample x_k = x(k·T) versus the (k−1)th one, x_{k−1} = x((k−1)·T), with k ∈ N>0. For k = 1 the SCPCM reduces to Δx_1;0 = x_1 − x_0 = P(x_0) − x_0, providing the change in the memory state over the first input cycle. (b) Δx_k;k−1 = x_k − x_{k−1} versus x_{k−1} locus, illustrating the SCPCM of the ReRAM cell subject to the periodic stimulus which induces the state motion resulting in the Poincaré map shown in plot (a).
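The extraction recipe for the SCPCM can be sketched end to end on a toy scalar state equation (an assumed stand-in for the Strachan DAE set, chosen only to have a bell-shaped SET rate and a monotone RESET rate): integrate under the pulse train, record the state once per period T, and difference consecutive samples.

```python
import numpy as np

def x_dot(x, v):
    # Toy state equation (assumption): bell-shaped SET rate under positive
    # bias, monotone shrinking rate under negative bias.
    if v > 0:
        return 40.0 * np.exp(-(x - 0.6 * v) ** 2 / 0.003)
    return -2.0 * abs(v) * x

def poincare_samples(x0, pulses, n_cycles=300, dt=1e-4):
    """Euler-integrate the toy state equation under a periodic pulse train
    (list of (amplitude, width) pairs per cycle) and record x once per period."""
    xs, x = [x0], x0
    for _ in range(n_cycles):
        for V, tau in pulses:
            for _ in range(int(tau / dt)):
                x = min(max(x + dt * x_dot(x, V), 0.0), 1.0)  # clip to [0, 1]
        xs.append(x)
    return np.array(xs)

xs = poincare_samples(0.2, pulses=[(+0.5, 0.02), (-0.5, 0.02)])
dx = np.diff(xs)   # SCPCM ordinates: x_k - x_{k-1}, plotted against xs[:-1]
```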
Figure 4. (a), (c), (e), (g), (i) ((b), (d), (f), (h), (l)): g_RESET(x, V) (g_SET(x, V)) versus x locus, denoting the RESET (SET) SDR of the ReRAM cell 15 when V is chosen as the first, second, third, fourth, and fifth value from the set {−(+)0.2, −(+)0.4, −(+)0.6, −(+)0.8, −(+)1} V. Over a RESET (SET) resistance switching transition the device state undergoes a progressive decrease (increase), as the arrows superimposed on top of the respective SDR clearly indicate through their westward (eastward) direction. With reference to each graph along the first column, the red filled circle shows the location of the stable equilibrium x_eq = x_L which the ODE (1) admits for any negative bias value V assigned to the input voltage v. On the other hand, the state equation admits no equilibrium under any positive DC stimulus.

Figure 5. (a) Three-dimensional illustration showing each admissible stable or unstable equilibrium x̄_eq = x̄_eq(V_+, V_−) which the TA-SE (10), associated with a train voltage stimulus featuring two pulses of opposite polarity per cycle, may possibly admit when the SET τ_+ and RESET τ_− pulse widths are identical, as a function of the SET V_+ and RESET V_− pulse heights, swept across the ranges [0, 1.2] V and [−2, 0] V, respectively. The dark blue surface includes all the GAS equilibria of the TA-SE in the monostable oscillatory operating mode of the ReRAM cell. The cyan (magenta) surface contains all the unstable (all the LAS) equilibria of the TA-SE in the bistable oscillatory operating mode of the ReRAM cell. (b) Projection of the surface from (a) onto the V_+ versus V_− plane. Choosing the pulse heights of the pulse train voltage stimulus, featuring a 50% duty cycle, according to the coordinates of any point in the green (red) region, the TA-SE features a single GAS equilibrium (two LAS equilibria) for r = 1. The black cross marker (black plus sign) identifies the input parameter pair (V_−, V_+) inducing the particular monostable (bistable) oscillatory response, illustrated in Fig. 6 (Fig. 7), in the nanodevice.
Figure 6. (a) SET |ẋ_SET| (blue trace) and RESET |ẋ_RESET| (red trace) components of the TA-SDR of the ReRAM cell under the application of a two-pulse-per-cycle pulse train voltage stimulus v_S, when its SET V_+ and RESET V_− pulse heights are set to +0.46 V and −0.4 V, respectively, and for r = 1, irrespective of the choice of its SET τ_+ and RESET τ_− pulse widths. Note that scaling the widths of the two pulses per cycle of the train by the same factor does not affect the TA-SDR prediction. The only GAS equilibrium x̄_eq of the TA-SE lies at 0.308, the abscissa of the black filled circle. A marker indicating the zero of the RESET component at x̄ = 0 is omitted from the graph to avoid clutter. (b) SCPCM of the ReRAM cell subject to a particular pulse train voltage stimulus v_S, belonging to the class considered in (a) and characterised by the parameters (V_+, τ_+, V_−, τ_−) = (+0.46 V, 1 µs, −0.4 V, 1 µs) (refer to the blue signal of period T = τ_+ + τ_− = 2 µs in plot (d)). The Poincaré map from which it is extracted features a GAS fixed point x* (see the black filled circle). Differently from the TA-SDR, scaling the widths of the two pulses per cycle by the same factor may affect the SCPCM. (c) Brown (green) trace: progressive approach of the solution x of the Strachan DAE set, when v is forced to follow the particular excitation voltage signal v_S employed for the derivation of the SCPCM, from the initial condition x_0 = x_0,1 = 0.15 (x_0 = x_0,2 = 0.85) toward a unique steady-state oscillation. (d) Green trace: steady-state time series x_ss of the memristor state x, extracted from the solution of the same colour in plot (c). Horizontal lines mark the locations of the map fixed point x*, of the TA-SE equilibrium x̄_eq, and of the time average x̄_ss of the steady-state time series. As the RESET pulse follows the SET pulse over each cycle of the input train, x_ss attains its minimum value at the end of each period; therefore x* directly reveals the minimum of x_ss across one input cycle.
Figure 7. (a) Decomposition of the TA-SDR into its SET |ẋ_SET| (blue trace) and RESET |ẋ_RESET| (red trace) components for the ReRAM cell subject to a two-pulse-per-cycle pulse train voltage stimulus v_S, composed of one SET (RESET) pulse of positive (negative) amplitude V_+ = +0.54 V (V_− = −0.6 V) over the first (second) τ_+ (τ_−)-long half of each period of duration T = τ_+ + τ_−, irrespective of the common value assigned to τ_+ and τ_−. The TA-SE admits a triplet of equilibria, namely x̄_eq,1 = 0.106, x̄_eq,2 = 0.237, and x̄_eq,3 = 0.370. Each of the outer ones (the inner one), indicated via a black filled (red hollow) circle, is LAS (unstable). (b) Time waveform of a particular pulse train voltage stimulus belonging to the class assumed in (a) and identified via the parameter quartet (V_+, τ_+, V_−, τ_−) = (+0.54 V, 20 ps, −0.6 V, 20 ps). (c) SCPCM of the ReRAM cell in the case where the excitation voltage signal v_S from (b) is let fall continuously between its terminals. A black filled (red hollow) circle denotes a locally-stable (an unstable) fixed point of the associated Poincaré map. (d) Cyan (violet) trace: time course of the memory state x of the ReRAM cell, with the voltage v forced to follow v_S from (b) at all times, from the initial condition x = x_0,1 = 0.2 (x = x_0,2 = 0.3). Unlike the latter solution, the first one takes a very long time to attain the steady state. (e, f) Locally-stable oscillatory solution x_1 (x_3) for the state x of the ReRAM cell, as recorded in a numerical simulation of the Strachan DAE set under v = v_S from (b) for x_0 = x*_1 (x_0 = x*_3). In each of the two cases the choice of the initial condition ensures that no transients appear in the device response. The time average of the solution x_1 (x_3), as well as the corresponding LAS TA-SE equilibrium x̄_eq,1 (x̄_eq,3) and LAS map fixed point x*_1 (x*_3), are also marked in plot (e, f).

Figure 8. (a) Coloured map depicting how the number of admissible stable or unstable equilibria for the TA-SE of the ReRAM cell, subject to a two-pulse-per-cycle pulse train voltage stimulus from the class illustrated in Fig. 1b, is influenced by the SET pulse amplitude V_+ as well as by the ratio r between the SET and RESET pulse widths, given a RESET pulse amplitude V_− of −0.5 V. The green and red regions respectively enclose input parameter pairs which endow the TA-SE with one and only one GAS equilibrium (three equilibria, of which the outer ones are LAS). (b, c) Graphical illustration showing the decomposition of the TA-SDR into its SET and RESET components for a scenario where the input pair (V_+, r), lying at (+0.50 V, 1 × 10⁸) ((+0.75 V, 1 × 10⁻³⁰)) (see the black cross marker (black plus sign) within the green (red) region of the map in (a)), determines the existence of one and only one GAS equilibrium x̄_eq = 0.314 (three equilibria x̄_eq,1 = 0.042, x̄_eq,2 = 0.516, and x̄_eq,3 = 0.725, of which the outer ones are LAS) for the respective TA-SE.
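A map in the spirit of Fig. 8a can be mimicked with the same kind of toy state evolution functions used in the sketches above (assumed shapes, not the Strachan model): sweep the pair (V_+, r) and count the crossings between the scaled SET bell and the RESET modulus, with one crossing marking the monostable region and three the bistable one.

```python
import numpy as np

# Toy stand-ins (assumptions) for the SET/RESET state evolution functions.
def g_set(x, V):
    return np.exp(10 * V) * np.exp(-(x - 0.6 * V) ** 2 / 0.003)

def g_reset_abs(x, V_minus=-0.5):
    return 2.0 * abs(V_minus) * (0.02 + x)

def n_equilibria(V_plus, r, x=np.linspace(0, 1, 2001)):
    """Count crossings of r * g_SET(x, V+) with |g_RESET(x, V-)|: one crossing
    corresponds to a single GAS equilibrium, three to the bistable case."""
    f = r * g_set(x, V_plus) - g_reset_abs(x)
    return int(np.sum(f[:-1] * f[1:] < 0))

Vs = np.linspace(0.2, 1.0, 81)
rs = np.logspace(-12, 6, 91)
count_map = np.array([[n_equilibria(V, r) for V in Vs] for r in rs])
```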
Central limit theorems for network driven sampling

Respondent-Driven Sampling is a popular technique for sampling hidden populations. This paper models Respondent-Driven Sampling as a Markov process indexed by a tree. Our main results show that the Volz-Heckathorn estimator is asymptotically normal below a critical threshold. The key technical difficulties stem from (i) the dependence between samples and (ii) the tree structure which characterizes the dependence. The theorems allow the growth rate of the tree to exceed one and suggest that this growth rate should not be too large. To illustrate the usefulness of these results beyond their obvious uses, an example shows that in certain cases the sample average is preferable to inverse probability weighting. We provide a test statistic to distinguish between these two cases.

Introduction

Classical sampling requires a sampling frame: a list of individuals in the target population together with a method to contact each individual (e.g. a phone number). For many populations, constructing a sampling frame is infeasible. Network driven sampling enables researchers to access populations of people, webpages, and proteins that are otherwise difficult to reach. These techniques go by many names: web crawling, Respondent-Driven Sampling, breadth-first search, snowball sampling, co-immunoprecipitation, and chromatin immunoprecipitation. In each application, the only way to reach the population of interest is by asking participants to refer friends.

Respondent-Driven Sampling (RDS) serves as a motivating example for this paper. The Centers for Disease Control, the World Health Organization, and the Joint United Nations Programme on HIV/AIDS have invested in RDS to reach marginalized and hard-to-reach populations [6,1]. Each individual i in the population has a corresponding feature y_i (e.g. y_i ∈ {0, 1} and y_i = 1 if i is HIV+). Using only the sampled individuals, we wish to make inferences about the average value of y_i across the entire population, denoted as μ (e.g. the proportion of the population that is HIV+). Extensive previous statistical research has proposed various estimators of μ which are approximately unbiased under various models for an RDS sample [16,17,4]. We note that in the papers cited above (except [4]), RDS is assumed to sample with replacement. Previous research has also explored the variance of these estimators [5,13]. This paper studies the asymptotic distribution of statistics related to these estimators.

Results on asymptotic distributions for RDS are useful for two obvious reasons. First, they allow us to construct asymptotic confidence intervals for μ. Second, they provide essential tools to test various statistical hypotheses. The only central limit theorem previously considered in the RDS literature studied the case when the tree-indexed process reduces to a Markov chain [5]; this presumes that each individual refers exactly one person. Previous research suggests that the number of referrals from each individual is fundamental in determining the variance of common estimators [13]. This paper establishes two central limit theorems in settings which allow for multiple referrals. The main results apply to both the sample average and the Volz-Heckathorn estimator, which is an approximation of the inverse probability weighted estimator (cf. Remark 1).
Because the inverse probability weighted (IPW) estimator and its extensions are asymptotically unbiased, these estimators are often preferred to the sample average.

Notation

Following [5] and [13], the results below model the network sampling mechanism as a tree-indexed Markov process on a graph. There are many assumptions in this model which are incorrect in practice. However, like the i.i.d. assumption, it allows for tractable calculations. In the simulations, we show that the theory derived from this model provides a good approximation for a more realistic sampling model. [12] studies the sensitivity of the estimators to this model.

Let G = (V, E) be a finite, undirected, and simple graph with vertex set V = {1, ..., N} and edge set E. V contains the individuals in the population and E describes how they are related to one another. As discussed in the introduction, y : V → R is a fixed real-valued function on the state space V; these are the node features that are measured on the sampled nodes. The target of RDS is to estimate μ = N⁻¹ Σ_{i=1}^{N} y(i). If each sampled node referred exactly one friend, then the Markov sampling procedure would be a Markov chain. Several classical central limit theorems exist for this model; see [8] for a review. The results herein allow each sampled node to refer more than one node. This is a Markov process indexed not by a chain, but rather by a tree.

Denote the referral tree as T. Where the node set of G indexes the population, the node set of T indexes the samples. That is, we observe a subset of the individuals in G with the sample {X_τ}_{τ∈T} ⊂ V. An edge (σ, τ) in the referral tree denotes that sampled individual X_σ referred individual X_τ into the sample. Mathematically, T is a rooted tree: a connected graph with n nodes, no cycles, and a vertex 0 which indexes the seed node. To simplify notation, σ ∈ T is used synonymously with σ belonging to the vertex set of T. For each non-root node τ ∈ T, denote p(τ) ∈ T as the parent of τ (i.e. the node one step closer to the root). This paper presumes that {X_τ}_{τ∈T} is a tree-indexed random walk on G, a model introduced by [2]. This model generalizes a Markov chain on G; each transition X_{p(τ)} → X_τ is an independent and identically distributed Markov transition with some transition matrix P defined below. Following [2], we call this process a (T, P)-walk on G. Unless stated otherwise, it is presumed throughout that the root node of the random walk X_0 is initialized from the equilibrium distribution π of P. It follows that X_σ has distribution π for all σ ∈ T.

Unless stated otherwise, this paper presumes throughout that the transition matrix P is constructed from a weighted graph G. Let w_ij be the weight of the edge between nodes i and j, and define deg(i) = Σ_j w_ij; the transition probabilities are then P_ij = w_ij/deg(i). If the graph is unweighted, then deg(i) is the number of connections to node i. Throughout this paper, the graph is undirected, so w_ij = w_ji for all pairs i, j. We use the term simple random walk for the Markov chain constructed on the unweighted graph (i.e. w_{i,j} ∈ {0, 1} for all i, j). The simple random walk presumes that each participant selects a friend uniformly and independently at random from their list of friends. [10] serves as this paper's key reference for Markov processes. Following the notation in that text, define E_π(y) = Σ_{i=1}^{N} π_i y(i) and var_π(y) = E_π(y − E_π(y))² for the function y. In order to estimate μ, we observe y(X_τ) for all τ ∈ T.
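A minimal sketch of this sampling model: given a transition matrix P built from the graph weights as above and a referral tree stored as a child list, the root is drawn from π and every referral is an independent Markov step from the parent's state. The helper names below are ours, not the paper's.

```python
import numpy as np

def tp_walk(P, pi, children, rng=np.random.default_rng(0)):
    """Simulate a (T, P)-walk on G: X_0 ~ pi, and X_tau | X_p(tau) ~ P[X_p(tau), :].
    `children[s]` lists the tree children of tree node s."""
    X = {0: rng.choice(len(pi), p=pi)}
    stack = [0]
    while stack:
        sigma = stack.pop()
        for tau in children.get(sigma, []):
            X[tau] = rng.choice(P.shape[0], p=P[X[sigma]])
            stack.append(tau)
    return X

# Example: simple random walk P on a small unweighted graph, 2-tree of height 2.
A = np.array([[0, 1, 1, 0], [1, 0, 1, 1], [1, 1, 0, 1], [0, 1, 1, 0]], dtype=float)
deg = A.sum(1)
P = A / deg[:, None]          # P_ij = w_ij / deg(i)
pi = deg / deg.sum()          # stationary distribution, pi_i proportional to deg(i)
children = {0: [1, 2], 1: [3, 4], 2: [5, 6]}
sample = tp_walk(P, pi, children)
```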
Because G is undirected, P is reversible and has stationary distribution π with π_i ∝ deg(i) for all i ∈ G; this fact is helpful for creating an asymptotically unbiased estimator for μ, particularly under the simple random walk assumption [17].

Remark 1. In general, the quantity of interest is not equal to E_π(y). As such, the sample average of the y(X_τ)'s is a biased estimator for μ. With inverse probability weighting, define a new function y'(i) = y(i)(Nπ_i)⁻¹ and the respective estimator μ̂_IPW = n⁻¹ Σ_{τ∈T} y'(X_τ), where n = |T| is the sample size. Then E_π(μ̂_IPW) = E_π(y') = μ. As such, the sample average of the y'(X_τ)'s is an unbiased estimator of μ. Unfortunately, the values π_i are unknown. In practice, RDS participants are asked various questions to measure how many friends they have in G. Under the simple random walk assumption, π_i is proportional to the number of friends of i. Therefore the Volz-Heckathorn estimator μ̂_VH is a Hájek estimator based upon deg(i) [17]. Under the simple random walk assumption, this estimator provides an asymptotically unbiased estimator of μ.

For each node τ ∈ T, let |τ| be the distance of the node from the root; this is also called the "wave" of τ. For every pair of nodes σ, τ ∈ T, define d(σ, τ) to be the distance between σ and τ on T (as a graph). For each non-leaf node σ ∈ T, let η(σ) be the number of offspring of σ. A tree is said to be an m-tree of height h if η(σ) = m for all σ ∈ T with |σ| < h and η(σ) = 0 for all |σ| = h. Here, both m and h are natural numbers (i.e. m, h ∈ N). T is said to be Galton-Watson if the η(σ) are i.i.d. random variables in N. While the theorems below only study 2-trees, the computational experiments in Section 5 suggest that the conclusions of the analytical results are highly robust to replacing the 2-tree with a Galton-Watson tree.

There are two primary concerns about the model and estimator used in the main results below. First, the Markov model allows for resampling. Second, the results below only apply to m-trees, not more general trees. The simulations in Section 5 suggest that the analytic results continue to hold under a more realistic setting that addresses both of these concerns.

Main results

Let T be an m-tree and λ_2 be the second largest eigenvalue of P. The variance of μ̂_IPW decays at the standard rate if and only if m < λ_2⁻² [13]. In other words, if m > λ_2⁻², then the variance of the sample average over the first h waves, multiplied by the number of samples |{σ ∈ T : |σ| ≤ h}|, tends to infinity as h → ∞. As such, using the traditional scaling, no central limit theorem holds above the critical threshold. Because of this, the theorems focus on the case m < λ_2⁻². When m > λ_2⁻², the simulations in Section 5 suggest that the central limit theorem does not hold under any scaling.

Theorem 1 is a central limit theorem for an estimator constructed from the tree-indexed Markov chain. The theorem holds for any function y, any reversible transition matrix with second largest eigenvalue satisfying |λ_2| ≠ 1, and any m < λ_2⁻².

Theorem 1. Suppose that P is a reversible transition matrix with respect to the equilibrium distribution π, and that the eigenvalues of P are 1 = λ_1 > |λ_2| ≥ ... ≥ |λ_N|. Without loss of generality, suppose that E_π(y) = 0. Define Y_h as the reweighted average of the samples up to wave h constructed in the proof (Appendix A). If T is an m-tree with m < λ_2⁻², then Y_h converges in distribution to N(0, σ²), with σ² = var_π((√m·P − I)⁻¹ y) − var_π(P(√m·P − I)⁻¹ y).

The sequence of random variables considered in Theorem 1 are not exactly sample averages, but a reweighted form of sample average. Samples in the same wave are equally weighted, while samples from different waves are not.
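The role of the threshold can be seen from a back-of-the-envelope variance count, consistent with the pair-counting used in the appendix (a heuristic sketch, not a statement from the paper): in wave h of an m-tree, the number of ordered pairs at tree distance 2k grows like m^{h+k} (the pairs share an ancestor at depth h−k), while reversibility makes the covariance between samples at that distance decay like λ_2^{2k}. Hence, for the wave-h sample average,

```latex
\operatorname{var}\Big( m^{-h} \sum_{|\sigma| = h} y(X_\sigma) \Big)
  \;\lesssim\; m^{-2h} \sum_{k=0}^{h} m^{h+k} \lambda_2^{2k} \operatorname{var}_\pi(y)
  \;=\; m^{-h} \operatorname{var}_\pi(y) \sum_{k=0}^{h} \big( m \lambda_2^{2} \big)^{k} ,
```

so the geometric series stays bounded, and the variance decays at the standard rate m^{-h}, precisely when m·λ_2² < 1, i.e. m < λ_2⁻².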
The following theorem provides a theoretical guarantee on the distribution of the sample average for a specific class of transition matrices and node features. For a vector x, one of the conditions uses the notation ‖x‖_∞ = max_i |x_i|. Condition (c1) is a technical condition on the symmetry of μ̂_h that is necessary in the proof. The following proposition provides a sufficient condition for (c1).

Proof. Under the conditions of the proposition, the distribution of μ̂_h is symmetric with respect to 0. Thus E(μ̂_h^{2k+1}) = 0 for every k.

Conditions (c2)-(c3) can be substituted by the following condition (c2'). Condition (c2') is weaker than (c2) and (c3) combined, but is stronger than (c3) alone. To see this, let f be the eigenfunction of the second eigenvalue; it follows that |λ_2| < 1/√2. It can easily be seen that one necessary condition for (c2') is that all the rows of P be close to π. As previously discussed, condition (c3) is actually a necessary condition for the central limit theorem [13], in the sense that the variance of μ̂_h tends to infinity if |λ_2| ≥ 1/√2. For clarity in the exposition of the theorem and the proof, we have only proved the theorem for the 2-tree. Results for more general m-trees can be proved with a similar technique.

Extension to the Volz-Heckathorn estimator

When P is restricted to be the transition matrix of the simple random walk on G, the following corollary shows that Theorem 2 can be extended to the Volz-Heckathorn estimator [17]. Denote d̄ = N⁻¹ Σ_{i∈V} deg(i) as the average node degree. Following Remark 1, the IPW estimator contains 1/(Nπ_i), which is equal to d̄/deg(i). The Volz-Heckathorn estimator first estimates d̄ with the harmonic mean of the observed degrees. Because this harmonic mean converges to d̄ in probability, the following corollary applies Slutsky's Theorem to give a central limit theorem for the Volz-Heckathorn estimator.

Corollary 1. Let T be a 2-tree. Suppose in particular that P is the transition matrix of the simple random walk on G. Define a new node feature y'(i) = y(i)/deg(i). Without loss of generality, suppose that E_π y' = 0 (this is not equivalent to E_π y = 0). Define the corresponding Volz-Heckathorn statistic; it then satisfies the same central limit theorem.

Illustrating the conditions with a blockmodel

Consider G as coming from a blockmodel with two blocks [11]. Previously, [5] studied RDS with this model. It serves as an approximation to the Stochastic Blockmodel. In particular, suppose that each node i = 1, ..., N is assigned to a block with z(i) ∈ {1, 2}, and that each block contains N/2 nodes. Suppose that every pair i, j has w_{i,j} = B_{z(i),z(j)} ∈ (0, 1). Thus, under the construction of P in Equation (2.1), the transition probabilities depend only on the block labels. Given the structural equivalence of nodes within the same block, it is sufficient to study the conditions (c2) and (c3) with a Markov chain where the state space is reduced to the block labels {1, 2} and the transition matrix is 𝒫 = B ∈ R^{2×2}. See Section C in the Appendix for a further discussion of this fact. Notice that λ_2 = (p − r)/(p + r) is the second eigenvalue of both P and 𝒫; when it is sufficiently small, conditions (c2) and (c3) are satisfied. This example can be expanded to study a blockmodel with 2K blocks. Suppose that the outcome y_i depends only on the block label.

Estimating the variance

For some node feature ỹ (e.g. HIV status y, or the y' in Remark 1 that leads to the IPW estimator), let μ̂ denote the sample average. Denote σ²_μ̂ = Var_{T,P}(μ̂), where the subscript T, P denotes that the data is collected via a (T, P)-walk on G. This subsection studies a simple plug-in estimator for σ²_μ̂.
The following function is essential to expressing σ²_μ̂ [13].

Definition 1. Select two nodes I, J uniformly at random from the tree T. Define the random variable D = d(I, J) to be the graph distance in T between I and J. Define G as the probability generating function for D, G(z) = E(z^D).

In practice, T is observed, so the function G can be computed. In many studies there are multiple seed nodes. In these cases, we suggest computing d(I, J) on a tree which has an artificial root node that connects to all of the seeds; this root node can be imagined as an individual responsible for finding the seed nodes. In this tree, two different seed nodes are distance 2 apart. Because the data has been sampled proportionally to π, the plug-in quantity for var_π should not explicitly adjust for π. The plug-in moments are computed over {T \ 0}, which contains all nodes except the root node 0 (because p(0) does not exist). Using these plug-in quantities, define R̂; the estimator σ̂²_μ̂ then follows.

A popular bootstrap technique for estimating σ²_μ̂ resamples y(X_τ) as a Markov process (i.e. in addition to X_τ being a Markov process, the bootstrap procedure also assumes that y(X_τ) is Markov) [15]. This model is akin to the blockmodel with two blocks in Section 3.2. Assumption 1 below is weaker than this Markov assumption.

Proposition 2. Under Assumption 1, equality holds in the bound of Proposition 3 below.

While Assumption 1 is weaker than the previous assumption in [15], the next proposition highlights the danger of relying on it. It uses Assumption 2, which is rather weak: because G is a probability generating function, it is always convex on [0, 1], so we only need to be concerned about negative arguments. Recall that the central limit theorems above only hold when |λ_min| < 1/√2 ≈ 0.7 (the smallest possible value for λ_min is −1). Some simulated trees given in the appendix suggest that when G is not convex, convexity usually fails only in the neighborhood of −1. As such, the assumption that |λ_min| < 1/√2 ≈ 0.7 is likely to imply Assumption 2. In practice, one observes the referral tree T and can therefore compute the second derivative of G. Eigenvalues of P close to negative one arise in antithetic sampling, where adjacent samples are dissimilar. For example, if the population in G were heterosexuals and edges in G represented sexual contacts, then men would only refer women and vice versa; in this case, λ_min would be exactly −1. While easily imagined, such settings are not current practice for RDS. As such, large negative values are uncommon; λ_min is likely close to zero. The following proposition follows from an application of Jensen's inequality. A proof is given in Appendix D.

Proposition 3. Under Assumption 2, E(σ̂²_μ̂) ≤ σ²_μ̂.

Because Assumption 2 is not very restrictive, the inequality in Proposition 3 highlights the danger in breaking Assumption 1 (and thus the Markov model in [15]); breaking Assumption 1 leads to σ̂²_μ̂ underestimating the variance.

Numerical results

In this section, we illustrate the theoretical results on simulated data. The simulations are performed on networks drawn from the Stochastic Blockmodel [7]. The four colors in Figure 1 correspond to four different networks, from four different parameterizations of the model. Each of the four networks has N = 5,000 nodes, equally balanced between group zero and group one. The probability of a connection between two nodes in different blocks is r, and the probability of connection between two nodes in the same block is p.
To control the eigenvalues of the 5000 × 5000 transition matrix, consider the transition matrix between classes given by 𝒫 = E(D)⁻¹ E(A). The second eigenvalue of 𝒫 is [14] λ_2(𝒫) = (p − r)/(p + r), where the expectations are under the Stochastic Blockmodel. In our simulations, the second eigenvalue of the actual transition matrix is typically very close to λ_2(𝒫). We take p + r = 0.01 in all four Stochastic Blockmodels so that the average degree is about 25. As such, λ_2(𝒫) is controlled by p − r. For each of the four networks, we carry out four different sampling designs. Let T be either a 2-tree or a Galton-Watson tree with E(η(σ)) = 2. For the Galton-Watson tree, the distribution of η(σ) is uniform on {1, 2, 3}. For each T, we consider both with-replacement sampling (i.e. the (T, P)-walk on G) and without-replacement sampling (i.e. referrals are sampled uniformly from the friends that have not yet been sampled). Note that the conditions of Theorem 2 may be violated when either the Galton-Watson tree or without-replacement sampling is used. We take the first 8 waves of T as our sample; as such, the sample size is roughly N/10. For each social network and sampling design, we repeat the sampling process 2000 times and compute μ̂ = n⁻¹ Σ_{i=1}^{n} y(X_i) for each sample. The Quantile-Quantile (Q-Q) plot of μ̂ is shown in the left panel of Figure 1; note that the Q-Q plot centers and scales each distribution to have mean zero and standard deviation one. In addition, we repeat the above simulation for the Volz-Heckathorn estimator; the Q-Q plot of μ̂_VH is shown in the right panel of Figure 1. It is clear from Figure 1 that there are two patterns: when λ_2 < 1/√m ≈ 0.7 (i.e. λ_2 = 0.5 or 0.6), the Q-Q plots of μ̂ and μ̂_VH approximately lie on the line y = x for all sampling designs; when λ_2 > 1/√m ≈ 0.7 (i.e. λ_2 = 0.8 or 0.9), the Q-Q plots of μ̂ and μ̂_VH depart from the line y = x. Taken together, Figure 1 suggests that the distributions of μ̂ and μ̂_VH converge to a Gaussian distribution if and only if m < λ_2⁻². In fact, when m > λ_2⁻², the distribution of the estimators has two modes. The relationship between the expectation of the offspring distribution and the second eigenvalue of the social network determines the asymptotic distribution of RDS estimators, regardless of the node feature, the particular structure of the tree, or the way we handle replacement.

Discussion

A recent review of the RDS literature counted over 460 studies which used RDS [18]. Many of these studies seek to estimate the prevalence of HIV or other infectious diseases; for these studies, a point estimate of the prevalence is insufficient. These studies have used confidence intervals constructed from bootstrap procedures and from estimates of the standard error. These standard-error intervals implicitly rely on a central limit theorem, and this paper provides a partial justification for such techniques, so long as m ≤ 1/λ_2². Figure 1 suggests that if m is larger than 1/λ_2², then the simple estimators (μ̂ and μ̂_VH) are no longer normally distributed. The theorems in this paper do not apply to general trees, only to m-trees. If T is a Galton-Watson tree with E(η(σ)) < λ_2⁻², then the simulations support the conjecture that the corresponding estimator remains asymptotically normal, with a variance σ² that can be computed from the results in [13]. To prove this result requires a more careful study of the structure of {X_σ}_{σ∈T}. We leave this problem to future investigation.
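A compact sketch of this simulation design (scaled down to N = 1000 nodes for speed; all names are ours): build a two-block SBM with p + r = 0.01, draw 2-tree referral samples by a with-replacement simple random walk, and collect the replicate sample averages whose distribution the Q-Q plots examine.

```python
import numpy as np

rng = np.random.default_rng(0)
N, lam2_target = 1000, 0.6
p = 0.01 * (1 + lam2_target) / 2     # p + r = 0.01 and (p - r)/(p + r) = lam2
r = 0.01 - p
z = np.repeat([0, 1], N // 2)        # two balanced blocks

probs = np.where(z[:, None] == z[None, :], p, r)
A = np.triu(rng.random((N, N)) < probs, 1)
A = A | A.T                          # symmetric SBM adjacency, no self-loops

def rds_sample(waves=8, offspring=2):
    """With-replacement referral sampling along a 2-tree: each sampled node
    refers `offspring` uniformly chosen neighbours (a (T, P)-walk on G)."""
    current = [int(rng.integers(N))]
    sample = list(current)
    for _ in range(waves):
        nxt = []
        for i in current:
            nbrs = np.flatnonzero(A[i])
            if nbrs.size:
                nxt.extend(rng.choice(nbrs, size=offspring))
        current = nxt
        sample.extend(current)
    return np.array(sample)

y = z.astype(float)                   # node feature = block label
mu_hats = np.array([y[rds_sample()].mean() for _ in range(500)])
# Q-Q check: compare the standardized replicates against Gaussian quantiles.
std = (mu_hats - mu_hats.mean()) / mu_hats.std()
```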
Appendix A: Proof of Theorem 1

In the appendix, we give proofs of the theorems and propositions in the paper. First, we give an outline of the proof of our main theorem. Consider the martingale built from the sequence Y_h, where {F_h} is a filtration to be defined later. Using the Markov property and the estimation of var(Y_h), we show that the martingale difference sequence satisfies the conditions of the martingale central limit theorem.

In this section, P will be a reversible transition matrix with eigenvalues 1 = λ_1 ≥ |λ_2| ≥ ... ≥ |λ_N| and corresponding eigenfunctions f_1, ..., f_N satisfying Σ_k f_i(k) f_j(k) π_k = δ_ij for any i, j. We refer to [10] for the existence of such an eigendecomposition. Unless stated otherwise, expectations are calculated with respect to the tree-indexed random walk on the graph. We begin with some lemmas.

Lemma 1 (Lemma 12.2 in [10]). Let P be a reversible Markov transition matrix on the nodes of G. If λ is an eigenvalue of P, then |λ| ≤ 1. The eigenfunction f_1 corresponding to the eigenvalue 1 is taken to be the constant vector 1. If X(0), ..., X(t) represent t steps of a Markov chain with transition matrix P, then the probability of a transition from i ∈ G to j ∈ G in t steps can be written as P^t(i, j) = π_j Σ_{ℓ=1}^{N} f_ℓ(i) f_ℓ(j) λ_ℓ^t.

From the reversibility of the Markov chain and Lemma 1, the required bound follows, and the lemma is proved.

The next lemma gives the expression for var(Y_h).

Proof. For k = 0, 1, ..., h, denote by s_hk the number of ordered pairs (σ, τ) such that |σ| = |τ| = h and d(σ, τ) = 2k. Then s_h0 = m^h, and the stated expression follows.

The next lemma is a convergence argument which we will use in the proof of Theorem 1.

Lemma 4 (Slutsky's lemma).

The following theorem from [3] is essential to the proof of our main theorem.

Theorem 3 (Martingale central limit theorem). Suppose that {Z_h, F_h} is a martingale difference sequence satisfying conditions (1) and (2) below.

Now we are ready to prove our main theorems.

Proof of Theorem 1. Define Y_h in the same way as in Theorem 1. Without loss of generality, suppose that E_π(y) = 0. Since m < λ_2⁻², the matrix √m·P − I is invertible, so we may define y' = (√m·P − I)⁻¹ y. Then y' is also a function on the state space. We will first argue with the new node feature y' and then convert back to y. Define Z_h so that {Z_h, F_h}_{h≥1} is a martingale difference sequence. We will verify that {Z_h, F_h}_{h≥1} satisfies (1) and (2) in Theorem 3. For any σ ∈ T, denote by p(σ) the parent node of σ; Z_h can then be expressed as a sum over the wave-h transitions. It follows from the definition of V_h and the Cauchy-Schwarz inequality that, in probability, V_h converges to var_π(y') − var_π(Py'), and condition (1) in Theorem 3 is satisfied. Similarly, with C_0, C_1, C constants, we obtain E(Z_h⁴) ≤ C for every h, and condition (2) is also satisfied. From Theorem 3, Lemma 4, and the definition of y', convergence in distribution follows, with σ² = var_π(y') − var_π(Py') = var_π((√m·P − I)⁻¹ y) − var_π(P(√m·P − I)⁻¹ y). The proof is now complete.

B.1. Proof of moments convergence

Let X_r be the root of the 2-tree, and define the quantities γ_k,h(i). Our key observation is that the left and right subtrees can be seen as i.i.d. copies of the whole tree given the left and right children of the seed, which makes it possible to build a relationship between γ_k,h(i) and γ_k,h−1(i). Only condition (c3) is needed throughout the proof. We need the following lemma.

Lemma 5. Let {a_h} be a sequence satisfying the recursion below.

Proof. Without loss of generality, suppose that c_h = 0; the bound follows, and the lemma is proved.

We use an induction on k. First, we will prove that γ_1 = 0.
In fact, from Lemma 1, for all i, |γ_1,h(i) − γ_1| = O(ρ^h) with γ_1 = 0. Now we move from k − 1 to k. Without loss of generality, suppose that γ_2,h(i) > 1 for all h, i (otherwise we can multiply y by a large constant). It follows that γ_2k,h(i) ≥ (γ_2,h(i))^k > 1 for all k. We can decompose γ_k,h(i) accordingly. If k is even, the first inequality follows from Jensen's inequality and the second from our assumption that γ_2k,h(i) > 1 for all k. Likewise, if k is odd, an analogous bound holds. Let X_lc and X_rc be the left and right children of the root, and T_l and T_r the left and right subtrees. If k = 2, Equations (B.2) and (B.4) reduce to a linear recursion; thus, by setting δ_1 = 0, we have ν_h = P^h ν_1 + Σ_{k=1}^{h} P^k δ_{h−k}, and it is not hard to verify that all the components of ν_h (i.e., every γ_2,h(i)) converge to γ_2 = π^T ν_1 + Σ_{h=1}^{∞} π^T δ_h at rate ρ^h. Now suppose that k > 2. Since k is fixed, there are a fixed number of terms in S_1 as h goes to infinity. Since |γ_l,h(i) − γ_l| = O(ρ^h) for all i ∈ S and l < k − 1, these terms converge, and letting h tend to infinity in Equation (B.5) yields the limiting relation. Now suppose that ξ_1 ∼ N(0, γ_2) and γ̃_k = E(ξ_1^k), and let ξ_2 be an i.i.d. copy of ξ_1. Then {γ̃_k}, k ∈ N, also follows Equation (B.7). Since γ_1 = γ̃_1 = 0 and γ_2 = γ̃_2, we have γ_k = γ̃_k for every k, and the argument is proved.

B.2. Proof of uniform sub-gaussianity

To prove that the μ̂_h are uniformly sub-gaussian for all h, we need to show that there exists some θ for which the moment bounds below hold for all h. Let c_1 be a large constant to be defined later. Again we use an induction, this time on ℓ. Since γ_1,h(i) = O(|λ_2|^h), we can choose c_1 large enough that the inequalities in Equations (B.8) and (B.9) hold for all (h, ℓ) with h = 1 or ℓ = 1. Suppose that Equations (B.8) and (B.9) have been verified for all ℓ ≤ k; we will prove that they also hold for ℓ = k + 1. By conditions (c1) and (c2), together with the induction assumption, we obtain s_2k+2,h ≤ (1 + M·2^{−h})^{2k+2} (I_1 + I_2). We bound I_1, where the last equality follows from Equation (B.7); on the other hand, it can be directly verified that the corresponding bound on I_2 holds for all m. Combining Equations (B.11) and (B.13) yields (B.14). Therefore the inequalities hold for ℓ = k + 1, and the theorem is proved.

B.3. Proof of Corollary 1

By Theorem 2 and Slutsky's lemma, it suffices to prove that the harmonic-mean estimate of d̄ converges to d̄ in probability. The relevant tail probability tends to 0 for all ε > 0, and the corollary is proved.

Appendix C: Reducing the state space of the Markov chain

This section justifies the simplification in Section 3.2. Recall that P ∈ R^{N×N} is a Markov transition matrix on N nodes, where each node i is assigned to one of two classes z(i) ∈ {1, 2}. Let X_t ∈ {1, ..., N}, for t = 0, 1, ..., be a Markov chain with transition matrix P that is initialized from the stationary distribution. One can construct a Markov chain {Z_t}_t on the block labels {1, 2} that is equal in distribution to {z(X_t)}_t. Define Z_t ∈ {1, 2}, for t = 0, 1, ..., as a Markov chain with transition matrix 𝒫 = B, and initialize Z_0 from the stationary distribution of 𝒫. Induction shows that {Z_t}_t is equal in distribution to {z(X_t)}_t.

The following is a proof of Lemma 6 from [10].

Proof of Lemma 6.

The following is a proof of Proposition 2.

Proof of Proposition 2. In the case when ỹ(i) = μ + σ f_j(i), we have ỹ = μ f_1 + σ f_j. By the orthonormality of the eigenvectors, ⟨ỹ, f_ℓ⟩²_π / var_π(ỹ) = 1{j = ℓ} for ℓ > 1. As such, the inequality in equation (D.2) holds with equality. Proposition 3 presumes that G is convex.
Figure 2 plots G for twenty different Galton-Watson trees with offspring distribution p(0) = 0.1, p(1) = 0.1, p(2) = 0.3, p(3) = 0.5, which has expected value 2.2. The construction of each tree was stopped once it reached 5000 nodes; if a tree died out before reaching 5000 nodes, the process was restarted. In these simulations, and in others not shown, G is often convex; when it is not, the second derivative of G(z) is still positive for z away from −1. This simulation was selected because it shows that even when the trees are sampled from the same distribution, the offspring distribution is unremarkable (e.g. all moments are finite), and the tree is very large, some of the trees have a convex G and some have a non-convex G. Similar results hold when the trees have 500 nodes; the only change is that the red regions extend slightly further away from −1.
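For readers who want to reproduce trees of the kind used in Figure 2, here is a minimal generator with the quoted offspring distribution, including the restart-on-extinction rule. Computing the statistic G itself is not attempted, since its definition appears elsewhere in the paper; the breadth-first parent-array representation is our own choice.

```python
# Galton-Watson tree generator with the offspring distribution from the text:
# p(0)=.1, p(1)=.1, p(2)=.3, p(3)=.5, mean offspring 2.2 (supercritical).
import numpy as np

rng = np.random.default_rng(1)
offspring_p = [0.1, 0.1, 0.3, 0.5]

def grow_tree(max_nodes=5000):
    """Grow one tree breadth-first; return the parent array, or None if the
    tree dies out before reaching max_nodes."""
    parents = [-1]              # node 0 is the root
    frontier = [0]
    while frontier:
        nxt = []
        for v in frontier:
            for _ in range(rng.choice(4, p=offspring_p)):
                parents.append(v)
                nxt.append(len(parents) - 1)
                if len(parents) >= max_nodes:
                    return parents      # stop once 5000 nodes are reached
        frontier = nxt
    return None

tree = None
while tree is None:             # restart until a tree reaches 5000 nodes
    tree = grow_tree()
print(len(tree))
```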
7,691.6
2015-09-15T00:00:00.000
[ "Mathematics" ]
Impact of Local Stiffness on Entropy Driven Microscopic Dynamics of Polythiophene We exploited the high temporal and spatial resolution of neutron spin echo spectroscopy to investigate the large-scale dynamics of semiflexible conjugated polymer chains in solutions. We used a generalized approach of the well-established Zimm model of flexible polymers to describe the relaxation mode spectra of locally stiff polythiophene chains. The Zimm mode analysis confirms the existence of beads with a finite length that corresponds to a reduced number of segmental modes in semiflexible chains. Irrespective of the temperature and the molecular weight of the conjugated polymer, we witness a universal behavior of the local chain stiffness and invariability of the bead length. Our experimental findings indicate possibly minor role of the change in π-electron conjugation length (and therefore conjugated backbone planar to non-planar conformational transition) in the observed thermochromic behavior of polythiophene but instead point on the major role of chain dynamics in this phenomenon. We also obtained the first experimental evidence of an existence of a single-chain glass state in conjugated polymers. Polymer synthesis. P3HT samples were prepared using controlled Kumada catalyst-transfer polymerization following the procedure described in ref. 31 . The preparation was carried out using an external catalytic initiator prepared via the reaction of 5-bromo-2,2′-bithiophene and bis [1,3-bis(diphenylphosphino)propane] nickel(0), and the polymer samples with different molecular weights were obtained through the variation of the ratio between the external catalytic initiator and 5-bromo-4-hexyl-2-thienylmagnesium chloride monomer. The polymers were additionally purified using Soxhlet extraction, with successive extraction with methanol, hexane, and CHCl 3 . Determination of M n and polydispersity index (PDI) was carried out with GPC, and regioregularity of the P3HT samples was determined using 1 H NMR spectroscopy ( Fig. 1) as described in ref. 31 . In order to minimize aggregation or chains folding upon themselves owning to their strong intramolecular π-π stacking interactions 30 , solutions of P3HT in DCB-D 4 were prepared via heating and stirring P3HT samples with the solvent at 70 °C overnight. In every neutron scattering experiment, the samples were equilibrated at constant temperature for 30 mins in a tumbler before measurements were performed. Theoretical description. Our theoretical approach is based on detailed analysis of the spectrum of relaxation modes to account for the entropic forces and hydrodynamic interactions of polymer in solutions. NSE spectroscopy measures the normalized dynamic structure factor, S(Q, t)/S(Q), as a function of Fourier time, t at a given momentum transfer, Q. At the intermediate length scale, the center of mass diffusion and segmental relaxation of a polymer melt is well described by the Rouse model. The molecular motion originates from the balance between entropic and frictional forces caused by the surrounding heat bath and is best described by its spectrum of relaxation modes 32 . For polymers in solution, the hydrodynamic interactions become important, and the dynamic structure factor can be formulated within the framework of the Zimm model 1 Here n, m are the polymer segment numbers where the summation runs over the total number of monomer segments, N. The statistical segment length is given by ℓ and is obtained from  = ν R N ee 2 2 2 . The first part in Eq. 
1 describes the Zimm center of mass diffusion with a diffusion coefficient Label M n (kg/mol) PDI = M w /M n T (K) R g (nm) R ee (nm) D z × 10 −2 (nm 2 ns −1 ) τ z (ns) p p min α R rigid (nm) Table 1. Sample labels, molecular weight M n , polydispersity M w /M n (determined by 1 H NMR and GPC), radius of gyration R g (determined from SANS). The chain end-to-end distance, R ee , Zimm diffusion, D Z , and the Zimm time, τ z , are calculated. The Zimm modes, p, the estimated number of modes, p min , above which the NSE relaxation spectra is independent of p, the stiffness parameter, α, and the dynamic rigid length, R rigid , as obtained from NSE experiments. ity. The constant pre-factor, α D = 0.196 (Θ-solvent) and α D = 0.203 (good solvent) 33 . S chain (Q) represents the static structure factor of the chain. The third term represents the more local dynamics, including rotational diffusion (p = 1). It is represented by a sum over relaxation modes of the polymer chain with mode number, p, and character- The corresponding Zimm segmental relaxation time is given by τ η = . R k T 0 325 /( ) Z s ee B 3 33 , with η s being the solvent viscosity at a thermal energy k T B , where k B is the Boltzmann constant. Chain conformation. We determined the unperturbed chain dimensions by SANS experiments. The scattering data, intensity vs. momentum transfer, Q, can be found in the SI. In these data, we see a typical form factor of an aggregated polymer. The Flory exponent and the radius of gyration, R g , can be conveniently extracted from the Kratky plot, as illustrated in Fig. 2. For the sake of clarity we omitted the intensity values at low Q 34 . , shows an increasing radius of gyration, R g , or chain end-to-end distance, = √ R R 6 ee g , with increasing molecular weight and a slight variation with temperature (Table 1). Slight deviations of the fit at high Q are due to the incoherent background scattering that increases the noise level, but does not change our results on R g . Since, ∝ R M g n 2 , which is valid within experimental accuracy as shown in Table 1. This result is in favor of our assumption of Gaussian statistics of the chain with stiff segments, where we observed an increase of R g with temperature by 10 to 15%. First, we assumed a rigid polymer model and calculated D Z and τ Z from the solvent viscosity η s , and the chain end-to-end distance R ee as obtained independently from SANS ( Table 1). As can be seen, this model does not suffice to describe the measured data (cf. SI). The much faster decay of our experimental data indicates a substantial contribution of another relaxation mechanism. In a next step, we considered P3HT in solution as a rigid worm like chain as proposed by McCulloch et al. 25 , which requires to add the rotational diffusion (p = 1). The comparison with the experimental data shows that this is still not sufficient (cf. SI). Hence, we improve the model by considering a polymer coil with mobile segments, that requires to include the segmental relaxation (p > 1). We obtained an accurate description of S(Q,t) by adding only a finite number of modes, p = 2, …, P, cf. Figure 3a,b. The number of modes needed to describe the data is surprisingly low, with p ranging from 15 to 27. Within the experimental accuracy, the p is temperature independent but changes with M n . This observation is expected because the number of modes is proportional to the number of repeating units in the polymer chain 35 . 
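As a quick sanity check on the magnitudes entering Table 1, the snippet below evaluates the Zimm diffusion coefficient D_Z = α_D k_B T/(η_s R_ee) and the Zimm time τ_Z = 0.325 η_s R_ee^3/(k_B T) quoted above (the τ_Z expression is our reading of the extraction-garbled formula). The numerical inputs are illustrative stand-ins, not the paper's measured values.

```python
# Back-of-the-envelope Zimm transport coefficients; inputs are assumptions.
k_B = 1.380649e-23   # Boltzmann constant, J/K

def zimm_transport(R_ee_nm, T_K, eta_s_Pa_s, alpha_D=0.196):
    """Zimm diffusion coefficient (nm^2/ns) and Zimm time (ns).

    alpha_D = 0.196 corresponds to a Theta solvent, 0.203 to a good solvent.
    """
    R = R_ee_nm * 1e-9                                  # m
    D = alpha_D * k_B * T_K / (eta_s_Pa_s * R)          # m^2/s
    tau = 0.325 * eta_s_Pa_s * R**3 / (k_B * T_K)       # s
    return D * 1e9, tau * 1e9                           # nm^2/ns, ns

# Illustrative inputs: R_ee ~ 20 nm, T = 323 K, eta_s ~ 0.9 mPa s (assumed).
D_Z, tau_Z = zimm_transport(20.0, 323.0, 0.9e-3)
print(f"D_Z ~ {D_Z:.3f} nm^2/ns, tau_Z ~ {tau_Z:.0f} ns")
```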
The corresponding analysis protocol for different modes is presented in the SI. The parameter p represents the number of modes necessary to describe the experimental dynamic structure factor S(Q, t)/S(Q) at different temperatures and molecular weights simultaneously for all Q's. We would like to emphasize that this theoretical description of the experimental NSE data involves no free parameter except p. Discussions Limiting the analysis to a finite number of modes P ignores a substantial part of the mode spectrum and seems unjustified. On the other hand, the increased stiffness caused by the delocalized π-electron system in P3HT introduces a finite correlation length (the dynamic equivalent of the static Kuhn segment), which can be taken into account by adding a fourth-order term, p^2 + α p^4, to the entropic spring constant, with the dynamic stiffness parameter α 15,36,37. The modified Zimm scattering model is obtained by replacing the mode dependence p^{3ν} of τ_z (in Eq. 1) by p^{3ν} + α p^{4ν}, with the corresponding cosine amplitude p^{2ν+1} acquiring an analogous stiffness correction 15. Unlike limiting the number of modes, we now exploit the fact that by increasing the momentum transfer Q, the dynamic structure factor becomes more sensitive to higher modes. In addition, for a given Q, the calculated S(Q, t)/S(Q) becomes independent of p beyond a certain threshold (p > p_min). This uses the fact that 2π/Q probes a certain finite length, which limits the number of modes required to describe the experimental data theoretically. As a consequence, the spatial resolution is determined only by the Q dependence of S(Q, t) and is not affected by the maximum Q. The solid lines in Fig. 4a,b compare the result of our analysis with the experimental dynamic structure factor. We can accurately describe our experimental data by simultaneously fitting all the Q's. From this analysis, we obtain the stiffness parameter α, which decreases with increasing molecular weight and/or temperature, cf. Table 1. Based on this result, we can now estimate the minimum number of modes, p_min, required to theoretically describe the experimental S(Q, t) within the Q range of our NSE experiments, by solving for the threshold beyond which the calculated S(Q, t) no longer changes with p. To calculate the mode-independent parameter α above the threshold, p > p_min, we summed over p = 1…1000. We obtain considerably greater p_min than the earlier determined p values. This p_min is the maximum mode number visible in our experiment. If we compare the quality of the fits based on the stiffness parameter (Fig. 4) with those calculated assuming a low number of modes (Fig. 3), we observe a similarly good description irrespective of their physical origins. The description of the relaxation of a chain by its mode spectrum assumes a certain number of statistically independent segments connected by entropic springs. Numerous experiments justified the assumption of an infinite number of modes in the case of flexible polymers like poly(ethylene-alt-propylene) or poly(ethylene glycol) (with α = 0) 38,39. In the present case of the conjugated polymer P3HT, the increased stiffness caused by the delocalized π-electron system introduces a finite correlation length, which decreases the number of statistically independent beads. Thus, the calculation of S(Q, t)/S(Q) using a reduced number of modes is formally equivalent to the calculation using a stiffness parameter α (cf. Table 1).
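The p_min criterion just described (add modes until the calculated quantity stops changing) can be illustrated with a toy mode sum. The damped amplitude a_p ∝ 1/(p^{2ν+1} + α p^{4ν+1}) used below is an assumed concrete form of the stiffness-corrected amplitude, and ν and α are illustrative values; only the saturation logic mirrors the text.

```python
# Toy illustration of the p_min threshold: with the stiffness term, mode
# amplitudes fall off faster, so a truncated mode sum saturates at finite P.
nu, alpha = 0.5, 0.02      # illustrative values, not fitted parameters

def p_min(tol=1e-4, p_max=1000):
    amps = [1.0 / (p**(2*nu + 1) + alpha * p**(4*nu + 1))
            for p in range(1, p_max + 1)]
    total = sum(amps)
    partial = 0.0
    for P, a in enumerate(amps, start=1):
        partial += a
        if total - partial < tol * total:   # adding modes no longer matters
            return P
    return p_max

print(p_min())   # a stiffer chain (larger alpha) saturates at smaller P
```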
The absence of higher order modes elucidates the fact that the chain dynamics is partially frozen. Indeed, this is the first experimental evidence of the existence of single chain glass (SCG) state in a conjugated polymer. As the highest Q is limited in experiments, S Q t ( , ) cannot represent the entire mode spectrum. However, higher Q values probe more local structures. If the stiffness already impacts the smaller momentum transfers, it is very likely that the wider angles would not change this discussion. However, we re-emphasize, if Q-values are reached that start to probe more local dynamics, then additional processes are to be incorporated in the model 40,41 . However, in the current situation there was no indication that this is the case with P3HT. The comparison of the data with the Zimm model with all modes illustrates that the equivalent flexible polymer relaxes faster. At least two potential reasons can explain why the relaxation appears to be slower: (1) a reduced number of modes (Fig. 3), or (2) damping of the modes (Fig. 4). Apparently NSE data can be described by a finite number of modes (no damping). A decay of S Q t ( , ) sets in, once modes contribute to the relaxation. Therefore, fewer modes result in less relaxation and more modes lead to a faster decay of S(Q, t). However, the momentum transfer corresponds to a certain length-scale, Q d 2 / π = . Therefore, the higher the Q the more local the NSE experiment is, which implies higher modes. In a simplified wording, moving to the higher Q's requires more modes contributing to S Q t ( , ). In this context, we exploit the fact that each Q has a maximum number of modes and increasing the number of modes would not change the calculated S Q t ( , ) at this specific Q * and at every Q < Q*. Obviously, this calculated S Q t ( , ) relaxes faster than the experimental data. However, including damping slows down the decay. Therefore, we have now the opposite description. This explanation can be rationalized by a simple estimation. For semi-flexible polymers, the number of modes, p min in Eq. 1, limits the displacement, p m N cos( / ) min π , over = m N p / min segments. Therefore, we can estimate a dynamic rigid length, R rigid . For distances less than R rigid , the segments are correlated. These modes will be absent in the analysis. Thus, within a bead spring approach R rigid represents the length of a bead. It is given by: Table 1, it is evident that the effects of temperature and molecular weight are negligible on R rigid , and we obtain R rigid = 4.72 ± 0.1 nm. From the structural standpoint, R rigid could likely be interpreted as the polymer conjugation length. Conjugation length is a length of a planarized chain segment where π-bonding is maintained over the entire segment, and is a key parameter which determines electronic and optoelectronic properties of conjugated polymers. Indeed, the value of R rigid corresponds to a bead length of approximately 12 thienyl repeating units, that is within the range of polythiophene conjugation length reported in literature (ranging between 10 and 20 repeating units) 42 . It needs to be mentioned that the value of R rigid determined from the dynamic data is substantially higher than the P3HT persistence length (2.9 ± 0.1 nm) determined from wormlike chain modeling of static SANS data 25 , and reflects the fact that π-electron delocalization in P3HT extends on essentially longer distances than the geometrical persistence length. 
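A one-line check of the bead-length interpretation above: dividing R_rigid by a typical backbone length per thienyl unit (~0.39 nm, a literature-typical value we assume here rather than one quoted in this paper) indeed gives roughly 12 repeat units.

```python
R_rigid_nm = 4.72          # dynamic rigid length from the NSE analysis
repeat_nm = 0.39           # assumed backbone length per thienyl unit
print(R_rigid_nm / repeat_nm)   # ~12.1 repeat units, within the 10-20 range
```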
It should be noted that, independently of the observed length scale, we obtained two significant parameters, namely, finite global stiffness, α and a finite size of the bead, R rigid . The parameter α describes the damping of the mode relaxation. In the Rouse or Zimm approach, normal coordinates are introduced to solve the Langevin equation by simple exponential functions. The orthogonality of these normal coordinates follows from the uncorrelated random forces. This assumption corresponds to the freely jointed chain model that neglects correlations between bond vectors. In a good approximation, those finite correlations in a real polymer can be neglected if greater distances along the chain contour are considered. This leads to the introduction of R rigid and similarly to α. In order to investigate the scaling behavior between the chain end-to-end distance and the dynamical chain stiffness α, we systematically varied R ee from low to high values. As shown in Fig. 5, we have used five different linearly spaced values above and below the experimentally obtained R ee . This was done for both temperatures and polymer molecular weights. This reveals the dependence of α on the chain length. In addition to our results on P3HT, we have included the stiffness parameter α PNB of polynorbornene (PNB) of different molecular weights in a good solvent 15 . For a better comparison, we rescaled α PNB by a factor ~ 7. Irrespective of the polymer, molecular weight and temperature, we observe a generic power-law scaling, R ee As a consequence, the molecular weight dependence of α is attributed to the increase in Gaussian coil dimension, R ee by a factor ~ 1.26. We now want to explore how our findings based on the analysis of polymer dynamics, can be translated to macroscopic materials properties of conjugated polymers. As a special important case, we consider the correlation between the large-scale chain dynamics and thermochromism. Polythiophene shows a distinct thermochromic behavior both in solution and in solid state, as the polymer electronic absorption band undergoes reversible hypsochromic shift upon temperature increase 43 . Let's sum up some of the essential facts. (i) The radius of gyration depends on the molecular weight as expected for a Gaussian coil, and increases around 15% with increase in temperature. At the same time, within the Q-range of our SANS experiments the aggregation is nearly independent of molecular weight or temperature. (ii) The bead size, R rigid , is independent of molecular weight and temperature. (iii) The stiffness parameter, α, decreases with increasing temperature and molecular weights. (iv) The absorption spectra of both P3HT samples in DCB-D 4 are independent of the molecular weight but show a thermochromic blue shift and an increase in band gap energy, E g , with increasing temperature, cf. Figure 2 in the supplemental information (SI). These spectroscopic results agree with those found earlier for regioregular P3HT and seem to be common for semiconducting polymers [44][45][46][47][48] . As it is widely accepted in the literature, the thermochromic blue shift in the absorption spectra of polythiophenes, including P3HT, upon increasing temperature is related to cooperative static conformational twisting (i.e. planar to non-planar conformational transition) of the π-electron conjugated backbone [49][50][51][52] . 
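The power-law exponent relating α and R_ee is most easily extracted as the slope of a straight-line fit in log-log coordinates. The snippet below demonstrates the recipe on synthetic points generated with the scaling α ∝ R_ee^{-8ν} (our reading of the garbled exponent, with ν = 0.5 assumed); the rescaling of α_PNB by ~7 mentioned in the text would only shift the intercept, not the slope.

```python
# Log-log fit recovering a power-law exponent from (R_ee, alpha) pairs.
import numpy as np

nu = 0.5
R_ee = np.array([10.0, 12.6, 15.8, 20.0, 25.1])   # nm, synthetic values
alpha = 0.5 * R_ee ** (-8 * nu)                   # obeys alpha ~ R_ee^(-8*nu)

slope, intercept = np.polyfit(np.log(R_ee), np.log(alpha), 1)
print(slope)    # recovers -8*nu = -4.0
```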
From our analysis, both the conjugation length (as reflected in the value of R_rigid) and our scaling law, α ∝ R_ee^{-8ν}, show no dependence on temperature. Within the observed temperature range, the constant bead size therefore excludes a correlation with the observed changes in the absorption spectra. Likewise, the static chain end-to-end distance is not associated with the thermochromic blue shift. Our results thus do not support static intramolecular conformational twisting of the π-conjugated backbone, and the resulting reduction of the conjugation length, as a key factor in the observed thermochromic behavior. The SANS data in Fig. 2 cannot access the bead size, since R_rigid = 4.7 ± 0.1 nm corresponds to Q = 2π/R_rigid = 0.13 Å −1, which is at the upper Q-limit of the SANS experiment. As the competition between coherent and incoherent scattering may contribute in this region, we abstain from discussing weak effects that may not be related to the structure; it is therefore impossible to resolve a structural peak. However, since our SANS data at low Q indicate significant aggregation of P3HT (cf. SI) even at higher temperature, we suggest that temperature-induced changes in interchain aggregation may be responsible for the thermochromic blue shift at higher temperature. This finding emphasizes the unique role of the large-scale dynamics in understanding the fundamental physics of locally stiff polymers and in deriving correlations between chain stiffness and macroscopic material properties, which has not been explored in the literature so far. We emphasize that our findings, derived from the behavior of P3HT in dilute solution, were obtained only for a narrow temperature range (313 to 353 K) and may not apply directly to thermochromism in the solid state. Nevertheless, they do agree with recent conclusions about the rather complicated nature of the thermochromic phenomenon in conjugated polymers, where multiple contributing factors are responsible for the observed spectroscopic changes 53.
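The quoted upper Q-limit follows from simple arithmetic, reproduced here for convenience:

```python
import math

R_rigid_angstrom = 4.7 * 10                 # 4.7 nm in Angstrom
print(2 * math.pi / R_rigid_angstrom)       # ~0.13 A^-1, the SANS upper limit
```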
As related to macroscopic materials properties, our results show a rather minor role of the conjugated backbone conformational twisting (planar to non-planar single-chain conformational transition leading to decrease in conjugation length) in the thermochromic behavior of P3HT, and indicate that interchain phenomena (such as change in interchain aggregation) and chain dynamics are likely responsible for the thermochromic phenomenon. We hypothesize that our findings may also be applicable for understanding of other related phenomena such as solvatochromic behavior of conjugated polymers where interplay of complex pathways has been recently shown to affect observed spectroscopic changes 26 . In this way, our findings open up new frontiers for understanding the macroscopic properties like viscoelastic and optoelectronic response for material processing as well as macromolecular crowding associated with the biological functioning of living organisms. Methods Sample preparation. All reactions toward P3HT preparation were performed under an atmosphere of dry nitrogen, unless mentioned otherwise. Tetrahydrofuran (THF) for polymerization was dried by passing through activated alumina using a PS-400 Solvent Purification System from Innovative Technology, Inc. The water content of THF was periodically controlled by Karl Fischer titration, using a DL32 coulometric titrator from Mettler Toledo. Isopropylmagnesium chloride (2.0 M solution in THF) was purchased from Acros Organics. All other reagents and solvents were obtained from Sigma Aldrich and Alfa Aesar and used without further purification. Deuterated solvents (chloroform-D and 1,2-dichlorobenzene-D 4 (DCB-D 4 )) were purchased from Cambridge Isotope Laboratories. Determination of the polymer Mn and polydispersity index (PDI) was carried out with GPC (using Agilent 1100 chromatograph equipped with two PLgel 5 μm MIXED-C and one PLgel 5 μm 1000 Å columns connected in series, using THF as a mobile phase) calibrated against polystyrene standards Small angle neutron scattering (SANS) measurements. SANS experiments were performed at the GP-SANS in High Flux Isotope Reactor (HFIR) at Oak Ridge National Laboratory (ORNL) 54 . All the samples were measured in a standard 1 mm Hellma Banjo cells. The sample-to-detector distance d and the neutron wavelength λ were kept at d = 19.2 m for λ = 12 Å; d = 8.8 m for λ = 4.75 Å and d = 1.1 m for λ = 4.75 Å. This configuration covers a Q -range from ~ 0.005 Å -1 to ~ 0.23 Å -1 , where the momentum transfer, π θ λ = Q 4 sin( /2)/ , for the scattering angle θ. A wavelength resolution of Δλ/λ = 15% was used. All data reduction into intensity I Q ( ) vs. momentum transfer = → Q Q was carried out following the standard procedures that are implemented in the SPICE SANS reduction package for the Igor software. The data scaling into absolute units (cm -1 ), and the detector sensitivity correction was done with a porous silica standard measurement. The solvents and empty cell were measured separately as backgrounds and were subtracted. Neutron spin echo (NSE) measurements. NSE spectroscopy was performed at the Spallation Neutron Source (SNS), ORNL, using the SNS-NSE spectrometer at BL-15 55 . We detect the normalized dynamic structure factor representing the sum of coherent S coh and incoherent S inc scattering. The coherent signal dominates [56][57][58][59] , i.e., here, coh σ and inc σ are the coherent and incoherent scattering intensities, respectively. 
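Using the definition Q = 4π sin(θ/2)/λ quoted in the SANS methods above, one can verify that the stated instrument configuration spans the quoted Q-range of ~0.005 to ~0.23 Å −1; the two scattering angles below are back-solved illustrations, not published instrument settings.

```python
import math

def Q(theta_deg, lam_angstrom):
    """Momentum transfer Q = 4*pi*sin(theta/2)/lambda, in Angstrom^-1."""
    return 4 * math.pi * math.sin(math.radians(theta_deg) / 2) / lam_angstrom

print(Q(0.55, 12.0))    # ~0.005 A^-1: small angle with the 12 A neutrons
print(Q(10.0, 4.75))    # ~0.23  A^-1: wide angle with the 4.75 A neutrons
```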
For the NSE experiment an incoming wavelength band, Δλ, from 5 to 8 Å was used with 42 time channels for the time-of-flight data acquisition. This allowed to access a dynamic range of 2 ps ≤ t ≤ 25 ns over a momentum transfer Q = 0.062-0.124 Å −1 . For the measured coherent NSE data, corrections were performed using resolution data from Al 2 O 3 , sample and background from the DCB-D 4 solvent. The background subtraction was performed from the neutron spin-echo amplitude (A) to spin up-down intensity ratio (Up Dwn − ) as described by Monkenbusch et al. 60 . We used specially designed two-part Al sample containers sealed with PTFE (PolyTetraFluoroEthylene), attached to a tumbler, and maintaining a sample thickness of 4 mm. The data reduction was performed with the standard ECHODET software package of the SNS-NSE instrument. The incoherent and coherent contributions were determined by polarization analysis in the diffraction mode of the spectrometer. The elastic incoherent scattering from the background, including the solvent, the scattering that results from empty cell, sample environment and instrument, were subtracted accordingly to obtain the coherent dynamic structure factor. For further details the reader is referred to refs. 56,60 .
5,863.2
2017-10-02T00:00:00.000
[ "Materials Science", "Physics" ]
Conservation of gene essentiality in Apicomplexa and its application for prioritization of anti-malarial drug targets New anti-malarial drugs are needed to address the challenge of artemisinin resistance and to achieve malaria elimination and eradication. Target-based screening of inhibitors is a major approach for drug discovery, but its application to malaria has been limited by the availability of few validated drug targets in . Here we utilize the recently available large-scale gene Plasmodium essentiality data in and a related apicomplexan pathogen, Plasmodium berghei to identify potential anti-malarial drug targets. We find Toxoplasma gondii, significant conservation of gene essentiality in the two apicomplexan parasites. The conservation of essentiality could be used to prioritize enzymes that are essential across the two parasites and show no or low sequence similarity to human proteins. Novel essential genes in could be predicted Plasmodium based on their essentiality in . Essential genes in showed T. gondii Plasmodium higher expression, evolutionary conservation and association with specific functional classes. We expect that the availability of a large number of novel potential drug targets would significantly accelerate anti-malarial drug discovery. Gajinder Pal Singh ( ) Corresponding author<EMAIL_ADDRESS>Singh GP. How to cite this article: Conservation of gene essentiality in Apicomplexa and its application for prioritization of anti-malarial 2017, :23 (doi: ) drug targets [version 1; referees: 2 approved with reservations] F1000Research 6 10.12688/f1000research.10559.1 © 2017 Singh GP. This is an open access article distributed under the terms of the , which Copyright: Creative Commons Attribution Licence permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. Data associated with the article are available under the terms of the (CC0 1.0 Public domain dedication). Creative Commons Zero "No rights reserved" data waiver The work is supported by an Early Career Fellowship to G.P.S. by the Wellcome Trust/DBT India Alliance (IA/E/15/1/502297). Grant information: The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. Competing interests: No competing interests were disclosed. 09 Jan 2017, :23 (doi: ) First published: 6 10.12688/f1000research.10559.1 Referee Status: Introduction Malaria killed an estimated half a million people in the year 2015, 70% of them were children under the age of five 1 .The emergence and spread of Plasmodium falciparum strains resistant to all currently used anti-malarial drugs 2 has created an urgent need to discover new drugs.New anti-malarial drugs are also needed for malaria elimination and global eradication, for which the currently available drugs are not adequate 3 .There are two main approaches for drug-discovery against pathogens: Phenotype screening and target-based approach 4 .In phenotype screening, compounds are identified that inhibit the cellular growth of the pathogen.Large-scale screening of millions of compounds against the erythrocytic stage of P. 
falciparum has identified thousands of such inhibitors 5 .Some of these inhibitors have progressed to clinical trials 6 .In the target-based approach, compounds are identified that inhibit the activity of a protein essential for the viability of the pathogen.Thus target-based approach requires previous knowledge about genes that are essential for the pathogen.Only a few essential genes have been identified in P. falciparum, hampering the target-based approach for antimalarial drug discovery.Consequently, target-based approach has only identified a few anti-malarial candidates 6 .However, recent large-scale screening of about 2500 genes in a rodent malaria parasite P. berghei has identified about 1200 essential genes 7,8 .A recent genome-scale CRISPR screen in a related apicomplexan parasite Toxoplasma gondii has identified about 3000 essential genes 9 .Here we analyse this data and find significant conservation of gene essentiality in these two pathogens.From this, we identified potential anti-malarial drug targets that exhibit conserved essentiality in apicomplexan parasites; we predict novel essential genes in Plasmodium based on the essentiality of their orthologs in T. gondii.These targets could serve as starting points for target-based anti-malarial drug discovery. Fitness data for knockout mutants The genome-wide CRISPR screening data on the relative fitness of T. gondii genes during infection of human fibroblasts cells was obtained from Sidik et al. 9 .The authors defined log 2 fold change in abundance of single guide RNA (sgRNA) targeting a given gene as the "phenotype" score for that gene 9 .It was found that for a previously determined set of 81 essential and non-essential genes, a phenotype score of less than -2 identified most of the essential genes, but none of the non-essential genes 9 .We thus defined all genes with a phenotype score of less than -2 as essential (2870 genes).Genes with a phenotype score greater than 0 were defined as non-essential (3071 genes), while those with a phenotype score between 0 and -2 were not classified (2210 genes).The in vivo relative growth rate data for 2574 genes of P. berghei were obtained from the PlasmoGEM database 7,8 (http://plasmogem.sanger.ac.uk/phenotypes).The authors generated knockout mutants by transfection with large pools of barcoded gene knockout vectors.The in vivo growth rate in Balb/c mice was obtained by counting barcodes by next generation sequencing daily between days 4 and 8 post transfection 7 .Essential genes were defined as genes with a growth rate not significantly different from 0.1 (growth rate of the wild type taken as 1), while non-essential genes were defined as genes with growth rate not significantly different from 1 7 . Functional data RNA-seq data (FPKM values) for different stages of P. berghei was obtained from Otto et al. 13 .Proteomics data on different stages of P. berghei and dN, dN/S values were obtained from Hall et al. 14 .Gene Ontology information for P. falciparum was obtained from PlasmoDB 10 , and these functions were assigned to their orthologous proteins in P. berghei.Enzyme Commission (EC) numbers for P. berghei and P. falciparum were also obtained from PlasmoDB.Trans-membrane regions were identified using TMHMM 15 .All statistical analyses were performed in the R software version 3.3.1 (https://www.r-project.org/). Conservation of gene essentiality in apicomplexan parasites The relative in vivo growth rate of knockout mutants for 2574 P. berghei genes (out of total 5076 genes in P. 
berghei) has recently been measured, of which 1198 genes (46%) with very low growth rate were classified as essential 7,8 .Similarly, in vivo relative fitness of knockout mutants for 8151 T. gondii genes have been measured 9 , of which 2870 genes (35%) with very low relative fitness values were classified as essential (see Methods).Of the 2574 P. berghei genes with fitness data, 1617 genes have an ortholog in T. gondii.P. berghei genes with an ortholog in T. gondii were significantly more likely to be essential, compared to P. berghei genes without an ortholog in T. gondii (53% vs. 36%; Fisher test p = 7e-18; Figure 1A).P. berghei genes with an essential ortholog in T. gondii were significantly more likely to be essential, compared to P. berghei genes with a non-essential ortholog in T. gondii (71% vs. 17%; Fisher test p = 6e-59; Figure 1A).There was a significant correlation in relative fitness values of P. berghei and T. gondii (Spearman correlation coefficient 0.47; p = 3e-89; n =1617; Figure 1B).The essentiality of 2502 P. berghei genes was not tested, but the essentiality information of T. gondii orthologs may be used to predict their essentiality in P. berghei.There were 687 genes in P. berghei with an essential ortholog in T. gondii, and thus may be predicted as essential in P. berghei (Dataset 1 16 ). Prioritization of anti-malarial drug targets We argue that genes identified as essential in both the apicomplexan parasites could be more useful drug targets for the following reasons: 1) Genome-scale fitness screens often involve significant false positives and false negatives 7 , thus genes identified as essential in independent experiments in different parasites could be more confidently assigned as essential; 2) the substantial conservation of gene essentiality between the two parasites demonstrates that essentiality information in T. gondii offers relevant information about gene essentiality in P. berghei; 3) genes that are essential in both P. berghei and T. gondii should be more likely to be essential in human malarial species, such as P. falciparum and P. vivax; 4) genes that are essential in both P. berghei and T. gondii should be more likely to be essential across different developmental stages of Plasmodium, which is a highly desirable property of Plasmodium drug targets 17 .We thus identified 710 genes that were essential in both species.A total of 289 of these 710 genes encode enzymes, which are typically used as drug targets against pathogens.Of these 289 genes, 245 had an ortholog in all Plasmodium species and did not have more than one trans-membrane segment.We removed proteins with more than one trans-membrane segments, as these are often difficult to purify for in vitro assays.Of the 245 proteins, 30 showed no significant sequence similarity to any human proteins (listed in Table 1), and 83 showed less than 30% identity and 151 showed less than 40% identity to any human protein (Dataset 1 16 ).Figure 2 shows the flow chart of the selection process. Among the P. berghei enzymes that were not tested for essentiality, 186 had an essential ortholog in T. gondii and thus may be predicted as essential in P. 
berghei.To increase the confidence of these genes to be essential in Plasmodium, we considered 53 genes that were conserved across Plasmodium and apicomplexan species.Among the enzymes tested for essentiality, such a criteria led to a set with 77% enzymes as essential, suggesting high enrichment for essentiality among predicted essential enzymes.In total, 28 of these enzymes had low sequence similarity (<40% identity) with human proteins and thus may also be considered as potential drug targets (Dataset 1 16 ). Properties of essential P. berghei genes Essential genes show different expression, evolutionary and functional properties 9 .We thus tested whether similar patterns would be observed for P. berghei.Essential P. berghei genes showed higher mRNA expression levels in asexual stages, but lower expression levels in sexual stages compared to non-essential genes (Figure 3A).Proteins encoded by essential genes were more likely to be detected by mass-spectrometry in different developmental stages compared to non-essential genes (Figure 3B).(A) P. berghei genes with an ortholog in T. gondii were more likely to be essential, compared to P. berghei genes without an ortholog in T. gondii (Fisher test p = 7e-18).P. berghei genes with an essential ortholog in T. gondii were significantly more likely to be essential compared to P. berghei genes with a non-essential ortholog in T. gondii (Fisher test p = 6e-59).(B) There was a significant correlation in relative fitness values of P. berghei and T. gondii (Spearman correlation coefficient 0.47; p = 3e-89; n =1617).Genes classified as essential in both species are colored red.Genes classified as non-essential in both species are colored blue.Genes that are essential in only one of the species are colored green. Discussion The recent availability of gene essentiality data from P. berghei and the related apicomplexan T. gondii provides an unprecedented opportunity to identify potential drug targets to accelerate antimalarial drug discovery.We find a significant correlation of gene essentiality between P. berghei and T. gondii (Figure 1).Thus, the information about gene essentiality in T. gondii provides independent experimental support for gene essentiality in P. berghei, which not only increases the confidence of gene essentiality in P. berghei, but also increases the likelihood that these genes would be essential in other Plasmodium species that cause human malaria, and probably in different Plasmodium developmental stages.Drug targets (A) Essential P. berghei genes showed higher mRNA expression levels in asexual stages, but lower mRNA expression levels in sexual stages.The mean FPKM values for the essential and non-essential genes were calculated for different development stages and their log 2 ratio was taken.All stages except 'ookinete 24h' showed a statistically significant difference between essential and non-essential genes (t-test; p < 0.05).The RNA-seq data was taken from Otto et al. 13 .(B) Proteins encoded by essential genes were more likely to be detected by mass-spectrometry in different stages compared to non-essential genes.All stages except 'sporozoites' showed a significant difference between essential and non-essential genes (Chi-square test; p < 0.05).Overall 47% of the tested genes were essential.The proteomics data was obtained from Hall et al. 
14 (C) Essential genes showed a lower evolutionary rate and higher conservation across apicomplexan species.The mean dN and dN/dS values for essential and non-essential genes was calculated and their log 2 ratio was taken.This data was taken from Hall et al. 14 .The mean number of apicomplexan species (out of six), in which an ortholog was identified, was calculated for essential and non-essential genes and their log 2 ratio was taken.dN and conservation in apicomplexan species showed a statistically significant difference between essential and non-essential genes (t-test; p < 0.05), but not dN/dS. that are essential in multiple species and stages of Plasmodium are particularly desirable 17 .Novel essential genes in Plasmodium could also be predicted based on the essentiality of their orthologs in T. gondii.Further prioritization of these genes could be made based on their conservation across Plasmodium and apicomplexan species, low sequence similarity to human proteins, as well as practical information, such as previous availability of clones, assays, protein structure and inhibitors 18,19 .The high conservation of essentiality between P. berghei and T. gondii may allow prediction of essential genes in other apicomplexan pathogens, such as Cryptosporidium. We found gene and protein properties significantly associated with essentiality in P. berghei.At the mRNA level, essential genes, compared to non-essential genes, were expressed at higher levels in asexual stages, but at lower levels in sexual stages (Figure 3A).Since gene essentiality was measured at the asexual stage, this might explain the positive correlation between essentiality and mRNA expression in asexual stages.Proteins encoded by essential genes were more likely to be detected by mass-spectrometry in different development stages (Figure 3B).Essential genes showed lower evolutionary rates and higher conservation across apicomplexan species (Figure 3C).The higher evolutionary conservation of essential genes is well-documented 20 .We find Gene Ontology classes "Translation", "Ribosome", "DNA replication", "Intracellular protein transport", "Cytoplasm", and "Nucleus" to be significantly enriched in essential genes (Figure 4)."Translation" class was also enriched in essential genes after excluding "Ribosome" genes (69% essential; Chi-square test; p = 0.0001), suggesting that enrichment of essential genes in the "Translation" category is not only due to ribosomal genes.Thus enzymes involved in protein translation may be important targets for anti-malarial drug discovery. 5. Open Peer Review Current Referee Status: This Research Note reports on an interesting and potentially useful exercise to identify and to prioritize candidates for target-based drug development in Plasmodium.The whole approach is relatively straightforward and provides a list of candidates to think about, not more, not less.Additional considerations could subsequently be applied by others to home in on reasonable targets to focus on.Overall, this short report was worth publishing, but would benefit from some revisions outlined below. Specific comments: How many genes are experimentally essential in both species is mentioned in the text at a relatively late stage of the presentation.It would be helpful to mention it earlier, e.g. in the legend to Figure 1 (the number of red dots). 
At some point, the author focuses on enzymes as targets.I do not think that enzymes are the only druggable targets.But if that's what the author wants to focus on, the term "enzyme" should be defined.Is it just based on the GO term associated with these genes/proteins?40% sequence identity is still a lot, and may be too much if active sites are even more highly conserved.Moreover, in this conext I also agree with point 2 of the referee report by Gregory Crowther . While I agree with Gregory Crowther's comment 3 about the relevance to drug discovery of the data in Figures 3 and 4, I still find this analysis interesting and not superfluous in the context of the overall story presented here.This paper analyzes genome-wide data on gene essentiality from two apicomplexan parasites: Plasmodium berghei (the cause of malaria in rodents) and Toxoplasma gondii (the cause of toxoplasmosis).The paper is a new analysis of previously reported data (rather than a presentation of new wet-lab results), which is fine.Those whole-genome datasets are so rich that the papers with the original data cannot possibly cover every interesting angle, so I am happy to see interesting follow-up papers such as this one, which offers additional insight into the datasets. The following comments go from broad to specific. Broad While the analysis is interesting, I'm not fully convinced that it advances malaria drug discovery in important ways; it might actually be most useful as an investigation of basic apicomplexan parasite biology.Target-based drug discovery researchers are certainly glad to know whether particular genes of interest (corresponding to specific enzymes or pathways in which they have expertise) are essential or not.However, the figures present genome-wide trends that, while interesting, don't seem that helpful in prioritizing possible drug targets. Figure 1 is probably the most relevant to drug discovery.It shows that genes found to be essential in one species (P.berghei or T. gondii) are more likely to also be essential in the other; thus, P. berghei genes not covered by the Gomes et al. (2015) screen are fairly likely to be essential if their T. gondii orthologs are essential. Figure 2 shows a prioritization exercise which is not incorrect, but I don't think sequence similarity to human proteins is an especially useful criterion.(This is also a limitation of Table 1, in my view). The hope is that we can avoid toxicity by targeting parasite proteins that are dissimilar to human proteins; however, overall sequence similarities tell us very little about whether a parasite protein will have any binding pockets (each of which represents a small part of the total amino acid sequence) that, in three dimensions, closely resemble any binding pockets of human proteins. 
Figure 3 shows gene expression data at the level of transcripts and proteins; I don't think this information really applies to drug discovery.(For example, I don't think anyone should say of a particular target, "Well, this isn't highly expressed; maybe it isn't a good/essential target after all.".If I recall correctly, some excellent targets such as DHFR and PfATP4 are not expressed that highly) Figure 4 shows that some functional classes of proteins have a higher percentage of essential 1 Figure 4 shows that some functional classes of proteins have a higher percentage of essential proteins than others -but I don't think this helps us choose possible drug targets either.Even the right-most categories have plenty of essential genes, which is why, for example, there is interest in targeting fatty acid metabolism, the second-lowest category in terms of percent essentiality (see, for example, Shears et al. ).Likewise, the unimpressive-looking "transport" category (~52% essential) includes PfATP4, a red-hot target of current Plasmodium research (see Wells et al. ). Drug discovery researchers do not usually think in terms of the big broad categories shown in Figure 4, so knowing percent essentiality by category won't help them much with target selection. The above observations lead me to the overall recommendation to revise the paper in one of two ways.Option 1 is to emphasize the drug-discovery stuff less and the basic biology more.Option 2 is to enhance the drug-discovery theme by addressing my concerns about the figures (i.e.explaining why they are more relevant to drug discovery than I'm giving them credit for) and/ or adding analyses that have clearer, stronger relevance to drug discovery.The paper does not currently try to combine the essentiality data with genome-wide predictions of "druggability" (which are hard!), but perhaps a collaborator could be enlisted to help with that.In general, most proteins (including most essential proteins) are not that druggable, so essentiality information in the absence of druggability information does not get us that far down the drug-discovery road. Specific Figure 1B: The legend says that green dots represent "non-conserved" proteins.I think that only conserved proteins are shown in this panel, and the green dots are proteins that are neither essential in both species nor nonessential in both species.Please check.Figure 3: For 3A and 3B, the transcriptome data (relative abundance) don't seem to correlate that closely with the proteome data (detectable or not).For example, essential gene expression in the sexual stages looks low at the level of RNA in 3A but average-to-high at the protein level in 3B.Are such discrepancies surprising/interesting? Discuss in the Discussion!Also, briefly define dN and dS (nonsynonymous and synonymous substitutions; 3C) somewhere in the paper.Also, to improve clarity, consider using one color for the bars corresponding to the asexual stages and another color for the bars corresponding to the sexual stages. Figure 1 . Figure 1.Conservation of essentiality betweenPlasmodium berghei and Toxoplasma gondii.(A) P. berghei genes with an ortholog in T. gondii were more likely to be essential, compared to P. berghei genes without an ortholog in T. gondii (Fisher test p = 7e-18).P. berghei genes with an essential ortholog in T. gondii were significantly more likely to be essential compared to P. berghei genes with a non-essential ortholog in T. 
gondii (Fisher test p = 6e-59).(B) There was a significant correlation in relative fitness values of P. berghei and T. gondii (Spearman correlation coefficient 0.47; p = 3e-89; n =1617).Genes classified as essential in both species are colored red.Genes classified as non-essential in both species are colored blue.Genes that are essential in only one of the species are colored green. Figure 2 . Figure 2. Selection of potential drug targets in Plasmodium. Figure 3 . Figure 3. Properties of essential Plasmodium berghei genes.(A)Essential P. berghei genes showed higher mRNA expression levels in asexual stages, but lower mRNA expression levels in sexual stages.The mean FPKM values for the essential and non-essential genes were calculated for different development stages and their log 2 ratio was taken.All stages except 'ookinete 24h' showed a statistically significant difference between essential and non-essential genes (t-test; p < 0.05).The RNA-seq data was taken from Otto et al.13 .(B) Proteins encoded by essential genes were more likely to be detected by mass-spectrometry in different stages compared to non-essential genes.All stages except 'sporozoites' showed a significant difference between essential and non-essential genes (Chi-square test; p < 0.05).Overall 47% of the tested genes were essential.The proteomics data was obtained from Hall et al.14 (C) Essential genes showed a lower evolutionary rate and higher conservation across apicomplexan species.The mean dN and dN/dS values for essential and non-essential genes was calculated and their log 2 ratio was taken.This data was taken from Hall et al.14 .The mean number of apicomplexan species (out of six), in which an ortholog was identified, was calculated for essential and non-essential genes and their log 2 ratio was taken.dN and conservation in apicomplexan species showed a statistically significant difference between essential and non-essential genes (t-test; p < 0.05), but not dN/dS. Figure 4 . Figure 4. Prevalence of essential genes in different functional classes.The Gene Ontology information for Plasmodium falciparum genes was obtained from PlasmoDB 10 and assigned to their P. berghei orthologs.Classes with a significant difference (Chi-square test; p < 0.05) in essential genes are marked with *. of Cell Biology, University of Geneva, Geneva, Switzerland Figure 2 : 6 I 1 1I Figure 2: I share the confusion with Gregory Crowther with respect to the math here.The text at the bottom of page 3 clearly suggests that 245 = 30+83+151, which of course cannot be.This needs to be fixed/clarified. Figure 2 : Figure 2: Aside from my above-mentioned concern about homology to human proteins, it might make sense to show the arrows as follows: 710 => 289 => 245 => 151 => 83 => 30, thus showing the winnowing of the targets with additional criteria.In its current form, the figure initially led me to think, incorrectly, that the 245 genes could be split into subgroups of 30, 83, and 151. Figure 4 : Figure4: Others must have done analyses like this for other (non-apicomplexan) species, e.g., of bacteria.Please compare the Figure4data to previous work in the Discussion.Also, why did the "cytoplasm" category come out as statistically significant?Are there a huge number of genes in that category?
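As a concrete illustration of the conservation statistics reported in this article, the snippet below runs Fisher's exact test on a 2x2 essentiality table. The counts are a reconstruction chosen only to be consistent with the reported numbers (710 genes essential in both species, 1617 orthologs, 71% vs. 17% essentiality); the paper's exact contingency table is not given in the text, so the table, and hence the printed p-value, should be read as illustrative.

```python
# Fisher's exact test of conserved essentiality; counts are a reconstruction.
from scipy.stats import fisher_exact

# Rows: T. gondii ortholog essential / non-essential.
# Columns: P. berghei knockout essential / non-essential.
table = [[710, 290],    # 710/1000 = 71% essential
         [105, 512]]    # 105/617  = 17% essential

odds_ratio, p_value = fisher_exact(table)
print(f"odds ratio = {odds_ratio:.1f}, p = {p_value:.1e}")
```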
5,576.2
2017-01-09T00:00:00.000
[ "Biology", "Medicine" ]
Numerical Solution of Fractional Integro-Differential Equations by Least Squares Method and Shifted Chebyshev Polynomial Many problems can be modeled by fractional Integrodifferential equations from various sciences and engineering applications. Furthermore most problems cannot be solved analytically, and hence finding good approximate solutions, using numerical methods, will be very helpful. Recently, several numerical methods to solve fractional differential equations (FDEs) and fractional Integrodifferential equations (FIDEs) have been given. The authors in [1, 2] applied collocation method for solving the following: nonlinear fractional Langevin equation involving two fractional orders in different intervals and fractional Fredholm Integro-differential equations. Chebyshev polynomials method is introduced in [3–5] for solving multiterm fractional orders differential equations and nonlinear Volterra and Fredholm Integro-differential equations of fractional order.The authors in [6] applied variational iterationmethod for solving fractional Integro-differential equations with the nonlocal boundary conditions. Adomian decomposition method is introduced in [7, 8] for solving fractional diffusion equation and fractional Integro-differential equations. References [9, 10] used homotopy perturbation method for solving nonlinear Fredholm Integro-differential equations of fractional order and system of linear Fredholm fractional Integro-differential equations. Taylor series method is introduced in [11] for solving linear integrofractional differential equations of Volterra type. The authors in [12, 13] give an application of nonlinear fractional differential equations and their approximations and existence and uniqueness theorem for fractional differential equations with integral boundary conditions. In this paper least squares method with aid of shifted Chebyshev polynomial is applied to solving fractional Integro-differential equations. Least squaresmethodhas been studied in [14–18]. In this paper, we are concerned with the numerical solution of the following linear fractional Integro-differential equation: Introduction Many problems can be modeled by fractional Integrodifferential equations from various sciences and engineering applications.Furthermore most problems cannot be solved analytically, and hence finding good approximate solutions, using numerical methods, will be very helpful. 
Recently, several numerical methods to solve fractional differential equations (FDEs) and fractional Integrodifferential equations (FIDEs) have been given.The authors in [1,2] applied collocation method for solving the following: nonlinear fractional Langevin equation involving two fractional orders in different intervals and fractional Fredholm Integro-differential equations.Chebyshev polynomials method is introduced in [3][4][5] for solving multiterm fractional orders differential equations and nonlinear Volterra and Fredholm Integro-differential equations of fractional order.The authors in [6] applied variational iteration method for solving fractional Integro-differential equations with the nonlocal boundary conditions.Adomian decomposition method is introduced in [7,8] for solving fractional diffusion equation and fractional Integro-differential equations.References [9,10] used homotopy perturbation method for solving nonlinear Fredholm Integro-differential equations of fractional order and system of linear Fredholm fractional Integro-differential equations.Taylor series method is introduced in [11] for solving linear integrofractional differential equations of Volterra type.The authors in [12,13] give an application of nonlinear fractional differential equations and their approximations and existence and uniqueness theorem for fractional differential equations with integral boundary conditions. In this paper least squares method with aid of shifted Chebyshev polynomial is applied to solving fractional Integro-differential equations.Least squares method has been studied in [14][15][16][17][18]. In this paper, we are concerned with the numerical solution of the following linear fractional Integro-differential equation: with the following supplementary conditions: where () indicates the th Caputo fractional derivative of (); (), (, ) are given functions, and are real variables varying in the interval [0, 1], and () is the unknown function to be determined. Basic Definitions of Fractional Derivatives In this section some basic definitions and properties of fractional calculus theory which are necessary for the formulation of the problem are given. Solution of Linear Fractional Integro-Differential Equation In this section the least squares method with aid of shifted Chebyshev polynomial is applied to study the numerical solution of the fractional Integro-differential (1).This method is based on approximating the unknown function () as where * () is shifted Chebyshev polynomial of the first kind which is defined in terms of the Chebyshev polynomial () by the following relation [23]: and the following recurrence formulae: with initial conditions , = 0, 1, 2, . .., are constants.Substituting ( 7) into (1) we obtain Hence the residual equation is defined as Let where () is the positive weight function defined on the interval [0, 1].In this work we take () = 1 for simplicity.Thus ( 0 , 1 , . . ., ) So, finding the values of , = 0, 1, . . ., , which minimize is equivalent to finding the best approximation for the solution of the fractional Integro-differential equation ( 1).The minimum value of is obtained by setting = 0, = 0, 1, . . ., . Applying ( 15) to ( 14) we obtain By evaluating the above equation for = 0, 1, . . ., we can obtain a system of ( + 1) linear equations with ( + 1) unknown coefficients 's.This system can be formed by using matrices form as follows: ) ) ) , where By solving the above system we obtain the values of the unknown coefficients and the approximate solution of (1). 
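To make the recipe above concrete, here is a small self-contained sketch in Python rather than Maple. It uses an integer-order derivative as a stand-in for the Caputo derivative (evaluating D^α of shifted Chebyshev polynomials would lengthen the example considerably), and the test equation, its kernel, and its exact solution y(x) = x^2 are our own choices; minimizing the residual on a dense grid is the discrete counterpart of setting ∂J/∂c_i = 0.

```python
# Least squares with shifted Chebyshev polynomials for a toy
# integro-differential equation (integer-order stand-in for Caputo).
import numpy as np
from numpy.polynomial import chebyshev as C

n = 5                                    # highest shifted-Chebyshev index
x = np.linspace(0.0, 1.0, 401)           # dense grid on [0, 1]
w = np.full_like(x, x[1] - x[0])         # trapezoid quadrature weights
w[[0, -1]] *= 0.5

def T(i, t):                             # shifted Chebyshev T*_i(t) = T_i(2t - 1)
    c = np.zeros(i + 1); c[i] = 1.0
    return C.chebval(2*t - 1, c)

def dT(i, t):                            # d/dt T*_i(t); chain-rule factor of 2
    c = np.zeros(i + 1); c[i] = 1.0
    return 2.0 * C.chebval(2*t - 1, C.chebder(c))

# Test problem: y'(x) + y(x) - int_0^1 x*t*y(t) dt = f(x), exact y(x) = x^2,
# so f(x) = 2x + x^2 - x/4.
f = 2*x + x**2 - x/4

# Column i holds the residual operator applied to T*_i on the grid;
# the separable kernel x*t makes the integral a scalar times x.
A = np.column_stack([
    dT(i, x) + T(i, x) - x * np.sum(w * x * T(i, x))
    for i in range(n + 1)
])
coef, *_ = np.linalg.lstsq(A, f, rcond=None)     # minimizes the residual norm
y = sum(c * T(i, x) for i, c in enumerate(coef))
print(np.max(np.abs(y - x**2)))   # ~1e-5: exact up to the quadrature error
```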
Numerical Examples

In this section, some numerical examples of linear fractional integro-differential equations are presented to illustrate the above results. All results are obtained using Maple 15.

Example 1. Applying the least squares method with the aid of shifted Chebyshev polynomials of the first kind T*_i(x), i = 0, 1, ..., N, at N = 5, to the fractional integro-differential equation (19), we obtain a system of 6 linear equations in the 6 unknown coefficients c_i, i = 0, 1, ..., 5. This system can be transformed into a matrix equation; solving it (the matrix inverse is given in Figure 1) yields the values of the coefficients. Substituting these values into (7), we obtain an approximate solution that coincides with the exact solution; the results are shown in Figure 2.

Example 2. As in Example 1, applying the least squares method with the aid of shifted Chebyshev polynomials of the first kind T*_i(x), i = 0, 1, ..., N, at N = 5, to the fractional integro-differential equation (20), the numerical results are shown in Figures 3 and 4, and the approximate solution again coincides with the exact solution.

Example 3. The equation (22), subject to y(0) = y′(0) = 0, has the exact solution y(x) = x². As in Examples 1 and 2, applying the least squares method with the aid of shifted Chebyshev polynomials of the first kind T*_i(x), i = 0, 1, ..., N, at N = 5, to the fractional integro-differential equation (22), the numerical results are shown in Figures 5 and 6, and the approximate solution coincides with the exact solution.

Conclusion

In this paper we studied the numerical solution of three examples using the least squares method with the aid of shifted Chebyshev polynomials, which yields a good approximation. The method is shown to be effective and to have a high convergence rate.

Figure 1: The matrix inverse of Example 1.
Developments of the theory of the effective prepotential from extended Seiberg-Witten systems and matrix models

This is a semi-pedagogical review of medium size on the exact determination of, and the role played by, the low energy effective prepotential F in QFT with (broken) extended supersymmetry, which began with the work of Seiberg and Witten in 1994. While paying attention to an overall view of this subject, which has lasted over two decades, we probe several corners marked in the three major stages of the developments, emphasizing uses of the deformation theory on the attendant Riemann surface as well as its close relation to matrix models. The examples picked here in different contexts tell us that the effective prepotential is to be identified as the suitably defined free energy F of a matrix model: F = F. To be submitted to PTEP as an invited review article and based in part on the talk delivered by one of the authors (H.I.) at the workshop held at Shizuoka University, Shizuoka, Japan, on December 5, 2014.

Introduction

The notion of effective action plays a vital role in the modern treatment of quantum field theory. (See, for instance, [1,2].) In this review article, we deal with a special class of low energy effective actions that are controlled by (broken) extended rigid supersymmetry in four spacetime dimensions and permit exact determination exploiting integrals on the Riemann surface in question. A main object in such a study is the low energy effective prepotential, to be denoted generically by F in this paper, which has proven to be central not only in the original case of unbroken N = 2 supersymmetry initiated by the work of Seiberg-Witten [3,4] but also in the case where this symmetry is broken by the vacuum or by the superpotential.

The review will be presented basically in chronological order, following the three major stages of the developments that took place during the periods 1994~, 2002~ and 2009~. Each of the three subsequent sections will explain pieces of work done in its respective period. An emphasis will be put on the deformation theory of the effective prepotential on the Riemann surface as an extension of the Seiberg-Witten system, consisting of the curve, the meromorphic differential and the periods, as well as on its close relation to matrix models. We conclude from the examples taken here in different contexts that the effective prepotential is in fact identified as the suitably defined free energy F of a matrix model: F = F. While this is hardly a surprising conclusion from the point of view of the mathematics of integrable systems and soliton hierarchies, the number of examples in QFT where this is explicitly materialized is not large. This note may serve to improve the situation.

In the next section, after presenting the curve for N = 2, SU(N) pure super Yang-Mills theory as a spectral curve of the periodic Toda chain, we discuss the deformation of the effective prepotential obtained by placing higher order poles on the original meromorphic differential. We give a derivation of the formula which the meromorphic differential extended this way obeys. In section three, we discuss the degeneration phenomenon of the Riemann surface necessary to describe the N = 1 vacua that lie in the confining phase and introduce the prepotential having gluino condensates as variables. We apply the formalism of section 2 here, and describe the situation by the use of mixed second derivatives.
After discussing the emergence of the matrix model curve and giving a sample calculation, we finish the section with the case of spontaneously broken N = 2 supersymmetry, in order to illustrate the role played by the two distinct singlet operators, one of which is the QFT counterpart of the matrix model resolvent. In section four, we go back to the situation of N = 2 and discuss the developments associated with the AGT relation and the upgraded treatment of the all-genus instanton partition function, and therefore the deformation of the Seiberg-Witten curve to its noncommutative counterpart. A finite-N, β-deformed matrix model with specified filling fractions emerges as an integral representation of the conformal/W block, and we discuss the direct evaluation of its q-expansion as the Selberg integral. We finish the section by mentioning some of the more recent developments. Please note that the model or theory hops from one to the other as the sections proceed and that each section has its open ending, indicating calls for further developments of this long-lasting subject.

curves, periods and meromorphic differentials

The list of papers which discuss subjects closely related to that of this subsection includes [3-10, 14-25, 31-57]. Let us recall the most typical situation and consider the low energy effective action (LEEA) for N = 2, SU(N) pure super Yang-Mills theory. The symmetry of the LEEA at scales much smaller than that of the W boson mass is U(1)^(N−1). The relevant curve is a hyperelliptic Riemann surface of genus N − 1 described as

  y² = P_N(x)² − 4Λ^(2N).

Here, P_N(x) = x^N + Σ_(k=2)^N s_k(h_ℓ) x^(N−k), and the s_k(h_ℓ) are the appropriate Schur polynomials in the Coulomb moduli h_ℓ. Introducing the spectral parameter z, we write the curve as that of the periodic Toda chain:

  z + Λ^(2N)/z = P_N(x).

The distinguished meromorphic differential for the construction of the effective prepotential is given by

  dS_SW = x dz/z.

The characteristic feature of this differential is the existence of double poles at ∞±. Later in this section, we interpret this to be the case where only T_1 has been turned on. The defining property is that the moduli derivatives are holomorphic:

  ∂dS_SW/∂h_k = (holomorphic differential).

The prepotential F_SW is introduced implicitly by the A cycle and B cycle integrations on the Riemann surface:

  a_i = ∮_(A_i) dS_SW,  ∂F_SW/∂a_i = ∮_(B_i) dS_SW,

and is going to be coordinate independent. This is supported by the pieces of evidence we present here that the effective prepotential is identified as the free energy of a matrix model.
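The periodic Toda presentation can be made concrete numerically. In the sketch below, the Lax matrix L(z) of the periodic Toda chain is built with the spectral parameter placed on the corner entries (one common convention, and an assumption here), and one checks that det(xI − L(z)) depends on z only through z + 1/z; this is exactly what casts the spectral curve into the form z + Λ^(2N)/z = P_N(x).

```python
import numpy as np

# Numerical check: for the periodic Toda Lax matrix L(z), det(x*I - L(z)) is
# linear in z and 1/z with equal coefficients -prod(a), so the z-dependence
# enters only through z + 1/z (hence the curve z + Lambda^(2N)/z = P_N(x)).
rng = np.random.default_rng(2)
N = 4
b = rng.normal(size=N)            # "momenta" on the diagonal
a = np.exp(rng.normal(size=N))    # positive off-diagonal Toda variables

def lax(z):
    L = np.diag(b)
    for i in range(N - 1):
        L[i, i + 1] = a[i]
        L[i + 1, i] = a[i]
    L[0, N - 1] = a[N - 1] * z    # corner placement of the spectral parameter
    L[N - 1, 0] = a[N - 1] / z    # (an assumed but common convention)
    return L

x = 0.7                           # sample point on the x-plane
zs = np.array([0.5, 1.0, 2.0, 3.0])
dets = np.array([np.linalg.det(x * np.eye(N) - lax(z)) for z in zs])

# Fit det = A + B*z + C/z; the full-cycle permutations are the only terms that
# carry z or 1/z, each contributing -prod(a).
Mfit = np.stack([np.ones_like(zs), zs, 1.0 / zs], axis=1)
A, B, C = np.linalg.lstsq(Mfit, dets, rcond=None)[0]
print("B, C, -prod(a):", B, C, -np.prod(a))
```

The constant term A recovered by the fit is P_N(x) evaluated at the sample point; scanning x would reconstruct the full polynomial.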
We would now like to review the deformation of the effective prepotential above, which we have denoted by F_SW. The basic idea of this extended theory of the effective prepotential, often referred to as the Whitham deformation, is to deform both the moduli of the Riemann surface and the meromorphic differential above consistently, without losing the defining properties. We have adopted the choice that z is kept fixed when the moduli derivatives are taken. We carry out the deformation by adding higher order poles to the original meromorphic differential containing the double poles. Let us denote the local coordinates in the neighborhood of the punctures generically by ξ. In order to describe the deformation, let us introduce a set of meromorphic differentials dΩ_ℓ that satisfy

  dΩ_ℓ = ξ^(−ℓ−1) dξ + (non-singular part),  ℓ = 1, 2, 3, ....  (2.11)

We are still left with the ambiguity that any linear combination of the canonical holomorphic differentials dω_i can be added to the right hand side. In order to remove this, let us require the set of conditions that the A cycle periods of the dΩ_ℓ vanish (eq. (2.12)). The differentials which are not subject to the conditions eq. (2.12) are denoted by dΩ̂_ℓ. Let us first state the formula and outline its derivation below.

As before, the a_i are defined to be the local coordinates in the moduli space, while the T_ℓ, referred to as time variables or T moduli, are given by residues at the punctures once eq. (2.13) is established. One then regards a_i and T_ℓ as independent, taking the h_k to be dependent variables. The derivation of eq. (2.13) begins with the introduction of the time variables T_ℓ via a solution dŜ(T_ℓ | h) to eq. (2.9), namely,

  ∂dŜ/∂T_ℓ = dΩ_ℓ,

and hence the extended differential. In terms of our intermediate bases dΩ̂_ℓ, eq. (2.9) can be rewritten accordingly.

connection with the planar free energy of matrix models

Already at this stage of the developments, a close connection of the extended Seiberg-Witten system with the construction of matrix models in general, or more specifically, the similarity of the effective prepotentials to the (planar) free energy of matrix models, was visible. In fact, starting from the homogeneity of the moduli and the prepotential, it is possible to derive an integral expression for F which resembles that of the matrix model planar free energy in terms of the density one-form on the eigenvalue coordinate. See eq. (4.12) of [24]; see also [14,16,18]. One of the goals of the present review is to put together several subsequent developments that have made this phenomenon more prominent. These are presented in the next two sections.

Gluino condensate prepotential

One major use of the deformation theory of the effective prepotential presented above took place in the context of the gluino condensate prepotential built on various N = 1 vacua, in contrast to F_SW and its extension in section 2. We first consider the case in which the breaking to N = 1 from N = 2 supersymmetry is caused by the superpotential in the action. Later we will contrast this with the case in which N = 2 is broken spontaneously to N = 1 at the tree level [70-74]. (Footnote 1: Actually, supersymmetry is broken dynamically in the metastable vacua in both cases, as was demonstrated in [75,76] in the Hartree-Fock approximation.)

degeneration phenomenon and mixed second derivatives

The list of papers which discuss subjects closely related to that of this subsection includes [30, ...]. Let us fix an action to work with: it is a U(N) gauge theory consisting of adjoint vector superfields and chiral superfields with canonical kinetic terms, and the superpotential turned on in the N = 2 action drives the system to its N = 1 vacua. As a phenomenon occurring on a Riemann surface, we consider the situation where a degeneration takes place and some of the cycles coalesce to form a new set of cycles. As for the description of the low energy effective action (LEEA), some of the original Coulomb moduli disappear and the product of these U(1)s gets replaced by the non-Abelian gauge symmetry ∏_(i=1)^n SU(N_i). We tabulate these pictures below.

The N = 1 vacua are labelled by the set of order parameters representing the gluino condensates:

  S_i ∝ ⟨tr_(SU(N_i)) W_α W^α⟩.

The proportionality constant will be fixed in subsequent subsections. We now review, following the observation made in [105], that the condition for a curve to degenerate or factorize is that the kernel of the matrix made of the mixed second derivatives of the deformed prepotential be nontrivial. Continuing with the general discussion of subsection 2.2, let us first note that we obtain two different expressions for the mixed second derivatives from eq. (2.16). We impose the condition eq. (3.3), which has the following straightforward implications: i) there exists a nonvanishing column vector (c_1, c_2, ..., c_(N−1))^T annihilated by this matrix; here, we have exploited eq. (2.17) in the second equality and eq. (2.10) in the third equality.
The former equality implies that dΩ ≡ Σ_ℓ c_ℓ dΩ_ℓ has vanishing periods over all A_i and B_i cycles. Then one can integrate this form along any path ending at a point z to define a function holomorphic except at the punctures. As for the order of the poles at the punctures, it is generically arbitrary according to the construction. But this contradicts the Weierstrass gap theorem (Footnote 2) derived from the Riemann-Roch theorem. To avoid a contradiction, we must have a degeneration. ii) There exists a nonvanishing row vector (c_1, c_2, ..., c_(N−1)) annihilating the matrix from the left, in accordance with the second formula of eq. (2.16). Eq. (3.7) follows from this, which is regarded as the statement of the vanishing discriminant. The moduli then actually depend on fewer than N − 1 arguments.

Once we are convinced of the degeneration of the surface, we can proceed further by factorizing the original curve, which, in the current example, is the hyperelliptic one. Let n − 1 be the genus after the degeneration. Following [88,89], we state the factorized form of the curve. Finally, let us examine the last equality of eq. (3.4).

(Footnote 2: The Weierstrass gap theorem states that, for a given Riemann surface M of genus g, a point P ∈ M, and g integers satisfying 1 = n_1 < n_2 < ... < n_g < 2g, there does NOT exist a function f holomorphic on M\{P} with a pole of order n_j at P.)

The differentials x^(j−1) dx/√(F_2n) serve as bases of the holomorphic differentials of the reduced Riemann surface. Actually, only the j = 1 ~ n − 1 differentials are holomorphic; the j = n one has been added through the blow-up process, which physically implies that the overall U(1) fails to decouple. We obtain the reduced curve

  y_m² = W′_(k+1)(x)² + f_(k−1)(x),

where f_(k−1) is a polynomial of degree k − 1. This is the curve appearing in the k-cut solution of the matrix model. We still need to see that W_(k+1)(x) introduced above is in fact a tree level superpotential. This is easily done by taking the classical limit Λ = 0 (eq. (3.14)); the original Seiberg-Witten differential then reduces accordingly. Here, we have used the form that the canonical holomorphic differentials take in this limit. The period integrals over the A_i cycles just pick up the residues at the poles p_i. The degeneration in this limit is described as follows: the N_j poles coalesce at β_j, j = 1, ..., n, and the canonical holomorphic differentials live on the degenerate curve. The condition eq. (3.11) then tells us that β_j must coincide with one of the roots α_j of W′_(k+1). The vevs of the adjoint scalar fields are thus constrained to the extrema of W_(k+1). Let us set k = n for simplicity. We then have the reduced curve of genus g = n − 1.

Let us now proceed to discuss the use of this machinery in calculation. As the condensates S_i are quantum mechanical in nature, one can develop a loop expansion using these, including the Veneziano-Yankielowicz term which contains the logarithmic singularity [77]. The first question to be raised is what distinguished meromorphic differential is to be used for such a calculation. It must be "almost" holomorphic after the b_ℓ derivatives are taken. Recall that the bases of the "holomorphic" differentials are taken as x^(j−1) dx/y, j = 1, ..., n − 1, n. Rather obviously, such a differential is found. As before, the effective prepotential is introduced through the period integrals over the A and B cycles. We have, however, no reason to set the contributions from the infinities equal to zero. This tells us of the presence of a cutoff at the infinities of the surface. The expansion of F in the S_i was done in [97], exploiting these relations.
Yet, there exists a simpler procedure, namely, a calculus from the T moduli, thanks to the machinery discussed in the present review. The T moduli are easily identified from the residues, and the dependence of the prepotential on the T moduli is determined by the corresponding Whitham-type equations. Here Λ_(ℓ+1) is the term introduced earlier. The differential dŜ_mat of eq. (3.24) has a straightforward expansion in S̃_i. Therefore, A_i cycle integrations followed by inversion provide an expansion of S̃_i in the S_j. Here, we have introduced α_ij = α_i − α_j and Δ_i = ∏_(j≠i) α_ij. Another useful piece of machinery is the set of T_m moduli derivatives of the roots α_i of the superpotential. Using these, the right hand side of eq. (3.29) is evaluated and is trivially integrated in u_m to provide an answer. Let us mention that this procedure is straightforwardly generalizable to higher order contributions in S_i and that the terms independent of α_i can easily be obtained by several other methods. The expansion form of F(S|α) which we proposed in [97] is given order by order (Footnote 3); we have denoted by F_(k+2)(S|α) the contributions of the order k + 2 polynomials in S_i, and the explicit answer for F_3(S|α) is given there. For the computation of higher orders as well as the inclusion of matter, see, for instance, [128, 131-133].

case of spontaneously broken N = 2 supersymmetry and Konishi anomaly equation

The list of papers which discuss subjects closely related to that of this subsection includes [70-74, 87, 134-174]. The N = 2 effective action is completely characterized by the effective prepotential, while in the N = 1 case a typical observable is (the matter induced part of) the effective superpotential. The interplay of these two upon the degeneration of the original Riemann surface is most clearly seen by dealing with the case of spontaneously broken N = 2 supersymmetry. This case accomplishes a continuous deformation from one to the other by tuning the electric and magnetic Fayet-Iliopoulos parameters. The action S^(FI)_(N=2) realizing this contains the electric and magnetic F-I terms ξ, e, m, and we vary these to interpolate between the two ends, large (ξ, e, m) and small (ξ, e, m), keeping g̃_ℓ = m g_ℓ (ℓ ≥ 2) fixed.

In this subsection, we have denoted by the symbol F^in an input function in the effective action eq. (3.39). For definiteness, we let the function F^in be a single trace function of the adjoint scalar superfield. The left-hand side of the anomaly equation is the contribution of the Konishi anomaly [80], which arises from the behavior of the functional integral measure under the relevant transformation [175,176]. Introducing the two generating functions, we recast this into the following set of equations [161], where f(z) and c(z) are polynomials of degree n − 1 (with some abuse of notation). The explicit form of f(z) and that of c(z) are not really needed in what follows.

Let us make a few comments on this set of equations. The equation for R(z) is identical in form to the planar loop equation of the one-matrix model for the resolvent. This fact is shared by the theory in the large FI term limit, namely, the N = 1 theory of adjoint vector superfields and chiral superfields with a general superpotential [145]. The equation for T(z), on the other hand, contains the cubic derivatives of F^in and is distinct from that in the large FI term limit. This, in fact, leads us to a deformation of the formula connecting the effective superpotential with the object identified as the matrix model free energy, relative to its well-known expression [90-92] in the N = 1 theory, namely, the one in the large FI term limit.
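Since the equation for R(z) has the same form as the planar loop equation of the one-matrix model, it may help to recall the simplest one-cut solution concretely. A minimal numerical sketch follows, with the Gaussian potential and normalization chosen purely for illustration (they are assumptions, not the models of the text): the planar resolvent w(z) = (z − √(z² − 4))/2 has discontinuity ρ(x) = √(4 − x²)/(2π) across its cut, the Wigner semicircle, which the eigenvalue histogram of a sampled Hermitian random matrix reproduces.

```python
import numpy as np

# One-cut check for the simplest case W(M) = M^2/2 at unit 't Hooft coupling:
# the planar resolvent w(z) = (z - sqrt(z^2 - 4)) / 2 has discontinuity
# rho(x) = sqrt(4 - x^2) / (2 pi)  on [-2, 2]  (Wigner semicircle).
rng = np.random.default_rng(0)
N = 2000
A = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
M = (A + A.conj().T) / (2 * np.sqrt(N))     # GUE normalized so support is [-2, 2]
eig = np.linalg.eigvalsh(M)

hist, edges = np.histogram(eig, bins=40, range=(-2, 2), density=True)
x = 0.5 * (edges[:-1] + edges[1:])
rho = np.sqrt(np.clip(4 - x**2, 0, None)) / (2 * np.pi)
print("max |histogram - semicircle| =", np.abs(hist - rho).max())
```

The deviation decreases as N grows, illustrating that the planar loop equation solution captures the leading large-N eigenvalue density.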
Our final goal in this subsection is to derive a formula for the effective superpotential. Let us define the one point functions v_ℓ, and in terms of the v_ℓ we define F. Using F, we can state the relation to be proven, eq. (3.49). Before proceeding to the proof of this relation, let us go back to eqs. (3.44) and (3.45) to obtain the complete information. We consider the most general case in which the gauge symmetry U(N) is broken, with only k of the N_I nonvanishing. The indices i, j, ... run from 1 to k, while the indices I, J, ... run from 1 to n. Of course, N_I = 0 for I = k + 1, ..., n. Solving eq. (3.44), we obtain a solution R(z) living on the Riemann surface Σ of genus n − 1, but with its A_I cycles for I = k + 1, ..., n vanishing. We conclude that the meromorphic function lives on a factorized curve

  y² = N_(n−k)(z)² F_(2k)(z),

where N_(n−k)(z) and F_(2k)(z) are polynomials of degree n − k and 2k respectively. On the other hand, substituting eq. (3.50) into eq. (3.45), we obtain eq. (3.53).

Let us list a few formulas that are obtained from eq. (3.50) directly. The first set is given in [117]: the derivatives ∂R(z)/∂S_i, i = 1, ..., n, form a set of normalized holomorphic functions, as is easily seen by taking the derivatives of the A cycle integrations. Also, define h(z) = − Σ_i N_i g_i(z). The second formula follows, where we have used eq. (3.46). The proof of eq. (3.49) goes by observing that it is equivalent to the truncation of a certain relation up to the first n + 1 terms in the 1/z expansion, and it is complete as soon as we obtain the required identity. Observe that there are two expressions for N_i, and therefore a consistency condition; comparing the integrand of eq. (3.60) with that of the corresponding expression against the basis differentials of the original curve, ℓ = 0, ..., n − 1, we deduce eq. (3.57).

AGT relation and 2d-4d connection via matrices

The contents of the two preceding sections later had the upgraded treatments mentioned in the introduction. In this section we outline these developments, triggered by the work [177]. Let us recall that the low energy effective action (LEEA) of the N = 2 SU(N_c) SUSY gauge theory is specified by the effective prepotential, denoted in this section by F^SW(a_i), and that it involves the undetermined VEVs called the Coulomb moduli a_i = ⟨φ_i⟩. The bare gauge coupling and the θ parameter are grouped into the complexified coupling τ, with q = e^(2πiτ), and F^SW(a_i) consists of the one-loop contribution and the instanton sum F^(SW)_inst. It was shown in [181] that F^(SW)_inst is microscopically calculable in the presence of the Ω background equipped with the deformation parameters ε_1 and ε_2, as the ε_1, ε_2 → 0 limit of (−ε_1 ε_2) log Z_inst. The corrections to the original F^(SW)_inst are regarded as higher orders in the genus expansion with g_s² = −ε_1 ε_2. The expansion in q is computable by the localization technique, with ε_1, ε_2 acting as Gaussian cutoffs; the coefficient of q^k in eq. (4.5) is the "volume" of the k-instanton moduli space. Let T^(N_c−1) be the maximal torus of the gauge group SU(N_c). Since we also have the maximal torus T² of SO(4), namely, the global symmetry of R⁴, the T = T² × T^(N_c−1) action can be defined on the instanton moduli space. The integral in eq. (4.5) is then computed T-equivariantly, and consequently we obtain regularized results. According to the localization formula, eq. (4.5) is reduced to a summation of the contributions from the fixed points, which are parametrized by N_c Young diagrams Y⃗ = (Y^(1), ..., Y^(N_c)), where |Y⃗| = Σ_(i=1)^(N_c) |Y^(i)| is the total number of boxes. Each Z_(Y⃗) is provided through a combinatorial method.

β-ensemble of quiver matrix model and noncommutative curve

The list of papers which discuss subjects closely related to that of this subsection includes [...].
In this subsection, we give a general discussion of β-deformed matrix models at finite N (the size of the matrices), with generic potentials, and of the attendant noncommutative curve. The curve at the planar level, to which the original S-W curve for the SU(N_c) gauge group with 2N_c flavours is relevant, turns out to come out in a relatively transparent way in that limit. Let us begin with the β-deformed (β-ensemble of the) one-matrix model:

  Z = ∫ ∏_(I=1)^N dλ_I Δ(λ)^(2β) exp( (√β/g_s) Σ_(I=1)^N W(λ_I) ),

where Δ(λ) = ∏_(I<J)(λ_I − λ_J) is the Vandermonde determinant. The Virasoro constraints [192-194, 197], namely the Schwinger-Dyson equations of this model for the resolvent, are obtained by inserting a total derivative Σ_I ∂/∂λ_I (...) into the integrand. Adopting the operator notation of conformal field theory, eq. (4.11) can therefore be written in terms of the energy-momentum tensor T(z). Quite separately, let us introduce the "curve" (x, z) = (y(z), z) by eq. (4.14). Two remarks are in order. First of all, in order for the first equality to be true, x and z must satisfy a noncommutative algebra, with commutator proportional to g_s. Second, in order for eq. (4.14) to be algebraic, the singularities in T(z) must be absent. This condition is ensured by the Schwinger-Dyson equation eq. (4.11).

Let us turn to the A_(N_c−1) quiver matrix model (β-deformed), to which the effective prepotential for the SU(N_c) gauge theory with 2N_c flavours is relevant. This matrix model has been constructed [203] such that it automatically obeys the W_(N_c) constraints at finite N_a, a = 1, ..., r. We follow the logic of the β-deformed one-matrix model at finite N_a. In this model, there exist N_c spin-1 currents J_i(z) that satisfy Σ_(i=1)^(N_c) J_i(z) = 0. The curve Σ: (x = y_i(z), z) that we postulate in [233] is constructed from these currents. The isomorphism with the Witten-Gaiotto curve has been established by taking the planar limit of this construction, as we will see in the next subsection. In fact, the planar limit implies the singlet factorization which assigns a c-number value to the operator ∂φ(z), and the curve factorizes.

Gaiotto curve

The list of papers which discuss subjects closely related to that of this subsection includes [177, ...]. Let us specialize our discussion to the three-Penner model, choosing the potential to be of the multi-log type with three logarithmic terms. The matrix integrals of this case realize the integral representation of the conformal block, and the size of each matrix corresponds to the number of screening charges we have to insert to build the block. As is clear from the discussion above, the planar spectral curve of the A_(N_c−1) quiver matrix model takes the form of an N_c-sheeted cover of the z-plane, with coefficients given by polynomials Q_k(z) in z. On the other hand, the Seiberg-Witten curve for the case of SU(N_c) gauge theory with 2N_c massive flavour multiplets, originally proposed in [236], can be converted into the Gaiotto form [238], in which the P^(k)_(2k)(t) are degree 2k polynomials in t. The two curves, eq. (4.26) and eq. (4.27), are evidently similar. We can also see that the residues of y_i(z)dz (i = 1, ..., N_c) at z = 1, q, 0, ∞ and those of x dt at t = 1, q_bare, 0, ∞ on the i-th sheet can be equated. For general N_c, these residues in fact match if the weights of the vertex operators are identified with the mass parameters of the gauge theory by the relations of [233], which involve the combinations (m_a − m_(a+1))Λ_a (eq. (4.28)). The matrix model potentials W_a(z) (a = 1, 2, ..., N_c − 1) are thereby fixed. With this choice of the multi-log potentials, the A_(N_c−1) quiver matrix model curve in the planar limit coincides with the SU(N_c) Seiberg-Witten curve with 2N_c massive hypermultiplets.
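A hands-on way to see the planar limit of a β-ensemble is to sample it directly. The following Metropolis sketch is a minimal illustration only (the quadratic potential, the βN normalization of the potential term, and all numerical parameters are assumptions, not the models used in the text); for this normalization the leading-order eigenvalue density is β-independent, reducing to the Wigner semicircle on [−2, 2], so the second moment ⟨λ²⟩ ≈ 1.

```python
import numpy as np

# Metropolis sampler for the beta-ensemble
#   p(l) ∝ prod_{I<J} |l_I - l_J|^(2 beta) * exp(-beta * N * sum_I V(l_I)),
# with V(x) = x^2/2 (assumed). At leading order in large N the density is
# beta-independent: the Wigner semicircle on [-2, 2], whose second moment is 1.
rng = np.random.default_rng(1)
beta, N, steps, step = 2.0, 60, 40000, 0.1

def log_w(l):
    d = np.abs(l[:, None] - l[None, :])[np.triu_indices(len(l), 1)]
    return 2 * beta * np.log(d).sum() - beta * len(l) * 0.5 * (l**2).sum()

l = rng.uniform(-1, 1, N)       # initial eigenvalue configuration
lw = log_w(l)
acc = 0
for _ in range(steps):          # single-eigenvalue random-walk updates
    i = rng.integers(N)
    prop = l.copy()
    prop[i] += step * rng.normal()
    lw_new = log_w(prop)
    if np.log(rng.random()) < lw_new - lw:
        l, lw, acc = prop, lw_new, acc + 1

print("acceptance rate       :", acc / steps)
print("support (approx [-2,2]):", l.min(), l.max())
print("<lambda^2> (exact 1)   :", (l**2).mean())
```

Changing beta leaves the sampled density essentially unchanged at this order, which is the "singlet factorization" statement in its crudest numerical form; the filling fractions discussed in the text correspond to constraining how many eigenvalues sit in each cut of a multi-cut potential.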
direct evaluation of the matrix integral as Selberg integral

The list of papers which discuss subjects closely related to that of this subsection includes [...]. In this subsection, we consider 2d conformal field theory which has the Virasoro symmetry with central charge c. The correlation functions of primary operators Φ_Δ(z, z̄) with conformal weight Δ are strongly constrained by this symmetry. We are interested in the four-point functions, which can be expressed as in eq. (4.31); the sum over I is taken over all possible internal states. Here K_Δ and C^(Δ_3)_(Δ_1 Δ_2) are the model-dependent factors. In contrast, the conformal block (Footnote 4), denoted by F(q | c; Δ_1, Δ_2, Δ_3, Δ_4, Δ_I), is a model-independent and purely representation-theoretic quantity.

Let us consider the four-point conformal block on the sphere. The parameter α_4 is determined by the momentum conservation condition, which comes from the zero-mode part, and the internal momentum α_I is fixed accordingly. Eq. (4.34) has an integral representation as a version of the β-deformed matrix model. Actually, the Dotsenko-Fateev multiple integrals are regarded as a free field representation of eq. (4.34). In order to develop its q-expansion, it is more convenient to interpret these multiple integrals as a perturbation of the product of two Selberg integrals. We have the following expression for the perturbed double-Selberg model, where S_(N_L) and S_(N_R) are the celebrated Selberg integral

  S_N(α, β, γ) = ∫_([0,1]^N) ∏_(I=1)^N t_I^(α−1) (1 − t_I)^(β−1) ∏_(I<J) |t_I − t_J|^(2γ) ∏_(I=1)^N dt_I
               = ∏_(j=0)^(N−1) [Γ(α + jγ) Γ(β + jγ) Γ(1 + (j+1)γ)] / [Γ(α + β + (N + j − 1)γ) Γ(1 + γ)],  (4.43)

and the averaging ⟨...⟩_(N_L, N_R) is taken with respect to the unperturbed Selberg matrix model (4.44). Below we also use ⟨...⟩_(N_L) and ⟨...⟩_(N_R), which imply averaging with respect to Z_Selberg(N_L) and Z_Selberg(N_R), respectively. The q-expansion then takes a form in which a pair of partitions (Y_1, Y_2) naturally appears. In order to apply this to eq. (4.39), let us set γ = b_E² for the "left" part; a similar replacement yields the expression for the "right" part. We obtain eq. (4.59). From the explicit form of the Jack polynomials for |λ| < 2 listed in eq. (4.55), we obtain the expansion coefficients [298].

For definiteness, let us consider the left part. Inserting a total derivative into the integrand, we obtain the loop equation at finite N. The expectation value of w_(N_L)(z) is the finite-N resolvent (4.69), and the first relation obtained from it agrees with eq. (4.60). Now, let us determine the 0d-4d dictionary. In the matrix model (0d side), we have seven parameters with one constraint. The first two formulas tell us clearly the necessity that the filling fractions of the β-deformed matrix model be explicitly specified at finite N in order to exhibit the Coulomb moduli. In the next order, the expansion coefficients A_2 are rearranged accordingly. We illustrate our discussion in this section by Fig. 3.
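The Selberg evaluation (4.43) can be checked directly in a small case. The sketch below compares two-dimensional quadrature against the closed form for N = 2 with illustrative parameters α = β = 2, γ = 1 (assumed values, chosen only so both sides equal 1/360 exactly).

```python
import numpy as np
from scipy.integrate import dblquad
from scipy.special import gamma

# Numerical check of the Selberg integral for N = 2, the building block of the
# Dotsenko-Fateev / double-Selberg representation. Parameters are illustrative.
a, b, g, n = 2.0, 2.0, 1.0, 2

def selberg_closed(a, b, g, n):
    val = 1.0
    for j in range(n):
        val *= (gamma(a + j * g) * gamma(b + j * g) * gamma(1 + (j + 1) * g)
                / (gamma(a + b + (n + j - 1) * g) * gamma(1 + g)))
    return val

# dblquad integrates func(t2, t1) over the unit square.
num, _ = dblquad(
    lambda t2, t1: (t1 * t2) ** (a - 1) * ((1 - t1) * (1 - t2)) ** (b - 1)
                   * abs(t1 - t2) ** (2 * g),
    0.0, 1.0, 0.0, 1.0)

print("quadrature  :", num)                      # both ≈ 1/360 ≈ 0.0027778
print("closed form :", selberg_closed(a, b, g, n))
```

For larger N the direct quadrature becomes impractical, which is precisely why the closed Selberg form, and its Jack-polynomial perturbations, are so useful in the q-expansion described above.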
more recent developments

The list of papers which discuss subjects closely related to that of this subsection includes [262, 301, ...]. We have reviewed the 2d-4d connection from the viewpoint of the matrix model. In this subsection, we comment on some of the more recent developments. In the last subsection, we presented the connection between the Virasoro conformal blocks and the four-dimensional SU(2) instanton partition functions via the matrix model and the Selberg integral. This discussion has been generalized in part to the connection between the W_N blocks and the SU(N) partition functions [376]. Both sides also have a natural generalization as a q-lift [364]: the Virasoro/W_N symmetry on the two-dimensional CFT side is deformed to the q-deformed Virasoro/W_N symmetry, while the four-dimensional SU(N) gauge theory is lifted to the five-dimensional theory. It is interesting to consider the root of unity limit q → e^(2πi/r) of the q-Virasoro/W_N algebras. The appropriate limiting procedure [386,391] at the root of unity exhibits the connection between the super-Virasoro (r = 2) or Z_r-parafermionic CFT and the gauge theory on R⁴/Z_r [368,370]. There are several pieces of work [301,363,379,384] which prove the 2d-4d connection; the explicit identification can be established in the case of β = 1 [366,367]. In order to treat the β ≠ 1 case, the conformal blocks have to be expanded in the generalized Jack polynomials [385] that modify the standard ones. For some lower-rank cases, these have been explicitly constructed [388].
Motion analysis of the magnetic spring pendulum

In order to analyze the motion characteristics of the spring pendulum under the action of a magnetic field force, the motion of the spring pendulum is studied with a uniform magnetic field applied in the vertical direction. Firstly, a first-order approximate solution is given by studying the micro-vibration around the equilibrium point, and an approximate solution similar to that of the Foucault pendulum is presented for the case of a soft spring with strong ductility. Then, according to the resonance conditions of mechanical vibration, an internal resonance phenomenon of the magnetic spring pendulum is discovered, and it is shown that the energy of the system is cyclically transferred among the three modes of breathing, oscillation and deflection. Finally, the influence of the magnetic field strength on the motion stability of the spring pendulum is explored; not only is a bifurcation phenomenon found at the equilibrium point, but complex dynamic behavior, including chaotic motion, also occurs.

Introduction

As a model in which the vibration mode and the swing mode are inseparably combined, the spring pendulum is widely used in various engineering damping systems [1,2]. The earliest research on it is found in the article written by Vitt and Gorelik in 1933 [3], and since then many scholars have studied the model from many angles, such as approximate solutions, internal resonance, bifurcation, and chaos. Broucke and Baxa investigated the conditions for the generation of periodic solutions of the model and the stability of their equilibrium points through the periodic orbit method [4]. Aldoshin and Yakovlev discussed the chaotic modes of spring pendulum vibration, the conditions for their occurrence and possible evolution scenarios [5]. Awrejcewicz et al. investigated the behavior of the model under forced vibration by applying periodic external forces in both the radial and transverse directions to the spring pendulum, and used the multiple time scale method to obtain an approximate solution of the initial value problem as well as its amplitude-frequency response relationship [6]. Olsson explored the vertical motion of the spring pendulum: when the natural frequency of the spring and the swing frequency satisfy the resonance relation ω_s = 2ω_p, the two modes of motion undergo a strong coupling phenomenon; they excite each other, so that energy is transferred alternately between these two motions [7]. Amer et al. explored the nonlinear response of the spring pendulum when the suspension point moves elliptically in the vertical plane, and explained its dynamic behavior through timing diagrams and a phase plane analysis of the approximate solution [8]. Gonzalez-Buelga et al., from an experimental point of view, analyzed the motion of the spring pendulum under a periodic external force applied to the suspension point by the hybrid technique of real-time dynamic substructuring, and plotted the corresponding timing diagrams, phase trajectory diagrams and Poincaré section diagrams [9]. At the same time, it is noted that many scholars have recently carried out research on the magnetic pendulum model; for example, Boeck, Sanjari and Becker calculated the stability limit of the magnetic pendulum under strong and weak electromagnetic coupling by applying Floquet theory and the harmonic balance method, and also discovered chaotic behavior of finite amplitude [10].
Many other scholars have contributed as well. Kitio Kwuimy, Nataraj and Belhaq studied the effects of inclined harmonic excitation and parametric damping on the chaotic dynamics of an asymmetric pendulum system, and concluded that an increase in the inclination angle of the excitation raises the lower bound of the chaotic domain and produces a singularity at the vertical position of the excitation [11]. Mann discussed the behavior by which a magnetic pendulum oscillating about a stable equilibrium escapes over the adjacent barrier to a neighboring attractor, and expanded the existing quasi-steady-state escape criterion based on the influence of parametric excitation and subharmonic response behavior [12]. The initial-condition sensitivity phenomenon of the magnetic pendulum and its mechanism were studied by Qin et al., who clarified the evolution law of the domain of attraction of the fixed points as the magnet moves, and proved the phenomenon experimentally [13]. Pili explored the motion of a single pendulum with magnetic damping from both theoretical and experimental perspectives, thus providing a vivid demonstration experiment explaining Lenz's law [14].

This paper aims to further explore the vibration and oscillation laws of charged objects in a magnetic field, so the motion law of the magnetic spring pendulum is explored on the basis of the above two pendulum models. Two approximate solutions of its kinetic equations are obtained in Sect. 2, and a new internal resonance relationship is presented in Sect. 3. Finally, Sect. 4 explores the influence of the magnetic field strength on the stability of the system motion.

Approximate solution

As shown in Fig. 1, an insulating, positively charged ball with charge q and mass m in the vertical plane is connected to the origin of coordinates through a light spring with original length L_0 and stiffness coefficient k, and a uniform magnetic field B along the z direction is applied; i, j, k are unit vectors in the three coordinate directions x, y, z, respectively. Firstly, it can be seen from Fig. 1 that the velocity vector of the system is v = ẋ i + ẏ j + ż k, and its magnetic vector potential is set as A = −(By/2) i + (Bx/2) j; the generalized potential energy function [15] and the kinetic energy function of the system can be expressed as

  U = (k/2)(√(x² + y² + z²) − L_0)² − mgz + (Bq/2)(yẋ − xẏ),  (1)
  T = (m/2)(ẋ² + ẏ² + ż²).  (2)

In the generalized potential energy function represented in Eq. (1), (k/2)(√(x² + y² + z²) − L_0)² is the elastic potential energy term, −mgz is the gravitational potential energy term, and (Bq/2)(yẋ − xẏ) is the magnetic potential energy term obtained from −qA · v. The dynamic equations of the system are then given by Lagrangian mechanics:

  mẍ = −k(1 − L_0/√(x² + y² + z²))x + qBẏ,  (3)
  mÿ = −k(1 − L_0/√(x² + y² + z²))y − qBẋ,  (4)
  mz̈ = −k(1 − L_0/√(x² + y² + z²))z + mg.  (5)

For the convenience of the discussion below, let

  L = L_0 + mg/k,  ω_s = √(k/m),  ω_p = √(g/L),  ω_m = qB/(2m),
  Ω_1 = ω_p/ω_s,  Ω_2 = ω_m/ω_s,  X = x/L,  Y = y/L,  Z = z/L,  τ = ω_s t.  (6)

In the above formulas, L is the length of the vertically suspended spring pendulum when it is in the equilibrium position. ω_s represents the natural frequency of the spring oscillator, that is, the breathing frequency of the spring pendulum when it moves radially. ω_p is the oscillation frequency when the spring pendulum keeps the length L in the vertical plane. ω_m represents the deflection frequency of the spring pendulum under the influence of the magnetic field. The relative frequencies Ω_1 and Ω_2 represent the ratios of the strengths of ω_p and ω_m to ω_s, respectively, and they are the core parameters of the spring pendulum moving in the magnetic field. X, Y, Z and τ are the introduced dimensionless space variables and time variable, respectively. Therefore, Eqs. (3)-(5) can be written in dimensionless form as

  X″ = −(1 − (1 − Ω_1²)/ρ)X + 2Ω_2 Y′,  (7)
  Y″ = −(1 − (1 − Ω_1²)/ρ)Y − 2Ω_2 X′,  (8)
  Z″ = −(1 − (1 − Ω_1²)/ρ)Z + Ω_1²,  (9)

where ρ = √(X² + Y² + Z²) and primes denote derivatives with respect to τ.
Low-order expansion approximation

It can be seen from Eqs. (6), (9) that when the stationary pendulum is suspended vertically on the positive half axis of the Z-axis, X = Y = 0 and Z = 1 can be deduced. Therefore, the coordinate of the pendulum at this equilibrium point is (0, 0, 1). In order to explore the micro-vibration law of the pendulum near the equilibrium position, Z = 1 + Z̃ is substituted into Eqs. (7)-(9). The square-root term of the three equations after this replacement is Taylor expanded about the point (0, 0, 0), and the lowest-order terms in X, Y, Z̃ of each form are retained, giving the dynamic equations

  X″ + [Ω_1² + (1 − Ω_1²)Z̃]X − 2Ω_2 Y′ = 0,  (10)
  Y″ + [Ω_1² + (1 − Ω_1²)Z̃]Y + 2Ω_2 X′ = 0,  (11)
  Z̃″ + Z̃ + (1/2)(1 − Ω_1²)(X² + Y²) = 0.  (12)

When the pendulum undergoes micro-vibration in the region near the equilibrium point, X, Y and Z̃ are all small variables, so that X² and Y² in Eq. (12) are higher-order small quantities. Therefore, the (1/2)(1 − Ω_1²)(X² + Y²) term in this equation is ignored, and the following simple harmonic vibration equation is obtained:

  Z̃″ + Z̃ = 0.  (13)

If the initial conditions are set to Z̃(0) = H, Z̃′(0) = 0, the solution of the above equation can be expressed as

  Z̃ = H cos τ.  (14)

Then, the complex solution method [16] is used to solve Eqs. (10)-(11): after multiplying Eq. (11) by the imaginary unit I and adding it to Eq. (10), with the abbreviation U = X + IY the formulation can be presented as

  U″ + 2IΩ_2 U′ + [Ω_1² + (1 − Ω_1²)Z̃]U = 0.  (16)

Make the initial conditions U(0) = U_0, U′(0) = 0, substitute Eq. (14) into Eq. (16), solve the resulting differential equation, and then separate the real and imaginary parts to obtain approximate solutions for X and Y in terms of MathieuC and MathieuS, the Mathieu functions [17,18]; that is, the functions MathieuC(a, q, x) and MathieuS(a, q, x) are the two series solutions of the Mathieu differential equation y″ + (a − 2q cos(2x))y = 0 [19]. The Mathieu differential equation, a well-known equation in the field of mathematical physics, represents a wide range of engineering dynamic systems [20] and plays an important enlightening role in solving engineering problems. As the parameters in the equation change, the periodic solutions of the systems it describes may lose stability or undergo local bifurcation, global bifurcation, chaos, etc., so the study and control of the complex behavior of such systems is not only of great theoretical significance but also of important practical value in engineering [21]. The approximate solution of the magnetic spring pendulum obtained above is compared below with the numerical solution. From Fig. 2a-b, it can be found that the displacements of the magnetic spring pendulum in the X and Y directions exhibit multiple unequal peaks, and that a phase difference of roughly π/2 is maintained between the motions in these two directions. At the same time, in Fig. 2c-d, it is found that the approximate solution matches the numerical solution well, thus illustrating the correctness of the above exploration.
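Since the approximate horizontal solution is expressed through Mathieu functions, any such closed form can be cross-checked numerically. A minimal Python sketch follows, with an assumed order m and parameter q rather than the values arising from Eq. (16): it integrates y″ + (a − 2q cos 2x)y = 0 directly and compares against SciPy's even Mathieu function (note that SciPy takes x in degrees).

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.special import mathieu_a, mathieu_cem

# Mathieu's equation y'' + (a - 2 q cos 2x) y = 0 underlies the horizontal
# approximate solution (MathieuC/MathieuS in Maple). Illustrative values:
m_ord, q = 2, 1.5
a = mathieu_a(m_ord, q)            # characteristic value a_m(q)

def rhs(x, y):
    return [y[1], -(a - 2.0 * q * np.cos(2.0 * x)) * y[0]]

# ce_m is even, so ce_m'(0) = 0; start from its value at x = 0.
y0 = mathieu_cem(m_ord, q, 0.0)[0]
sol = solve_ivp(rhs, (0.0, np.pi / 2), [y0, 0.0], rtol=1e-10, atol=1e-12)

print("integrated  :", sol.y[0, -1])
print("mathieu_cem :", mathieu_cem(m_ord, q, 90.0)[0])   # x given in degrees
```

The same change of variables that removes the rotating phase from Eq. (16), U = e^(−IΩ_2 τ)V with τ = 2x, brings it to this standard Mathieu form, which is why the comparison above is a faithful proxy for the paper's approximate solution.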
Approximation in the case of a soft spring with strong ductility

For a soft spring with strong ductility, the elongation of the spring is much greater than the original length L_0 of the spring; that is, under the condition mg/k ≫ L_0, Ω_1² = mg/(kL_0 + mg) → 1 is obtained by analyzing the definitions of Ω_1, ω_s, ω_p and L in Eq. (6). Therefore, Eqs. (7)-(9) and (16) can be rewritten as

  X″ + X − 2Ω_2 Y′ = 0,  (19)
  Y″ + Y + 2Ω_2 X′ = 0,  (20)
  Z″ + Z = 1,  (21)
  U″ + 2IΩ_2 U′ + U = 0.  (22)

Comparing Eqs. (19)-(20) with the dynamic equations of the Foucault pendulum in the X and Y directions in Ref. [22], it can be seen that the dynamic equations of the two models have the same form, which shows that the two models obey the same law of motion at this level of approximation. Let the initial conditions in Eq. (21) be the same as in Eq. (14); solving the equation gives the kinematic equation

  Z = 1 + H cos τ.  (23)

For Eq. (22), the characteristic root method is used: the trial solution is set as U = e^(λτ). Substituting this into Eq. (22), extracting the coefficient of e^(λτ) and setting it to zero gives

  λ² + 2IΩ_2 λ + 1 = 0,  (24)

whose roots are λ_(1,2) = −IΩ_2 ± I√(1 + Ω_2²) (25). According to the above formula, the general solution of the differential equation (22) is the combination of two linearly independent solutions,

  U = M e^(λ_1 τ) + N e^(λ_2 τ),  (26)

where M and N are constants determined by the initial conditions. Meanwhile, since Eq. (22) is a complex equation, the constants M and N should be complex; thus Eq. (26) can be rewritten in modulus-phase form. Substituting Eq. (27) into it and separating the real and imaginary parts gives the kinematic equations in the X and Y directions. With the corresponding initial conditions, the two equations can be rewritten as a vector equation

  ρ⃗ = ρ(τ) n,  n = sin(Ω_2 τ) e_X + cos(Ω_2 τ) e_Y,

where n represents the unit vector rotating with angular velocity Ω_2 in the X-O-Y plane. A schematic diagram of the rotation of ρ⃗ is then drawn: in Fig. 3, the projected motion of the spring pendulum on the X-O-Y plane is consistent with the vector ρ⃗. According to the definitions and analysis of Ω_2, ω_m and ω_s in Eq. (6), Ω_2 = (Bq/2m)√(m/k), which is equivalent to the component ω_z of the Earth's rotational angular velocity along the Z-axis for the Foucault pendulum in Ref. [23]. Thus, the motion of a spring with strong ductility and weak elasticity placed in a uniform magnetic field is similar to that of a Foucault pendulum, and its swing plane deflects at the constant angular velocity Ω_2.

At the same time, by a suitable choice of initial conditions, the projected trajectory can be simplified into the standard rose curve equation. According to the properties of the rose curve [24], when n is odd, the number of leaves of the rose curve is n, and its closing period is π; when n is even, the number of leaves is 2n, and the closing period becomes 2π; when n is irrational, the graph never closes. Therefore, different motion trajectories of the spring pendulum can be produced by changing the strength of the magnetic field. Finally, the motion trajectory diagrams of the magnetic spring pendulum are drawn according to Eqs. (23), (30)-(34), as shown in Fig. 4. By observing Fig. 4a-b, it can be seen that when n is equal to 5 and 6, respectively, the projected trajectories of the spring pendulum on the X-O-Y plane, obtained by changing the magnetic field, are rose curves with 5 and 12 leaves, respectively. In Fig. 4c, n = e, and the trajectory is an unclosed many-leaved rose curve. All these phenomena are consistent with the above analysis, which shows its correctness. In addition, it can be seen from Fig. 4d that the motion of the spring pendulum in three-dimensional space is the superposition of a horizontal motion with a three-leaf rose curve as its trajectory and simple harmonic motion in the vertical direction, and it can be seen from Eq. (23) that the trajectory period is 2π.
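The rose-curve projections can be visualized directly from the rotating-frame picture: the projection is r = sin(nθ) with n the ratio of the in-plane swing frequency to the deflection rate Ω_2. The following matplotlib sketch uses illustrative frequency values (assumptions, not the paper's parameters) and reproduces the leaf counts quoted above: 5 leaves for n = 5, 12 leaves for n = 6, and a never-closing curve for n = e.

```python
import numpy as np
import matplotlib.pyplot as plt

# In the rotating frame the bob swings along a line at rate W1 while the swing
# plane deflects at rate W2; in the lab frame the projection traces the rose
# r = sin(n * theta) with n = W1 / W2. W2 below is an assumed sample value.
fig, axes = plt.subplots(1, 3, figsize=(12, 4), subplot_kw={"aspect": "equal"})
for ax, n in zip(axes, (5, 6, np.e)):
    W2 = 0.2                      # deflection rate (assumed)
    W1 = n * W2                   # swing frequency giving an n-parameter rose
    tau = np.linspace(0.0, 400.0, 20000)
    r = np.sin(W1 * tau)
    # Matches n = sin(W2 tau) e_X + cos(W2 tau) e_Y in the text.
    X, Y = r * np.sin(W2 * tau), r * np.cos(W2 * tau)
    ax.plot(X, Y, lw=0.5)
    ax.set_title(f"n = {n:.3g}")
plt.tight_layout()
plt.show()
```

Odd integer n closes after half a revolution with n leaves, even n closes after a full revolution with 2n leaves, and irrational n never closes, exactly as stated for Fig. 4.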
In addition, by a different choice of initial conditions, the projected trajectory can be recast into the standard cycloid equation. According to Ref. [25], when k is a rational number, the orbit is a closed graph, and the number of cusps of the curve is the numerator of the simplest fraction form of k; when k is irrational, the trajectory curve never closes. Finally, according to Eqs. (23), (35)-(38), the motion trajectory diagrams of the magnetic spring pendulum are drawn. In Fig. 5a-b, when k equals 5 and 10/3, respectively, the projected trajectories of the motion on the X-O-Y plane are closed hypocycloid patterns with 5 and 10 cusps, respectively. In Fig. 5c, when k = π, the projected trajectory is an unclosed hypocycloid graph. In addition, in Fig. 5d, the motion of the spring pendulum in three-dimensional space is the combination of multi-cusped hypocycloid motion in the horizontal direction and simple harmonic motion in the vertical direction.

Furthermore, if the initial conditions are chosen appropriately, the solution can be written with a rotation matrix as the first factor on the right-hand side; this rotation matrix causes the ellipse defined by Eqs. (41)-(42) to rotate clockwise around the coordinate origin with angular velocity Ω_2. That is, the projection of the spring pendulum on the X-O-Y plane participates in both an elliptical motion and a uniform circular motion, and its trajectory is the superposition of these two motions. For Eqs. (39)-(40) and (23), the motion trajectory diagrams of the spring pendulum are drawn, as shown in Fig. 6. From Fig. 6a-c, it can be observed that the projected trajectory of the pendulum on the X-O-Y plane is an ellipse that rotates clockwise around the coordinate origin, which illustrates the correctness of the above discussion. In Fig. 6d, it can be seen that the motion of the magnetic spring pendulum in three-dimensional space is a superposition of the rotating elliptical motion in the horizontal direction and simple harmonic vibration in the vertical direction.

New internal resonance relation

Equations (10)-(16) are analyzed again in order to explore the internal resonance phenomenon of the magnetic spring pendulum. Firstly, for the dynamic equation (16) of the system in the horizontal direction, when the sum of the terms on the left side involving Z̃ is set to 0 and the initial conditions are again U(0) = U_0, U′(0) = 0, the solution can be described as

  U = U_0 e^(−IΩ_2 τ)[cos(Ω_3 τ) + (IΩ_2/Ω_3) sin(Ω_3 τ)],  (45)

where Ω_3² = Ω_1² + Ω_2²; Euler's formula is then used to expand this expression into exponentials. It can be seen from Eq. (16) that the two natural frequencies of the vibration in the U direction are Ω_3 − Ω_2 and Ω_3 + Ω_2, and that the term containing Z̃ acts as a driving force. Substituting Eqs. (45) and (14) into this driving term of Eq. (16), it can be found that it contains the frequencies Ω_2 − Ω_3 + 1 and 1 − Ω_2 − Ω_3, so that resonance in the U direction occurs when a driving frequency coincides with a natural frequency, i.e. when Ω_3 = 1/2.

For the vertical direction, the left side of Eq. (12) shows that the natural frequency of the Z direction is 1, while the right side, (1/2)(Ω_1² − 1)|U|², acts as the driving force. Substituting Eq. (45) into the right side of this equation and simplifying, it is observed that the frequency of |U|² is 2Ω_3. When the frequency of the driving force is equal to the natural frequency, that is, when 2Ω_3 = 1, the vibration of the system in the U direction will cause resonance in the Z direction. The resonance condition is the same as that in the U direction, namely Ω_3 = 1/2. Therefore, when the system meets the condition

  Ω_3 = √(Ω_1² + Ω_2²) = 1/2,  (49)

a strong coupling occurs between the horizontal and vertical motion modes, resulting in mutual excitation between the motion modes in different directions, which is the internal resonance phenomenon unique to the vibrating pendulum system. This new internal resonance relation is completely different from the one in Ref. [7] and no longer needs to satisfy the relation ω_s = 2ω_p.
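The resonance condition can be probed numerically. Below is a minimal Python sketch using the dimensionless equations as reconstructed in Eqs. (7)-(9) above (the reconstruction itself is an assumption, so the sketch should be read as illustrative rather than as the paper's exact system); with Ω_1 = 0.4 and Ω_2 = 0.3 one has Ω_3 = 0.5, and the horizontal and vertical amplitude envelopes exchange energy slowly, as in Fig. 7.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Reconstructed dimensionless magnetic spring pendulum (an assumption based on
# the definitions in Eq. (6)):
#   X'' = -(1 - (1-W1^2)/rho) X + 2 W2 Y'
#   Y'' = -(1 - (1-W1^2)/rho) Y - 2 W2 X'
#   Z'' = -(1 - (1-W1^2)/rho) Z + W1^2,   rho = sqrt(X^2 + Y^2 + Z^2)
W1, W2 = 0.4, 0.3                  # chosen so W3 = sqrt(W1^2 + W2^2) = 1/2

def rhs(tau, s):
    X, Y, Z, Xd, Yd, Zd = s
    rho = np.sqrt(X * X + Y * Y + Z * Z)
    c = 1.0 - (1.0 - W1**2) / rho
    return [Xd, Yd, Zd, -c * X + 2 * W2 * Yd, -c * Y - 2 * W2 * Xd, -c * Z + W1**2]

s0 = [0.05, 0.0, 1.1, 0.0, 0.0, 0.0]        # small horizontal kick + stretch
sol = solve_ivp(rhs, (0, 400), s0, max_step=0.05, dense_output=True)

tau = np.linspace(0, 400, 4000)
X, Y, Z = sol.sol(tau)[:3]
half = len(tau) // 2
print("max horizontal amplitude, first/second half:",
      np.hypot(X, Y)[:half].max(), np.hypot(X, Y)[half:].max())
print("max |Z - 1|, first/second half:",
      np.abs(Z - 1)[:half].max(), np.abs(Z - 1)[half:].max())
```

Detuning Ω_2 away from the resonance condition (49) suppresses the slow envelope exchange, which is the signature distinguishing internal resonance from ordinary beating.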
The resonance images are given by numerically solving Eqs. (7)-(9) under the new internal resonance condition described above. As shown in Fig. 7a-c, the magnetic spring pendulum presents obvious resonance behavior in the X, Y and Z directions, and its amplitudes change periodically with time. Meanwhile, from the relationship between speed and time shown in Fig. 7d, it can be found that the times at which the maximum speeds appear in the X and Y directions correspond exactly to the minima in the Z direction, which indicates that energy is transferred alternately among the X, Y and Z directions. In addition, Fig. 7e shows the phase trajectory in the X direction, with the pendulum ball jumping back and forth between the two center points on elliptical trajectories of varying size, and Fig. 7f shows elliptical trajectories of ever-changing size centered at the equilibrium point (1, 0) in the Z direction. Finally, it can be seen from the trajectory diagrams in Fig. 7g-i that the motion of the magnetic spring pendulum is a regular three-dimensional motion.

In order to further explore the energy transfer process among the different directions during the internal resonance, Eqs. (3)-(5) are rewritten in the spherical coordinate system shown in Fig. 8. Similarly to the above, let R = r/L, and then make the three equations dimensionless, giving Eqs. (53)-(55); the meanings of Ω_1, Ω_2 and the dimensionless time variable τ are the same as in Eq. (6). Then the new internal resonance relation (49) described above is used to solve Eqs. (53)-(55) numerically and present the motion images. In Fig. 9a-b, it can be observed that the displacement resonates in both the R and θ directions, while in Fig. 9c the azimuth φ increases gradually with time. This is interpreted as the magnetic field force, represented by the Ω_2 term on the right side of Eq. (55), driving the swing plane of the spring pendulum to deflect. Thus, in addition to the breathing mode and the swing mode, the spring pendulum acquires a third motion mode, namely the deflection mode. Meanwhile, in the trajectory diagrams of R, θ and φ in Fig. 9d-f, the phase traces in the R direction are similar to the Z-direction trajectories in Fig. 7f, namely concentric ellipses; the phase trace in the θ direction is an oblate circle, narrow on the left and wide on the right; and the phase trajectory of φ is similar to the resonance diagram of θ in Fig. 9b. Finally, it is observed in Fig. 9g that the peaks of the velocity amplitude in the R direction correspond in time to the minima of the velocity amplitudes in the θ and φ directions, while the peaks in the θ and φ directions correspond to the minima in the R direction. This shows that the system energy is transferred from the breathing mode to the swing and deflection modes simultaneously, and then from the swing and deflection modes back to the breathing mode; the energy is thus transferred back and forth among these three modes.

The Jacobian matrix at the equilibrium point O_1 for Eq. (56) is evaluated, and its eigenvalues are

  λ_(1,2) = ±I,  λ_(3,4) = ±I(Ω_3 − Ω_2),  λ_(5,6) = ±I(Ω_3 + Ω_2),

where the definition of Ω_3 is the same as in the third section above. For the equilibrium point O_2, the Jacobian matrix of Eq. (56) is evaluated in the same way.
It can be seen from Fig. 11a-b that when Ω_1 = 0.4, Ω_2 = 0.8 is set, the timing diagram shows stable vibration near the equilibrium point O_2, and the phase trajectory diagram is a concentric ellipse, which also indicates that the equilibrium point O_2 is a center.

Influence of magnetic field strength on the stability of the spring pendulum motion

In order to further investigate the influence of the magnetic field strength on the motion stability of the spring pendulum, the local maxima of X_5 are obtained from the numerical solution of Eq. (56), and the corresponding bifurcation diagram is presented. The Wolf method [26] is used to compute the largest Lyapunov exponent of the system, and the results are shown in Fig. 12. In Fig. 12a-b, considering the motion stability of the system in the case of a hard spring (Ω_1 = 0.1), it can be observed from the bifurcation diagram 12a that the system alternates between chaotic and periodic windows. According to the largest Lyapunov exponent curve in Fig. 12b, when Ω_2 ∈ (0, 0.1) ∪ (0.4, 0.6) the exponent is significantly greater than 0, indicating that the system is in a chaotic state, while in the range Ω_2 ∈ [0.1, 0.4] ∪ [0.6, 2] the largest Lyapunov exponent is approximately 0, indicating that the system is in periodic motion. As for the motion stability of the system under weak elasticity (Ω_1 = 0.5) explored in Fig. 12c-d, it can be seen from the bifurcation diagram 12c that when Ω_2 ∈ (0, 2.5) the system is mainly in a chaotic state, with periodic windows appearing many times, and when Ω_2 ∈ [2.5, 4] the system is in periodic motion. Correspondingly, in Fig. 12d it can be observed that in the latter range the largest Lyapunov exponent is approximately 0, indicating that the system is in periodic motion.

The timing diagrams and phase diagrams of Eq. (56) are drawn according to the conclusions obtained above, with initial conditions X_1(0) = 0.1, X_5(0) = −0.82, X_2(0) = X_3(0) = X_4(0) = X_6(0) = 0, and the results are shown in Fig. 13. Combined with the largest Lyapunov exponents in Fig. 12, the analysis can be concluded as follows: in Fig. 13a-b and e-f, the largest Lyapunov exponent is approximately 0, so the system is in stable periodic motion for Ω_1 = 0.1, with Ω_2 = 0.26 and with Ω_2 in the second periodic window, respectively. The largest Lyapunov exponent in Fig. 13c-d is 0.02, so the system is chaotic in the case Ω_1 = 0.1, Ω_2 = 0.5. At the same time, in Fig. 13g-h it can be observed that under the condition Ω_1 = 0.5 the largest Lyapunov exponent is 0.148 when Ω_2 = 0.28, so the system is in a chaotic state. The largest Lyapunov exponent in Fig. 13i-l is approximately 0, so the system is in stable periodic motion under the conditions Ω_1 = 0.5, Ω_2 = 1.16 and 3. In summary, these timing diagrams and phase trace diagrams clearly reflect the correctness of the stability of the system discussed above.
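To reproduce the flavor of this stability scan, the largest Lyapunov exponent can be estimated by a trajectory-separation (Benettin-type) method; the paper uses the Wolf method, so this simpler variant, together with the reconstructed equations and the sample parameter values below, is an assumption for illustration only.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Largest Lyapunov exponent via repeated separation growth and renormalization.
# The ODEs are the reconstruction used earlier; W1, W2 are sample values.
W1, W2 = 0.1, 0.5

def rhs(tau, s):
    X, Y, Z, Xd, Yd, Zd = s
    rho = np.sqrt(X * X + Y * Y + Z * Z)
    c = 1.0 - (1.0 - W1**2) / rho
    return [Xd, Yd, Zd, -c * X + 2 * W2 * Yd, -c * Y - 2 * W2 * Xd, -c * Z + W1**2]

s = np.array([0.1, 0.0, 1.1, 0.0, 0.0, 0.0])   # sample initial state (assumed)
d0, T, n = 1e-8, 1.0, 400
p = s + np.array([d0, 0, 0, 0, 0, 0])
lam = 0.0
for _ in range(n):
    s = solve_ivp(rhs, (0, T), s, rtol=1e-10, atol=1e-12).y[:, -1]
    p = solve_ivp(rhs, (0, T), p, rtol=1e-10, atol=1e-12).y[:, -1]
    d = np.linalg.norm(p - s)
    lam += np.log(d / d0)
    p = s + (p - s) * (d0 / d)                  # rescale separation back to d0
print("largest Lyapunov exponent estimate:", lam / (n * T))
```

An estimate close to zero indicates (quasi-)periodic motion, while a clearly positive value indicates chaos, matching the way Fig. 12 is read above.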
Conclusion

This paper explores the motion problem of the magnetic spring pendulum, which has rarely been studied in the previous literature, and several special approximate analytical solutions are given. In Sect. 2.2 it is found, from the kinetic differential equations (19)-(20) and the various approximate solutions, that it is indeed possible to simulate the motion of the Foucault pendulum at different latitudes and under different initial conditions by adjusting the parameters Ω_1 and Ω_2 of the magnetic spring pendulum model and changing the initial conditions of motion. According to the hypocycloid solution, the magnetic spring pendulum has potential value for the engineering design of swing rollers, cycloidal internal combustion engines, cycloid motors, etc. At the same time, the new internal resonance relationship (49) due to the magnetic field was discovered, from which it can be seen that if the magnetic field disappears (Ω_2 = 0), the internal resonance condition returns to the relationship mentioned in the previous literature [7]. In addition, the internal resonance phenomenon of the spring pendulum described in the previous literature involves only the alternating transfer of energy between the breathing mode and the oscillating mode; in this paper it can be observed, from Eqs. (52) and (55), that the magnetic field force acts as a driving force deflecting the swing plane of the spring pendulum, so that a deflection mode of the spring pendulum arises due to the presence of the magnetic field. This causes energy to be transferred among the breathing, deflection, and swinging modes. Finally, when exploring the stability of the system motion, it is found that the stability of the equilibrium point can be changed by adjusting the strength of the magnetic field; moreover, under the action of a strong magnetic field the motion of the system tends to a stable periodic state. It is worth noting, however, that the motion of the magnetic spring pendulum under conditions in which the magnitude and direction of the magnetic field change with time has not been discussed in this paper; this problem will be studied further in the future.

Data Availability: All data generated or analyzed during this study are included in this published article.
Perspectives and the role of Bosnian defense industry in national innovation system Global demand for weapons is growing fast, for both military- and civilian-grade equipment, and the defense industry is experiencing an increase in trade and production across the globe. Global defense spending is currently about US$1.9 trillion, with an increasing trend. This paper investigates global trends in defense through an analysis of global defense spending and R&D activities, with a focus on the perspectives of the Bosnian defense industry. It was observed that the Bosnian defense industry has the potential to be one of the key players in the national innovation system, through which the national R&D output would make a notable positive impact on national economic performance. Introduction Over the past decade, the international transfer of weapons has significantly increased, with the leaders in exports being the United States, China, Russia, France and Germany, and the largest importers being India, Egypt, Saudi Arabia, and the United Arab Emirates (UAE) [1]. Highly influential companies in the industry when it comes to weapon sales are those from the US and Western Europe; in 2016, the combined sales of those two regions amounted to USD 194.8 billion. Overall, the country with the most influence on weapons sale trends is undoubtedly the US [2]. The defense industry is an element in reaching the potential sustainable economic growth of a country. The policies a country chooses to pursue determine whether, and how many, resources are allocated to the defense industry in total [3]. One of the basic concepts of economics is that producing larger quantities of one commodity results in producing smaller quantities of another; this means that increased spending in the military area causes lower spending in the civilian area. Also, economic growth at the local level is impacted by the defense industry, as many people depend on the jobs and income opportunities created in this industry. The defense industry is an essential component in providing security for a country, as it reduces the threats posed by other countries, both in terms of hostility and in terms of imposing their products over the domestic products of a country. Moreover, security threats encourage the development of this industry. In conclusion, defense industry development is a consequence of the overall increase in a country's total industrial production [3][4]. Throughout history, the defense industry was seen as a strictly military branch of industry; nowadays it has spread into almost every sphere of life. The companies operating in this branch have shifted from specialized production to conglomerates producing naval, air and land equipment and arms. The goal of this paper is to investigate the perspectives of the Bosnian defense industry in the national innovation system, as well as global trends and local opportunities for the Bosnian defense sector to generate significant added value through customized innovative solutions for a targeted global market. A national innovation system is defined as "…set of distinct institutions which jointly and individually contribute to the development and diffusion of new technologies and which provides the framework within which governments form and implement policies to influence the innovation process.
As such it is a system of interconnected institutions to create, store and transfer the knowledge, skills and artefacts which define new technologies" [5]. The defense industry is primarily a business whose main objective is to maximize profit with minimal investment. At the same time, the Bosnian defense industry has the potential to be recognized as one of the strategic industries for the development of the country. Top global market trends For the purpose of this research, global trends are examined in two directions: spending trends and R&D trends. Global defense spending continues to increase despite the financial pressure of COVID-19. Global defense spending for the period 1988-2019 is shown in Figure 1. The demand for military equipment is growing as governments around the world focus on military modernization, boosting worldwide defense expenditure. Global defense spending reached US$1.914 trillion in 2019 and continued to grow by about 3.9% in 2020 despite the coronavirus pandemic. To identify general trends in deliveries of different weapons and to permit comparison between the data, the Stockholm International Peace Research Institute (SIPRI) developed a unique system to measure the volume of international transfers of major conventional weapons using a common unit, the trend-indicator value (TIV). The TIV trend of total arms transfers for the period 1950-2019 is presented in Figure 2. It is observed that the TIV grew over time until it reached a peak in 1982. Between 1982 and 2002 the TIV decreased significantly; since 2002 it has been growing again. It is expected that COVID-19 will slightly slow TIV growth in the upcoming period. The trends in R&D are focused on artificial intelligence systems, additive manufacturing, and cost reduction. Future weapons are no longer science fiction: electric rifles, advanced magnetic armor, robot soldiers, etc. are all being devised today by large public and private institutes [6]. Defense is interested in technologies, systems and processes that improve intelligence collection, analysis and dissemination across all capabilities within Defense and in all domains: land, maritime and aerospace. This includes the advanced use of biometric data, as well as innovation in cyber technology to support every facet of capability development. Supporting the shift from Intelligence, Surveillance and Reconnaissance to Targeting, as well as the advancement of hypersonic technology as an opportunity for Defense's space capabilities, also warrants attention in this stream. The relevance of smart defense to modern international security has been on NATO's agenda ever since the 2008 financial crisis [7]. Artificial intelligence Global market trends are important indicators for contractors and companies in the global market, as the more up to date a company is, the more market area it can cover. According to case studies [8], the defense industry is shifting towards robotics, cyber weaponry and automated complex systems [9]. Starting with the main technological trends, artificial intelligence (AI) has certainly become a great tool for the defense industry in the sense of processing the large amounts of data with which organizations have struggled over the past few years. Data processing done by AI allows people to shift their focus to the results and findings, in contrast to primarily producing them.
Advanced robotics and augmented reality are similar paths in which companies have already taken an interest, with heavy investments already taking place. Additive manufacturing 3D printing has become a huge disruptor ever since 2017. Militaries are already looking into the potential of printing spare parts and military equipment in-theatre to drive down costs and drive up availability. The aerospace and defense (A&D) sector is set to become one of the biggest contributors to 3D printing's global revenues, predicted to reach a mammoth $1.4 billion by 2019, producing parts in-house, a development that will completely reshape the relationship between contractors and manufacturers [10], [11]. Cost reduction Another trend is clean technology. Clean technology measures within the A&D sector include energy efficiency, waste management, recycling, use of digital and paperless products, video conferencing, and many more. Published research indicates that the budget for the global defense clean technology market will increase by 7.5% between 2016 and 2021. This also includes paperless documents and clean power. A difficulty defense companies face as they move forward with these environmentally friendly technology initiatives is integration with current systems. However, the long-run savings from eco-friendly and cost-effective solutions are worth the initial investment. Cost-effectiveness can be regarded as the backbone of all the major trends. Innovation is the key factor in this industry, and those who are willing to adapt are those who survive. Technology investments are elements that enable manufacturers to innovate on the go. These improvements should lower manufacturers' costs and help them stay relevant members of the industry. Companies in the healthcare, software and internet sectors have shown substantial growth in R&D spending over the last fifteen years, while the aerospace and defense sector typically spends less on R&D and its spending has been largely flat over recent years (Figure 3, R&D spending by industry). Current state and trend in Bosnian defense industry As a result of increased global defense spending, the turnover of the Bosnian defense industry has started to grow in recent years. Reports say that total income has increased by 21% and total exports by 29% compared to 2015 [12]. The defense industry is mainly based on the production of ammunition and artillery pieces. This is far from the time when Bosnia, as a part of the former Yugoslavia, produced a wide variety of both complex and non-complex products. For a war-ravaged transition country, the reduction in production is not out of the ordinary. Ukraine, one of the world's superpowers in weapons production, experienced a huge decrease in every branch of its defense industry during its transition period of 1991 to 1999 [13]. Through investments in R&D and through arms exports to third-world countries, Ukraine's defense industry recovered strongly. Although a small country with small influence, Bosnia can follow the steps which Ukraine undertook in order to get its industry back on track. Representative products of the Bosnian defense industry are described in [14], [15]. Following world defense industry trends and investing in R&D, Bosnia can take a piece of the world trade cake. Recently, Bosnia developed its first fully automatic, mobile artillery piece.
It complies with NATO standards, is lighter than its French and Serbian competition, and is expected to cost less than its counterparts. The manufacturer, BNT Novi Travnik, devised everything except the sophisticated electronics and the truck [14]. Analyzing the portfolio of the local defense industry, it is observed that only the recently developed self-propelled howitzer is on track with global R&D trends. To keep pace with global R&D trends in defense, a significant upgrade of the existing products is required, along with the development of solutions for the global market. To assess the alignment of the Bosnian defense sector with global trends, a SWOT analysis was performed, and the results are provided in Table 1. As previously explained, the strengths are analyzed as one of the internal elements of this analysis. In this case, the strengths of the Bosnian defense industry are its tradition of making weapons and other metallurgical products, the reliability of its products, and proven brands. On the other hand, weaknesses of the industry include a lack of knowledge regarding the production of complex weapon systems, as well as insufficient investment in research and development, which is a driving force of industry growth. Tapping into the area of drones or artificial-intelligence-based systems can be considered an opportunity for the defense industry of Bosnia, while competitors, potential political instability, and new global policies on defense products and distribution are perceived as threats. Along with the pharmaceutical industry, the local defense industry has the potential to become one of the key sectors of the Bosnian innovation system. Innovations are key factors in generating significant added value, improving economic performance, and securing the future of the business. Opportunities lie in the growth of global defense spending and in the growing trends in smart weapons and ammunition, autonomous complex systems, artificial intelligence, and drones. Spending in the defense sector at the global level is growing continuously and significantly; therefore, this sector should become one of the strategic industries for the Bosnian economy and the country's development. This is an opportunity to invest, through this industry, in end-user solutions for the global market that carry significant added value and a significant share of knowledge. Comparing the current production program of the Bosnian defense industry with global trends, it is observed that a significant upgrade is required to keep pace, which in turn requires significant investment in research and development. Therefore, potential investors as well as the government should recognize their interest in this sector, and investment should be explored and encouraged. The R&D expenditure of selected countries as a percentage of GDP for the period 1996-2019 is shown in Figure 4. Recent spending on R&D projects in Bosnia is among the lowest in the world, as well as the lowest in the region. Companies and government in Bosnia spend less than 0.2% of GDP on R&D projects, which is about 5 times lower than the regional average (compared with neighboring countries) and about 15 times less than the average of the top global investors in R&D. To improve the economic performance of the Bosnian defense industry, investment in R&D is necessary, since there is a significant positive correlation between investment in R&D and economic performance [16]. Over the last decade, South Korea has had the highest growth in R&D spending as a percentage of GDP.
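The multiples quoted above follow directly from the GDP shares; the quick check below uses illustrative shares for the regional and top-investor averages, chosen only to be consistent with the stated ratios (they are not World Bank figures):

```python
# Hypothetical GDP shares consistent with the ratios quoted in the text.
bosnia = 0.002          # < 0.2% of GDP spent on R&D (from the text)
regional_avg = 0.010    # assumed ~1.0% (neighboring-country average)
top_global = 0.030      # assumed ~3.0% (average of top global R&D investors)

print(f"regional gap: ~{regional_avg / bosnia:.0f}x")   # ~5x
print(f"global gap:   ~{top_global / bosnia:.0f}x")     # ~15x
```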
Reforms which took place in South Korea would be a good model for restructuring the defense industry and the military in general [4]. Making strategic alliances with countries that already produce various kinds of smart weapons would be an important step in the development of a domestic smart weapon or smart ammunition. By obtaining knowledge through partnerships with countries which are leaders in the market, Bosnia would be able to become a regional factor in the production of smart weapons. Good examples are Israel, Spain and Portugal, which became regional leaders in arms production through investments in R&D, but also through their acute strategic needs [17]. (R&D expenditure data: The World Bank, https://data.worldbank.org/indicator/GB.XPD.RSDV.GD.ZS.) As already stated, Bosnia's ammunition products are already used by many nations all over the world. They are less expensive but battle proven and reliable, so there is solid ground at which Bosnia can aim its sights. Following trends and further developing arms production increases interest in products made in Bosnia. Cooperation between partner countries can lead them to a profound and well-established R&D and manufacturing industry for defense systems. For instance, in the US most R&D investments were made by business, as shown in Figure 5 (History of R&D expenditure in the US by business and government [18]). Recently, the business share of US R&D expenditure has been about 67% of the total, while the government share has been about 33% [18]. Conclusion Opportunities for the Bosnian defense industry within global trends have been analyzed. The defense industry is primarily a business, and as in every business the main objective is to maximize profit with minimal investment. Bosnia is a small country and has negligible influence in the global arms market. This can be considered an advantage: the Bosnian defense sector can specialize in customized solutions for narrow global markets rather than modular ones. Customized solutions for narrow global markets may be less interesting to the big players, which could make some room for the Bosnian defense industry. The current rate of expenditure in the R&D sector is far below the world average, as well as far below the regional average. To bridge the gap between the desire to invest in R&D and the provision of the necessary funds, the public-private partnership model could be one option, with an emphasis on innovative solutions. In order to achieve this, it is necessary to recognize the domestic defense sector as a strategic branch for economic development. Opportunities lie in the growth of global expenditure on defense products and the positive trend in artificial intelligence systems. A tradition of making weapons and other metallurgical products, reliable battle-proven products and a skilled workforce are the major strengths. Among the domestic sectors, the defense industry, along with the pharmaceutical industry, has the potential to deliver end-user solutions via R&D, which qualifies them to generate increased added value through knowledge and branding. Therefore, the defense industry, through its R&D output, may play a key role in the national innovation system and achieve a notable positive impact on national economic performance.
Thermal Treatment under Vacuum for Obtaining a Quenchant from Rapeseed Oil The aim of this study was to improve the quality of a vegetable oil with a view to its use as a quenchant for metallic parts in aircraft. A process of pyrolysis under vacuum was applied to obtain a bio-oil with reduced viscosity and good quenching properties. Preliminarily, the rapeseed oil was fast pyrolyzed at temperatures in the range of 300-375 °C and an absolute pressure of 1 µbar. Based on the viscosity and bio-oil yield results, the temperature range was narrowed to 300-320 °C for further processing. Quenching tests with the bio-oils on stainless steel 25CD4 showed cooling curves closer to those of the standard mineral oil (Castrol Iloquench™ 1) than those of the unprocessed vegetable oil. The hardness of the steel after treatment rose from 29-30 HRC to 43-45 HRC, in accordance with the requirements (35-45 HRC). Therefore, the conclusion of this study is that bio-oils obtained by pyrolysis under vacuum are good quenchants. Introduction Quenching is a process of hardening metal parts by rapidly cooling down a uniformly heated piece. This rapid cooling avoids the unwanted microstructural changes that slow cooling would produce, resulting in pieces without metallurgical distortion or stress. Quenchants for blacksmithing are various (oils, gas, water, salts, brine, polymers), and their selection depends on a few factors: the steel type, the dimensions of the piece to be hardened, and the desired properties after quenching [1]. Mineral oils tailored for this specific application are the most frequently used, but vegetable oils (canola, olive, palm kernel oil) are also attractive because they are good, cheap quenchants and come from renewable resources [2]. During the last few decades, the alternative of vegetable oils to mineral oils as quenchant agents was extensively studied [3][4][5][6], not only for the final result, the mechanical properties of the testing specimen [7,8], but also for the wetting behavior [9] and the chemical stability of the vegetable oil during repeated quenching cycles [10,11]. These studies concluded that vegetable oils constitute good replacements for mineral oils in this application if methods for minimizing their oxidation instability are applied. Such methods include epoxidation [12], hydrogenation [13], and esterification [14]. Extensive studies by Prabhu and Fernandes [9,15] on palm, coconut, sunflower, groundnut and castor oils as bio-quenchants showed little difference in surface wettability, while quench severity was comparable to that of conventional quench mineral oils. In many studies, the inquiry went deeper, by calculating the heat transfer rates [3,7,16] from the metal surface to the oil in different stages of the quenching process, and by performing cooling curve analyses [17,18]. Moreover, by combining cooling curves with time-temperature-transformation (TTT) diagrams, it is possible to predict the variation of hardness via quench factor analysis [16]. The studies [3,6,17] revealed heat transfer coefficients frequently superior for the vegetable oils, but a different shape of the cooling curve during quenching, due to the different physical-chemical properties of the oils; however, the results of quenching were similar in the end [8]. Cooling curve analyses performed on crude and processed soybean oils by Totten et al.
[19] showed that these oils have similar cooling behavior and that the vegetable oils have faster cooling rates at high temperatures than the reference mineral oil [3]. It was thought that the higher viscosity of the vegetable oil could be a brake on developing a high cooling rate in convective transfer [3]. Starting from this presumption, we questioned whether a light pyrolysis of the vegetable oil would create, from rapeseed oil, a bio-oil with a lower viscosity than the raw oil and with good oxidation stability. Pyrolysis under vacuum is our choice, since it has been demonstrated to provide rapid pyrolysis, a lowering of the process temperature [20,21], a short process time and low energy consumption [22]. This method has not been tested yet for this purpose, and the pyrolytic oil will be tested in quenching low-alloy steel parts of aircraft, with the aim of successfully replacing the mineral oil. Materials and Analysis Methods The original quench oil was a mineral one, Iloquench™ 1, and the raw oil used in this study was waste frying rapeseed oil from a collection centre in Bucharest. Their characteristics are shown in Table 1. The first thermal treatment of the stainless steel test specimens, the heating, was performed in a brick oven with electrical resistance, in a reducing H2 atmosphere. Then, the cooling rate at 300 °C was determined with the IVF Smart Quench apparatus, the same instrument used to trace the cooling curves in the quenching process. The Wilson UH4250 apparatus was used to measure the Rockwell hardness of the test bars before and after quenching. Pyrolysis under Vacuum Preliminary trials were performed in order to set the optimal temperature range. The pyrolysis took place in an electrical oven coupled with a vacuum system (Figure 1), followed by cooling in a 99.9% pure argon atmosphere to obtain the bio-oil. The test procedure is the following: the batch sample is put inside the oven, the oven door is closed tightly, the vacuum system is turned on and, after the working pressure is reached, the heating starts in accordance with the automatically set programme: heating rate, temperature level, and holding time at that level. When the time expires, the inert gas is admitted into the oven and the ventilator is turned on, thus ensuring the cooling of the chamber, and the bio-oil remaining in the batch is collected. The trials were undertaken at an absolute pressure of 1 × 10⁻³ mbar, at 300 °C and 375 °C respectively, for 20 min.
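As a rough illustration of the automatically set programme (heating rate, temperature level, holding time), the sketch below builds a setpoint profile for the 300 °C trial; the heating rate is an assumed value, since the text does not state one.

```python
import numpy as np

# Programmed oven schedule: ramp at a fixed heating rate to the setpoint,
# then hold. Setpoint and hold time come from the text; the rate is assumed.
heat_rate = 10.0     # deg C per min (assumption, not stated in the paper)
setpoint = 300.0     # process temperature, deg C
hold_min = 20.0      # holding time, min
ambient = 20.0       # starting temperature, deg C

ramp_min = (setpoint - ambient) / heat_rate
t = np.linspace(0.0, ramp_min + hold_min, 300)              # minutes
T = np.where(t < ramp_min, ambient + heat_rate * t, setpoint)

print(f"ramp: {ramp_min:.0f} min to {T.max():.0f} C, then hold {hold_min:.0f} min")
```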
Process monitoring was ensured by a recording instrument with one-second resolution and six channels, tracking the temperature in the heating chamber, the temperature of the batch at three points, and the pressure. At the end, the data are automatically processed and represented graphically, as seen in Figures 2 and 3 for the trials at 300 °C and 375 °C (process diagrams for the pyrolysis under vacuum; legend: pressure, batch temperature, oven temperature). As seen in Figures 2 and 3, there is a difference between the behaviour of the system at 300 °C and at 375 °C. While at the lower temperature the pressure is stable during the process, at 375 °C, due to the large quantity of gases formed exceeding the capacity of the vacuum system, the pressure rises continuously. At 300 °C, the pressure was 1.79 × 10⁻³ mbar, while at 375 °C it was 1.97 × 10⁻¹ mbar and no liquid was left in the batch. Also, at 300 °C, the bio-oil yield was already 87%, so the pyrolysis should take place at temperatures close to 300 °C such that the yield does not decrease too much; the next trials were therefore undertaken at 310 °C and 320 °C, trying to obtain reasonable quantities of bio-oil. The diagrams for the pyrolysis at 310 °C and 320 °C are shown in Figures 4 and 5.
As seen, at 310 °C and 320 °C the pressure is maintained at the desired level during the process, at 1.8 × 10⁻³ mbar and 1 × 10⁻² mbar, respectively. Finally, the bio-oils were characterized by kinematic viscosity, flash point and iodine number, and then by cooling curves. Quenching Tests The thermal treatment was performed on a set of test specimens made of stainless steel 25CD4 with the following composition: C 0.25 wt %, Si 0.25 wt %, Mn 0.7 wt %, Cr 1.05 wt %, Mo 0.25 wt %. The specimens were cylindrical, 8 mm in diameter and 300 mm in length. The heating of the test specimens in the oven was undertaken at a controlled temperature of 850 °C ± 10 °C, maintained for 30 min. Then, rapid cooling took place in the quenching oil. There were five tests: one for the mineral oil, one for the rapeseed oil and three for the bio-oils resulting from the pyrolysis process at 300 °C, 310 °C and 320 °C, respectively. The performance of the quenching process was measured as the Rockwell hardness of the test samples. Results and Discussion The bio-oil quantities and yields after pyrolysis at 300 °C, 310 °C, 320 °C and 375 °C are given in Table 2. In a vacuum, the pyrolysis oil yield decreases with increasing process temperature, as it does at atmospheric pressure [23], in the presence of an inert gas [24,25], and in the presence of a catalyst [26,27]. The decrease of the liquid yield from 87% to 45% over a mere 20 °C range of temperature is influenced by the low pressure, which strongly favors decomposition to gaseous products.
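The yields in Table 2 are simple mass ratios; a minimal sketch of the computation follows (the batch masses are hypothetical, and only the 87% and 45% yield figures come from the study):

```python
# Bio-oil yield = collected liquid mass / feed mass. The masses below are
# hypothetical; only the resulting yield percentages match the study.
trials = {300: (1000.0, 870.0),    # feed (g), collected bio-oil (g) -> 87%
          320: (1000.0, 450.0)}    # -> 45%

for temp_c, (feed_g, oil_g) in trials.items():
    print(f"{temp_c} C: bio-oil yield = {100.0 * oil_g / feed_g:.0f}%")
```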
The mineral oils used in quench processes have a low viscosity, facilitating the thermal transfer and ensuring that the temperature decreases from the core of the piece to the surface throughout its whole volume. The goal of this study was to obtain bio-oils with lower viscosity in order to improve the heat transfer compared with the raw vegetable oil. The results, together with the density, flash points and iodine values of the resulting bio-oils, are shown in Table 3. Even if the viscosity of the bio-oils is higher than the viscosity of the mineral oil at the same temperature, one can see that it decreases with process temperature by up to 4.3 units, an important decrease; therefore we expected better performance of this bio-oil in quenching than that of the raw vegetable oil. During the thermal decomposition, a dehydration took place as well, eliminating the water content which might provoke faults in the structure and even cracks in the material during the quenching process. The iodine number of the bio-oils is close to that of the raw sample (5.8 ± max. 0.2 g I2/100 g), in contrast with the bio-oils obtained from pyrolysis at atmospheric pressure and higher temperature [23][24][25], in which the iodine number increased by up to 2.5 units. This means that, at these relatively low process temperatures, dehydrogenation of the heaviest fraction of the raw oil was minimal, and chemical modifications were minimal in general. The small decrease in iodine number preserves the original oxidation stability of the obtained bio-oil, which is therefore likely to last as long as the raw oil in quenching cycles. The flash points of the bio-oils are also close to that of the raw oil, decreasing slowly with increasing pyrolysis temperature, so the bio-oil preserved the quality of the raw oil rather well. The decrease in viscosity and density indicates the cracking of longer chains in some molecules of the raw oil, besides the evaporation of volatiles. The flash point of the obtained bio-oils, close to that of the raw samples, also indicates the removal of gases and volatile compounds during the process. It should be mentioned that operation at extreme vacuum (on the order of 1 × 10⁻³ mbar) is expensive in a large-scale process, but moderate values (5-10 mbar) are feasible in industrial vacuum furnaces. The advantage of such low pressure is an important decrease in process temperature, leading to lower power consumption versus other methods, such as conventional pyrolysis or the use of an inert gas [20,21]. Also, under these conditions, fast pyrolysis takes place, shortening the operation time, with consequences for the operating costs. The quenching capacity of the bio-oils, recorded with the IVF Smart Quench apparatus, is illustrated in Figure 6. There are two kinds of curves: cooling curves (temperature vs. time) and cooling rate curves (cooling rate vs. temperature); the different oils are represented in different colors (legend: mineral oil, rapeseed oil, bio-oil at 300 °C, bio-oil at 310 °C, bio-oil at 320 °C). The cooling curves show a slower cooling in the rapeseed oil and comparable behavior for the other oils (mineral or bio). The temperature reaches 700 °C in approximately 8 s in the rapeseed oil, whereas in the mineral oil 700 °C is reached in 6 s, and in the bio-oils in between 2.5-3.5 s. The cooling rate at 700 °C is superior for the bio-oils (80-90 °C/s) compared with the mineral oil (15 °C/s) and the vegetable oil (45 °C/s). A higher cooling rate in the region of 700 °C is better, as the pearlite transformation of the steel is avoided. The cooling rate at 300 °C must be minimized; all the tested oils had close values, between 6 and 8 °C/s, implying good behavior with respect to cracks and distortions in the material after quenching. The Rockwell hardness of the probes is shown in Table 4. The results of quenching are good for every oil/bio-oil used in the process, the Rockwell hardness being improved by 14-16 units. Surprisingly, the rapeseed oil showed the best result even though it was more viscous and had the lowest cooling rate at 700 °C among the vegetable-based oils. However, the differences in hardness were small, as the differences in their characteristics were small (see Table 3), with every oil performing well. Also, there is a great advantage in using the pyrolytic bio-oil since, during the pyrolysis, yields between 13-55% are obtained in products with added value, such as gaseous olefins and kerosene-like and diesel-like fractions [23,24], which can be recovered from the vacuum system in an industrial unit.
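The cooling rate curves in Figure 6 are obtained by differentiating the recorded probe temperature with respect to time and re-plotting the result against temperature. Below is a minimal sketch of that reduction on a synthetic probe trace (the exponential decay is illustrative, not IVF Smart Quench data):

```python
import numpy as np

# Synthetic probe record: temperature (deg C) sampled vs. time (s).
t = np.linspace(0.0, 30.0, 601)
T = 30.0 + 820.0 * np.exp(-t / 6.0)    # illustrative 850 C -> ambient decay

rate = -np.gradient(T, t)              # cooling rate, deg C per second

def rate_at(temp_c):
    """Cooling rate read off at a given temperature (trace is monotonic)."""
    i = np.argmin(np.abs(T - temp_c))
    return rate[i]

for probe_temp in (700.0, 300.0):
    print(f"cooling rate at {probe_temp:.0f} C: {rate_at(probe_temp):.1f} C/s")
```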
Conclusions This study was designed to find a new means of waste vegetable oil valorisation. The waste frying rapeseed oil was processed by pyrolysis under vacuum to obtain a bio-oil with reduced viscosity appropriate for use as a quenching oil. Such a product was obtained at 1 µbar and 300-320 °C. Its physical-chemical characteristics were close to those of the raw material, in contrast with pyrolysis at atmospheric pressure, in the presence or absence of an inert gas. The kinematic viscosity of this bio-oil was reduced by up to 4.3 mm²/s. The bio-oils performed well in quenching tests on stainless steel probes made from the same material as aircraft pieces, the Rockwell hardness improving by 14-16 units, matching the results obtained with the dedicated mineral oil. An advantage of the pyrolysis is that, besides the quality bio-oil at 45-87% yield, other valuable products are obtained (gaseous olefins, kerosene-like and diesel-like liquids), resulting in consistent added value from the waste vegetable oil.
Graphene-Oxide-Based Electrochemical Sensors for the Sensitive Detection of Pharmaceutical Drug Naproxen Here we report on a selective and sensitive graphene-oxide-based electrochemical sensor for the detection of naproxen. The effects of doping and oxygen content of various graphene oxide (GO)-based nanomaterials on their respective electrochemical behaviors were investigated and rationalized. The synthesized GO and GO-based nanomaterials were characterized using a field-emission scanning electron microscope, while the associated amounts of the dopant heteroatoms and oxygen were quantified using X-ray photoelectron spectroscopy. The electrochemical behaviors of the GO, fluorine-doped graphene oxide (F-GO), boron-doped partially reduced graphene oxide (B-rGO), nitrogen-doped partially reduced graphene oxide (N-rGO), and thermally reduced graphene oxide (TrGO) were studied and compared via cyclic voltammetry (CV) and differential pulse voltammetry (DPV). It was found that GO exhibited the highest signal for the electrochemical detection of naproxen when compared with the other GO-based nanomaterials explored in the present study. This was primarily due to the presence of the additional oxygen content in the GO, which facilitated the catalytic oxidation of naproxen. The GO-based electrochemical sensor exhibited a wide linear range (10 µM-1 mM), a high sensitivity (0.60 µA µM⁻¹ cm⁻²), high selectivity and a strong anti-interference capacity against potential interfering species that may exist in a biological system. In addition, the proposed GO-based electrochemical sensor was tested using actual pharmaceutical naproxen tablets without pretreatment, further demonstrating excellent sensitivity and selectivity. Moreover, this study provided insights into the participatory catalytic roles of the oxygen functional groups of the GO-based nanomaterials toward the electrochemical oxidation and sensing of naproxen. Introduction Naproxen (2-(6-methoxynaphthalen-2-yl) propanoic acid (S/R)) is a nonsteroidal anti-inflammatory drug (NSAID) that is used to treat inflammation, fever, rheumatoid arthritis, and stiffness. Naproxen inhibits the COX-1 and COX-2 enzymes, which results in the inhibition of the synthesis of certain prostaglandins [1,2]. However, there are two major concerns associated with the use of naproxen. First, overuse can cause adverse side effects such as stomach pain, ulcers, and stomach bleeding [2]. Naproxen overdose may be initiated when an individual takes more than the recommended daily dosage for temporary pain management. Fluorine-doped graphene oxide (F-GO) was synthesized using the one-pot method reported by our group [29]. Boron-doped partially reduced graphene oxide (B-rGO) was synthesized from GO using a facile microwave method: GO and boric acid were first mixed at a 1:1 weight ratio, and the mixture was then subjected to microwave irradiation (1200 W, NNST775S) for 2 min. The obtained B-rGO was washed with water and ethanol and dried at 50 °C overnight. Nitrogen-doped partially reduced graphene oxide (N-rGO) was synthesized from GO and urea using a hydrothermal method [30]: GO was dispersed in water at 4.0 mg/mL by ultrasonication, and urea was added slowly while the GO dispersion was stirred. After being stirred for 1 h, the mixture was transferred to an autoclave and subjected to hydrothermal treatment at 160 °C for 5 h. The obtained N-rGO was then washed with pure water and ethanol and finally dried at 50 °C overnight.
Electrode Preparation and Modification The glassy carbon electrode (GCE) used in the experiments was polished using an alumina powder/water slurry, sonicated in acetone for one minute and then in deionized water for three minutes, followed by a water exchange and further sonication in deionized water for one minute. Subsequent to polishing and sonication, the electrode was characterized in potassium ferrocyanide (5 mM) in 0.2 M KNO3 using cyclic voltammetry (CV) in the potential range from −0.1 to 0.6 V to ensure that all electrodes were in pristine condition. Each of the GCEs was tested and confirmed to be of a similar quality prior to modification. A 2.5-mg mass of the GO, TrGO, B-rGO, F-GO, or N-rGO powder was dispersed in 1.0 mL of pure water; 5.0 µL of the mixture was drop-cast onto the surface of the GCE and air dried for 3 h to obtain the modified electrodes. The GCE had a diameter of ~3.0 mm, with a surface area of 0.07 cm².
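These preparation values fix the nominal mass loading of GO-based material on each electrode; a quick check of the arithmetic (all quantities taken from the text):

```python
# Nominal loading of GO-based material drop-cast on the GCE.
conc_mg_per_ml = 2.5 / 1.0      # 2.5 mg dispersed in 1.0 mL of water
drop_ul = 5.0                   # drop-cast volume, microlitres
area_cm2 = 0.07                 # geometric GCE surface area

mass_ug = conc_mg_per_ml * drop_ul      # mg/mL equals ug/uL, so this is ug
loading = mass_ug / area_cm2
print(f"deposited mass: {mass_ug:.1f} ug; loading: {loading:.0f} ug/cm^2")
```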
Instrumentation and Methodology The morphology and compositions of the fabricated GO, TrGO, F-GO, B-rGO, and N-rGO were characterized using a Hitachi SU-70 Schottky field-emission scanning electron microscope (FE-SEM) and X-ray photoelectron spectroscopy (XPS) (Scienta Omicron Inc., Edmonton, Canada), respectively. CV and differential pulse voltammetry (DPV) were performed using a potentiostat (CHI-660D, CHI, USA). All electrochemical experiments were conducted using a three-electrode cell, where the GCE and the modified GCEs were employed as the working electrode. The auxiliary electrode was a platinum wire, whereas the reference electrode was a standard Ag/AgCl electrode (3 M KCl saturated with AgCl). A stock naproxen solution (20 mM) was prepared in 0.1 M NaOH to ensure complete dissolution of the naproxen. All analytical quantifications were performed in a phosphate buffer solution (pH 7.2) at room temperature (22 ± 2 °C). All of the solutions were purged with pure argon gas (99.999%) for 20 min prior to the electrochemical measurements to remove any dissolved oxygen. Surface Characterization Scanning electron microscopy (SEM) was employed to characterize the surface morphologies and roughness. Figure 1A,B display representative SEM images of the GO and F-GO recorded at high magnification (50,000×). Both images show the typical two-dimensional (2D) graphene oxide morphology, consisting of crumpled and folded textures with irregular edges and rough surfaces. The synthesized TrGO, B-rGO and N-rGO exhibited a similar morphology to the GO and F-GO. X-ray photoelectron spectroscopic analysis was performed on the GO and all the modified GO samples to determine their composition and functional group species, toward rationalizing their catalytic performance in the electrochemical oxidation of naproxen. The obtained XPS spectra were deconvoluted using Lorentzian and Gaussian functions [14,31]. Figure 2A displays the survey spectra of the GO, TrGO, F-GO, B-rGO and N-rGO, where strong O1s and C1s peaks appeared. The binding energy of C1s increased in the following order: B-rGO (279.1 eV) < TrGO (284.3 eV) < N-rGO (285.1 eV) < GO (286.2 eV) < F-GO (293.7 eV). The binding energy of O1s changed in the following order: B-rGO (527.1 eV) < GO (532.2 eV) < TrGO (532.3 eV) < N-rGO (533.1 eV) < F-GO (539.7 eV). Investigations have shown that the incorporation of boron into carbon materials is responsible for the lower binding energy of the C1s region, which can appear as a broadening of the main peak or as a separate feature in the survey scan [32]. In contrast, doping more electronegative atoms (e.g., N, F) into graphene oxide decreases the electron density around the carbon atoms, shifting the binding energy peak positively [32,33]. The observed order can be further explained by the amount of oxygen functional groups present. It has been reported that the sp2 and sp3 carbon binding energies of graphene oxide in the deconvoluted C1s peak appear in the range of 284.6 ± 0.3 eV, whereas carbon-bound oxygen functional groups appear at higher energy due to oxygen's electronegativity [14,15]. In addition, the B1s peak of B-rGO was observed at 193.3 eV, the N1s peak of the N-rGO at 400.1 eV, and a small F1s peak of the F-GO at 692.7 eV. The presence of the F, N and B in the modified GO-based nanomaterials was further confirmed by the high-resolution XPS spectra of F1s, N1s and B1s displayed in the supplementary material (Figure S1). The compositions of the GO and the modified-GO nanomaterials were calculated based on the survey spectra and are listed in Table 1. The oxygen atomic percentage decreased in the following order: GO (30.89%) > F-GO (29.97%) > B-rGO (21.63%) > TrGO (15.39%) > N-rGO (10.12%). The carbon atomic percentage also changed due to the reduction and doping. The high-resolution C1s spectra of GO and TrGO are displayed in Figure 2B,C, respectively; the deconvoluted C1s peaks resolve the contribution of each functional group, with the C=C component appearing near 284.6 eV [14,32]. The high-resolution C1s spectra and the associated deconvoluted peaks of N-rGO, B-rGO and F-GO are presented in Figures S2-S4, respectively. As expected, the reduction of GO removed most epoxy and hydroxyl groups, while retaining most of the carboxyl and carbonyl groups [16]. As a consequence, numerous sp2 carbons were regenerated upon the cleavage of the epoxy and hydroxyl functional groups, which is strongly evident when comparing the deconvoluted C1s peaks of GO (Figure 2B) and TrGO (Figure 2C): the area under the sp2 carbon component occupies a much larger percentage, whereas the epoxy/hydroxyl contributions are significantly decreased. In contrast, the other functional groups were reduced to a much lower degree. The XPS spectra of the other modified graphene electrodes demonstrated the same trend with respect to the quantity of oxygen functional groups remaining after reduction.
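The deconvolution mentioned above amounts to fitting the measured C1s envelope with a sum of component peaks centered at the binding energies of the individual functional groups. The scipy sketch below fits pure Gaussians to a synthetic spectrum; the C=C position follows the 284.6 ± 0.3 eV range quoted in the text, while the C-O and C=O positions, the shared width, and the data itself are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amp, center, width):
    return amp * np.exp(-((x - center) ** 2) / (2.0 * width ** 2))

def c1s_envelope(x, a1, a2, a3, w):
    # Three components: sp2/sp3 C (~284.6 eV, from the text); C-O (~286.7 eV)
    # and C=O (~288.3 eV) positions are assumed for illustration.
    return (gaussian(x, a1, 284.6, w) + gaussian(x, a2, 286.7, w)
            + gaussian(x, a3, 288.3, w))

# Synthetic C1s spectrum standing in for a measured GO scan.
be = np.linspace(280.0, 294.0, 400)
rng = np.random.default_rng(0)
signal = c1s_envelope(be, 1.0, 0.8, 0.3, 0.9) + rng.normal(0, 0.01, be.size)

popt, _ = curve_fit(c1s_envelope, be, signal, p0=[1.0, 1.0, 1.0, 1.0])
areas = popt[:3] * popt[3]      # Gaussian area ~ amplitude * (shared) width
print("relative component areas:", np.round(areas / areas.sum(), 2))
```

The relative component areas are what the text compares when it notes that the sp2 fraction grows and the epoxy/hydroxyl fraction shrinks after reduction.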
Electrochemical Characterization of the Various Fabricated GO-Based Electrodes The modified electrodes (F-GO/GCE, B-rGO/GCE, N-rGO/GCE, GO/GCE, and TrGO/GCE) and the pristine GCE were examined in a KNO3-ferricyanide medium (5 mM K3[Fe(CN)6] in 0.2 M KNO3) to compare their electrochemical performance. As seen in Figure 3A, the redox peak separations of the F-GO/GCE, GO/GCE, B-rGO/GCE, N-rGO/GCE, TrGO/GCE, and GCE were measured from the CV curves to be 129, 121, 104, 86, 100, and 99 mV, respectively. Figure 3B presents a comparison of the anodic peak currents of these electrodes. The largest peak separations were demonstrated by the F-GO/GCE and GO/GCE, indicating poor electron transfer efficiency. It is recognized that doping GO with fluorine is quite different from doping with other heteroatoms, as fluorine, unlike boron or nitrogen, cannot substitute for carbon atoms. Consequently, the incorporation of fluorine atoms generally disrupts additional sp2 carbon pi systems [34,35]. Similarly, graphene oxide typically demonstrates poor electron transfer efficiency due to the excess oxygen groups on the graphene sheets, as they tend to disrupt the sp2 carbon pi systems. This caused the low conductivities of the F-GO and GO, which resulted in a low current density for both electrodes, as shown in Figure 3B. Based on the XPS results listed in Table 1, the observed trend may be summarized as follows: as fewer oxygen atoms are present on the graphene sheet due to reduction, the electron transfer efficiency and current density increase, in that more sp2 carbon structures are regenerated. It is therefore logical to verify whether this trend also applies to the catalytic oxidation of naproxen at these electrodes.
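The peak separation reported above is simply the potential difference between the anodic and cathodic current peaks of a CV cycle. Below is a minimal sketch of that bookkeeping on a synthetic quasi-reversible curve (for brevity, both sweeps are placed on one potential axis; real CV data keep the forward and reverse scans separate):

```python
import numpy as np

def peak_separation(potential, current):
    """Delta-Ep from one CV cycle: anodic peak at the current maximum,
    cathodic peak at the minimum (oxidation taken as positive current)."""
    e_pa = potential[np.argmax(current)]
    e_pc = potential[np.argmin(current)]
    return abs(e_pa - e_pc)

# Synthetic quasi-reversible CV standing in for a ferricyanide scan.
e = np.linspace(-0.1, 0.6, 500)
i_anodic = np.exp(-((e - 0.28) ** 2) / 0.004)     # illustrative peak shapes
i_cathodic = -np.exp(-((e - 0.18) ** 2) / 0.004)
current = i_anodic + i_cathodic

print(f"peak separation: {1000 * peak_separation(e, current):.0f} mV")
```

Larger separations (as for F-GO/GCE and GO/GCE) indicate slower electron transfer at the modified surface.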
Similarly, graphene oxide typically demonstrates poor electron transfer efficiency because of the excess oxygen groups on its sheets, which tend to disrupt the sp2 carbon pi systems. This explains the low conductivities of F-GO and GO, which resulted in the low current densities of both electrodes shown in Figure 3B. Based on the XPS results listed in Table 1, the observed trend may be restated as follows: as fewer oxygen atoms remain on the graphene sheet after reduction, more sp2 carbon structures are regenerated, and the electron transfer efficiency and current density increase. It is therefore logical to verify whether this trend also applies to the catalytic oxidation of naproxen at these electrodes.

Electrochemical Behaviors of Naproxen at the Modified Electrodes

The CV curves recorded in 0.1 M PBS buffer (pH 7.2) in the absence (dashed line) and presence of 300 µM naproxen are presented in the supplementary materials (Figure S5).
The first shoulder peak at ~0.86 V can be attributed to the electrochemical oxidation of naproxen, in which a one-electron oxidation process proceeds with the formation of naproxen cation radicals. The second shoulder peak at ~1.17 V might be due to the further oxidation of the cation radicals to form the ketone (2-acetyl-6-methoxynaphthalene) [10]. However, the observed CV peaks were not well resolved; thus, differential pulse voltammetry (DPV) was carried out in order to achieve a higher detection level. The DPV technique is based on the premise that the capacitive current decays much faster than the Faradaic current, thus limiting the background current [36,37]. Figure 4A illustrates the detection of 300 µM naproxen in 0.1 M PBS (pH 7.2) using DPV. Two well-defined peaks were observed, with the first peak corresponding to the major product formed after decarboxylation [10,38], as illustrated in Scheme 1. Figure 4B compares the electrochemical performance of the different electrodes toward the oxidation of 300 µM naproxen. The GO/GCE exhibited the highest performance, with a peak current density of 216.41 µA/cm2 at 1.13 V, even though it had generated the second lowest current density (188.00 µA/cm2) and the second largest peak separation (121.0 mV) in the ferricyanide test (Figure 3A). Similarly, the F-GO/GCE had demonstrated the lowest performance in the ferricyanide test, with a peak current density of 108.39 µA/cm2 and the largest peak separation of 129.00 mV; nevertheless, it outperformed the bare GCE for the detection of naproxen. The GCE generated the lowest naproxen signal at 55.52 µA/cm2, although it showed the highest peak current density of 502.43 µA/cm2 in the ferricyanide test. These results reveal that the oxygen content and the dopants play critical roles in the electrochemical performance, with the percentage of oxygen (the epoxy and hydroxyl groups in particular) being the main contributor to the significant differences in the catalytic oxidation performance of the modified GO electrodes. For instance, GO possessed an oxygen content of 30.89% compared with 15.39% for TrGO. The rationale behind the performance of these electrodes may be that, as the oxygen content of graphene becomes higher, the catalytic oxidation of naproxen becomes stronger. The only outlier was the F-GO/GCE, which accounted for only 28.33% of the GO detection signal. However, fluorine doping is known to completely disrupt the sp2 carbon ring structure, thus further decreasing the conductivity of GO [13]. In comparison, nitrogen- or boron-doped graphene oxide typically possesses heteroatoms that replace carbon atoms and become incorporated within the ring structures [39]. According to frontier molecular orbital theory, the lowest unoccupied molecular orbital (LUMO) of a fluorine-carbon bond lies higher than that of carbon-carbon or carbon-oxygen bonds, which decreases the tendency of fluorinated graphene oxide to oxidize organic molecules. Furthermore, Park et al. reported that, owing to the significant difference in electronegativity between carbon and fluorine, electrons from the valence band are transferred to the LUMO [29]. Because this occupation by additional electrons polarizes the LUMO, the oxidizing ability of F-GO decreases, which further explains its low electrochemical performance [40].
This characteristic is also supported by the electrochemical sensing of heavy metal ions using F-GO, in which the metal ions were first reduced at the F-GO surface and then stripped off [41].

Electrode Fouling

The robust adsorption of naproxen is primarily facilitated by the binding interactions between the graphene sp2 carbon rings and the carboxylic acid group of naproxen [37]. Previous naproxen studies have not revealed a workable strategy toward a regenerable electrode capable of multiple detection events without a significant decrease in the signal current [33,37,38].
Certainly, the size of the naproxen molecule, and the fact that it contains hydrophobic (non-polar) benzene rings together with multiple polar functional groups, predispose it to foul electrodes very easily [37]. Figure 5 illustrates this strong fouling effect over multiple DPV scans: the oxidized products adsorbed onto the electrode, resulting in a dramatic decrease in the current density of the electrochemical oxidation of naproxen during the 2nd, 3rd, and 4th cycles. Most antifouling strategies involve a protective layer that prevents fouling agents from reaching the electrode surface [42]. However, in our case, where the fouling agent is the analyte itself, this strategy is not viable. Electrochemical activation and surface modification are two strategies that may resolve this issue. Here, we designed an electrochemical activation strategy that employs short pulses at a high anodic potential to remove the adsorbed species while preventing the oxygen evolution that might peel off the electrode coating. Specifically, this activation method adopted a multi-potential step approach with the following parameters: (i) initial potential at 0.0 V for 5 s; (ii) step to 2.8 V for 50 ms; (iii) step down to 0.0 V for 5 s; and (iv) repeat for 12 cycles. As shown in Figure 5, after the regeneration process the 5th DPV curve was almost identical to the first scan, showing that the activation strategy effectively overcame the fouling issue during the electrochemical detection of naproxen.
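The activation protocol above is easy to express programmatically. The sketch below generates the multi-potential step waveform as (time, potential) arrays that a potentiostat script could consume; the 1 ms sampling interval and the function name are our illustrative assumptions, not part of the original work.

```python
# Minimal sketch of the multi-potential step activation waveform:
# 0.0 V for 5 s -> 2.8 V for 50 ms -> 0.0 V for 5 s, repeated 12 times.
# The 1 ms sampling interval and names are assumptions for illustration.
import numpy as np

def activation_waveform(rest_v=0.0, pulse_v=2.8,
                        rest_s=5.0, pulse_s=0.050,
                        cycles=12, dt=0.001):
    """Return (t, v) arrays describing the regeneration pulse train."""
    levels, durations = [], []
    for _ in range(cycles):
        levels += [rest_v, pulse_v, rest_v]
        durations += [rest_s, pulse_s, rest_s]
    v = np.concatenate([np.full(int(round(s / dt)), level)
                        for level, s in zip(levels, durations)])
    t = np.arange(v.size) * dt
    return t, v

t, v = activation_waveform()
print(f"total duration: {t[-1]:.2f} s, max potential: {v.max()} V")
```

The key design point is the 50 ms anodic pulse: long enough to desorb the oxidized products, but short enough to avoid sustained oxygen evolution that could peel off the electrode coating.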
Figure 6A displays a series of DPV curves of the GO/GCE in 10.0 mL of 0.1 M PBS buffer (pH 7.2) at different naproxen concentrations, showing that the current density increased with increasing concentration. The associated calibration plot, for concentrations ranging from 10 µM to 1 mM, is presented in Figure 6B; its R2 value of 0.9963 signifies a very strong linear relationship. The sensitivity of the sensor, obtained from the slope of the regression line, was 0.60 µA µM−1 cm−2. The limit of detection (LOD) was calculated to be 1.94 µM using the formula LOD = 3σ/s, and the limit of quantification (LOQ) was calculated to be 6.47 µM using LOQ = 10σ/s, where σ represents the standard deviation of five blank measurements and s denotes the slope of the calibration curve.
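As a worked illustration of the LOD/LOQ formulas, the sketch below fits a calibration line and applies LOD = 3σ/s and LOQ = 10σ/s; the concentration grid and blank readings are synthetic stand-ins, not the paper's raw data.

```python
# Worked example of the LOD/LOQ calculation: fit the calibration line,
# then apply LOD = 3*sigma/s and LOQ = 10*sigma/s. All numbers below are
# synthetic stand-ins for the paper's measurements.
import numpy as np

conc_uM = np.array([10, 50, 100, 250, 500, 750, 1000], dtype=float)
peak_uA_cm2 = 0.60 * conc_uM + np.random.normal(0, 3.0, conc_uM.size)

# Five blank (0 uM) current readings; sigma is their standard deviation.
blanks = np.array([1.1, 0.8, 1.3, 0.9, 1.0])
sigma = blanks.std(ddof=1)

slope, intercept = np.polyfit(conc_uM, peak_uA_cm2, 1)  # s = sensitivity
lod = 3 * sigma / slope
loq = 10 * sigma / slope
print(f"sensitivity s = {slope:.2f} uA uM^-1 cm^-2, "
      f"LOD = {lod:.2f} uM, LOQ = {loq:.2f} uM")
```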
Interference Studies and Real Sample Analysis

The selectivity of the developed GO/GCE sensor was also investigated; Figure 7 presents the DPV response to 200 µM naproxen in the presence of chloride ions, nitrate ions, glutamic acid, glycine, citric acid, sulfate ions, ascorbic acid, and D-glucose (500 µM each). No notable response to the interfering species was observed, and the GO/GCE retained 91% of the oxidation signal obtained for pure naproxen, showing that the sensor had excellent selectivity toward naproxen. The performance of the GO/GCE sensor was further tested using a Life Brand Naproxen tablet (220 mg), as shown in Figure 8. The tablet was dissolved in 0.1 M NaOH to prepare the naproxen stock solution, and no apparent interference from the tablet excipients was observed. The peak current densities measured from the DPV curves (Figure 8) were fitted to the calibration plot (Figure 6B) for comparison. Multiple measurements at the same concentration were conducted, and a mean recovery of 96.9% with a relative standard deviation of 2.5% was obtained. These results verify that the sensor provides precise and accurate electrochemical quantification.

Conclusions

In summary, five different GO-based nanomaterials, namely GO, TrGO, B-rGO, N-rGO, and F-GO, were synthesized and systematically studied. The GO/GCE exhibited the strongest activity toward the electrochemical oxidation of naproxen compared with the GCE and the other modified GO electrodes. This was primarily due to an enhanced catalytic activity facilitated by the oxygen functional groups of GO, particularly the epoxy and hydroxyl groups, as confirmed by the XPS analysis. The sensitive quantification of naproxen was successfully achieved by DPV over a wide linear concentration range from 10 µM to 1 mM. Naproxen had a very strong propensity for fouling electrode surfaces, resulting in a substantial decrease of the current; a facile electrochemical activation strategy based on potential pulses was developed, which successfully overcame this critical fouling problem. The GO/GCE sensor developed in this study exhibited high sensitivity and selectivity over a wide range of naproxen concentrations, and off-the-shelf naproxen tablets were successfully analyzed with strong anti-interference capability. Future studies should elucidate the specific catalytic kinetics of naproxen with respect to oxygenated functional groups and heteroatom dopant levels, together with the catalytic oxidation and antifouling properties toward naproxen.

Supplementary Materials: The following are available online at http://www.mdpi.com/1424-8220/20/5/1252/s1. Figure S1: High-resolution XPS spectra of (A) the F1s peak of fluorinated graphene oxide at 692.675 eV; (B) the N1s peak of nitrogen-doped reduced graphene oxide at 400.137 eV; and (C) the B1s peak of boron-doped reduced graphene oxide at 197.455 eV. Figure S2: High-resolution C1s spectra with deconvoluted peaks for N-rGO. Figure S3: High-resolution C1s spectra with deconvoluted peaks for B-rGO. Figure S4: High-resolution C1s spectra with deconvoluted peaks for F-GO. Figure S5: CV curves of GO/GCE recorded at a scan rate of 50 mV/s in 0.1 M PBS (pH 7.2) buffer in the absence (dashed line) and presence of 300 µM naproxen (solid line).
7,523.6
2020-02-25T00:00:00.000
[ "Chemistry", "Materials Science", "Medicine" ]
CENTEMENT at SemEval-2018 Task 1: Classification of Tweets using Multiple Thresholds with Self-correction and Weighted Conditional Probabilities In this paper we present our contribution to SemEval-2018, a classifier for classifying multi-label emotions of Arabic and English tweets. We attempted “Affect in Tweets”, specifically Task E-c: Detecting Emotions (multi-label classification). Our method is based on preprocessing the tweets and creating word vectors combined with a self correction step to remove noise. We also make use of emotion specific thresholds. The final submission was selected upon the best performance achieved, selected when using a range of thresholds. Our system was evaluated on the Arabic and English datasets provided for the task by the competition organisers, where it ranked 2nd for the Arabic dataset (out of 14 entries) and 12th for the English dataset (out of 35 entries). Introduction Social network platforms such as Facebook, LinkedIn and Twitter are now at the hub of everything we do. Twitter is one of the most popular social network platforms; as recently as 2013 an incredible 21% of the global internet population used Twitter actively on a monthly basis (globalwebindex, accessed 05/2016). Twitter is used by celebrities, movie stars, politicians, sports stars and everyday people. Every day, millions of users share their opinions about themselves, news, sports, movies and many many other topics. This makes platforms like Twitter rich sources of data for public opinion mining and sentiment analysis (Pak and Paroubek, 2010). However, although these corpora are rich, they are somewhat noisy because tweets can be informal, misspelt and contain slang, emoticons (Albogamy and Ramsay, 2015) and made-up words. Furthermore, Arabic tweets have the added complication of dialects in which the same words or expressions can have different connotations. Multi-label classification of tweets is a classification problem where tweets are assigned to two or more classes. It is considered more complex than traditional classification tasks because the classifier has to predict several classes. There has been much work in the areas of sentiment detection (Rosenthal et al., 2017), emotion intensity (Mohammad and Bravo-Marquez, 2017) and emotion categorisation (Hasan et al., 2014). Sentiment analysis aims to classify tweets into positive, negative, and neutral categories, emotion intensity is determining the intensity or degree of an emotion felt by the speaker and emotion categorisation is the classification of tweets based on their emotions. The most commonly used classification techniques are Naive Bayes and Support Vector Machines (SVM). Some researchers report that SVMs (Barbosa and Feng, 2010) perform better while others support Naive Bayes (Pak and Paroubek, 2010). Furthermore, sophisticated techniques such as deep neural networks have also been proposed but such techniques are rarely used by non-experts of machine learning in practice (Sarker and Gonzalez, 2017) and they also take a long time to train. We propose a simple and effective method to classify tweets that performs reasonably well. Our system does not make use of any lexicons or stop word lists and is quick to train. Methods The SemEval Task E-c requires the classification of tweets into either a neutral emotion or one of eleven emotions (Mohammad et al., 2018). Datasets for tweets are made available in three languages; Arabic, English and Spanish. 
We focus firstly on Arabic and then English because this links well with our existing work. Datasets from previous SemEval tasks are also available if required. We use the SemEval-2018 development and training data for training our system, with no external resources such as sentiment dictionaries or other corpora. We use the training set to compute scores for each of the classes in conjunction with a self-correction stage and a multi-threshold stage to obtain an optimal set of scores. Apart from the preprocessing steps, notably stemming, we use exactly the same machinery for the two languages. We now briefly discuss our approach.

Preprocessing. Tweets are preprocessed by lowercasing (English tweets only), identifying and replacing emojis with emoji identifiers, tokenising and then stemming. We developed two tokenisers: one that is NLTK based and does not preserve hashtags, emoticons, punctuation and other content, and one that is "tweet-friendly" because it preserves these items. Emojis cause us technical problems due to their surrogate-pair nature so we replace emojis with emoji identifiers (e.g. 45 ). We also separate out contiguous emojis because we want, for example, the individual emojis in a group of repeating unhappy-face emojis to be recognised, and processed, as being the same emoji as a single unhappy-face emoji. We remove usernames because we believe they are noise: by and large, they will not reappear in the test set, they are not helpful to us, and if not removed they will compromise our ability to detect useful information. Arabic tweets are stemmed using a stemmer developed specifically for Arabic tweets (Albogamy and Ramsay, 2016). English tweets are stemmed by taking the shortest result from Morphy (Fellbaum, 1998) when tokens are stemmed as nouns, verbs, adjectives and adverbs. Although there are surprisingly few examples of these, we believe that multi-word hashtags, joined by an underscore or a dash, also contain useful information, so we leave the hashtag as is but also take a copy of the hashtag and split it into its constituent words. This is so that, where possible, we improve the quality of information in the tweet. Stop word lists are not used at any stage. We debated using stop words vs insignificant words and, as in our previous work (Ahmad and Ramsay, 2016), we prefer to let our algorithms exclude these words. We do however remove less common words on the grounds that if they do not appear very often then we are unlikely to learn anything from them. The English training dataset contains approximately 6300 distinct words after preprocessing; we find that taking the top 2500 of these gives us the most common words and the best results. Our approach is not to collect scores for individual emotions; instead we collect scores relative to the other emotions. Constructing scores in this manner allows us to observe that words such as "blessed" are much more significant for emotions such as "joy", "love" and "optimism" than they are for "anger" and "anticipation". Words that are insignificant will have small scores, words that are significant will have large scores, and by using a varying threshold we can determine a best set.

Base set. Every tweet in the training dataset is tokenised and we count how many tweets each token occurs in. We also remove singletons and calculate an IDF for each token.
We iterate through the tokens of each tweet to create a base set of scores, obtaining a count of how many times each token occurs in each of the 11 emotions as well as a count of the total number of tokens in each emotion. In a later stage we iterate over a range of thresholds; this base set is the starting point in each iteration and is modified by the various processes described below.

Conditional probabilities. We now use this base set to create a set of emotion probabilities for each token. One common way of using probabilities is in conjunction with Bayes' Theorem. However, this does not seem to work very well for this task, hence we perform the following steps. We calculate the probability of each token appearing in each emotion using P(T|E). We do this only on the top 2500 most important tokens in the dataset, i.e. those with the highest IDF scores. We normalise these probabilities by dividing each value by the sum of all the probabilities for this token over all emotions. We compute the average of these values and subtract it from each of the scores to calculate the distance from the mean. This is, essentially, a local IDF step to ensure that if a token is equally common for all emotions then we do not allow it to contribute to any of them, and if it is below the overall average for an emotion we want it to be allowed to vote against it. We want to assign extra weight to tokens that have very skewed distributions, hence we multiply each score by the standard deviation. This emphasises the contribution of such tokens to the emotion and allows us to remove unhelpful tokens. In this way we create a set of emotion scores for each token for every emotion.

Self-correction. We want to remove tokens that we have incorrectly assigned to emotions. We classify each tweet to determine which emotions it demonstrates and we identify the tokens that led us to these conclusions. A tweet is classified for each emotion by adding the scores of its tokens for that emotion. These scores are normalised and compared to a threshold t. If the value is less than t we deduce the tweet did not demonstrate the emotion, otherwise it did. We are unsure what a good threshold is, so we use a range of values for t from 0 to 1 (in steps of 0.1) to create score sets. We calculate the Jaccard score for each of these and use the best one for classification. This approach is based on Brill's (Brill, 1995) suggestion that one should attempt to learn from one's own mistakes. As each tweet is classified we compare our prediction to the gold standard. For the ones that we predict correctly we increment a counter for each token against the correctly classified emotion. Similarly, for the ones where we failed to classify the tweet correctly we decrement the counter for each token against the incorrectly classified emotion. When all tweets have been classified we examine these counters. For each token, if we have an overall negative score for an emotion we deduce that the token is unhelpful in classifying tweets for that emotion and we downplay its significance in further calculations. Using this technique we are able to remove tokens such as "terrifying" from contributing to emotions such as "love". We have tried repeating this process multiple times, but we find that beyond one iteration the improvement is insignificant. A possible explanation is that the actual numbers of tokens removed are quite small: 1% for Arabic and 5% for English.
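Since the system's code is not published, the sketch below is our reading of the scoring recipe described above: P(T|E) per token, normalisation across emotions, mean-centering, standard-deviation weighting, and thresholded classification. The variable names, the min-max normalisation of tweet scores, and the random counts are illustrative assumptions.

```python
# Sketch of the per-token emotion scores described above: P(token|emotion),
# normalized across emotions, mean-centered, then weighted by the standard
# deviation to emphasize skewed tokens. All names are illustrative.
import numpy as np

def emotion_scores(counts, emotion_totals):
    """counts: (n_tokens, n_emotions) token occurrence counts per emotion;
    emotion_totals: (n_emotions,) total token counts per emotion."""
    p = counts / emotion_totals                   # P(T|E) for each emotion
    p = p / p.sum(axis=1, keepdims=True)          # normalize across emotions
    centered = p - p.mean(axis=1, keepdims=True)  # distance from the mean;
                                                  # below-average values vote against
    return centered * p.std(axis=1, keepdims=True)  # emphasize skewed tokens

def classify(token_ids, scores, thresholds):
    """Sum token scores per emotion, normalize, compare to per-emotion
    thresholds (the multi-threshold stage)."""
    s = scores[token_ids].sum(axis=0)
    s = (s - s.min()) / (s.max() - s.min() + 1e-9)  # assumed normalization
    return s >= thresholds                          # boolean vector, 11 emotions

counts = np.random.randint(0, 20, size=(2500, 11)).astype(float)
scores = emotion_scores(counts, counts.sum(axis=0) + 1)
labels = classify(np.array([3, 17, 42]), scores, np.full(11, 0.5))
```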
The raw data for each emotion is different and, hence, we find that a single fixed threshold across all emotions produces poor results. We therefore try a range of thresholds from 0 to 1 in increments of 0.1 to classify tweets, using the same mechanism described above, but this time on an emotion-by-emotion basis to generate an individual threshold for each emotion.

SemEval results. We classify the training data using our sets of scores and per-emotion thresholds. We identify the set with the best Jaccard score and use it to classify the test data to generate our eventual submission file.

Other Strategies

Increased training data. We believe that having more training data might improve our classifier. One of the obvious places to get more data is from the datasets for some of the other tasks, specifically EI-reg and EI-oc. A key problem with this data is that both of these tasks only supply datasets for anger, fear, joy and sadness. The EI-reg dataset is marked up with a per-tweet intensity value between 0 and 1 that represents the mental state of the tweeter. The EI-oc dataset tweets are marked up with one of four ordinal classes (0, 1, 2, 3). To expand our training dataset we extract tweets with values of 0.5 and above from the EI-reg dataset and tweets with a value of 3 from the EI-oc dataset. The best Jaccard score we obtain with this expanded dataset is 0.417 (English). When we extract tweets with values of 0.9 or above from the EI-reg dataset we improve the quality of tweets, at the cost of decreasing the number of tweets extracted, and this slightly improves our Jaccard score to 0.429. Similarly, the competition organisers also make available a corpus of 100 million English tweet IDs. We download 10,000 of these filtered on words that we believe are representative of the emotions we are looking for, e.g. "angry", "elated", "trusting". A serious weakness with this technique, however, is that the accuracy of this data is compromised; we therefore classify this data using our classifier. We then combine this data with the standard English dataset and classify it again. We do not want this data to be more relevant than the real data, so we weight down the scores from this data. The best Jaccard score we obtain with this expanded dataset is 0.430.

Latent semantic analysis (LSA). Latent Semantic Analysis (LSA) is a theory and method for extracting and representing the contextual-usage meaning of words by statistical computations applied to a large corpus of text (Landauer and Dumais, 1997). Essentially, to improve our classifier we need to improve the quality of our tweets. We use LSA to find words in tweets that are similar to other words, e.g. "car" and "automobile". We do not have the computing power to do this on a per-tweet basis so we do this on a per-emotion basis. The concepts we find, however, are not very reliable, e.g. "blessed" and "happiness". We expand our tweets with these words but find that this does not improve our scores. A possible explanation for this might be the relatively small numbers of tweets in the datasets.

Duplicate tweets. We note that there are tweets in the English dataset that are semantically similar, e.g. "You offend me, @Tansorma" and "@SunandBeachBum 'you people' infuriate me!". It may be possible to use clustering (Sarker and Gonzalez, 2017) to relate tweets like these as a means of removing duplicates. We further note that there are many cases of tweets that differ only by hashtags or emojis, e.g.
"@britishairways term 5 security queues at arrivals" and "@britishairways term 5 security queues at arrivals #shocking". A further study could assess the impact of using Minimum Edit Distance (Wagner and Fischer, 1974) on this later data to improve the quality of the dataset. Emoticon weighting. Emoticons have proved crucial in the automated emotion classification of informal texts (Novak et al., 2015). To increase their significance we double their raw count values. We find that this increases the accuracy of our classifier by 0.44% for both Arabic and English. Word frequencies. We try to use the word frequency as an extra weight to further dampen the contribution of words that are low frequency because low frequency words do not contribute very much. However, because we have earlier taken only the 2500 commonest words we find that this does not improve our scores. Computing Resources The system was written in Python on a MacBook Pro, 2.7 GHz Intel Core i5, 8 GB RAM. The training and classification phase takes approximately 15 minutes. Results, Comments and Conclusion We described a self-correcting, multi-threshold, classifier to solve the problem of multi-label classification of tweets. We find that due to the nature of the data it is difficult to accurately distinguish between emotions such as "joy" and "love" because many of the words that score highly for "joy " also score highly for "love", e.g. "rejoice", "birthday" and "cheerful". Consequently when a tweet is labelled as "love" it is highly likely that it will also be labelled as "joy". We find similar issues with "anger" and "disgust", although not to the same extent, because words like "shit" and "hate" score highly for both emotions. Overall, we believe that we score much higher on emotions such as "anger", "joy", "love" and "disgust", than on "trust" "anticipation", "optimism" and "pessimism". Our results, given in Table 1, show that although processes such as lowercasing, tokenising and stemming do contribute, the tuning stage and the introduction of multiple thresholds yield the biggest improvements. This is because removing words which are implicit in the classifier making wrong decisions and allowing each emotion to have its own threshold are obviously sensible things to do. One unanticipated finding was that our tweetfriendly tokeniser has an adverse effect decreasing the Jaccard score when it is used. A possible ex-planation for this is that the simple tokeniser removes # and @ symbols, thus modifying hashtags such as "#sleep" into "sleep" and allowing them to combine with the word "sleep" in other tweets. On the other hand the tweet-friendly tokeniser preserves the "#sleep" hashtag and it therefore cannot combine with the word "sleep". We want the best of both worlds so we preserve our hashtag but also take a copy and split it into its constituent words. Contrary to expectations, the performance improvement gained from using our Arabic stemmer is disappointingly low at just 2.67%. We believed that our Arabic stemmer would have a bigger impact than demonstrated because the stemmer is aimed at, and specifically developed for, Arabic tweets. In fact our simplistic Morphy English stemmer produced a better improvement of 14.8% for English than our carefully tuned Arabic stemmer did for Arabic. The scores we achieved put us 2nd for the Arabic dataset and 12th for the English dataset despite the fact that we use no external resources, we simply train on the basis of the SemEval data. 
We will be carrying out further experiments to see whether adding external resources would give us further improvement.
4,029.2
2018-06-01T00:00:00.000
[ "Computer Science" ]
Multi-Modal Residual Perceptron Network for Audio–Video Emotion Recognition

Emotion recognition is an important research field for human–computer interaction. Audio–video emotion recognition is now attacked with deep neural network modeling tools. In published papers, as a rule, the authors show only cases of the superiority of multi-modality over audio-only or video-only modality. However, cases of superiority of uni-modality can be found. In our research, we hypothesize that for fuzzy categories of emotional events, the within-modal and inter-modal noisy information represented indirectly in the parameters of the modeling neural network impedes better performance in the existing late fusion and end-to-end multi-modal network training strategies. To take advantage of and overcome the deficiencies of both solutions, we define a multi-modal residual perceptron network which performs end-to-end learning from multi-modal network branches, generalizing better multi-modal feature representation. With the proposed multi-modal residual perceptron network and a novel time augmentation for streaming digital movies, the state-of-the-art average recognition rate was improved to 91.4% on the Ryerson Audio–Visual Database of Emotional Speech and Song dataset and to 83.15% on the Crowd-Sourced Emotional Multimodal Actors dataset. Moreover, the multi-modal residual perceptron network concept shows its potential for multi-modal applications dealing with signal sources not only of optical and acoustical types.

This paper presents a novel end-to-end Deep Neural Network (DNN) based framework, as Figure 1 illustrates, addressing the Audio-Video Emotion Recognition (AVER) problem. Just as human beings understand emotional expressions in daily social activities through multiple senses (e.g., visual, vocal, textually meaningful), neural computing units, as parts of intelligent artificial sensors, now play important roles in emotion recognition tasks. Specialized sensors in Human-Computer Interaction (HCI) capture the visual and vocal information, just as we humans understand emotional expressions through our multi-senses.

Emotion recognition from face expression and voice timbre

Intuitively, the functionalities of intelligent artificial neurons are assigned concepts similar to those of our brain cells, processing information from the raw senses independently and appropriately to their types.
Visually, the information captured by the camera is distributed over several frames, as Figure 2 shows. The discrete information in a single frame is first delivered to pattern-extracting intelligent sensors for features such as Fisherfaces and Eigenfaces [1] or the deep features from a Convolutional Neural Network (CNN) [2]. To fully preserve the information from the discrete signals, a Sequence Aggregation Component (SAC), e.g., Long Short-Term Memory (LSTM) [3] or a Transformer [4], is then needed to further process the extracted features. Finally, a classifier such as a Support Vector Machine (SVM) or a neural dense layer takes the integrated features for the classification. Raw vocal inputs typically carry 10,000 to 44,100 samples per second, while the visual frame rate is about 25-30 image frames per second. Though raw digital signals in the time domain approximate the original signal precisely, their spectral representations, e.g., Spectrogram frames, Mel-spectrogram coefficients, or Log Mel-spectrogram frames, have proven more effective for sound recognition. The spectrally converted vocal signals have shown significant improvements in many classification problems, in spite of some limitations. Expression events do not last the same time, so the width of the Spectrogram frames changes, which is not desirable for CNN pattern extractors. Therefore, the extracted features also need further processing by an SAC, which outputs integrated features. Figure 3 shows expression events from different categories and with different time durations.

Multi-modal emotion recognition

The AVER solution also follows the sensation of human beings: people claim they hear the sound when looking at sheet music, smell the odor when recalling the memory of a photo, or see the seaside in the smell of the air. The multi-sensory information is processed by different areas of our cerebral cortex (movement, hearing, seeing, etc.) and then highly correlated by other brain areas. Thus, the decision made depends not just on recognizing the uni-modal sensations independently, but also jointly.

The learning process of neural sensors should mimic our own. During supervised training of neural network sensors, the neurons shape their weights just as our cerebral cortex changes under the stimulation of the environment, looking for correlations during the learning process. However, we claim that the existing late fusion and end-to-end training strategies hold their own advantages but also deficiencies.

Paper contribution and structure

This paper shows by experiments the deficiencies of two training strategies: late fusion and end-to-end. The late fusion strategy takes trained, static uni-modal networks and trains their fusing network components. The end-to-end strategy trains all the multi-modal and uni-modal components together. A novel architecture is proposed to take advantage of both solutions while avoiding their respective side effects. We demonstrate the superiority of the novel end-to-end mechanism and architecture compared with the naive fusion mechanism in either late fusion or end-to-end training. The proposed DNN framework, data augmentation procedures, and network optimization strategy are discussed. A detailed analysis and discussion are presented through computing experiments on the Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS) and Crowd-Sourced Emotional Multimodal Actors Dataset (Crema-d). Our major contributions are summarized as follows:
1. Multi-modal framework: We propose novel within-modality Residual Perceptrons (RP) for efficient gradient blending during neural network optimization with a multi-term loss function in MRPN. The sub-networks and target loss functions produce superior parameterized multi-modal features, preserving the original knowledge of the uni-modalities, which inter-modal learning otherwise impedes. The within-modality RP components reduce the side effects brought by such multi-term loss functions. As a result, we obtain significantly better performance over direct strategies, including late fusion and end-to-end training without MRPN.

2. Time augmentation of input frames: We demonstrate that data augmentation in time, involving random slicing over the input frame sequences of both modalities, improved the recognition performance to the state of the art, even without MRPN. We also show that time augmentation does not solve the cases where uni-modal solutions are better than multi-modal solutions, yet these are solved by MRPN.

Related work

2.1. Superiority in the multi-modal approach

Many works have shown significant improvement from multi-modal solutions. N. Neverova et al. [5] suggest gradual fusion involving the random dropping of separate channels, and this method was adopted by V. Vielzeuf et al. [6] in their best-performing AVER solution. Fusion at the early or late stage is discussed by others. R. Beard et al. [7] proposed multi-modal feature fusion at the late stage, while E. Ghaleb et al. [8] project features to a shared space at the early stage and provide external loss functions to minimize the distance between features from different modalities. A. Zadeh et al. [9] proposed a Multi-view Gated Memory to gate the multi-modal knowledge from LSTMs in the time series. E. Mansouri-Benssassi and J. Ye [10] achieve early fusion by creating distinct multi-modal neuron groups. S. Zhang et al. [11] take features from CNN and 3D-CNN models for the vocal and visual sources and then apply global averaging to obtain video features. N.C. Ristea et al. [12] take the features extracted by CNNs from both modalities and exploit the fused features for classification purposes. E. Tzinis et al. [13] use cross-modal and self-attention modules. Y. Wu et al. [14] localize events across modalities. E. Ghaleb et al. [15] suggest multi-modal emotion recognition metric learning to create a robust representation for both modalities.

Potential failures in the existing solutions

Let us consider the human brain learning process again, and suppose the information is wrong in some of the sensory stimulation. Imagine a child who has learned, from manipulated movies, an animal that looks just like a dog but makes the sound of a cat, and who has never encountered dogs or cats in a real-life environment. He will either see a dog and say it is a cat, or hear the cat sound and say it is a dog. The situation becomes even worse if the stimulation he learned from is also fuzzy within its own sensation.
His recognition is still intact to some extent, in that he can sometimes correctly recognize the visual or acoustic information pattern. But the recognized information is distorted, along with the correlation of the inter-modal information. This makes the distorted uni-modal knowledge he learned also have a negative impact on the other modality. We address the same concept in current multi-modal neural network solutions: the within-modal and inter-modal noisiness of the learned patterns both contribute to wrong recognition. Despite the many advantages of multi-modal solutions, which boost the recognition performance of emotion recognition tasks, we hypothesize that the uncontrolled fusion strategy, adopted by [6,7,9,11,14,16-18], can lead to potential deficiencies in either the late fusion or the end-to-end training strategy.

Though many works have shown the superior performance of the late fusion strategy [19-21], W. Wang et al. [22] illustrate, for instance for audio event detection in video material, that the results of naive fusion of multi-modal features can be worse than the best uni-modal approach. They propose blending the gradient flow via multi-task loss functions (referred to as a multi-term loss function by us) from the uni-modalities and the multi-modality, which helps better parameterization of the whole system in many other research areas. Though they suggest benefits from blending the gradient flows, multi-tasking can make the features hard to optimize when serving both uni-modal and multi-modal purposes, as suggested by many researchers [23-25]. We demonstrate how this proposal can still fail in some inferior cases that are nevertheless solved by the within-modal RP component in MRPN.

Hypothesis

In this section, we discuss our hypothesis that fuzzy information from the uni-modalities can cause chaos not just in the uni-modal neurons but also in the correlation neurons, namely the fusion component.

Within-modal information can be missing or fuzzy

Missing or fuzzy information can be noticed in either the visual or the vocal modality of an emotion recognition solution, and then the success rate of recognition cannot be increased noticeably. Missing information refers to feature data where emotion categories are confused with the neutral category in the uni-modality. Fuzzy information stands for feature data where one emotion category cannot be distinguished from another in the uni-modality.

For example, the visual modality results of the FER-2013 challenge [26] for single-image facial expression recognition have improved by only about 4% over the past eight years, to 76.8%, by W. Wang et al. [27]. Moreover, for video frames, H.W. Ng et al. [28] obtained 47.3% validation accuracy and 53.8% testing accuracy on the EmotiW dataset [29] using transfer learning and averaged temporal deep features. Similarly, for vocal solutions, the results on the Interactive Emotional Dyadic Motion Capture dataset (IEMOCAP) [30] with raw inputs are reported at around 76% by S. Kwon [31] and 64.93% by S. Latif et al. [32]. The recognition rate in these cases is far from optimal.
Apart from the design, functionality, and training of the neural network, the human voting for those datasets draws our concern. As the teacher in supervised learning, almost all datasets related to emotion recognition carry unsatisfactory knowledge. Those who understand human emotions best, human beings themselves, cannot reach majority agreement with the authors' labeling. On average, the human recognition rate of the emotional categories is 72% for IEMOCAP, human accuracy on FER-2013 [26] is 65±5%, Crema-d [33] holds an accuracy of 63.6%, and RAVDESS [34] has a result of 72.3%. All these reports point out that in every uni-modality the information in the data is never crystal clear; thus the learned knowledge of a uni-modality in emotion recognition can be corrupted, uncontrolled by the network. We cannot identify or agree on which samples are wrong, because the boundaries of the clusters are quite subjective.

End-to-end modeling for multi-modal data can be distorted

Multi-modal solutions seem to find more generalized patterns via the extension of parameters. However, the fused features leave a backdoor for distorted pattern learning, whose side effects are concealed by its benefits. Under such circumstances, we do not know which training sample is fuzzy in which modality; this causes a fuzzy direction not only of within-modal learning but also of inter-modal learning in end-to-end training. For example, information that is fuzzy in modality A while crystal clear in modality B can result in correct learning for modality B yet fuzzy learning for modality A. In the end, the distribution of the wrongly learned knowledge is unknown. Figure 5 illustrates the source of this deficiency in the architecture during gradient backpropagation, wherein the blue frame denotes the fusion component. Namely, the concatenation unit of the features from different modalities can backpropagate gradients that jointly modify weights in each modality, potentially distorting the knowledge in some modality.

Late fusion modeling for multi-modal data can be insufficient

Late fusion seems to prevent the inter-modal learning of the system; however, not only is the distribution of the fuzzy information across modalities unknown, but so are the clean data samples which hold highly correlated information between modalities. If the samples contain clean information in all modalities, the frozen parameters of the shallow layers cannot make the proper adjustments to learn inter-modal information from the joint gradient flow.

Proposed Methods

Addressing the mentioned issues, we propose a novel MRPN along with a multi-term loss function for better parameterization of the whole network, taking advantage of both the late fusion and end-to-end strategies while avoiding their deficiencies. MRPN can eliminate these problems without assuming the data are noisy or clean.

Functional description of analyzed networks

The functional descriptions of the analyzed deep networks are presented for their training mode (see Figure 6). They are based on the selected functionalities of neural units and components. We use the index m for inputs of any modality; in our experiments m = v or m = a.

1. F_m: feature extractor for the input temporal sequence x_m of modality m, e.g., F_v for video frames x_v and F_a for audio segments x_a.

2. A_m: aggregation component (SAC) for the temporal feature sequence, leading to a temporal feature vector f_m, e.g., A_v and A_a for video and audio features, respectively.
3. Standard computing units: DenseUnit - affine (a.k.a. dense, full connection); Dropout - random element dropping for model regularization; FeatureNorm - normalization for regularizing learned data (batch norm is adopted in the current implementation); Concatenate - joining feature maps; and ReLU, Sigmoid - activation units.

4. Scoring - component mapping feature vectors to a vector of class scores, composed of the standard units above.

5. FusionComponent - concatenates its inputs g_v, g_a, then applies the statistical normalization, and finally produces the vector of class scores. In our networks g_v, g_a are the statistically normalized multi-modal features (f̃_v, f̃_a) or their residually updated form (f̂_v, f̂_a); cf. those symbols in Figure 6.

6. SoftMax - computing unit for the normalization of class scores z to class probabilities: softmax(z)_i = exp(z_i) / Σ_j exp(z_j).

7. CrossEntropy - a divergence of probability distributions used as the loss function. Let p be the target probability distribution; loss terms L_v, L_a, and L_va are then defined for the video, audio, and fused branches, and L denotes the multi-term loss function combining them, implying gradient blending in the backpropagation stage.

8. ResPerceptron (Residual Perceptron) - component performing statistical normalization followed by a dense unit (perceptron) computing residuals for the normalized data. In our solution it transforms a modal feature vector f_m into its residually updated form f̂_m.

Three networks N_0, N_1, N_2 are defined for further analysis:

1. Network N_0(f_v, f_a; p) with the fusion component and loss function L_va.

2. Network N_1(f_v, f_a; p) with the fusion component and the fused loss function.

3. Network N_2(f_v, f_a; p) with the normalized residual perceptron, the fusion component, and the fused loss function L.

For the networks N_0, N_1, N_2 detailed in Figure 6, we can observe:

1. All instances of the FeatureNorm unit are implemented as batch normalization units.

2. In testing mode only the central branch of networks N_1, N_2 is active, while the side branches are inactive, as they are used only to compute the extra terms of the extended loss function.

3. The above facts make the network architectures N_0, N_1 equivalent in the testing mode. However, the models trained for those architectures are not the same, as the weights are optimized for different loss functions.

4. In the testing mode, all Dropout units are inactive as well.

5. The architecture of the FusionComponent is identical for all three networks. The difference between the models of the N_0 and N_1 networks follows from the different loss functions, while the difference between the models of the N_1 and N_2 networks is implied by using ResPerceptron (RP) components in the N_2 network.

6. To control the range of affine combinations computed by the Residual Perceptron (RP) component, we use Sigmoid activations instead of the ReLU activations exploited in the other components. The experiments confirm the advantage of this design decision.

7. The Residual Perceptron (RP) was introduced in the network N_2 to implement better parameterization of the within-modal features before their fusion.

As we discussed in the hypothesis section, the late fusion strategy has the advantage of preserving the best information in each uni-modality, since each uni-modality extracts generalized deep features which suffer little from the outliers of its own modality; that is,
a small amount of wrongly labeled data in a uni-modal solution does not contribute to the generalized feature patterns; such samples are "filtered out" by the uni-modal neural network. Thus the additional terms of the loss function imply blended gradients in the shallow layers of each uni-modality and help to better parameterize the features before fusion, preserving knowledge as if the uni-modalities were trained separately. This lets the end-to-end strategy preserve uni-modal knowledge much as late fusion does. 2. However, multi-term optimization can result in extracting inferior uni-modal features as the input to the fusion component; this problem was mentioned in the literature [23][24][25]. RP is introduced to produce modified uni-modal features: instead of storing all knowledge for uni-modal and multi-modal purposes in one unit, which causes a clash between losses converging from two directions, the uni-modal and multi-modal knowledge can be stored in the original uni-modal features and in the modified multi-modal features, creating a new path for the gradient flow. RP can preserve the best of the uni-modal solution while the modified features from the shortcut still fulfill the purpose of integrating new multi-modal features.

These two novel properties make MRPN free from the side effects of the late fusion and end-to-end strategies while preserving their advantages. We suggest that MRPN can be adopted in any multi-modal application with many multi-modal inputs and one target function, or many multi-modal inputs and many multi-modal target functions, as Figure 7 shows. In both cases MRPN benefits from as many loss terms as there are uni-modalities, updating the whole system together while avoiding learning from inter-modal fuzzy information. MRPN is general enough to be compatible with any other proposed mechanism.

Pre-processing Our data pre-processing includes procedures for both modality inputs; namely, spatial and time-dependent augmentations are applied. 1. Spatial data augmentation for visual frames: The facial area in the visual input frames is cropped using a CNN solution from the Dlib library [35]. Once the facial area is cropped, spatial video augmentation is applied during the training phase. The same random augmentation parameters are applied to all frames of a video source, as illustrated in Figure 8. 2. Time-dependent data augmentation for visual frames: Obviously, expressions from the same category do not last the same duration. To make our system robust to the inconsistent duration of emotion events, we perform data augmentation in time by randomly slicing the original frames, as Figure 9 illustrates. The operation should also avoid inputs with too few frames, which would miss information about the expression events; thus the training segments are selected to have at least one second of duration unless the original file is shorter than that.

Computational Experiments and their Discussion This section presents an evaluation of the advantages of our proposed framework and of time-dependent augmentation. Two datasets, RAVDESS and Crema-d, are employed for this purpose. The improvement from the time-augmentation mechanism is analyzed in the naive fusion model, which brought us state-of-the-art results even without the MRPN design. The inferior cases of such common neural multi-modal solutions are detected and discussed in the comparison. The improvement of MRPN is then presented, not just on the detected inferior sub-datasets but also on general data samples.
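Before turning to the datasets, here is a minimal sketch of the Residual Perceptron and fusion components described above. PyTorch is an assumed framework (the paper does not publish code), and since the exact residual formula is not reproduced in this extraction, the sigmoid-bounded residual below is our assumption based on the component description; dimensions and names are illustrative.

```python
import torch
import torch.nn as nn

class ResPerceptron(nn.Module):
    """Sketch of RP: normalize a uni-modal feature vector, then add a
    sigmoid-bounded dense residual, so the original uni-modal features
    stay intact while a second path carries multi-modal gradients."""
    def __init__(self, dim: int):
        super().__init__()
        self.norm = nn.BatchNorm1d(dim)   # FeatureNorm (batch norm here)
        self.dense = nn.Linear(dim, dim)  # DenseUnit computing the residual
        self.act = nn.Sigmoid()           # bounded update, per observation 6

    def forward(self, f_m: torch.Tensor) -> torch.Tensor:
        f_hat = self.norm(f_m)
        return f_hat + self.act(self.dense(f_hat))  # residually updated f'_m

class FusionComponent(nn.Module):
    """Concatenate modal features, normalize, and score the classes."""
    def __init__(self, dim_v: int, dim_a: int, n_classes: int):
        super().__init__()
        self.norm = nn.BatchNorm1d(dim_v + dim_a)
        self.score = nn.Linear(dim_v + dim_a, n_classes)

    def forward(self, g_v: torch.Tensor, g_a: torch.Tensor) -> torch.Tensor:
        return self.score(self.norm(torch.cat([g_v, g_a], dim=1)))
```

The sigmoid keeps the residual update bounded, matching the design decision noted in observation 6 above.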
Datasets RAVDESS and Crema-d differ in the number of expression categories, total files, identities, and video quality. 1. The RAVDESS dataset includes both speech and song files; for the speech recognition purpose, we use only the speech files. It contains 2880 files of 24 actors (12 female, 12 male) stating two lexically identical statements. Speech includes calm, happy, sad, angry, fearful, surprised, and disgusted expressions, each produced at two levels of emotional intensity (normal, strong), with an additional neutral expression, for a total of 8 categories. To the best of our knowledge, it is the most recent video-audio emotional dataset with the highest video quality in this research area. 2. The Crema-d dataset consists of visual and vocal emotional speech files covering a range of basic emotional states (happy, sad, anger, fear, disgust, and neutral). 7,442 clips of 91 actors with diverse ethnic backgrounds were rated by multiple raters in three modalities: audio, visual, and audio-visual.

For both datasets, the training and testing sets are separated using a concept similar to 10-fold cross-validation. Additionally, the identities of the actors are kept separate between the train and validation sets, to prevent the results from leaning on particular actors. Around 10% of the actors are used for validation and the remaining 90% for training, with male and female actors balanced in each set. We rotate the train/validation splits to obtain multiple results over the whole dataset. The Crema-d dataset has fewer categories for the classification task, but by the authors' own report, human recognition accuracy on Crema-d is 63.6% for 6 categories, lower than the 72.3% for 8 categories on RAVDESS. We verified that the resolution of the video source is not the cause of the worse performance. In our opinion, the better results on the RAVDESS dataset come from the clearer and more natural emotional information it contains.

Model organization and computational setup The naive fusion model N_0, the advanced fusion network N_1, which is equivalent to the Facebook solution [22], and N_2 (MRPN) share the same CNN extractors at the initial stage of training. To compare the impact of the fusion strategy alone, the CNN extractor architecture is fixed to Resnet-18 [36]. The CNN for the visual modality is initialized from a facial expression recognition task, the FER2013 challenge [26]. For the vocal modality, the CNN is pretrained on a voice recognition task on the VoxCeleb dataset [37]. Initializing the CNN extractors makes the whole system much easier to optimize. The AdamW optimizer is adopted for model optimization, with an initial learning rate of 5 × 10⁻⁵, reduced by a factor of two whenever the validation loss does not drop for ten epochs.
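The optimizer and schedule just described map naturally onto standard PyTorch utilities; the sketch below assumes PyTorch and uses a stand-in linear model and a dummy validation pass, since the real network and training loop are not reproduced here.

```python
import torch
import torch.nn as nn

model = nn.Linear(512, 8)  # stand-in for the real fusion network
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
# Reduce the learning rate by a factor of two when the validation loss
# has not dropped for ten consecutive epochs.
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, mode="min", factor=0.5, patience=10)

for epoch in range(100):
    val_loss = float(torch.rand(1))  # placeholder for a real validation pass
    scheduler.step(val_loss)
```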
Data augmentation cannot generalize multi-modal feature patterns This subsection illustrates the improvement from time-dependent augmentation. The improvement also shows that the inferior cases of the multi-modal solution do not depend on the within-modal patterns. The single-modality solutions in our experiments (shown in Table 1) take pretrained Resnet-18 as extractors and LSTM cells as SACs. The naive multi-modal solution doubles these components and adds a fusion layer, as Figure 6 illustrates in the left panel. Adopting time-dependent augmentation improves overall performance for both single- and multi-modal solutions. The table notation is as follows: in the varied train/val sub-datasets of Table 1, Ax,y stands for validation files that came from actors x and y; an odd number denotes a male actor and an even number a female actor.

Discussion on inferior multi-modal cases Time augmentation shows overall improvements for both the uni-modal and multi-modal approaches, yet the inferior case in which a uni-modal solution beats the multi-modal one still exists, which suggests that data augmentation cannot generalize multi-modal features. Only one inferior case is detected in Table 1 (case A9,10), but we argue that this deficiency is common in fuzzy multi-modal data. The pattern-learning ability in both modalities is good enough: both solutions achieve performance over 85% in cases like A7,8 and A1,2. But the ratio of mismatched learned and target patterns varies with the shuffling of the sub-datasets. The degradation in performance becomes visible only when the percentage of pattern-mismatched samples in the training set passes some threshold. If so, by eliminating or reducing such side effects, overall improvements should be expected for any train/test sub-dataset.

Improvement of MRPN This subsection addresses how MRPN prevents the side effects of the existing late fusion and end-to-end strategies that we hypothesized, as Table 2 and Table 3 illustrate. The end-to-end strategy of N_1, which uses the multi-term loss function to aid parameterization, shows improved average performance over the naive end-to-end and late fusion training strategies, yet it can still fail in some cases. Our proposed MRPN, on the contrary, demonstrates equal performance or the greatest improvement in every circumstance. The averaged improvements of N_2 (MRPN) over the late fusion and end-to-end N_0 models can be seen in the confusion matrices of Figures 10 and 11. Performance on some specific categories shows a slight decrease for MRPN, especially the calm and neutral expressions, because they are naturally close to each other in the RAVDESS dataset. N_1 does not always perform better than the existing solutions; the nearly 6% improvement of N_2 (MRPN) over N_1 suggests that the level of data fuzziness can make end-to-end multi-term optimization even harder without the proposed RP components. The overall improvements suggest that multi-modal patterns are better generalized by the N_2 (MRPN) solution.

Comparing baseline with SOTA Our proposed MRPN shows state-of-the-art results on both datasets. It has no conflicts with the potential advantages of other novel mechanisms. Experiments on pretraining the CNN extractors and on time augmentation made the network robust enough to overcome the overfitting issues arising from the small amount of training and testing data.
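To make the multi-term loss used by N_1 and N_2 concrete, here is a hedged sketch, again assuming PyTorch. Equal weights are our assumption; the gradient-blending literature the paper builds on tunes the per-modality weights rather than fixing them to one.

```python
import torch
import torch.nn.functional as F

def multi_term_loss(scores_va, scores_v, scores_a, target,
                    weights=(1.0, 1.0, 1.0)):
    # L = w_va * L_va + w_v * L_v + w_a * L_a: one cross-entropy term for
    # the fused prediction and one per uni-modal side branch (the side
    # branches are active only during training).
    l_va = F.cross_entropy(scores_va, target)
    l_v = F.cross_entropy(scores_v, target)
    l_a = F.cross_entropy(scores_a, target)
    return weights[0] * l_va + weights[1] * l_v + weights[2] * l_a

# Toy usage with random scores for a batch of 4 samples and 8 classes.
target = torch.randint(0, 8, (4,))
loss = multi_term_loss(torch.randn(4, 8), torch.randn(4, 8),
                       torch.randn(4, 8), target)
```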
Conclusion This paper focuses on explaining the potential deficiencies in the existing fusion layer of multi-modal approaches to AVER tasks using the late fusion or end-to-end strategy. The proposed MRPN architecture, along with the multi-term loss function, produces superior fused features from multi-modal sources. We observe the elimination of the inferior cases of multi-modal solutions with respect to uni-modal solutions. Our results achieve an average accuracy of 91.4% on the RAVDESS dataset and 83.15% on the Crema-d dataset. The MRPN solution contributes an improvement of approximately 2% in the average recognition rate, and we observed the maximum improvement of MRPN on one subset to reach around 90%, up from nearly 80%. The proposed data pre-processing by time augmentation improves the overall rate for both uni-modal and multi-modal data. It also illustrates that data augmentation cannot generalize multi-modal features, due to the deficiencies in the existing multi-modal solutions. Moreover, the MRPN concept shows its potential for multi-modal classifiers dealing with signal sources beyond the optical and acoustical types.

Figure 1. The proposed multi-modal emotion recognition system using a Deep Neural Network (DNN) approach. Upper part: video frames and audio spectral segments get independent temporal embeddings to be fused by our Multi-modal Residual Perceptron Network (MRPN). Lower part: MRPN performs per-modality normalizations via the proposed Residual Perceptrons and then scores their concatenated outputs in the Fusion Component. The uni-modal prediction branches are active only in training mode.

Figure 2. Video frames of visual facial expressions selected from the RAVDESS (Ryerson Audio-Visual Database of Emotional Speech and Song) dataset.

Figure 3. Mel spectrograms of vocal timbres selected from the RAVDESS (Ryerson Audio-Visual Database of Emotional Speech and Song) dataset.

Figure 4. Visualization (t-SNE algorithm) of deep feature clustering from two different setups in which the train/validation sets are shuffled. The clusterings with respect to emotion classes are listed. 0: neutral, 1: calm, 2: happy, 3: sad, 4: angry, 5: fearful, 6: disgust, 7: surprised. Top part: clustering results from one setup of uni-modalities and multi-modality. Left: image modality only. Middle: audio modality only. Right: multi-modality. Bottom part: clustering results from the other setup, where the train/validation sets are shuffled.

Figure 4 shows the different clusterings obtained from shuffled train/validation sub-datasets for each uni-modal solution and the multi-modal solution, respectively. The actors in the validation set are different from those in the training set. Due to the missing and fuzzy information in the uni-modal data, the clusterings of the same uni-modality shown in the top and bottom panels differ. It can be seen from the figure that, for the same modality, whether in a uni-modal or multi-modal solution, missing information causes overlapping clusters for the neutral category, and similar overlaps appear among the emotional categories due to fuzzy information. This suggests that patterns within a uni-modality are hard to generalize, in line with the human voting results mentioned earlier: we cannot tell which training sample is fuzzy in which modality, so fuzzy information corrupts not only within-modal learning but also inter-modal learning in end-to-end training, leaving the distribution of the wrongly learned knowledge unknown.
Figure 5. Distorted gradient backpropagation in one modality: the gradients from the fused layer affect the gradient flow into the neural weights of both modalities.

Figure 7. Generalization of our MRPN fusion approach to many modalities. It could be used for either regression or classification applications.

Figure 8. Visual comparison of the augmentation procedure for cropped video frames. Top part: original video frames. Middle part: random augmentation parameters applied, the same for all frames. Bottom part: random augmentation parameters applied, different for each frame.

3. Spatial data augmentation for vocal frames: Raw audio inputs are resampled at 16 kHz and standardized by their mean and standard deviation, without any denoising or cutting, to remove the influence of the speaker's distance from the microphone and of the speaker's subjective base volume. The standardized wave is then divided into one-second segments and converted to spectrograms by a Hann windowing function of size 512 with a hop size of 64. This specifies a spectrogram size of 256 × 250, matching the required input shape of Resnet-18, the CNN extractor used in our experiment. With inputs of this size, Resnet-18 [36] operates close to its intended regime, taking advantage of its mid-level deep features. 4. Time-dependent augmentation for vocal frames: Similar to the time augmentation of the visual inputs, raw audio inputs are also randomly sliced. The raw data is further oversampled, in both training and testing mode, by a hopping window of 0.2 seconds, one fifth of the duration of the input segments fed to the CNN extractor. The oversampling further improved our results by increasing the number of deep feature vectors passed from the CNN output sequence to the SAC, giving the SAC the opportunity to investigate more of the temporal detail in the deep feature vectors.

Figure 9. Examples of time-dependent augmentation for visual frames. Top part: original frames. Middle part: sliced frames starting at the beginning of the original frames. Bottom part: sliced frames starting at the middle of the original frames.

Figure 11. Averaged confusion matrices of the tested models for the Crema-d dataset.

Table 1. Comparison of single-modality models with the N_0 model (RAVDESS cases): VM − visual modality only, AM − audio modality only, JM − joint modalities (N_0 model), T − with time augmentation by random signal slicing, NT − without time augmentation.

Table 2. Comparison, for RAVDESS, of the MRPN approach (network N_2) with the late fusion strategy (N_0), the end-to-end strategy (N_0), and the advanced end-to-end fusion strategy (N_1).

Figure 10. Averaged confusion matrices of the tested models for the RAVDESS dataset.

Table 4. Comparison of our fusion models with other recent solutions. Options used: IA − image augmentation, WO − without audio overlapping, VA − video frame augmentation, and AO − audio overlapping. An X symbol means there is no report from the authors for the given dataset.
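The vocal-frame pre-processing described above (16 kHz resampling, standardization, one-second segments with a 0.2 s hopping window, Hann window 512 with hop 64) can be sketched as follows. librosa is an assumed audio toolkit (any STFT implementation would do), and the final crop to 256 × 250 is our reading of the quoted input shape, since STFT padding conventions yield roughly 257 × 251 for a one-second segment.

```python
import numpy as np
import librosa  # assumed audio toolkit; any STFT implementation works

SR = 16000            # resampling rate from the paper
N_FFT, HOP = 512, 64  # Hann window of size 512, hop size 64

def audio_to_segments(wave, seg_sec=1.0, hop_sec=0.2):
    """Standardize the wave, then cut one-second segments with a
    0.2 s hopping window (the oversampling described above)."""
    wave = (wave - wave.mean()) / (wave.std() + 1e-8)
    seg, hop = int(seg_sec * SR), int(hop_sec * SR)
    starts = range(0, max(len(wave) - seg, 0) + 1, hop)
    return [wave[s:s + seg] for s in starts]

def segment_to_spectrogram(seg):
    spec = np.abs(librosa.stft(seg, n_fft=N_FFT, hop_length=HOP,
                               window="hann"))
    # ~257 x 251 for a one-second segment; crop to the 256 x 250
    # input shape quoted in the text.
    return spec[:256, :250]

segments = audio_to_segments(np.random.randn(3 * SR))  # 3 s of dummy audio
specs = [segment_to_spectrogram(s) for s in segments]
```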
7,369.6
2021-07-21T00:00:00.000
[ "Computer Science" ]
Functions of speaking as a successful means of communication Speaking is considered the productive, oral skill. Speaking is also a cognitive skill: the idea is that knowledge becomes increasingly automatic through successive practice. INTRODUCTION Speaking is not only having a stock of vocabulary and knowing the grammatical structures, but also mastering all the elements of speaking listed below. The messages we deliver will be acceptable to all communicants if we master those elements. Teaching speaking is one process for improving speaking skill. Improving speaking skill can start with teaching learners how to pronounce the language, and then asking them to practice it with other English learners without fear of mistakes. The teacher should be able to encourage students to produce sounds until they are required to use and perform oral language. LITERATURE REVIEW Five components are generally recognized in analyses of the speech process: pronunciation, including the segmental features (vowels and consonants) and the stress and intonation patterns; grammar; vocabulary; fluency, the ease and speed of the flow of speech; and comprehension, for oral communication certainly requires a subject to respond to speech as well as to initiate it. Speaking seems intuitively the most important skill to master. Success is measured in terms of the ability to carry out a conversation in the language. Speaking is an interactive process of constructing meaning that involves producing, receiving, and processing information. Speaking is very important because by mastering speaking skills people can carry out conversations with others, give ideas, and exchange information. Hence, when speaking in the classroom, learners should work as much as possible on their own. There are several techniques and strategies used by teachers for teaching speaking. The technique or strategy should be engaging, to interest students in the teaching-learning process. One teaching strategy for speaking is the debate strategy. It is seen as an active learning process, because students learn more through constructing and creating, working in a group, and sharing knowledge. Thus, debate is an excellent activity for language learning because it engages students in a variety of cognitive and linguistic ways. "Speaking is considered as the productive and oral skill. Speaking is a cognitive skill, is the idea that knowledge become increases automatically through successive practice." "Speaking is the productive oral skill. It consists of producing systematic verbal utterance to convey meaning." Some scholars add that speaking is the process of building and sharing meaning through the use of verbal and non-verbal symbols in a variety of contexts. It means that speaking is an interaction between speakers and listeners. Based on the previous definitions, the researcher concludes that: -speaking is the process of sharing with others one's knowledge, interests, attitudes, opinions or ideas; -the speaker's ideas become real to him and his listener; -speaking skill is the ability to say, to address, to make known, to use or be able to use a given language in actual communication.
CONCLUSION In the light of the points highlighted above, the technique of debate plays a crucial role in improving the oral proficiency of EFL learners in second language acquisition, as well as contributing to language teaching methodology. This technique has many advantageous and positive sides in second language acquisition. Firstly, the overall experience of the debate, and the processes the students go through while taking part in the debating activity, seem to increase the students' confidence in facing an audience on any issue at hand, and to increase their ability to put forward ideas and opinions formed after much investigation, research and discussion within the group. Secondly, debates help learners gain new knowledge on the topic in question; generally, learners consider the increase in confidence and critical thinking skills to be the more significant gains achieved through their involvement in debate. Thirdly, debate can be used as a teaching tool/technique in the classroom once students have acquired a reasonably good level of proficiency and facility in the language. It is obviously a technique that can motivate students to challenge one another and encourage them to explore and exploit their facility in the language for the purpose of exploring and expanding points of argument, with the express objective of winning over the audience and convincing the opposing side to accept their stand on the motion being discussed and debated. In the educational process, using debates in the classroom gives students the opportunity to explore real-world topics and issues. Debates also engage students through self-reflection and encourage them to learn from their peers. They prepare students to be more comfortable engaging in dialogue related to their beliefs as well as their areas of study. Despite some deficiencies, the technique of debate and discussion plays a comprehensive and crucial role in improving the oral proficiency of EFL learners acquiring a second language. Fourthly, this technique improves not only their oral fluency but also their personal confidence in making speeches; debates reinforce mutual collaboration among learners in every branch of study, and they lead to improved knowledge of diverse topics. The use of debate in educational systems is growing, as both a curricular and an extra-curricular activity, largely because of its educational value. The most obvious benefit is the opportunity students have to develop and practice oral skills. These skills are extremely important to academic and personal development, yet few curriculum materials are available to support the teacher in fostering them. What makes debate especially valuable for fostering the development of oral skills is that it is not only structured but also interactive. Debate requires that participants listen, think and respond. It is not enough for the debater to simply memorize and perform a prepared speech.
1,363.6
2020-01-24T00:00:00.000
[ "Education", "Linguistics" ]
Robust electrocardiogram delineation model for automatic morphological abnormality interpretation Knowledge of electrocardiogram (ECG) wave signals is one of the essential steps in diagnosing heart abnormalities. Considerable performance in obtaining the critical points of a signal waveform (P-QRS-T) through ECG delineation has been achieved in many studies. However, several deficiencies remain in previous methods, including the effect of noise interference on the degradation of delineation performance and the role of medical knowledge in reaching a delineation decision. To address these challenges, this paper proposes a robust delineation model based on a convolutional recurrent network with grid search optimization, aiming to classify the P-QRS-T waves precisely. To make a delineation decision, the results from the ECG waveform classification model are used to interpret morphological abnormalities based on medical knowledge. We generated 36 models, and the model with the best results achieved 99.97% accuracy, 99.92% sensitivity, and 99.93% precision for ECG waveform classification (P-wave, QRS-complex, T-wave, and isoelectric-line classes). To ensure model robustness, we evaluated delineation performance on seven different ECG datasets: the Lobachevsky University Electrocardiography Database (LUDB), the QT Database (QTDB), the PhysioNet/Computing in Cardiology Challenge 2017, the China Physiological Signal Challenge 2018, the ECG Arrhythmia database of Chapman University, the MIT-BIH Arrhythmia Database and the General Mohammad Hossein Hospital (Indonesia) database. To detect patterns of ECG morphological abnormality with the proposed delineation model, we focus on investigating arrhythmias. This process is based on two examinations: the P-wave and the regular/irregular rhythm of the RR interval. As the results show, the proposed method has considerable capability to interpret the delineation result in cases with artifact noise, baseline drift and abnormal morphologies, delivering robust ECG delineation.

Electrocardiography (ECG) signals are a primary criterion for medical practitioners to obtain information such as rhythm and heart rate 1. The development of a reliable, accurate, noninvasive and robust method for automatic ECG signal delineation could assist cardiologists in the study of patients with heart disease. A normal ECG waveform consists of P-waves, QRS complexes, and T-waves 2. To diagnose some heart abnormalities, physicians commonly observe ECG morphology manually. Recognizing abnormal ECG signals visually is arduous because of their varying morphology, and ECG signals are extremely susceptible to noise 3. ECG signals are very weak bioelectric signals. There are three main types of noise 4,5 (electrode motion artifacts, muscle artifacts, and baseline drift) that commonly accompany ECG signals. This noise may influence the shape characteristics of the amplitude and baseline of ECG signals, increasing the difficulty of delineating them. An ECG signal is a type of time series that changes periodically over time, and the aim of ECG delineation is to find its key feature points 6. ECG delineation is a crucial step in processing ECG signals and helps to identify the critical points that indicate the interval and amplitude locations in each wave morphology 6. There are two main classes of ECG delineation methods: digital signal processing methods [7][8][9] and intelligent processing methods 3,[10][11][12][13][14][15][16][17][18][19].
This study proposes intelligent ECG signal processing for a robust delineation method. ECG delineation consists in computing the onset and offset locations of each ECG wave (the P-QRS-T waves); within this approach, the classification of P-QRS-T waves is still an open problem for clinical practice 6,10. We add the role of medical knowledge to provide an interpretable delineation result that automatically identifies abnormal morphologies, with a focus on arrhythmias. Medical knowledge is essential in the interpretation of ECG morphological abnormalities. It plays a crucial role in accurately interpreting and analyzing ECG signals by providing an understanding of the heart's electrical activity and its physiological and pathological aspects. This knowledge allows the identification and categorization of the different ECG waveforms, such as P-waves, QRS complexes, and T-waves, which provide valuable information about electrical conduction and potential cardiac abnormalities. The main contributions of this paper are as follows: • Proposing a robust ECG delineation model based on a convolutional recurrent network that classifies P-waves, QRS-complexes, and T-waves with high precision; • Interpreting ECG morphological abnormalities, with a focus on arrhythmias, using the proposed delineation model with the P-wave and the regular/irregular rhythm of the RR interval as rules guided by medical knowledge; and • Implementing a grid search optimization algorithm to increase the delineation model's performance.

Materials and methods This section describes the study design, which covers two main tasks: (i) generating a robust delineation model based on a convolutional recurrent network with grid search optimization, aiming to classify the P-QRS-T waves precisely, beat by beat; and (ii) interpreting the ECG delineation results, using rules of medical knowledge, to reach a decision. The research methodology is described in detail to offer a clear understanding of the experimental procedures, as illustrated in Fig.
1. This methodology workflow consists of: (i) generating the ECG delineation model using an experimental dataset from the Lobachevsky University Electrocardiography Database (LUDB), preprocessed by noise cancelation with the discrete wavelet transform (DWT); in addition, normalized bounds are applied for amplitude-range normalization, and segmentation is performed from each beat to the next (beat-to-beat), e.g., from the onset of P-wave1 to the onset of P-wave2, from the onset of P-wave2 to the onset of P-wave3, and so on; (ii) proposing a Convolutional Bidirectional Long Short-Term Memory (ConvBiLSTM) model with hyperparameter-tuning optimization to classify P-waves, QRS-complexes, T-waves, and the isoelectric line (no wave) beat by beat; and (iii) interpreting ECG morphological abnormalities, with a focus on arrhythmias, through the delineation approach, based on two examinations: the P-wave and the regular/irregular rhythm of the RR interval.

1. To generate the ECG delineation model, we used LUDB. LUDB is an ECG signal database consisting of 200 10-s 12-lead (I, II, III, V1, V2, V3, V4, V5, V6, aVR, aVL, and aVF) records 21. In LUDB, the boundaries and peaks of the P-waves, QRS complexes, and T-waves were marked and annotated by two verified cardiologists to represent the distinct morphologies of an ECG signal. Each record is 10 s long and was digitized at 500 Hz. Of the 58,429 total waveforms, there are 16,797 P-waves, 21,966 QRS complexes and 19,666 T-waves. We classified the three main ECG waveforms and the isoelectric line (no wave). For data splitting, the model was generated using 90% of the data for training and the remainder for validation. 2. To test our delineation model, we used the QT Database (QTDB) as an unseen testing set, to provide an unbiased evaluation of the best model fitted on the training dataset 22. QTDB has commonly been used for the ECG delineation task in several studies 6,10,18,19, because it provides beats manually annotated by cardiologists: annotations of the onset, peak and offset of the P-wave, the onset and offset of the QRS-complex, the peak and offset of the T-wave, and (if present) the peak and offset of the U-wave are available. 3. To evaluate the proposed ECG delineation model, we attempt to interpret normal sinus rhythm (NSR), atrial fibrillation (AF) and atrial flutter (AFL) from five ECG databases: LUDB, the PhysioNet/Computing in Cardiology Challenge 2017 23, the China Physiological Signal Challenge 2018 24, the ECG Arrhythmia database of Chapman University 25,26 and the General Mohammad Hossein Hospital (Indonesia) database. The aforementioned records have been annotated by experts as NSR, AF and AFL, respectively. NSR is the rhythm of the healthy human heart. In AF the atria beat irregularly, while in AFL there may be four atrial beats for every ventricular beat, because the atria beat regularly but faster and more frequently than usual. We also used other ECG morphological abnormalities related to irregular heart rate, i.e., sinus bradycardia (SBR), supraventricular tachyarrhythmia (SVTA), ventricular trigeminy (T), and ventricular tachycardia (VT), from the records of the MIT-BIH Arrhythmia Database 27 (Table 1). SBR, SVTA, T, and VT are arrhythmias related to irregular heartbeats: SBR is slower than expected, while SVTA and VT are much faster than normal. T is an abnormal heart rhythm in which every third beat is a premature ventricular contraction.
The research methodology of this study can be explained as follows. ECG preprocessing. The measurement and analysis of the ECG signal are challenging due to noise, which can be generated by various sources such as motion and muscle artifacts, powerline interference, and baseline drift. We implemented the DWT to address the ECG noise-cancelation problem. The quality of the denoising process depends on the wavelet function, the decomposition level, the threshold selection and the reconstruction 4. In this study, we compared several wavelet functions, namely sym5, sym6, sym7, sym8, db2, db4, db5, db6, db7, bior1.3, bior6.8, bior3.5, haar and coif5 (Table 2). To select the wavelet function, we calculated the signal-to-noise ratio (SNR), which compares the level of the desired output signal to the level of the background noise 28 and thus measures the denoising efficiency and the signal quality. Among the tested wavelet functions, the highest output SNR was obtained with coif5, at 8.56 decibels (dB). The input SNR is defined as

$$\mathrm{SNR}_{in} = 10 \log_{10} \frac{\sum_{n} x(n)^2}{\sum_{n} r(n)^2} \quad (1)$$

and the output SNR ($\mathrm{SNR}_o$) is given by

$$\mathrm{SNR}_{o} = 10 \log_{10} \frac{\sum_{n} x(n)^2}{\sum_{n} \left(x(n) - x_d(n)\right)^2}$$

where x(n) is the original signal of length n, r(n) is the added noise signal, and x_d(n) is the denoised signal. For the decomposition level, we used 8 levels, with the frequency content ordered from level 1 (largest) to level 8. The denoising was developed using soft thresholding, which first sets to zero the elements whose absolute values are below the threshold and then shrinks the nonzero coefficients toward zero. After ECG noise cancelation, we normalized the amplitude range for efficient computation, applying normalize_bound, one of the signal-processing tools in the processing subpackage of the WFDB (waveform-database) toolkit for reading, writing, and processing WFDB signals and annotations. This method rescales the signal values between a lower limit (zero) and an upper limit (one). In the last preprocessing step, we segmented the ECG records beat-to-beat (from the onset of P-wave1 to the onset of P-wave2, from the onset of P-wave2 to the onset of P-wave3, and so on; refer to Fig. 2). We assume one beat contains at least one R-peak. The segmentation process is guided by expert annotation. The input shape of each beat was set to 512 nodes.

ConvBiLSTM model. The primary purpose of ECG signal delineation is to classify the P-QRS-T waves. The higher-amplitude QRS complex is usually simple to identify; this differs from P- and T-wave delineation, which is particularly challenging due to their lower amplitude and occasionally noise-accompanied nature. In this study, we were concerned with obtaining the precise locations of the P-waves and RR-intervals, because they are essential for ECG morphological abnormalities such as arrhythmias.
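The DWT denoising and SNR selection step described above can be sketched with PyWavelets. The universal threshold and median-based noise estimate below are common choices but our assumptions, since the paper does not state its threshold-selection rule.

```python
import numpy as np
import pywt  # PyWavelets, an assumed implementation of the DWT step

def snr_db(reference: np.ndarray, estimate: np.ndarray) -> float:
    """Output SNR: signal power over residual-noise power, in dB."""
    noise = reference - estimate
    return 10 * np.log10(np.sum(reference ** 2) / np.sum(noise ** 2))

def dwt_denoise(ecg: np.ndarray, wavelet: str = "coif5",
                level: int = 8) -> np.ndarray:
    """8-level DWT with soft thresholding of the detail coefficients."""
    coeffs = pywt.wavedec(ecg, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745   # noise-level estimate
    thr = sigma * np.sqrt(2 * np.log(len(ecg)))      # universal threshold
    coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(ecg)]

# Toy usage: a noisy synthetic trace long enough for 8 decomposition levels.
noisy = np.sin(np.linspace(0, 40 * np.pi, 8192)) + 0.2 * np.random.randn(8192)
clean = dwt_denoise(noisy)
```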
In this study, we experimented with ConvBiLSTM as the ECG delineation architecture 18. This architecture consists of four convolution layers for feature extraction and a BiLSTM as the ECG waveform classifier. The LSTM input must be three-dimensional; the three input dimensions are samples, timesteps and features. The total number of nodes in one beat was 512 (as features), with a timestep of (250, 1). If a segmented beat had fewer than 512 nodes, we added zero values for the remaining positions (zero-padding technique). The timesteps of the input, with dimension (250, 1), were fed into the convolution layer. The input of ConvBiLSTM is the ECG waveform bounded by a label vector indicating the class of each node; the class labels form a vector of size (250, 1). We adjusted ConvBiLSTM for P-wave, QRS-complex, T-wave, and no-wave classification. The network uses rectified linear unit (ReLU) activations in the hidden layers and a softmax activation in the output layer. The Adam optimizer is used for stochastic gradient descent, and categorical cross-entropy is used as the loss function. The one-dimensional forward propagation of the convolutional neural network (CNN) can be expressed as 29

$$x_k^l = b_k^l + \sum_i \mathrm{conv1D}\left(w_{ik}^{l-1}, s_i^{l-1}\right)$$

where $x_k^l$ is the input and $b_k^l$ the bias of the kth neuron at layer l, $s_i^{l-1}$ is the output of the ith neuron at layer l−1, and $w_{ik}^{l-1}$ is the kernel from the ith neuron at layer l−1 to the kth neuron at layer l. The BiLSTM can be expressed as

$$y_t = h_t^f \oplus h_t^b$$

where, to generate the output $y_t$, the forward hidden layer $h_t^f$ and the backward hidden layer $h_t^b$ are combined.

ECG morphological abnormality interpretation.
In this study, we interpreted ECG morphological abnormalities, with a focus on arrhythmias, using the ECG delineation approach. Currently, AF and AFL are the two most frequent arrhythmias. The two conditions share similar physiological characteristics and frequently coexist (AF is present in more than half of AFL patients); both are rapid upper-chamber arrhythmias, and both are frequently linked to other cardiovascular disorders, including stroke and myocardial infarction 30. AF and AFL share similar physiological characteristics because both involve abnormal electrical activity in the atria of the heart: in both conditions, the normal coordinated rhythm of the atria is disrupted, leading to irregular or rapid heartbeats. This similarity in the underlying electrical abnormalities can result in overlapping symptoms and diagnostic features. A common diagnostic method for AF and AFL is visual evaluation of the ECG. Other types of arrhythmias are SBR, SVTA, T, and VT, which are related to irregular heart rate. Therefore, based on the proposed delineation model, we examine two inputs: the P-wave and the regular/irregular rhythm of the RR interval. We can identify the P-wave pattern from the delineation result, but the QRS complex must be analyzed to determine any irregular heart rate. Adults typically have a resting heart rate (HR) between 60 and 100 beats per minute (BPM); a resting HR below 60 BPM is referred to as bradycardia (slow ventricular response), and one consistently above 100 BPM reflects a rapid ventricular response (tachycardia) 31,32. In our previous work 20, we stated that a regular rhythm has a pattern (normal, slow, or rapid ventricular response), whereas an irregular rhythm shows no pattern in the ECG signal. Therefore, NSR, AF, AFL and the other arrhythmias were interpreted according to the following medical knowledge rules 33,34: (i) if the P-wave is present and the rhythm is regular, the condition is NSR; (ii) if the P-wave is absent and the rhythm is irregular, the condition is AF; (iii) if the P-wave is absent and the rhythm is regular, the condition is AFL; and (iv) if the P-QRS-T waves are present and the rhythm is irregular, the condition is another arrhythmia.

Evaluation metrics. In this study, we measured the ECG waveform classification (P-QRS-T waves) using supervised-learning evaluation metrics, i.e., accuracy (Acc), sensitivity (Sen) and precision (Pre). By using different metrics for performance evaluation, we can assess the proposed ConvBiLSTM model's overall predictive power before testing on the unseen set; we calculated Acc, Sen and Pre for the training, validation and unseen sets. Acc is defined as the ratio of the number of true predictions to the total number of predictions; Sen measures how many samples of the actual positive class the model predicts correctly; and Pre measures how many of the predicted positives actually turn out to be positive:

$$\mathrm{Acc} = \frac{TP + TN}{TP + TN + FP + FN}, \quad \mathrm{Sen} = \frac{TP}{TP + FN}, \quad \mathrm{Pre} = \frac{TP}{TP + FP}$$

where TP is the number of true positives, TN the true negatives, FP the false positives, and FN the false negatives.

Results To comprehensively explain the results obtained from our experiment, we divide them into two main discussions: (i) delineation model performance, and (ii) interpretation of the ECG delineation.

Delineation model performance.
Using the 200 records in LUDB and ConvBiLSTM, an ECG delineation model was generated. For the ground truth of LUDB, we segmented the ECG signal beat-to-beat, i.e., from each beat to the next (from the onset of P-wave1 to the onset of P-wave2, from the onset of P-wave2 to the onset of P-wave3, and so on; Fig. 2). The total number of nodes in one beat was 512, with a timestep of (250, 1). If a segmented beat had fewer than 512 nodes, we added zero values for the remaining positions (zero-padding technique). LUDB includes 12-lead ECGs; however, in this study we used only single-lead ECGs, i.e., lead II. In most situations, lead II is the best lead for observing P-waves, and it is mostly used for arrhythmia interpretation.

In the design phase of the ConvBiLSTM model, the grid search optimization (GSO) algorithm was implemented for optimum hyperparameter selection. The GSO algorithm is a common approach to determining the best combination of hyperparameter values for DL models; the impact of each parameter combination on model performance is evaluated computationally. To fine-tune the ConvBiLSTM architecture, the GSO algorithm was run over the hyperparameter ranges specified in Table 3. As seen in Table 3, 36 models were tested based on the batch size (8, 16, and 32), learning rate (10⁻³, 10⁻⁴, and 10⁻⁵) and number of epochs (100, 200, 300, and 400). To validate our ConvBiLSTM with GSO, we also experimented with ConvBiLSTM without GSO. The resulting confusion matrices (CM) are presented in Fig. 3. Both ConvBiLSTM models falsely classified the isoelectric line (no wave) as a P-wave, QRS-complex or T-wave, and vice versa; however, the total number of misclassifications for the ConvBiLSTM model without GSO (Fig. 3a) was higher than with GSO (Fig. 3b). Among all ECG waveforms, the misclassification of T-waves was dominant, due to the maximal errors observed for the offset of T-waves.

The ConvBiLSTM with the GSO algorithm generated 36 models by combining the main hyperparameters (batch size, learning rate and epochs). The results of the 36 models on the validation set are presented in Fig. 4, which shows the three evaluation metrics (Acc (red), Sen (green), and Pre (blue)), with all values above 90% and the highest close to 100%. Among the 36 models, the last one (Model 36) was proposed for NSR, AF, and AFL interpretation. The hyperparameters of the best model were a batch size of 32, a learning rate of 10⁻³, and 400 epochs. All results for Model 36 were above 99.92% for Acc, Sen, and Pre.

The per-class performance of the best ConvBiLSTM model with the GSO algorithm on the validation set (Model 36) is shown in Fig. 5 and Table 4. The performance for P-waves, QRS-complexes, T-waves and no waves is excellent: we classified the P-waves and QRS complexes with 100% Acc (Table 4). We are chiefly concerned with the P-wave and RR-interval examinations, and the results show that the model can be used to interpret arrhythmias; the QRS-complex results bear directly on the RR-interval examination. For the T-wave and no-wave classes, the results are still good. T-waves represent ventricular myocardial repolarization, which is used to diagnose pathological ventricular arrhythmias.
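The grid over the three hyperparameters in Table 3 yields exactly 3 × 3 × 4 = 36 candidate models. A minimal sketch of this GSO loop follows; the training function is a placeholder returning a dummy score, since the actual training loop is not reproduced here.

```python
from itertools import product
import random

def train_and_validate(batch_size, learning_rate, epochs):
    """Placeholder: train ConvBiLSTM with these hyperparameters and
    return the validation accuracy. Swap in the real training loop."""
    return random.random()

batch_sizes = [8, 16, 32]
learning_rates = [1e-3, 1e-4, 1e-5]
epoch_counts = [100, 200, 300, 400]

# 3 x 3 x 4 = 36 candidate models, matching Table 3.
results = {(bs, lr, ep): train_and_validate(bs, lr, ep)
           for bs, lr, ep in product(batch_sizes, learning_rates,
                                     epoch_counts)}
best = max(results, key=results.get)
print("best (batch size, learning rate, epochs):", best)
```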
In our previous research 18,19, the average Sen was only 98.91% 18 for P-waves, QRS-complexes, T-waves, and no waves. Additionally, in 19 we updated ConvBiLSTM with an unsupervised denoising algorithm, the denoising autoencoder (DAE), and the results were approximately 98.59% for Acc, Sen, and Pre. The performance results in this study outperform those of our previous studies 18,19. Owing to these outstanding results, the proposed model was used to interpret ECG morphologies such as NSR, AF and AFL.

To validate our robust delineation model and provide an unbiased evaluation, we tested the ConvBiLSTM on an unseen testing set, QTDB. To delineate an ECG waveform, a complete P-QRS-T wave is required to identify the critical points marked by the interval and amplitude locations of each wave morphology. NSR records mostly provide normal heartbeats that produce a regular, identifiable P-QRS-T pattern. Therefore, in this study we limited testing to 10 NSR records (sel16265, sel16272, sel16273, sel16420, sel16483, sel16539, sel16773, sel16786, sel16795, sel17453), which contain waveform onsets and offsets within a normal range of a regular pattern. The NSR records in QTDB total 300 beats, all sampled at 250 Hz. The per-class performance is listed in Table 5, which shows decreased results for several classes. Compared with the validation-set results (Table 4), the averages of Acc, Sen and Pre on the testing set decreased by 1.73%, 7.29%, and 13.95%, respectively. The most significant drop occurs for the T-wave, whose lowest result is only 71.95% Pre. A normal T-wave overlaps with other T-wave characteristics, i.e., inverted, only upwards, only downwards, biphasic negative-positive, or biphasic positive-negative 2,35. These T-wave characteristics show maximal error at the T-wave offset, whose delineation is a well-known hard problem 35. Because of this, the predicted onsets and offsets of the P-QRS waves were shifted. Our proposed model tends to learn the regular positive T-wave features that dominate LUDB, without considering the other T-wave characteristics. Despite these challenges, the proposed model consistently maintains performance above 85% Acc, Sen and Pre, suggesting that it could be considered for application in clinical practice.
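The per-class Acc, Sen and Pre reported in Tables 4 and 5 follow directly from the confusion matrices, using the formulas given in the evaluation-metrics section. Here is a small one-vs-rest sketch; the 4-class matrix values are hypothetical, chosen only to illustrate the computation.

```python
import numpy as np

def per_class_metrics(cm: np.ndarray):
    """Per-class Acc, Sen, Pre from a KxK confusion matrix
    (rows = true class, columns = predicted class)."""
    tp = np.diag(cm).astype(float)
    fp = cm.sum(axis=0) - tp   # predicted as class k but not class k
    fn = cm.sum(axis=1) - tp   # class k but predicted otherwise
    tn = cm.sum() - tp - fp - fn
    acc = (tp + tn) / cm.sum()
    sen = tp / (tp + fn)
    pre = tp / (tp + fp)
    return acc, sen, pre

# Hypothetical 4-class matrix: P-wave, QRS-complex, T-wave, no wave.
cm = np.array([[95, 1, 2, 2],
               [0, 99, 1, 0],
               [3, 1, 86, 10],
               [2, 0, 9, 89]])
acc, sen, pre = per_class_metrics(cm)
```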
Interpretation of ECG delineation. The best ConvBiLSTM and optimization model was tested to interpret 142, 14, and 3 records of NSR, AF and AFL, respectively. The interpretation of P-waves is arduous due to their low voltage amplitude, often comparable to noise; the quality of P-wave interpretation therefore relies on restricting the temporal interval of interest relative to the QRS complex and T-wave. Nevertheless, in this experimental work the proposed model correctly interpreted 100% of the NSR, AF, and AFL records. The results are presented in Table 6, which shows that ConvBiLSTM gave excellent interpretations of the 142 LUDB records as NSR, the 14 AF-infected records as AF and the 3 AFL-infected records as AFL (only samples are visualized). Table 6 presents the P-wave (blue), QRS complex (red), T-wave (yellow) and no wave (white) visualization. In the NSR records (records 2, 3 and 4), the P-QRS-T waves are present and a regular rhythm pattern is observed in the ECG signal. In the AF (records 51, 103, and 109) and AFL (records 35, 52, and 103) records, P-waves are absent. Morphologically, the difference between the two is the irregular atrial beat in AF versus the regular atrial beat in AFL; in both, the atria beat faster than usual and more often than the ventricles. To validate our proposed model, we also tested other AF databases using unseen records (records that were not used for training and validation) for morphological abnormality interpretation, i.e., the PhysioNet/Computing in Cardiology Challenge 2017 and the China Physiological Signal Challenge 2018 databases, with 14 and 17 AF records, respectively. As a result, all 31 AF-infected records were successfully interpreted as AF. To present the results, we visualize an AF-interpreted sample in Table 7. The results are not affected by the lead or sampling frequency used: although the two databases have distinct leads and sampling frequencies, our best model interprets AF-infected records as AF. To interpret AFL, we tested the proposed model with three records from the ECG Arrhythmia database of Chapman University. With a 500 Hz sampling rate, these ECG records mostly consist of common arrhythmias and additional heart abnormalities. As a result, the proposed model also interprets AFL-infected records as AFL. In addition, we tested the MIT-BIH Arrhythmia Database to interpret arrhythmias (SBR, SVTA, T, and VT) using ConvBiLSTM. As visualized in Table 7, irregular heartbeats are present in the delineation result, with clearly observed short/long RR-intervals between beats.

Discussion In recent years, we generated ECG delineation models using digital signal processing methods (wavelet transform) and intelligent processing methods (DL). In 8, we ran preliminary experiments with the wavelet transform, using the DWT for feature extraction of the onsets and offsets of the P-QRS-T waves. We used a conventional method to develop a low-complexity algorithm for ECG delineation: based on feature analysis, we detected the onsets and offsets of the P-QRS-T waves with a searching-window technique based on a DWT threshold. We used eight levels of ECG reconstruction, where each level represents a P-QRS-T wave calculation, and detected the fiducial point of each wave using the searching-window technique. However, the performance results were poor because they were affected by the feature analysis: a high degree of uncertainty and variability may exist due to the subjective aspect of the measurements in the segmentation and measurement phases.
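The medical-knowledge rules (i)-(iv) used for the interpretations above reduce to a small decision function over the two delineation-derived inputs. The sketch below is a direct transcription of those rules; the heart-rate refinement of the "other arrhythmia" branch uses the 60-100 BPM resting range quoted in the text, and the function name and signature are illustrative.

```python
from typing import Optional

def interpret_rhythm(p_wave_present: bool, rhythm_regular: bool,
                     heart_rate_bpm: Optional[float] = None) -> str:
    """Rules (i)-(iv): P-wave presence and RR-interval regularity."""
    if p_wave_present and rhythm_regular:
        return "NSR"
    if not p_wave_present and not rhythm_regular:
        return "AF"
    if not p_wave_present and rhythm_regular:
        return "AFL"
    # P-QRS-T present but irregular rhythm: another arrhythmia;
    # the heart rate distinguishes slow from rapid ventricular response.
    if heart_rate_bpm is not None and heart_rate_bpm < 60:
        return "other arrhythmia (slow ventricular response)"
    if heart_rate_bpm is not None and heart_rate_bpm > 100:
        return "other arrhythmia (rapid ventricular response)"
    return "other arrhythmia"

print(interpret_rhythm(p_wave_present=False, rhythm_regular=False))  # AF
```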
To overcome this problem, we first applied a DL algorithm to the ECG delineation task 18. We generated the ConvBiLSTM algorithm to detect the onsets and offsets of P-waves, QRS-complexes, T-waves, and no waves 18. The ConvBiLSTM algorithm combines the convolution layers of a CNN for feature extraction with long short-term memory (LSTM) to classify ECG waveforms. The results reflected the excellent performance of the model; however, in a test against an expert annotator, the precision for the ST-segment was 69.13%. Additionally, given the limitations stated in 18, the ECG waveform classification was performed with and without considering specific heart abnormalities. Therefore, we improved ConvBiLSTM to detect the onsets and offsets of the main ECG waveforms in the presence of heart abnormalities, using morphology visualization. In 19, the ConvBiLSTM algorithm was improved to detect heart abnormalities involving T-waves, i.e., T-wave alternans (TWAs), and we experimented with a denoising autoencoder (DAE) as the noise cancelation method. As a result, we succeeded in detecting 20 of 30 synthesized ECG records with TWA 19; unfortunately, some TWAs were still missed.

To improve on our previous ConvBiLSTM algorithm, we generalized ConvBiLSTM using GSO to obtain a robust ECG delineation algorithm. The GSO algorithm was implemented to tune parameters and obtain optimal hyperparameters automatically. First, we generated 36 models by considering the batch size, learning rate and number of epochs. Second, among the resulting models, the best model (Model 36) was proposed based on its high performance in terms of Acc, Sen, and Pre (above 99.92%).

Generally, P- and T-wave detection in ECG signal recordings is difficult, and P-wave detection is the most complicated part of delineation due to its high interpatient variability. However, in this study we focused on ECG waveform classification to detect the onsets and offsets of P-QRS-T waves using a delineation approach. The best model learned the features from the labels (ground truth) in LUDB: we did not use a mathematical model to calculate the fiducial points of the ECG waveform, nor conventional QRS-detection algorithms such as Pan-Tompkins or the wavelet transform. Instead, we proposed automatic delineation using DL to detect the onsets and offsets of ECG waveforms.

The various records tested in this study contain artifacts and interference. The PhysioNet/Computing in Cardiology Challenge 2017, China Physiological Signal Challenge 2018 and General Mohammad Hossein Hospital (Indonesia) databases have abnormal artifacts that might distort morphological features, leading to a false diagnosis. Nevertheless, with the methodology proposed in this study we were able to overcome this problem and achieve true interpretations of normal and abnormal ECG morphologies to arrive at a decision. The abovementioned ECG problems can thus be addressed to obtain a robust delineation model. In this study, we successfully developed a robust ConvBiLSTM that was tested on various ECG databases with different leads, sampling frequencies, and types of noise; our model is robust in interpreting ECG abnormalities, with a focus on arrhythmias. Other studies have also experimented with DL for ECG delineation tasks on the same dataset, with excellent results above 97% Acc, Sen and Pre [17][18][19][36][37][38].
In comparison, in terms of Acc, Sen and Pre, our ConvBiLSTM model outperformed the other DL techniques (Table 8). This study aimed to propose a robust ECG delineation model based on ConvBiLSTM that classifies P-waves, QRS-complexes, and T-waves with high precision; based on the results of the delineation model, we interpreted the ECG morphological abnormalities. We generated 36 models, and the model with the best results achieved 99.97% accuracy, 99.92% sensitivity, and 99.93% precision for ECG waveform classification (P-wave, QRS-complex, T-wave, and isoelectric-line classes). We compared our results with those of other DL techniques and also improved our ECG delineation model relative to our previous works 8,18,19. The improved ConvBiLSTM model was combined with the simplest hyperparameter-tuning optimization: a grid of hyperparameter values was set up, and for each combination the model was trained and scored using validation data. In terms of improvement, the P-wave results for Acc, Sen and Pre increased over those of our previous works 18,19. In this study, we successfully classified P-waves so that the abnormal morphologies of AF/AFL could be correctly interpreted on other ECG record databases. Additionally, for the QRS complex we achieved 100% Acc when determining the RR-interval calculation. Finally, the T-wave performance results improved on those of our previous works, with the highest Acc, Sen and Pre 18,19. Based on these results, the interpretation of other heart abnormalities can be considered for early diagnosis by cardiologists.

Using the best model from the ECG delineation task, we concentrated on interpreting the morphological abnormalities (AF, AFL, SBR, SVTA, T and VT). The performance results were excellent; however, this study has limitations. First, in all experiments only single-lead ECG records (lead II) were used to generate the improved ConvBiLSTM with the GSO algorithm; we have not yet applied multilead or 12-lead ECGs for ECG delineation, although many types of heart abnormalities require a standard 12-lead ECG observation because each ECG signal has a different heart vector orientation. Second, the T-wave characteristics show maximal error at the T-wave offset, whose delineation is a well-known hard problem. Third, further generalization of the ECG delineation model on other ECG databases is still required; more datasets could achieve greater generalization and more robust performance.
Conclusion In this study, the ECG delineation task was conducted to generate an automated and robust model for interpreting abnormal ECG waveforms. Using the DL approach, the performance results improved when the ConvBiLSTM model was combined with the simplest hyperparameter-tuning algorithm. The ECG waveform classification was used to classify the onsets and offsets of the P-QRS-T waves and no waves, with performance above 99% for Acc, Sen, and Pre. Using the grid search optimization algorithm, we simply divided the tuning domain of the hyperparameters into a discrete grid; this approach determined the optimal hyperparameters yielding the most precise predictions. With the improved ConvBiLSTM model, we experimented with a DL-based delineation model to interpret abnormalities of the ECG waveform, i.e., AF, AFL, SBR, SVTA, T and VT; these heart abnormalities are characterized by irregular/regular heartbeats and the absence of P-waves. Across all experimental datasets used in this study, the results show that the improved ConvBiLSTM can successfully interpret records with AF, AFL, SBR, SVTA, T and VT as AF, AFL, SBR, SVTA, T and VT, respectively. The excellent ECG waveform classification results reflect the vast opportunity to use the improved ConvBiLSTM model to analyze ECG recordings to diagnose other heart abnormalities related to ECG morphology. The existence of noise does not affect the performance of the proposed ECG delineation task. With varying morphologies and features, the model is robust and could be implemented in clinical practice.

Figure 4. The performance results of the 36 ConvBiLSTM models using the GSO algorithm on the validation set.

Figure 5. The Acc, Sen and Pre for each ECG waveform class of the proposed model.

Besides testing with public ECG databases, we also tested the proposed model with two ECG records from the General Mohammad Hossein Hospital (Indonesia) database. The ECG signals were recorded by a 12-channel CardioCare 2000 Bionet ECG machine, and all records were digitized at 300 Hz. The signal morphologies were full of noise and abnormal ECG patterns. Despite these challenging ECG records, our proposed model correctly interprets AF.

Table 1. The ECG records tested for abnormality morphology interpretation.

Table 2. The SNR results of the wavelet functions.

Table 3. The GSO algorithm's combinations of hyperparameters and value ranges.

Table 4. Per-class performance results of the best ConvBiLSTM model with the GSO algorithm on the validation set.

Table 5. Per-class performance results of the best ConvBiLSTM model with the GSO algorithm on the testing set (QTDB).
Performance results (%): P-wave, QRS-complex, T-wave, Isoelectric line, Zero-padding, Average [table column headers; the adjoining text is truncated in the source] ... high-performance results in terms of Acc, Sen, and Pre, with performance results above 99.92%. Third, the best model was tested on 142, 14 and 3 records in LUDB for NSR, AF and AFL, respectively. As a result, the best model can interpret the records as NSR, AF and AFL. Fourth, the best model was also tested using other databases, i.e., the PhysioNet/Computing in Cardiology Challenge 2017 and the China Physiological Signal Challenge 2018 databases. We were only concerned with interpreting AF in both databases. A total of 31 AF records from both databases achieved 100% interpretation of AF. Although the LUDB, PhysioNet/Computing in Cardiology Challenge 2017 and China Physiological Signal Challenge 2018 recordings were sampled at 500 Hz, 300 Hz, and 500 Hz, respectively, the differences in sampling frequency did not affect the performance of the best model (Model 36). In our previous work 18, we stated that our first-generation ConvBiLSTM was limited in that it needed to be adjusted for different ECG sampling frequencies and leads. With distinct sampling frequencies and leads, in this study, we successfully improved the ConvBiLSTM performance results and interpreted NSR and arrhythmia (AF, AFL, SBR, SVTA, T, and VT) records with excellent results.
Table 7. Sample predicted results of AF and AFL interpretation using the PhysioNet/Computing in Cardiology Challenge 2017, China Physiological Signal Challenge 2018, ECG Arrhythmia of Chapman University, MIT Arrhythmias Database and General Mohammad Hossein Hospital (Indonesia) databases.
Table 8. Benchmark studies for ECG delineation performance.
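The Acc, Sen and Pre values reported above are consistent with the standard one-vs-rest definitions over a multi-class confusion matrix; whether the paper computes them exactly this way is an assumption. A minimal Python sketch with made-up toy counts:

import numpy as np

# Toy 3-class confusion matrix (rows: true class, columns: predicted class).
cm = np.array([[50, 2, 1],
               [3, 45, 2],
               [1, 1, 60]])

for c in range(cm.shape[0]):
    tp = cm[c, c]
    fn = cm[c, :].sum() - tp
    fp = cm[:, c].sum() - tp
    tn = cm.sum() - tp - fn - fp
    acc = (tp + tn) / cm.sum()      # accuracy (one-vs-rest)
    sen = tp / (tp + fn)            # sensitivity (recall)
    pre = tp / (tp + fp)            # precision
    print(f"class {c}: Acc={acc:.4f} Sen={sen:.4f} Pre={pre:.4f}")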
7,761.6
2023-08-23T00:00:00.000
[ "Medicine", "Computer Science" ]
Dialysis Reimbursement: What Impact Do Different Models Have on Clinical Choices? Allowing patients to live for decades without the function of a vital organ is a medical miracle, but one that is not without cost both in terms of morbidity and quality of life and in economic terms. Renal replacement therapy (RRT) consumes between 2% and 5% of the overall health care expenditure in countries where dialysis is available without restrictions. While transplantation is the preferred treatment in patients without contraindications, old age and comorbidity limit its indications, and low organ availability may result in long waiting times. As a consequence, 30–70% of the patients depend on dialysis, which remains the main determinant of the cost of RRT. Costs of dialysis are differently defined, and its reimbursement follows different rules. There are three main ways of establishing dialysis reimbursement. The first involves dividing dialysis into a series of elements and reimbursing each one separately (dialysis itself, medications, drugs, transportation, hospitalisation, etc.). The second, known as the capitation system, consists of merging these elements in a per capita reimbursement, while the third, usually called the bundle system, entails identifying a core of procedures intrinsically linked to treatment (e.g., dialysis sessions, tests, intradialytic drugs). Each one has advantages and drawbacks, and impacts differently on the organization and delivery of care: payment per session may favour fragmentation and make a global appraisal difficult; a correct capitation system needs a careful correction for comorbidity, and may exacerbate competition between public and private settings, the latter aiming at selecting the least complex cases; a bundle system, in which the main elements linked to the dialysis sessions are considered together, may be a good compromise but risks penalising complex patients, and requires a rapid adaptation to treatment changes. Retarding dialysis is a clinical and economic goal, but the incentives for predialysis care are not established and its development may be unfavourable for the provider. A closer cooperation between policymakers, economists and nephrologists is needed to ensure a high quality of dialysis care. Introduction Renal replacement therapy (RRT) is a life-saving, long-lasting, expensive treatment. In Europe, Japan, the United States and Canada, about one person in 1000 is presently alive thanks to dialysis care. Conversely, the cost of dialysis supplies is the most relevant item in most emerging countries. A good marker of the differences is the reuse of dialysers: in highly resourced countries, the cost of working time needed for processing is too high to make this procedure cost-effective, while the reverse is true in many emerging countries where the reuse of dialysers is still a common practice [43][44][45]. Reimbursement for dialysis follows different rules worldwide, including or excluding some items, and considering quality requirements or not. Large studies conducted to analyse differences in dialysis policies and to investigate what impact different policies have on the results of treatment, such as the DOPPS (Dialysis Outcomes and Practice Patterns Study), have made the medical community aware that while the care of the individual patient is important, how the system is organized is also a key factor [46][47][48][49][50]. Meanwhile, dialysis is undergoing a series of fundamental clinical changes.
In common with most other branches of clinical medicine dealing with chronic diseases, the shift from standardization to personalization has had an impact on perspectives and care [51][52][53]. The present opinion paper was planned to discuss the potential advantages and drawbacks of different policies of reimbursement for dialysis. Taking into account the development of personalized treatments, it focuses on four paradigmatic issues: the relationship between haemodialysis and peritoneal dialysis; incremental dialysis; intensive haemodialysis; and predialysis care. The authors have used their countries, Italy and France, as main examples of how a given healthcare system can have an impact on the overall care of kidney patients. Costs and Reimbursements: Not the Same Story Expenditure for dialysis and reimbursement for dialysis are closely linked and mutually influence each other. However, they do not have the same meaning [14,15,41]. Costs depend on structure, organization, supplies, and healthcare personnel. Although they are also significantly influenced by social and political issues (e.g., the cost of healthcare workers depends on salaries), costs are largely determined by medical choices (organization of the dialysis ward, choice of materials, etc.). The reimbursement system is usually determined by policy decisions (favouring in-hospital or out-of-hospital treatment; financing public or private structures; increasing high-tolerance modalities, etc.) [14,15,41,[54][55][56][57][58]. For example, in Europe, Canada and the United States, as well as in Australia, the cost of healthcare professionals has a greater impact than the cost of materials; the costs of the "structure" (private or public hospitals, etc.) vary widely, and may be relevant in particular in countries where the efficiency of the healthcare system is low, as measured by the high "indirect" costs (costs of the overall hospital structure, including or excluding transportation) that are not always declared but may be as high as 20-30% of the overall expenditure [59]. Overall, in Europe, the costs of materials differ little, while there is a significant range of salaries, which is only partly compensated for by differences in workload: for example, a French centre with up to 15 dialysis beds employs at least two full-time nephrologists, and one with up to 30 dialysis beds at least three. This means, for example, that a pool of 80 in-hospital dialysis patients can be managed by only two full-time physicians, while out-of-hospital figures are even lower [60]. Italian figures are less well defined, but the current rule is that at least six nephrologists are needed in each nephrology structure, thus assuming that a higher number of specialists is needed in a medium-sized dialysis ward. This policy, originally intended as a way to ensure the presence of an adequate number of nephrologists in centres in small towns and rural areas, led to a decrease in the independence of small nephrology structures, and many of them were absorbed by larger internal medicine wards [61]. France and Italy have roughly the same resident population, but Italy has almost twice as many nephrologists as France. The higher number of nephrologists in Italy partially compensates for a lower number of secretaries, nurses and aides. The difference in salaries is difficult to assess, due to the high variability between public and private, and in Italy among regions. This difference also has an important impact on research.
Physicians working in French hospital centres are encouraged to conduct studies and publish articles on their research by the SIGAPS-SIGREC system (SIGAPS standing for Système d'interrogation, de gestion et d'analyse des publications scientifiques, and SIGREC for Système d'information et de gestion de la recherche et des essais cliniques), which, over a period of four years starting from the year after publication, allocates about €64,000 for each paper published (first or last author) in a journal ranking in the first 10% in its field, and up to €8,000 for a paper (first or last author) published in a journal ranking in the last quartile [62]. These incentives do not exist in other countries, such as Italy; however, a gross analysis of the PubMed database for the year 2017, employing the terms dialysis, haemodialysis or haemodiafiltration and Italy or France, retrieved 665 papers for Italy and 404 for France. While the issue is complex, these data suggest that a higher number of specialists is more efficacious than a high, but delayed, economic reward, and that the latter should probably be at least partially converted into employing a larger workforce. Dialysis Reimbursement: Per Session, Per Patient, Per Bundle There are three main ways of calculating the cost of dialysis and establishing how it should be reimbursed. The first involves dividing dialysis into a series of elements and reimbursing each one separately (dialysis itself, medications, intradialytic drugs, transportation, chronic treatments, laboratory tests, imaging, consultations, hospitalizations, home assistance). The second, known as the capitation system, consists of merging these elements, partially or entirely, in a per capita reimbursement, while the third, usually called the bundle system, entails identifying a core of procedures intrinsically linked to treatment (e.g., dialysis sessions, tests, drugs and transportation). Each one has advantages and drawbacks, and each one impacts differently on the organization and delivery of dialysis care, as will be discussed in the pages that follow. Reimbursement Per Separate Element: Dialysis Treatment Seen as a Matryoshka Delivering dialysis entails more than merely delivering a session of blood purification. Compensating for a lack of kidney function also includes the use of medications (from erythropoietin to anti-hypertensive drugs), controlling the efficacy of dialysis sessions via regular blood tests, and checking for cardiovascular diseases and other frequently associated comorbidities. The first advantage of dealing with each item separately is that this allows us to better understand the cost of each one, targeting actions needed to control costs to specific issues, such as transportation or blood tests (Figure 1).
Figure 1. Dialysis costs as a matryoshka.
A second advantage is that the different items do not compete with one another, and this helps to protect clinical decisions from being influenced by global budget constraints (for example, transportation costs, higher in rural areas, do not compete with costs for blood tests in the same settings). A third element in favour of separating items is that in a given setting the amount spent on a dialysis session (dialyser, dialysis machine, healthcare workers) is similar for all patients, while the costs of check-ups, drugs and imaging largely depend on age and comorbidity, and even in the same setting can vary widely from patient to patient. Thus, separating the elements may more easily allow for stratification and may help justify cost differences, for example for comorbidity. For instance, a four-hour haemodiafiltration session, performed with a high-flux membrane, has a supply and nursing cost that is roughly the same for a 40-year-old patient who started dialysis two years previously, is waitlisted for kidney transplantation and has a low comorbidity score, and for an 80-year-old patient with high comorbidity and severe cardiovascular disease. However, the cost of drugs, biochemical controls and imaging increases with age and comorbidity, and a separate analysis is more likely to capture the differences. The cons are, however, many.
While this approach is appealing in the care of complex patients, since it avoids potentially dangerous interference between the items and phases of care, it could lead to limits on the overall budget dedicated to the more complex patients. Separating the different items is generally difficult, and if the distinction corresponds to a separation of providers or payers (as it does in France, where transportation and in-hospital care have separate budgets), an overall advantage of one therapeutic choice may be missed, or result in a paradoxical disadvantage to one of the parties, as the case of incremental dialysis, discussed in a later paragraph, shows. Furthermore, separating items may lead to a focus on issues of lesser relevance while forgetting others; one example, derived from the Italian experience, may be the emphasis put on reducing the cost and number of blood tests or consumables, while completely forgetting the cost of dialysis waste management, which could be as high as 50% of the overall cost of a new dialyser and blood lines [63]. In this regard, the separation of items may lead to losing sight of the overall problem. The "Capitation System" of Reimbursement: Dialysis Treatment Seen with a Distributive Approach There is an obvious advantage to merging everything entailed in dialysis treatment into a single "mega" reimbursement payment [64]. Patients need integrated care, and integrating reimbursement supports a holistic view and helps to avoid fragmenting treatment (Figure 2). Furthermore, it can make it possible to reinvest in specific aspects of care by favouring the careful distribution of the overall budget. An example is home assistance for patients who wish to be treated at home but who lack a partner for dialysis. Lowering transportation and hospital costs means the money saved can be used to pay a helper, a system that has allowed peritoneal dialysis to be more widely used in some areas of Italy [65,66]. In addition, such a system makes it possible to bypass the need to define a maximum affordable cost per item per patient, thus allowing a nephrology centre to allocate more resources to fragile patients, whose costs are counterbalanced by those of younger and fitter patients, who are less clinically demanding. In such a context, physicians act as "resource regulators" whose role is to favour the use of the least expensive options for each item, and to make money available to pay for more expensive treatments for special cases. An example is expanding home dialysis and investing in in-centre daily dialysis for fragile or pregnant patients. This is, however, not fully the case in the United States, where a capitation system was recently modified towards a bundled care system, with positive effects on the development of home care [67,68]. There are two requisites for the smooth functioning of the capitation system: dealing with a critical mass of patients, and treating patients with a different case mix (Figure 2).
In other words, performance is optimal only when a sufficient number of patients are treated (more than would normally be in care in a small dialysis centre) to allow physicians to reallocate resources. Furthermore, due to the obvious attrition that accompanies kidney transplantation (or out-of-hospital dialysis, especially when managed by different providers), the case mix may be uniformly high in in-hospital centres. The paradoxical risk is to penalize the centres with the best overall performance (high and rapid access to kidney transplantation; wide use of out-of-hospital dialysis). The rate of attrition may be particularly important when the system is mixed, for-profit and non-profit, since for-profit structures will tend to select the "least complicated" and therefore least expensive patients [69,70]. A strict capitation system may therefore induce a selection process that is potentially detrimental for non-profit structures, which are, on the contrary, those that tend to have better results [69,70]. Correction for comorbidity can partially correct for these discrepancies. However, the assessment of comorbidity is complex; no system is uniformly the best one, and the definition of frailty, nutritional status and comorbidity is either very subjective, and not graded, or very complex (and never devoid of a subjective component) [71][72][73][74][75][76][77][78][79][80].
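To make the fee-for-service versus capitation tension concrete before turning to the incremental-dialysis case in the next section, the following toy Python sketch contrasts the two models for a single patient over one month. All monetary figures are hypothetical illustrations, not values from any real reimbursement schedule:

SESSION_FEE = 250     # hypothetical fee-for-service reimbursement per session (EUR)
SESSION_COST = 180    # hypothetical provider cost per session (EUR)
CAPITATION = 3200     # hypothetical fixed monthly per-patient payment (EUR)

def monthly(sessions_per_week):
    sessions = sessions_per_week * 4.33                   # average weeks per month
    payer_ffs = sessions * SESSION_FEE                    # payer outlay, fee-for-service
    margin_ffs = sessions * (SESSION_FEE - SESSION_COST)  # provider margin, fee-for-service
    margin_cap = CAPITATION - sessions * SESSION_COST     # provider margin, capitation
    return payer_ffs, margin_ffs, margin_cap

for spw in (2, 3):    # incremental (2/week) vs conventional (3/week) schedules
    payer, ffs, cap = monthly(spw)
    print(f"{spw} sessions/week: payer pays {payer:.0f} EUR; "
          f"provider margin {ffs:.0f} EUR (fee-for-service) vs {cap:.0f} EUR (capitation)")

With these assumed figures, fewer weekly sessions reduce the payer's bill but also the provider's fee-for-service margin, while under capitation the provider's margin grows as sessions decrease, which is the incentive asymmetry discussed in the text.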
What Is Favourable for the Patient and for the System May Not Be Favourable for the Hospital: The Case of Incremental Dialysis In incremental dialysis, patients start treatment with one or two sessions per week and progressively increase to a full dialysis schedule, or even daily dialysis [52,[81][82][83]. The cost of the supplies for each session does not change, while the cost per patient, for example per month, is deeply affected by the clinical choice of 1-2 (incremental) or 4-6 (intensive) dialysis sessions per week. The usual policy is to check the results in incremental dialysis more frequently than in conventional dialysis. Therefore, if expenditures for blood tests have to be added to a payment per session, the single session per week costs more, while the total treatment cost is lower. Furthermore, managing patients with personalized treatments makes organizing the dialysis ward more complex. Using a system of incremental dialysis, the same number of patients can be treated in a lower number of sessions. This means that, seen in the context of reimbursement per session of treatment, incremental dialysis is advantageous for the payer (the healthcare system in France and Italy: fewer sessions, less spent for transportation), but leads to higher expenditures for the provider (public hospital or private provider: difficult and time-consuming organization of the occupation of dialysis posts; higher cost of check-ups where they are considered as part of what a dialysis session costs). Since it is usually the provider that has the final say on the matter, the obvious risk is to disincentivize options that allow a centre to provide better, more personalized care, since they are more complicated and less lucrative. The reverse would be true for a capitation system, where fewer global resources are employed for patients on less frequent dialysis, with a potential advantage for the provider, but with the risk of keeping the number of dialysis sessions to a low, unsafe level. Is Bundled Care the Solution? Defining the Core, Defining Comorbidity: A Difficult Mission An appealing alternative would be to identify a core of dialysis-related activities so that these could be reimbursed together, plus a series of specific "frequent activities" that would be reimbursed according to need (Figure 3). This is what is called the bundled system of care, also referred to as episode-based payment, episode payment, case rate, evidence-based case rate, global bundled payment, and package pricing [40,67,[84][85][86][87][88][89]. Intended to be a middle way between the fee-for-service payment and capitation, this system would determine the amount of reimbursement due on the basis of expected costs for clinically defined "episodes" of care. The concept is appealing and is already partially integrated in the reimbursement of dialysis in many European countries (for example, erythropoiesis-stimulating agents (ESAs) are included in most fees for dialysis sessions). The effect of such a shift in payment policy is enormous. For example, the studies dealing with changes in the use of ESAs in the USA highlight how inclusion in the bundle changed clinical practice, with an enormous reduction in the use of ESAs in favour of higher iron levels. It remains to be determined whether this improved, impaired or had no effect on survival results. 
Yet, regardless of results, ESAs are a good example of how ethics and economics are linked and demonstrate that medical practice can be rapidly affected by changes in reimbursement policies [86][87][88][89]. The ESA experience shows the need for a careful analysis of the potential effects of further changes in the bundled payment system, for example with the inclusion of oral drugs, initially foreseen for 2025. The potential advantage of the bundle is its flexibility. It can be designed differently, and is adaptable to a variety of contexts; however, unlike the capitation system, bundled reimbursement does not capture all costs, and differently from the fee-for-service model, it may make it difficult to disentangle what was spent on specific elements in the course of treatment. A well-designed bundle system should help clinicians to wisely meet their patients' needs without discontenting providers, but often this is not what happens, and it is not easy to change the system so that it takes variations in patients' care into account. Once more, correction for comorbidity is possible, but there is no single score that precisely captures dialysis-related comorbidity, and given its complexity, variation over time, and the subjectivity of evaluation, grading comorbidity is usually not feasible [90][91][92][93][94]. A Fundamental Question: Haemodialysis or Peritoneal Dialysis? The diffusion of peritoneal dialysis (PD) differs from country to country. The treatment is widely used in both rich and poor settings, in Canada, Australia and New Zealand, where distances make home treatment preferable, as well as in Mexico and Taiwan, where less expensive treatment options are chosen because of budgetary constraints [95][96][97][98][99][100][101][102]. Cost issues are, however, not limited to the emerging countries, since the weight of dialysis is remarkable in all contexts, and the increase in home treatments, and in particular in home haemodialysis, is advocated as a means to optimize costs and resources, with clinical outcomes at least equivalent to hospital dialysis [99][100][101][102]. Even if "peritoneal dialysis first" or "home haemodialysis first" probably represents a winning strategy for patients (more autonomy, more empowerment, better care), and for the health care system (lower costs of transportation, lower overall indirect costs and probably also lower costs of direct treatment, especially where PD is non assisted), this strategy is not uniformly developed, partly because of the fact that reimbursement is often lower and the advantage to the individuals and to society is not uniformly accompanied by an advantage to the dialysis providers [103][104][105][106][107]. Political decisions can play an important role: for example, the recent increase in peritoneal dialysis in Switzerland is due to a combination of favourable reimbursement for PD and a reduction in the reimbursement for haemodialysis if a minimum number of PD patients is not reached [98,99]. The availability of assisted peritoneal dialysis programs could profoundly change the penetration of peritoneal dialysis, in particular in elderly patients. However, the lower prevalence of PD in France, where assisted PD is the rule, as compared to Italy, where assisted PD is not available, once more indicates that things are not as simple as they may seem, and that economic incentives and drawbacks are just some of the potential factors determining treatment choices [96][97][98][99][100][101][102]. 
Figure 3 (labels): dialysis and tests; dialysis and intradialytic drugs and imaging; dialysis and drugs and tests and transportation and imaging.
One-Size-Fits-All or Tailor-Made Treatments? The heterogeneity of dialysis patients is a crucial point. It has been raised in all international comparisons and extensively discussed in relationship to costs [14][15][16][47][48][49][50]. In an era of precision medicine, individualized treatment and holistic approaches, delivering a fixed dose of dialysis to all patients can be likened to using the same washing machine setting for cotton and cashmere (Figure 4).
Furthermore, some individuals, in particular those affected by multiple and severe comorbidities, may not gain any benefit from dialysis in terms of morbidity and mortality; while the controversy about so-called "palliative" or "conservative" care is beyond the scope of this review, the advantage of this open discussion is that it points out that the need for dialysis cannot be reduced to a mere series of indexes, each of which is incomplete and potentially misleading [7][8][9][10][11][12][13][51][52][53][54][55][108][109][110][111][112][113][114]. The failure of early dialysis to prolong life and improve its quality has caused nephrologists to reflect on the negative effects of treatment [7,[115][116][117][118][119][120]. This was also the starting point for reconsidering incremental dialysis and for realizing that, especially in elderly patients, the advantages of a high dialysis dose are often counterbalanced by the iatrogenicity of treatment [120][121][122][123][124][125].
Increasing the dialysis dose by increasing the number (and/or duration) of sessions may, conversely, be necessary in particular situations, such as pregnancy or high metabolic needs, or be a suitable way to attain tolerance in fragile individuals [49][50][51][52][53][125][126][127]. However, standardization is still the most commonly pursued policy, first because of its simplicity, secondly because it leaves an important part of dialysis management to nurses, thus reducing the number of physicians involved (and cutting costs), and finally because "working by numbers" may be culturally reassuring. Personalization of dialysis is compatible with all reimbursement models, but can create problems in each of them: in a fee-for-service system, each session is reimbursed, and more frequent dialysis may be favourable for the provider; however, there may be limitations (for example, a maximum of three dialysis sessions per week are reimbursed, or only patients on three sessions per week are reimbursed), impairing flexibility and making treatment personalization difficult if not impossible. In a capitation model, combining less frequent (incremental) and more frequent dialysis sessions allows for greater flexibility; once more, however, the model is not devoid of risks, in particular of limiting a higher number of dialysis sessions for economic advantages. In a bundled system, the "dialysis package" can be designed differently, allowing a certain degree of personalization (or not). The focus switches to the definition of the "package" itself, maintaining a balance between the need for flexibility and clear definitions. Predialysis Care May Be Good for the Patient and for the Community, but Less Rewarding for the Hospital Economic reasoning also applies to determining the policies of dialysis start. Retarding dialysis is a time-consuming task, and the further kidney disease progresses, the greater the need for clinical check-ups and blood tests. However, the average reimbursement for a clinical visit that will require at least 30 min of a physician's time is 10% of what is paid for a dialysis session, which will normally entail no more than 5 min of medical controls. Dialysis usually allows an economic advantage for the provider, once a critical mass of treatments is reached. This may not be the case for outpatient care. The data about the "day hospital", in which patients are admitted for a one-day hospitalisation when they need complex diagnostics or treatments that cannot be performed outside the hospital, are likewise not reassuring; in France, it has been calculated that the overall cost in 2016 was over 800 euros (213 for logistics and "housing" and 227 for physicians and nurses), against a reimbursement of 614 euros per day. The advantage for the patient and for society of safely retarding dialysis is intuitive, but there is hardly any advantage involved for the structure delivering predialysis care. This means that, while dialysis is expensive, it may be economically advantageous for the structure providing treatment. Prevention is theoretically a good option in all its forms, even the late-stage ones (prevention of kidney disease should of course be the first goal; prevention of progression should be pursued in all chronic patients, but even in the last stage, stabilizing kidney disease may be seen as a form of "late" prevention of the need for dialysis start).
Previous studies by our group suggested that delaying the start of dialysis by two years could save enough money to pay the salary of a nephrologist for a year. This crude estimate, intended to raise interest in secondary prevention of end-stage kidney disease, should be borne in mind in organizing nephrology care [128]. However, the budgets for predialysis and dialysis care are usually separate and it may be difficult to demonstrate that comprehensive care really helps retard dialysis start, an issue that arises in other contexts, for example the dietary management of chronic kidney disease [129][130][131][132][133]. There is a clear need for implementation of a comprehensive network of predialysis care to optimize resources; investment in medical care has the advantage of increasing the flexibility of nephrology structures and making more efficient use of physicians' time. This could then be translated into time to dedicate to clinical tasks and research. Concluding Remarks In the best scenario, all patients in all countries would receive all the treatment they need to preserve life and its quality as long as possible. Personalization, integration and flexibility are increasingly included in this comprehensive vision. Since this is not the rule, but still a goal to pursue, experienced clinicians should probably spend more time with economists and policy-makers to ensure the wise use of our finite resources, and, in line with developments in medical knowledge, adapt our always-imperfect systems to patients' changing needs. Acknowledgments: We thank Susan Finnel for her careful language review and Nadia Kuprina for her artwork of the cat in the washing machine. Conflicts of Interest: The authors declare no conflict of interest.
8,335.6
2019-02-01T00:00:00.000
[ "Medicine", "Economics" ]
Tunable surface plasmon resonance on an elastomeric substrate In this study, we demonstrate that the periods of metallic gratings on elastomeric substrates can be tuned with external strain, and hence can be used to control the resonance condition of surface plasmon polaritons. We have excited the plasmon resonance on elastomeric gratings coated with gold and silver. The grating period is increased by up to 25% by applying an external mechanical strain. The tunability of the elastomeric substrate provides the opportunity to use such gratings as efficient surface-enhanced Raman spectroscopy substrates. It has been demonstrated that the Raman signal can be maximized by applying an external mechanical strain to the elastomeric grating. © 2009 Optical Society of America OCIS codes: (050.2770) Gratings, (240.6680) Surface Plasmons, (240.6695) Surface-enhanced Raman Spectroscopy, (250.5403) Plasmonics References and links 1. J. Homola, S. S. Yee, and G. Gauglitz, "Surface plasmon resonance sensors: review," Sens. Actuators, B 54, 3–15 (1999). 2. H. P. Liang, L. J. Wan, C. L. Bai, and L. Jiang, "Gold hollow nanospheres: Tunable surface plasmon resonance controlled by interior-cavity sizes," J. Phys. Chem. B 109, 7795–7800 (2005). 3. J. Becker, I. Zins, A. Jakab, Y. Khalavka, O. Schubert, and C. Sonnichsen, "Plasmonic focusing reduces ensemble linewidth of silver-coated gold nanorods," Nano Lett. 8, 1719–1723 (2008). 4. Y. Yang, S. Matsubara, M. Nogami, J. L. Shi, and W. M. Huang, "One-dimensional self-assembly of gold nanoparticles for tunable surface plasmon resonance properties," Nanotech. 17, 2821–2827 (2006). 5. W. A. Weimer and M. J. Dyer, "Tunable surface plasmon resonance silver films," Appl. Phys. Lett. 79, 3164–3166 (2001). 6. A. Biswas, O. C. Aktas, U. Schurmann, U. Saeed, V. Zaporojtchenko, F. Faupel, and T. Strunskus, "Tunable multiple plasmon resonance wavelengths response from multicomponent polymer-metal nanocomposite systems," Appl. Phys. Lett. 84, 2655–2657 (2004). 7. T. R. Jensen, M. D. Malinsky, C. L. Haynes, and R. P. Van Duyne, "Nanosphere lithography: Tunable localized surface plasmon resonance spectra of silver nanoparticles," J. Phys. Chem. B 104, 10549–10556 (2000). 8. W. Dickson, G. A. Wurtz, P. R. Evans, R. J. Pollard, and A. V. Zayats, "Electronically controlled surface plasmon dispersion and optical transmission through metallic hole arrays using liquid crystal," Nano Lett. 8, 281–286 (2008). 9. H. L. Chen, K. C. Hsieh, C. H. Lin, and S. H. Chen, "Using direct nanoimprinting of ferroelectric films to prepare devices exhibiting bi-directionally tunable surface plasmon resonances," Nanotech. 19, 435304 (2008). 10. G. Xu, Y. Chen, M. Tazawa, and P. Jin, "Surface plasmon resonance of silver nanoparticles on vanadium dioxide," J. Phys. Chem. B 110, 2051–2056 (2006). 11. G. Xu, C. M. Huang, M. Tazawa, P. Jin, and D. M. Chen, "Nano-Ag on vanadium dioxide. II. Thermal tuning of surface plasmon resonance," J. Appl. Phys. 104, 053102 (2008). 12. R. A. Alvarez-Puebla, D. J. Ross, G. A. Nazri, and R. F. Aroca, "Surface-enhanced Raman scattering on nanoshells with tunable surface plasmon resonance," Langmuir 21, 10504–10508 (2005). 13. A. Kocabas, G. Ertas, S. S. Senlik, and A. Aydinli, "Plasmonic band gap structures for surface-enhanced Raman scattering," Opt. Express 16, 12469–12477 (2008).
14. J. B. Jackson and N. J. Halas, "Surface-enhanced Raman scattering on tunable plasmonic nanoparticle substrates," Proc. Nat. Acad. Sci. U.S.A. 101, 17930–17935 (2004). 15. Y. Lu, G. L. Liu, and L. P. Lee, "High-density silver nanoparticle film with temperature-controllable interparticle spacing for a tunable surface enhanced Raman scattering substrate," Nano Lett. 5, 5–9 (2005). 16. P. C. Lin, S. Vajpayee, A. Jagota, C. Y. Hui, and S. Yang, "Mechanically tunable dry adhesive from wrinkled elastomers," Soft Matter 4, 1830–1835 (2008). 17. A. N. Simonov, O. Akhzar-Mehr, and G. Vdovin, "Light scanner based on a viscoelastic stretchable grating," Opt. Lett. 30, 949–951 (2005). 18. A. N. Simonov, S. Grabarnik, and G. Vdovin, "Stretchable diffraction gratings for spectrometry," Opt. Express 15, 9784–9792 (2007). 19. D.-Y. Khang, H. Jiang, Y. Huang, and J. Rogers, "A Stretchable Form of Single-Crystal Silicon for Electronics on Elastomeric Substrates," Science 311, 208–212 (2006). 20. A. Kocabas, A. Dana, and A. Aydinli, "Excitation of a surface plasmon with an elastomeric grating," Appl. Phys. Lett. 89, 041123 (2006). 21. H. Raether, Surface Plasmons (Springer, Berlin, 1988). 22. R. Schasfoort and A. Tudos, Handbook of Surface Plasmon Resonance (RSC, Cambridge, UK, 2008). 23. R. A. Guerrero, J. T. Barretto, J. L. V. Uy, I. B. Culaba, and B. O. Chan, "Effects of spontaneous surface buckling on the diffraction performance of an Au-coated elastomeric grating," Opt. Commun. 270, 1–7 (2007). 24. T. Li, Z. Huang, Z. Suo, S. P. Lacour, and S. Wagner, "Stretchability of thin metal films on elastomer substrates," Appl. Phys. Lett. 85, 3435–3437 (2004). Introduction The surface plasmon resonance (SPR) phenomenon observed on noble metal surfaces or nanoparticles has been of great interest in several fields of research, such as nanoscale photonics and biological sensing [1]. SPR can simply be defined as collective oscillations of free electrons coupled to metal-dielectric interfaces. Metal nanoparticles such as nanospheres, nanorods, thin-film nanoislands or nanoshells show strong extinction and scattering spectra when excited at the SPR condition. On the other hand, continuous metallic films possessing a periodic perturbation also exhibit similar effects at the SPR condition. The challenge of designing effective structures to manipulate plasmonic fields and utilize them in functional devices still remains. In particular, the use of SPR in surface-enhanced Raman spectroscopy (SERS) and biological sensing requires an intelligent design in order to maximize the plasmonic enhancement. In general, there is a need to optimize the correlation between the SPR and the excitation laser line in the case of SERS applications. A similar optimization needs to be done for applications sensing the presence of a particular biological molecule or a chemical reaction. In this regard, the tunability of the SPR wavelength provides flexibility in many plasmonic sensing applications. In recent years, several different tuning mechanisms have been demonstrated, such as controlling the interior cavity sizes of nanospheres [2], changing the concentrations of core-shell nanoparticles [3], the chain length of gold nanoparticle assemblies [4], the thermal deposition parameters of silver nanoislands [5], using nanocomposite systems [6], and controlling the size, height and shape of silver nanospheres [7]. However, many of the above mechanisms require the parameters to be fixed during fabrication.
On the other hand, flexible designs utilizing electronic [8], ferroelectric [9] or thermal [10,11] tuning mechanisms have also been reported in the literature. Those methods are reversible and can be applied after the plasmonic structure is fabricated. Such a repeatable process can find wide applications in the field of Raman spectroscopy and plasmonic sensing. The tunability of the SPR wavelength on nanoshells was demonstrated [12], and a maximum SERS signal was achieved by optimizing the SPR wavelength. A similar approach for maximizing the SERS signal was demonstrated on biharmonic grating structures by changing the grating strength and tuning the SPR wavelength [13]. It was reported that by controlling the geometry of the nanoshells, the SERS enhancements can be optimized [14]. A repeatable thermal tuning mechanism using silver nanoparticles for achieving a tunable SERS substrate was reported by Lu et al. [15]. In this study, we use an elastomeric grating structure in order to excite surface plasmon polaritons (SPP) on its metallic surface. The mechanical tunability of such gratings is used in several applications, such as dry adhesives [16] with tunable surface roughness and adhesion, light scanners [17], spectrometry [18], and possible applications in stretchable electronics [19]. Recently, such elastomeric gratings have been used for the excitation of SPP on flat metallic surfaces [20]. Fabrication We report a way of tuning the SPR by applying mechanical strain to an elastomeric grating structure. The elongation of the elastomer effectively changes the period of the metallic grating. From the well-known dispersion relationship [21], it can easily be seen that the SPR wavelength also shifts as the external strain changes the period of the elastomeric grating coated with a thin metallic layer. We fabricated two silicone elastomers (Sylgard 184, Dow Corning) with gratings on top using two different methods. We chose two different periods in order to cover a wide range of wavelengths. The first elastomeric grating was generated using holographic lithography. We recorded a master grating with the desired period (665 nm) on top of a bare silicon sample using a holographic He-Cd laser beam with 325 nm wavelength exposure on a photoresist polymer (AZ1505). After developing the sample, we obtained a grating with a 665 nm period in the photoresist. The elastomeric grating was then obtained using a replication procedure: liquid polydimethylsiloxane (PDMS) is poured onto the master grating and cured at 75 °C for at least 2 hours. Note that the thickness of the elastomer is kept around 5 mm. After the curing procedure, the elastomeric stamp is peeled off from the master grating. Due to thermal contraction, there is a 1% decrease in the periodicity of the PDMS grating. To generate SPP, the PDMS grating is coated with an optimal 55 nm of silver [22] using thermal evaporation. For the second elastomer we used a commercially available ruled grating with a 530 nm period as the master grating. The fabrication of the elastomeric grating is the same as in the first case. Similarly, this grating is coated with 55 nm of silver.
Fig. 2. The normal incidence reflection spectra for three different strain values of 7.5%, 15% and 23% for grating A (main) and no strain, 6.4% and 12.8% for grating B (inset).
Results and discussions For a demonstration of the tunable periodicity of the elastomeric grating, we performed a diffraction experiment [23].
We measured the angle of the diffracted beam when the grating is excited by a 514.5 nm Ar+ laser for the 530 nm grating (A) and by a 632 nm He-Ne laser for the 665 nm grating (B). The initial length of the elastomer was measured using a caliper and the length increments were recorded using the micrometer of the mechanical stage. The mechanical strain is applied using a precision mechanical stage. The angle of the diffracted beam is recorded as a function of the applied mechanical strain. It has been shown in the literature that thin metallic films on elastomeric substrates can be stretched reversibly without any plastic deformation up to 3% [24]. Elastic deformation of metallic films is important for maintaining electrical interconnects on elastomeric substrates. In this work we deal with the grating periodicity of the metal layer. In the diffraction experiments, we have seen that the cracks on the metal surface do not change the effective grating period, but lower the quality factor of the surface plasmon resonance condition. In Fig. 1 the linear change in the periodicity of the grating is plotted up to 25% mechanical strain. To demonstrate the tunability of the SPR condition we used both gratings, A and B. The optical normal incidence reflection spectrum of the PDMS gratings is measured using an ellipsometer (JA Woolam VASE). As seen in Fig. 2, the SPR wavelengths on the 55 nm silver-coated gratings A and B are approximately 560 nm and 670 nm, respectively, in the absence of applied strain. As the elastomeric grating is stretched, the SPR wavelength red-shifts due to the increased grating periodicity. Note that the mechanical integrity of the silver film on top of the elastomeric grating may be degraded if the strain is high [24]. The normal incidence reflection spectra are recorded for different applied mechanical strain values and plotted in Fig. 3. The shift in SPR wavelength follows the linear pattern measured in the diffraction experiments. The elastomeric grating with a 530 nm period is used as a SERS substrate to measure the SERS signal of the Rhodamine 6G (R6G) molecule. The grating is coated with a 120 nm gold layer by thermal evaporation. Raman spectra were obtained with a Jobin Yvon LABRAM Raman Spectrometer equipped with a He-Ne laser which gave an excitation line at 632.81 nm. The 20 mW incident laser beam is focused by a 10x objective lens. The scattered radiation was collected by the same objective lens and sent through a Raman notch filter to a Peltier-cooled CCD detector. 10 µl of 1.0 × 10^-6 M R6G solution is drop-coated onto the elastomer and is then allowed to dry. The precision mechanical strain setup is used under the objective of the spectrometer. The elastomeric grating with the R6G sample is stretched with the mechanical strain device until the Raman signal starts to increase. In Fig. 4, the Raman spectra of R6G are plotted for three different strain values. The Raman signal is maximized when the strain is 20.8%, which corresponds to a 633 nm grating period. The Raman signal drops immediately when the strain is further increased. Although the 18.8% and 22.9% strains are equally spaced from the optimal 20.8% strain, the enhancement factor drops faster in the latter case. This can be attributed to the shape of the absorption curves seen in Fig. 2, where the reflection spectrum of the grating is not symmetrical with respect to the resonance wavelength. The variation on the short wavelength side is faster than the variation on the long wavelength side.
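The red shift with strain follows from the grating-coupling condition cited earlier: at normal incidence and first diffraction order, k_spp = 2π/Λ, so λ_SPR ≈ Λ·Re[√(ε_m·ε_d/(ε_m + ε_d))]. The short Python sketch below illustrates this estimate; the constant silver permittivity is a rough assumed value (in reality it is wavelength-dependent), and the model ignores the finite film thickness and the PDMS side of the film:

import cmath

def spr_wavelength(period_nm, eps_metal, eps_dielectric=1.0):
    # Effective SPP index at a metal/dielectric interface.
    n_spp = cmath.sqrt(eps_metal * eps_dielectric / (eps_metal + eps_dielectric))
    # First-order grating coupling at normal incidence: lambda = period * Re(n_spp).
    return period_nm * n_spp.real

eps_ag = -13.0 + 0.4j                     # assumed rough silver permittivity near 550-650 nm
for strain in (0.0, 0.075, 0.15, 0.23):   # strain values quoted for grating A
    period = 530.0 * (1.0 + strain)       # stretched period of the 530 nm grating
    print(f"strain {strain:5.1%}: period {period:5.1f} nm -> lambda_SPR ~ "
          f"{spr_wavelength(period, eps_ag):5.1f} nm")

With these assumed values, the zero-strain estimate lands near the ~560 nm resonance reported for grating A, and the resonance shifts linearly with the stretched period.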
After subtracting the background, the empirical signal enhancement factors were determined using the ratios of the peak-integrated surface-enhanced Raman vibrations to the corresponding unenhanced signal from a 55 nm thick gold film coated on a silicon surface. An enhancement factor of more than 10^5 is achieved when the grating period is approximately 633 nm at the strain value of 20.8%. The enhancement factor is approximately constant for all the Raman peaks of the R6G molecule, which shows the increase in the absorption of the excitation laser. Conclusions We have demonstrated the use of elastomeric gratings with a tunable surface plasmon resonance condition. We have tuned the surface plasmon resonance wavelength by applying a mechanical strain to elastomeric gratings coated with a thin layer of metal. The shift of the SPR wavelength shows a strong correlation with the shift of the grating period. We have shown the use of the elastomeric grating in a SERS experiment on the R6G molecule. It has been found that a maximum Raman signal can be reached by changing the surface plasmon resonance condition on the surface of the elastomeric grating. Note that the presented method is compatible with Raman and micro-Raman spectroscopy methods which utilize a fixed incident angle. It provides a simple way of exciting and tuning the surface plasmon resonance condition without using bulky prism couplers or the complex scanning mechanisms required for changing the angle of incidence. Additionally, using the right-angle reflection method provides a self-aligned mechanism for the incident and reflected beams. We believe that the method can be used not only in SERS experiments, but also in biosensing and plasmonic enhancement applications.
3,504.2
2009-05-11T00:00:00.000
[ "Physics" ]
Fingerprint Analysis and Identification of Strains ST309 as a Potential High Risk Clone in a Pseudomonas aeruginosa Population Isolated from Children with Bacteremia in Mexico City Pseudomonas aeruginosa is an opportunistic pathogen and is associated with nosocomial infections. Its ability to thrive in a broad range of environments is due to a large and diverse genome, of which its accessory genome is part. The objective of this study was to characterize P. aeruginosa strains isolated from children who developed bacteremia, using pulse-field gel electrophoresis and in terms of their genomic islands, virulence genes, multilocus sequence type, and antimicrobial susceptibility. Our results showed that the P. aeruginosa strains presented the seven virulence genes toxA, lasB, lecA, algR, plcH, phzA1, and toxR, and carried type IV pilin (TFP) alleles of group I or II. Additionally, we detected a novel pilin and accessory gene, expanding the number of TFP alleles to group VI. All strains presented the PAPI-2 island, and the majority had the exoU+/exoS+ genotype. Ten percent of the strains showed a multidrug-resistant phenotype, 18% were extensively drug-resistant, 68% were moderately resistant, and only 3% were susceptible to all the antimicrobials tested. The most prevalent acquired β-lactamase was KPC. We identified a group of ST309 strains as a potential high-risk clone. Our findings also showed that the strains isolated from patients with bacteremia have important virulence factors involved in colonization and dissemination: a TFP allele of group I or II, the presence of the exoU gene within the PAPI-2 island, and the presence of the exoS gene. INTRODUCTION Pseudomonas aeruginosa is a Gram-negative bacterium categorized as an opportunistic pathogen due to its ability to cause infections mainly in immunocompromised patients. It is a ubiquitous, metabolically versatile microorganism that is able to adapt to many environments (Gilligan, 1995; Lyczak et al., 2000). Important virulence characteristics are its formation of biofilms and its natural multiresistance to a wide range of antibiotics and disinfectants (Drenkard and Ausubel, 2002; Wolska and Szweda, 2009; Poole, 2011; Rybtke et al., 2015). This microorganism has been associated with nosocomial infections and outbreaks in Intensive Care Units (ICU) for adults, children, and neonates (Thuong et al., 2003; Agodi et al., 2007; Zhang et al., 2012). It has the capacity to colonize different surfaces; in hospitals it commonly colonizes humid sources, such as air conditioning units, sink faucets, and medical equipment (automatic ventilators and humidifiers; Agodi et al., 2007; Kerr and Snelling, 2009). Approximately 30% of the general population carry P. aeruginosa on their skin and in their mucosa and intestine (Thuong et al., 2003; Agodi et al., 2007). This bacterium is associated with chronic recurrent infections in patients with cystic fibrosis, and it carries a high mortality in children with underlying conditions such as hemato-oncological diseases, cardiovascular surgeries, extended hospitalization in the ICU, gastrointestinal malformations, and prematurity (Fergie et al., 1994; Zhang et al., 2012). Some reports have shown that the incidence of bacteremia due to P. aeruginosa falls between 0.09 and 3.8 cases per 1,000 patients, with a greater frequency in boys (Grisaru-Soen et al., 2000) with underlying conditions, such as hemato-oncological diseases. Nonetheless, P.
aeruginosa also causes infections, such as ear or skin infections, in healthy people exposed to poorly chlorinated water in swimming pools or hydromassage tubs (Mena and Gerba, 2009; Rybtke et al., 2015). The genome of P. aeruginosa is highly variable due to the insertion of different mobile elements, such as genomic and pathogenic islands, that contribute to chromosomal organization and genetic content, thereby providing the versatility that allows the bacterium to adapt better to different niches (Shen et al., 2006; Wiehlmann et al., 2007). Horizontal gene transfer (HGT) is a major force in bacterial evolution, conferring great variability within the species (Jolley and Maiden, 2010; Darmon and Leach, 2014). The majority of the studies relating P. aeruginosa to pediatric cohorts have been performed in patients with cystic fibrosis (CF; Kus et al., 2004; Kidd et al., 2015), whose genetic and phenotypic characteristics are well studied. Pilin alleles among human CF isolates belong predominantly to pilin group I. Biofilm production is thought to be a hallmark of chronic colonization of the CF lung, and P. aeruginosa "hypermutators" can be isolated from 37 to 54% of patients with chronic CF infections. MutS, a critical component of the mismatch repair system, is commonly lost in hypermutator strains, resulting in elevated mutation rates. P. aeruginosa hypermutator strains isolated from chronically infected patients are often more resistant to antibiotics, possess a mucoid phenotype with small-colony variants on culture medium, and lose both the lipopolysaccharide (LPS) O-antigen and motility (Deretic et al., 1994; Mahenthiralingam et al., 1994; Govan and Deretic, 1996; Häußler et al., 1999, 2003; Oliver et al., 2000; Leone et al., 2008; Chung et al., 2012; Kidd et al., 2012; Rybtke et al., 2015). Published data show the importance of P. aeruginosa as a cause of bacteremia in patients who develop neutropenia following chemotherapy, and the bacterium has been associated with nosocomial infections (Pronovost et al., 2006). However, there are few published data on the genetic characteristics and susceptibility patterns of P. aeruginosa strains isolated from blood samples from children who developed bacteremia and/or neutropenia following chemotherapy (Oliver et al., 2015; Peña et al., 2015). In the present study, we characterized a collection of P. aeruginosa strains isolated from the blood of 60 children with a background of underlying conditions who developed bacteremia and neutropenia post-chemotherapy in a highly specialized hospital in Mexico City. Bacterial Strains A collection of 60 clinical P. aeruginosa strains was used in this study. The clinical isolates were obtained from blood samples taken between October 2011 and May 2014. All patients were treated in the Pediatric Hospital at Centro Medico Nacional, Siglo XXI in Mexico City. The project was approved by the Ethics Committee (No. R-2014-3603-44) of the Pediatric Hospital at the Centro Medico Nacional, Instituto Mexicano del Seguro Social. In all cases, the parents or guardians were informed about the nature of the study and were asked to sign a consent form. The following reference strains were used as positive controls: the P. aeruginosa PA14 strain, a burn isolate (Berkeley, California, USA) (Lee et al., 2006); two strains from P.
aeruginosa clone C: the C strain, a typical CF isolate (Hannover Medical School, Germany), and the SG17M strain, an environmental isolate from river water in the city of Mulheim, Germany (Römling et al., 1994, 1997; Lee et al., 2014); and the P. aeruginosa PAO1 strain, a wound isolate (Melbourne, Australia) (Holloway, 1955; Lee et al., 2006). All the strains were maintained in 15% glycerol at −70 °C. Each strain was biochemically typed using conventional biochemical tests (Murray et al., 1995; Mac Faddin, 2000) and the API20 NE system (an identification system for non-enteric Gram-negative rods; bioMérieux, Inc.). Virulence Genes and Type III Secretion System Genotype (TTSS) Detection Chromosomal DNA was isolated from overnight cultures in Luria broth (Invitrogen, Carlsbad, CA, USA) of each of the 60 clinical P. aeruginosa isolates, as well as from the P. aeruginosa control strains (PA14, PAO1, C, and SG17M). DNA was purified from bacteria by miniprep (DNeasy Blood & Tissue Kit, QIAGEN, Hilden, Germany) according to the manufacturer's instructions. All DNA samples were adjusted to 100 ng/µl (measured at 260/280 nm) using Tecan Genios equipment. Seven virulence genes (toxA, lasB, lecA, algR, plcH, phzA1, and toxR) from P. aeruginosa were selected and amplified by PCR using recombinant Taq DNA polymerase (Invitrogen, Carlsbad, CA, USA) and specific primers (Morales-Espinosa et al., 2012). The type III secretion system genes (exoS, exoT, and exoU) were investigated by PCR. The primers forward 5′-ACTCGTGCGTCCCTTCGTG-3′ and reverse 5′-GATACTCTGCTGACCTCGCTCTC-3′ were used for exoS and exoT amplification. The conditions for thermal cycling were an initial denaturation cycle at 94 °C for 2 min, followed by 30 cycles of 94 °C for 1 min, annealing at 55 °C for 1 min, and 72 °C for 1 min, with a final cycle at 72 °C for 2 min. [Figure 1 | Restriction patterns created with the HinfI enzyme from PCR products of the exoT and exoS genes. DNA ladders of 100 bp (Invitrogen by Thermo Fisher Scientific, Inc.) and 50 bp (Thermo Fisher Scientific, Inc.) (lanes 1 and 8, respectively). PCR products from the P. aeruginosa PA14 strain, the PAO1 strain, and clinical strain 1208, amplified with the primers forward 5′-ACTCGTGCGTCCCTTCGTG-3′ and reverse 5′-GATACTCTGCTGACCTCGCTCTC-3′ (lanes 2, 4, and 6, respectively). Restriction pattern of the PA14 product, which gave a pattern of two bands for the exoT gene: one of 292 bp and the other of 233 bp (lane 3). Restriction pattern of the PAO1 product, which carries both the exoT and exoS genes, with band sizes of 86, 122, 140, 168, 29, and 233 bp (lane 5). Restriction pattern of the PCR product of one of our strains, which carries only the exoS gene, producing a pattern of four bands: 86, 122, 140, and 168 bp (lane 7).] Subsequently, a restriction pattern of the PCR product was created with the HinfI enzyme (Promega Life Science, Madison, Wisconsin, USA). The PA14 strain was used as a positive control for exoT, which gave a two-band restriction pattern: one band of 292 bp and the other of 233 bp. The PAO1 strain was used as a positive control for both the exoS and exoT genes, which produced six bands: 86, 122, 140, 168, 233, and 292 bp (Figure 1). The exoU gene was detected by PCR using the primers previously documented by Morales-Espinosa (Morales-Espinosa et al., 2012). Detection of the type IV pili (TFP) alleles was carried out according to Kus' characterization (Kus et al., 2004), in which P.
aeruginosa TFP are divided into five phylogenetic groups. The complete characterization of the TFP alleles was made by sequencing (Sanger method, Macrogene Metagenome Next Generation Sequencing [NGS] Service, Korea) the PCR products from two strains (1,207 and 1,242), which gave larger PCR products (GenBank accession numbers KX096875 and KX096876). PFGE (Pulse-Field Gel Electrophoresis) Analysis Genomic DNA in agarose blocks was prepared using the method previously described by Liu (Liu et al., 1993), with some modifications: allowing bacterial growth of no more than 12 h, subjecting the bacterial pellet to lysis twice, deproteinizing the DNA plugs twice, and increasing the number of washes of the DNA plugs with TBE buffer to eight. The SpeI (Roche Diagnostic GmbH, Mannheim, Germany) enzyme was used to obtain the chromosomal profiles. SpeI fragments were separated with a CHEF-DR II device (Bio-Rad, USA), and electrophoresis was performed on 1.2% agarose gels in 0.5X TBE buffer (45 mM Tris, 45 mM boric acid, 1 mM EDTA) at 10 °C, with the pulse time ramped from 5 to 25 s over 19 h at 5.3 V/cm, followed by a second block with the pulse time ramped from 5 to 60 s over 17 h at 5.3 V/cm. The sizes of the SpeI fragments were estimated using XbaI (Roche Diagnostic GmbH, Mannheim, Germany) fragments of the Salmonella Braenderup global standard H9812. The images were digitized with the Gel Logic 112 imaging system (Kodak, NY, USA). The fingerprinting profiles in the PFGE gels were analyzed using the BioNumerics v.7.1 (Applied Maths, Belgium) software package. After background subtraction and gel normalization, typing of the fingerprint profiles was carried out based on banding similarity and dissimilarity, using the Dice similarity coefficient (Dice, 1945) and the Unweighted Pair Group Method with Arithmetic Mean (UPGMA; Day and Edelsbrunner, 1984) average-linkage clustering method. MLST (Multilocus Sequence Typing) Genotype MLST was performed according to the MLST scheme for P. aeruginosa (http://pubmlst.org/paeruginosa), with some modifications to the annealing temperatures (according to each specific primer set). Acquired β-Lactamases Detection The most common acquired β-lactamases (Kos et al., 2014; Oliver et al., 2015) were searched for by PCR with specific primers for each group (Table S1). The acquired β-lactamases were sequenced (Macrogene Metagenome Next Generation Sequencing [NGS] Service, Korea) in order to determine the allele type. The conditions for thermal cycling for the β-lactamase genes were an initial denaturation cycle at 94 °C for 2 min, followed by 35 cycles of 94 °C for 1 min, annealing (temperature according to each specific primer set; Table S1) for 1 min, and 72 °C for 1 min, with a final cycle at 72 °C for 2 min. All PCR products of each gene were visualized on agarose gels. RESULTS We carried out the characterization of 60 P. aeruginosa isolates. All isolates showed biochemical patterns of P. aeruginosa (data not shown). The isolates came from children, of whom 65% were female and 35% male. The median age of the patients at the time of P. aeruginosa bacteremia diagnosis was 5.5 years, with a range of 1 month to 14 years and 8 months. All patients admitted to the hospital had an underlying disease.
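As a concrete illustration of the Dice/UPGMA fingerprint analysis described above, the sketch below clusters band presence/absence profiles with SciPy. The band matrix is invented for illustration, and the normalization steps performed in BioNumerics are not reproduced here.

```python
# Sketch (hypothetical band data, not the study's gels): clustering PFGE fingerprints
# with the Dice coefficient and UPGMA. Each strain is encoded as a presence/absence
# vector over the set of normalized band positions.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage, fcluster

# rows = strains, columns = band positions (1 = band present)
bands = np.array([
    [1, 1, 0, 1, 0, 1],   # strain A
    [1, 1, 0, 1, 0, 0],   # strain B (similar to A)
    [0, 0, 1, 0, 1, 1],   # strain C (distinct)
], dtype=bool)

# SciPy's 'dice' metric returns the Dice *dissimilarity*; similarity = 1 - distance.
dist = pdist(bands, metric="dice")
tree = linkage(dist, method="average")   # 'average' linkage = UPGMA
print(fcluster(tree, t=0.5, criterion="distance"))  # e.g. [1 1 2]: A and B cluster
```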
The most common underlying diseases were hematological and oncological diseases (52%), including acute lymphoblastic leukemia, non-Hodgkin lymphoma, solid tumors, hemophagocytic syndrome, histiocytosis, and aplastic anemia, and prematurity (24%); other underlying diseases were gastrointestinal malformations, congenital cardiopathy, Wiskott-Aldrich disease, Dandy-Walker syndrome, Chiari malformation, and nephrogenic diabetes insipidus (Figure 2). Medical records were unavailable for eight patients. In 28 patients, the bacteremia was related to a central catheter. Bacteremia developed in 21 children after chemotherapy treatment, with neutropenia and fever. The overall case fatality associated with P. aeruginosa bacteremia was 13.3% (8 of 60); these patients developed septic shock and multi-organ failure (Table 1). Frequency of Virulence Genes and TTSS Genotype Detection All of the isolates from the study's patients presented the seven virulence genes that were amplified by PCR. Due to the high identity between the exoT (GenBank accession NC_008463.1) and exoS (GenBank accession NC_002516.2) genes (>80%), we could not design specific PCR primers to amplify each gene. Therefore, we had to use a new strategy that allowed us to differentiate between the detection of exoS and exoT in each of our strains. The in silico analysis of the restriction patterns of exoS and exoT showed that the HinfI enzyme yielded two different patterns between them. [Figure 2 | Pulse-Field gel electrophoresis (PFGE) profile dendrogram and genetic and phenotypic characteristics of P. aeruginosa strains isolated from children with bacteremia. The dendrogram was generated with the Dice similarity coefficient (Dice, 1945) and UPGMA (Day and Edelsbrunner, 1984) clustering methods using PFGE images of SpeI-digested genomic DNA. The scale bar shows the correlation coefficient (%). Underlying disease (Dx): PNET, primary … (Kus et al., 2004), in which P. aeruginosa type IV pili are divided into five distinct phylogenetic groups. The GEIs genotype was assigned based on the presence/absence of genomic islands; 12 different GEIs genotypes were found (for details see Table S2, Supplementary Material). The resistance profile was formed by a number and a letter: the number indicates how many antibiotics the strain was resistant to; the letters were assigned alphabetically to differentiate among the antimicrobial combinations to which the strains were resistant (detailed information is shown in Table S3, Supplementary Material).] Based on this new strategy, 75% of the strains were exoS+ and 70% were exoT+. Of all our strains, 90% presented the exoU gene, which was detected by PCR using specific primers. In general, 67% of our strains had the exoS+/exoU+ genotype, 23% were exoS−/exoU+, and 10% were exoS+/exoU−. The exoS−/exoU− genotype was not found in our study population. Previous studies (Oliver et al., 2015; Peña et al., 2015) have reported that the exoY and exoT genes are present in all strains, which is why we decided not to test for the exoY gene in the present study. However, given our finding that 30% of the isolates are exoT-negative, it now appears necessary to characterize the exoY gene in our population and to determine whether there are also exoY-negative strains. TFP Allele Characterization and GEIs Detection With respect to the characterization by TFP alleles (Kus et al., 2004), we found that all strains produced a single PCR product ranging in size from ∼1.4 to 2.8 kb.
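The in silico digestion step described above can be reproduced in outline with Biopython's Restriction module. The amplicon below is a made-up toy sequence; a real analysis would digest the actual exoS/exoT amplicon sequences (e.g. retrieved from the cited GenBank records), so the printed fragment sizes are illustrative only.

```python
# Sketch: predict HinfI fragment sizes for an amplicon and compare them against the
# bands observed on the gel to distinguish exoS from exoT.
from Bio.Seq import Seq
from Bio.Restriction import HinfI  # recognition site GANTC

amplicon = Seq("ATGC" * 10 + "GAATC" + "ATGC" * 6 + "GACTC" + "ATGC" * 8)  # toy sequence
fragments = HinfI.catalyse(amplicon)
print([len(f) for f in fragments])  # predicted band sizes for this hypothetical amplicon
```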
Based on PCR product size analysis, 30 out of 60 strains gave a product size of 1,400 bp, which was similar to that of the group II PAO1 reference strain. In addition, three yielded a PCR product of 2,650 bp, as seen for the PA14 control strain, and these three strains were determined to belong to group III; 24 strains gave a PCR product (2,800 bp) larger than that of the PA14 strain; and three strains yielded a PCR product of 1,560 bp. To complete the characterization by TFP allele of the 27 strains with different PCR product sizes, individual PCRs were performed using specific primers for the tfpOa, tfpOb, tfpY, and tfpZ accessory genes present between tRNA and pilA (Kus et al., 2004). The results showed that 22 strains amplified the tfpO gene (group I), of which seven strains were subgroup Ia and 15 were subgroup Ib, while one strain (strain 1,242) could not be characterized according to Kus's criteria (Kus et al., 2004). Therefore, we selected this strain (1,242) and another strain (1,207) with a 2,800 bp PCR product. Both products were sequenced, and analysis of strain 1,207 showed the presence of the pilin glycosylation gene tfpO adjacent to pilA (GenBank accession number KX096875), confirming that this strain belongs to group I. However, subgroup Ia or Ib characterization using specific primers for each subgroup (Kus et al., 2004) could not be achieved. Sequence analysis of strain 1,242 showed a novel accessory gene (IS1383), which encodes a transposase, and a new variant of the pilA gene (GenBank accession number KX096876). The transposase gene has 100% similarity to a transposase gene described in the cyclohexylamine-degrading Pseudomonas plecoglossicida NyZ12, while the new variant of the pilA gene presented high identity in its first 345-381 nucleotides with the pilA gene of the P. aeruginosa M1-G, K122-4, and B136-33 strains. Based on this result, we identified a new variant of the PilA protein and, probably, a new TFP allele (Figure S1). We investigated whether more strains from our study carried the transposase gene and, in turn, the new allele, but we could not find more strains with this novel TFP allele in our population. We were not able to sequence the 1,560 bp products of three strains despite three consecutive attempts. However, considering the size of the PCR product, which was very similar to that of the group II strain PAO1, we decided to characterize this pil region according to its restriction patterns with the HphI enzyme, using the PAO1 strain as a reference. The restriction pattern was the same for all three strains, with two bands of ∼750 and 650 bp (data not shown). However, this pattern was very different from that of the PAO1 strain (bands of 675, 425, 147, and 96 bp) and other group II strains, which suggests greater variability in this region and, possibly, the presence of other, as yet undescribed, alleles. With respect to the detection of genomic islands, we found 12 GEIs genotypes (Table S2), and at least one genomic island was found in all of the strains. The most frequently detected genomic island was PAPI-2 (100%), followed by PAPI-1 (55%), PAGI-1 (47%), and pKLC102 (23%). PAGI-2 was detected in only 3% of the strains; the genomic islands PAGI-3 and PAGI-4 were not detected at all in our study population. The majority of the strains had only two islands; just one strain presented up to five GEIs (PAGI-1, PAGI-2, PAPI-1, PAPI-2, and pKLC102); in six strains four GEIs were detected; in 15 strains three GEIs were found; and 13 strains had only one island.
The genetic content of each GEI was variable, as has been previously documented (Liang et al., 2001; Klockgether et al., 2007; Morales-Espinosa et al., 2012). PFGE and MLST Genotype Using SpeI fragment patterns, we found 42 different restriction patterns, of which 29 corresponded to 29 single isolates (unique patterns) and 13 were shared by two or three isolates (Figure 2). Strains 1,195 and 1,203 could not be typed with this method. Although there were 13 strains that shared chromosomal profiles, the majority of the strains were isolated from unrelated patients, in different hospital services, and on different dates. In addition, each strain showed a variable number of GEIs, variability in genetic content, and/or a different antimicrobial resistance profile. Only four strains (1,240, 1,250, 1,251, and 1,252) presented the same or similar PFGE patterns and were isolated from the same patient on the same day. However, these isolates had different GEI numbers and genetic content and different resistance profiles, indicating that this patient had a mixed infection. The sequence type (ST) of our strains was highly variable and correlated well with the variability found using the PFGE method. The most frequently detected ST was ST309, which was present in nine strains. These strains were grouped in only one cluster (Figure 2), and all of them shared the group II TFP allele; six out of the nine strains were isolated from the urine of patients with a urinary tract infection as the primary infection, six had the highest resistance profile, covering around 20 antimicrobials, and four shared the same GEIs genotype. Additionally, six strains from this group presented up to three different β-lactamases (GES20, OXA2, and KPC). A further 21 strains shared STs: ST796 (five strains), ST112 (four strains), ST1503 and ST1816 (three strains each), and ST357, ST897, and ST664 (two strains each), while the remaining 30 strains each presented a unique sequence type (Figure 2). Antimicrobial Susceptibility Profile With respect to susceptibility, only two strains were susceptible to all 20 antimicrobials tested. Ninety-two percent of the strains were susceptible to polymyxin B, between 70 and 85% were sensitive to quinolones, aminoglycosides, cefepime, and ceftazidime, and from 45 to 67% were susceptible to β-lactam antibiotics (Table S3). With respect to resistance, seven strains were resistant to almost all the antimicrobials. The highest rates of intermediate resistance and resistance were observed for carbenicillin (73%) and ceftriaxone (75%). In general, 31.6% (21) of the strains showed resistance to more than 10 antibiotics, and 21.6% (13) of the strains were resistant to more than five antibiotics. We found 43 resistance profiles (phenotypes), based on the combinations of antimicrobials to which the strains were resistant (Table S3). Genetic and Phenotypic Characteristics of P. aeruginosa Strains Associated with Case Fatality The total case fatality associated with P. aeruginosa bacteremia was 13.3% (8/60). All the children who died were above 1 year of age, and the majority of these patients were diagnosed with a hemato-oncological disease (Table 1). All these children developed septic shock and multi-organ failure. Characterization of the strains showed genetic and phenotypic variability, with four strains (1,211, 1,212, 1,220, and 1,239) sharing chromosomal patterns (Table 1 and Figure 2). DISCUSSION European epidemiological surveillance programs show that P.
aeruginosa is one of the most frequently isolated Gram-negative microorganisms from patients admitted to the ICU (Pujol and Limón, 2013). The most important risk factors leading to the development of nosocomial P. aeruginosa infections are a long period of hospitalization, a serious pre-existing condition, and exposure to invasive procedures (Fergie et al., 1994; Yetkin et al., 2006; Yang et al., 2011). P. aeruginosa-associated infections have a high mortality rate due to the presence of virulence factors in the bacterium, innate and acquired multidrug resistance, and immune impairment of the host (Fergie et al., 1994; Lyczak et al., 2000; Corona-Nakamura et al., 2001; Thuong et al., 2003; Poole, 2011). Different studies have shown that P. aeruginosa is generally acquired from the hospital environment, through person-to-person contact, by indirect transmission via contaminated hands, through contaminated respiratory care equipment, catheters, and irrigating solutions, and from the use of diluted antiseptics and cleaning solutions (Corona-Nakamura et al., 2001; Thuong et al., 2003; Yetkin et al., 2006). Generally, P. aeruginosa outbreaks in hospitals are associated with clonally related strains and with cross-transmission in immunocompromised patients with underlying diseases, such as those with malignancies, burns, and prematurity (Agodi et al., 2007; Zhang et al., 2012; Cies et al., 2015). Although P. aeruginosa is considered an opportunistic pathogen, it has several virulence factors. These are encoded on plasmids or chromosomal genes, such as lasB (encoding elastase), toxA (exotoxin A), pilA (type IV pilin fimbrial precursor), plcH (hemolytic phospholipase C precursor), phzA1 (phenazine biosynthesis protein), toxR (positive transcriptional regulator of toxA transcription), and lecA (lectin; Wick et al., 1990; Walker et al., 1995; Rumbaugh et al., 1999; Woods, 2004; Shen et al., 2006; Wolska and Szweda, 2009; Morita et al., 2015), and four type III effectors: ExoU (a phospholipase A2), ExoY (an adenylate cyclase), ExoS (which ADP-ribosylates numerous proteins, including members of the Ras protein family), and ExoT (a type III cytotoxin that functions as an anti-internalization factor, with an N-terminal RhoGAP domain and a C-terminal ADP-ribosyltransferase domain; Sun and Barbieri, 2003; Jia et al., 2006; Cisz et al., 2008; Sun et al., 2012). These last two effectors are closely related to each other and participate in inhibiting phagocytic cells (neutrophil and macrophage function) and bacterial uptake by epithelial cells (Engel and Balachandran, 2009). The characterization of the strains from our current study showed the presence of all the virulence genes in 100% of the strains, indicating that these genes form part of the stable structure of the bacterial chromosome. Contrary to reports by other authors (Feltman et al., 2001; Engel and Balachandran, 2009; Oliver et al., 2015; Peña et al., 2015), almost all our strains presented the exoU gene. Determination of the TTSS genotype showed that a high percentage had the exoS+/exoU+ genotype. However, not all strains carried the exoT gene. The presence of both the exoS and exoU genes has been associated with acute infection in humans, such as bacteremia, and correlates with a worse outcome in clinical infections, a higher bacterial burden, and a greater risk of death in mechanically ventilated patients (Feltman et al., 2001; Engel and Balachandran, 2009; Peña et al., 2015).
While the results of the present study confirm this important relationship between the exoS+/exoU+ genotype and bacteremia, the infections were resolved with antimicrobial treatment, probably due to the low resistance of most of our strains to fluoroquinolones, aminoglycosides, and ceftazidime, a third-generation cephalosporin indicated for the treatment of patients who develop fever associated with neutropenia. On the other hand, the number of deaths registered in our study was low. Analysis of the results showed that there was no association between the exoS+/exoU+ genotype and risk of death. The characterization of the TFP alleles in our strains showed greater diversity in TFP alleles than previously documented (Kus et al., 2004). A novel TFP allele was detected, with a transposase accessory gene located between tRNA and pilA that has been documented in other Pseudomonas species. A novel group of the pilA gene, pilA VI, with low sequence identity at its 3′ end to the other pilA groups of different P. aeruginosa strains, was also detected. Analysis of the PilA VI amino acid sequence showed homology of between 31.4 and 42.6% with respect to the other PilA groups (Figure S1). Although we did not have experimental evidence of PilA expression, the amino acid sequence is highly homologous in its first 127 aa to other PilA sequences available in the databases, as shown in Figure S1. Using the Swiss-Model (homology modeling) software, we obtained a virtual protein structure 100% homologous to the P. aeruginosa fimbrial protein in its first 125 amino acids (image not shown), although from 126 to 142 aa no homology was found with respect to the C-terminal region of the same PilA protein. Interestingly, we observed the absence of the two cysteine residues within its disulfide-bonded loop (DSL) region, which are involved in the disulfide bond formation that contributes to pilin assembly into fibers and to its adhesive capacity (Harvey et al., 2009). The detection of this novel allele supports the notion of horizontal transfer of genes and recombination within homologous regions between bacteria, and the incorporation of novel DNA into one of the hypervariable regions of the P. aeruginosa chromosome. The singularity of the accessory gene (transposase) and the pilA VI gene confirmed Kus's observation that each pilin type is stringently associated with a specific accessory gene. Additionally, the presence of the transposase gene immediately adjacent to the tRNAThr gene confirms that a mechanism of bacteriophage-mediated transduction was involved in the generation of the new TFP allele. This is not surprising, since it is known that tRNA genes are hotspots for bacteriophage integration; it is in this region that we detected greater genetic variability, as seen in our inability to characterize three of our strains. The characterization of our strains isolated from the blood of children with bacteremia showed the predominance of two TFP alleles (groups I and II). In her study, Kus reported a similar percentage of group I pilins within environmental strains and pediatric CF isolates, while in other human isolates there appears to be an approximately equal distribution of strains within pilin groups I, II, and III (Kus et al., 2004). The characterization of different populations of P. aeruginosa isolated from different sources is required in order to determine whether there is a correlation between the pilA allele and niche specificity.
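To make the percent-identity figures above concrete, the sketch below computes a pairwise identity between two protein sequences with Biopython's PairwiseAligner, counting matched positions over the aligned blocks. The two short sequences are made-up stand-ins, not the actual PilA sequences, and note that percent identity can be defined in several ways (here, matches divided by the longer sequence length).

```python
# Sketch: pairwise percent identity between two protein sequences (hypothetical
# stand-ins for PilA variants, not the study's sequences).
from Bio import Align

def percent_identity(a: str, b: str) -> float:
    aligner = Align.PairwiseAligner()
    aligner.mode = "global"
    aln = aligner.align(a, b)[0]
    # Count identical residues over the aligned (gap-free) blocks.
    matches = sum(
        sum(a[i] == b[j] for i, j in zip(range(t0, t1), range(q0, q1)))
        for (t0, t1), (q0, q1) in zip(aln.aligned[0], aln.aligned[1])
    )
    return 100.0 * matches / max(len(a), len(b))

print(round(percent_identity("MKAQKGFTLIELMIVVAII", "MKTQKGFTLIELLIVVAII"), 1))
```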
In addition to virulence genes, the bacterium has acquired foreign DNA in combinations of specific blocks of genes that contribute to virulence and/or adaptation to specific niches. These strain-specific segments of the genome are found in limited chromosomal locations, referred to as genomic islands (GEIs), which are acquired by HGT (Ou et al., 2006; Boyd et al., 2008; Juhas et al., 2009). Depending on their functions, they encode pathogenicity, symbiosis, fitness, metabolic, or resistance traits (Hacker and Kaper, 2000; Dobrindt et al., 2004; Juhas et al., 2009). A large number of GEIs in the P. aeruginosa chromosome have been described, but these GEIs are found in varying numbers in some strains and not in others (Schmidt et al., 1996; Liang et al., 2001; Larbig et al., 2002). In the present study, all the strains isolated from children diagnosed with bacteremia possessed the PAPI-2 island, and more than half of them had PAPI-1, with both islands presenting a mosaic structure. Most of the PAPI-2 genes are related to mobility functions, including integrase genes, transposase genes, one pseudogene, and portions of insertion sequences, in addition to seven ORFs that correspond to hypothetical proteins of unknown function (He et al., 2004). Interestingly, at the right end of PAPI-2 there are two genes that correspond to the exoU gene and its chaperone spcU (He et al., 2004). As mentioned previously, exoU encodes a type III effector (ExoU) that plays an important role in pathogenesis. ExoU is a potent cytotoxin with phospholipase A2 activity, which has been associated with the development of septic shock in an animal model (Kurahashi et al., 1999). The presence of exoU in almost all our strains isolated from the blood samples of patients with bacteremia corroborates the data previously reported by Kurahashi. The presence of exoU on PAPI-2 defines this island as a pathogenicity island, and it is very likely that ExoU expression in P. aeruginosa strains facilitates spread through tissues, favoring the arrival of the bacteria in the bloodstream (He et al., 2004; Kulasekara et al., 2006). On the other hand, PAPI-1 genes are involved in adhesion and/or motility, although the majority of its genes encode hypothetical proteins, making this island unique (He et al., 2004; Qiu et al., 2006; Carter et al., 2010; Harrison et al., 2010). This island contains two pairs of two-component regulatory systems, which, through mutational analysis, have been shown to affect plant and mammalian pathogenesis (He et al., 2004). In addition to all the genes involved in type IV fimbrial assembly and function in the pil chromosomal region, the PAPI-1 island has a set of genes (pilL, pilN, pilO, pilQ, pilR, pilS, pilT, pilV, and pilM) involved in type IVb pilus biogenesis that contribute to adherence to synthetic surfaces, such as catheters (Giltner et al., 2011), which provide an entrance to the circulatory system. The presence of both the PAPI-1 and PAPI-2 islands in more than half of our strains shows that these islands contribute to P. aeruginosa virulence, promoting colonization of catheter surfaces and injured skin with the induction of proinflammatory mediators, and the passage of the bacteria into the blood system, benefitting the survival and fitness of the bacterium. The antimicrobial resistance profiles showed that more than half of our strains isolated from children had a moderate resistance profile, which may provide some explanation for the low mortality rate reported in our study.
The number of MDR and XDR strains detected in the present study (Pediatric Hospital) is still low; nevertheless, continuous epidemiological surveillance is necessary to monitor the presence of MDR and XDR strains, considering the continuous admission of patients to different hospital services and horizontal gene transfer from the hospital microbiota to the patients' native microbiota. Analysis of the chromosomal profiles and MLST of the strains showed great genetic variability among our population, indicating that there is no clonal relationship. However, it is striking that, in a cluster of nine strains, six were isolated from urine and all nine share ST309 and the group II TFP allele, and have the XDR (six strains) or MDR (two strains) phenotype. There are reports of P. aeruginosa high-risk clones circulating in hospitals worldwide, which present specific genetic characteristics (ST111, ST235, and ST175) linked to the MDR or XDR phenotype (Cabot et al., 2012; Mulet et al., 2013; Witney et al., 2014; Oliver et al., 2015; Peña et al., 2015). The increasing prevalence of these clones complicates the clinical landscape, limiting therapeutic options and having a significant impact on morbidity and mortality. The presence in our population of ST309 strains linked to the MDR or XDR phenotype makes them a potential high-risk clone, which had not previously been documented as such. However, it is important to highlight that, in our study, this clone was not associated with any of the mortality cases. Studies in other populations and hospital settings are recommended in order to determine the presence of ST309 as part of a potential high-risk clone, its distribution throughout hospitals in Mexico, and its importance to the severity of the clinical outcome. Based on the overall analysis of the results obtained in this study, we found that the strains of P. aeruginosa causing bacteremia in each child harbored exoU and exoS. These results support the observations of Berthelot (Berthelot et al., 2003), who genetically and phenotypically characterized 92 P. aeruginosa strains isolated from blood, identifying four groups of strains (TTSS types) according to the level of type III protein secretion and the kinetics of cytotoxicity. Additionally, they detected the exoU and exoS genes by real-time PCR and found a strong correlation between the exoU+/exoS+ genotype and the TTSS phenotype. They concluded that most of the bacteremic strains (80%) were strongly cytotoxic for macrophages and that the ExoU-secreting isolates killed the phagocytes more rapidly. Based on Berthelot's study, we can deduce that our strains are cytotoxic. It is likely that exoU was acquired through horizontal transfer of PAPI-2 from one strain to another. It also appears likely that the patients were carrying the majority of the strains prior to hospital admission, and that the immunosuppression caused by the underlying disease favored the multiplication of the microorganisms and their adherence to catheter surfaces. The presence of exoU on the PAPI-2 island gives the bacteria the ability to disseminate into the circulation and produce bacteremia and, in some cases, septic shock (Engel and Balachandran, 2009). We identified a reduced number of exogenous β-lactamases among the strains, with the KPC β-lactamase being the most frequent. However, the presence of a potential high-risk clone, ST309 with an MDR or XDR phenotype, circulating throughout our hospital could create a serious health problem.
Multiresistant bacteria serve as hosts for the multiple genetic elements (genes, integrons, transposons, and plasmids) that confer their antibiotic resistance phenotypes. This important characteristic allows the bacteria to become a "successful" bacterial strain, which is an extremely effective vehicle for the dissemination of any genetic element(s), for at least two reasons: (a) all of the hosted resistance elements are transmitted vertically (i.e., from mother to daughter cells) by virtue of the strain's spread and its increasing prevalence, and (b) a successful strain has multiple opportunities to act as a donor and to transfer its resistance elements horizontally to other strains, species, or genera (Maatallah et al., 2011; Woodford et al., 2011). Thus, the identification of a successful multiresistant strain or clone should receive prompt attention in order to prevent HGT of antimicrobial resistance into bacterial populations and its dissemination to different hospitals and regions. Additionally, a high-risk clone should have several important characteristics: (a) it should be pathogenic (have virulence factors); (b) it should have a resistance profile covering at least three groups of antibiotics (extensive drug resistance); and (c) it should be present in different places (Woodford et al., 2011). ST309 strains have been documented in France, Australia, Malaysia, and even Brazil, isolated from water and from clinical samples such as bronchial lavage, blood, and the urinary tract (P. aeruginosa PubMLST website). Although the relative contributions of endogenous and exogenous sources to P. aeruginosa acquisition are not well established, at this moment we can assume that, in our study hospital, P. aeruginosa infections are not the result of epidemic outbreaks, since the strains associated with infection were highly variable and were not acquired in the hospital setting. CONCLUSIONS To conclude, the genetic and phenotypic characterization of 60 isolates of P. aeruginosa associated with blood infections in children admitted to a highly specialized hospital in Mexico showed that the infections were caused by strains with great diversity in their accessory genome. In the majority of the cases, there was no cross-infection between patients associated with a single clone. The P. aeruginosa strains isolated from blood and involved in bacteremia carried TFP alleles of groups I and II and were cytotoxic (exoU+ and exoS+). The results support the idea that the presence of PAPI-1 and PAPI-2 in the strains contributed to greater virulence, which is associated with better adherence and dissemination into the bloodstream, leading to an increased risk of septicemia. We identified the presence of ST309 strains isolated from the urinary tract, which possess virulence genes, an
8,871.8
2017-03-01T00:00:00.000
[ "Biology", "Medicine" ]
COVID-19 detection using federated machine learning The current COVID-19 pandemic threatens human life, health, and productivity. AI plays an essential role in COVID-19 case classification, as we can apply machine learning models to COVID-19 case data to predict infectious cases and recovery rates using chest x-rays. Accessing patients' private data violates patient privacy, and a traditional machine learning model requires accessing or transferring the whole dataset to train the model. In recent years, there has been increasing interest in federated machine learning, as it provides an effective solution to the problems of data privacy, centralized computation, and high computation power. In this paper, we studied the efficacy of federated learning versus traditional learning by developing two machine learning models (a federated learning model and a traditional machine learning model) using Keras and TensorFlow Federated, applied to a descriptive dataset and chest x-ray (CXR) images from COVID-19 patients. During the model training stage, we tried to identify which factors affect model prediction accuracy and loss, such as the activation function, model optimizer, learning rate, number of rounds, and data size. We recorded and plotted the model loss and prediction accuracy for each training round to identify which factors affect model performance, and we found that the softmax activation function and the SGD optimizer give better prediction accuracy and loss, that changing the number of rounds and the learning rate has a slight effect on model prediction accuracy and loss, and that increasing the data size did not have any effect on model prediction accuracy and loss. Finally, we compared the proposed models' loss, accuracy, and speed; the results demonstrate that the federated machine learning model has better prediction accuracy and loss but a higher training time than the traditional machine learning model. COVID-19 The current COVID-19 pandemic, caused by SARS-CoV-2, threatens human life, health, and productivity [1] and is rapidly spreading worldwide [2]. The virus, like other members of the coronavirus family, is sensitive to ultraviolet rays and heat [3]. AI and deep learning play an essential role in COVID-19 case identification and classification using computer-aided applications, which achieve excellent results in identifying COVID-19 cases [1] based on known symptoms, including fever, chills, and dry cough, and on positive x-rays. AI and deep learning models can be used to forecast the spread of the virus based on historical data, which can help control its spread [3]. There is therefore a need to build machine learning models to identify COVID-19-infected patients or to predict the spread of the virus in the future, but this is not easy to achieve because patient data are confidential, and without enough data it is too difficult to build a robust model [1]. A new approach is needed that makes it possible to build a model without accessing or transferring patients' private raw data, and one which gives high prediction accuracy. Federated learning The concept of federated learning was proposed by Google in 2016 as a new machine learning paradigm. The objective of federated learning is to build a machine learning model based on distributed datasets without sharing raw data, while preserving data privacy [4,5]. In federated machine learning, each client (an organization, server, mobile device, or IoT device) has its own dataset and local machine learning model.
In a federated environment, there is a centralized global server holding a centralized machine learning model (the global model), which aggregates the distributed clients' model parameters (model gradients). Each client trains its local machine learning model on its own dataset and shares the model parameters or weights with the global model. The global model iterates over rounds to collect the distributed clients' model updates without sharing raw data [4,5], as shown in Fig 1. Why federated machine learning should be used: • The decentralized model removes the need to transfer all the data to one server to train the model, as training occurs locally at each node, unlike traditional machine learning, which requires moving all the data to a centralized server to build and train the model. • There is no data privacy violation, as it applies methodologies including differential privacy and homomorphic, secure multiparty computation, unlike traditional machine learning. • A third party can be part of the training process as long as there is no data privacy violation and the data are secured; in traditional machine learning, a third party may not be an option, for example in the case of military organizations. • Less computation power is needed, as model training is performed on each client and the centralized model's primary role is to collect the distributed models' gradient updates, unlike traditional machine learning, where one centralized server contains all the data and requires high computational power for model training. • Decentralized algorithms may provide better or the same performance as centralized algorithms [5]. Federated machine learning is highly recommended over traditional machine learning in environments where data privacy is essential. Federated learning can be applied in many disciplines, such as smart healthcare, sales, multi-party databases, and smart retail [6]. Motivation and contributions Federated machine learning enables us to overcome the obstacles faced by the traditional machine learning model: • Traditional machine learning requires moving all data sources to a centralized server to train and build the model, but this may violate the rules of military organizations, especially when a third party is used to create, train, and maintain the model. • To train the model, the third party must prepare, clean, and restructure the data to be suitable for model training; however, this may violate data privacy when the data are handled to create the model. • Traditional machine learning also takes a long time to build a model with acceptable accuracy, which may cause delays for organizations, especially recently opened ones. • Traditional machine learning also requires a massive amount of historical data to train the model to acceptable accuracy (the cold-start problem) [7]. • There is a need for a secure distributed machine learning methodology that trains on clients' data on their own servers without violating data privacy, saves computational power, and overcomes the cold-start problem, enabling clients to obtain immediate results. Federated learning has the potential to solve these issues, as it enables siloed data servers to train their models locally and to share their models' gradients without violating patient privacy [1].
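As a minimal illustration of the aggregation step described above (the federated-averaging idea in general, not the paper's implementation), the sketch below forms a data-size-weighted average of per-client weight tensors:

```python
# Sketch: server-side federated averaging. Each client trains locally; only its model
# weights and dataset size travel to the server, which forms a weighted average.
import numpy as np

def federated_average(client_weights, client_sizes):
    """Weighted average of per-client weight lists; client_weights[i][k] is
    client i's k-th weight tensor, weighted by that client's dataset size."""
    total = float(sum(client_sizes))
    n_tensors = len(client_weights[0])
    return [
        sum(w[k] * (n / total) for w, n in zip(client_weights, client_sizes))
        for k in range(n_tensors)
    ]

# Two toy clients, one "layer" each; the client with more data pulls the average toward it.
clients = [[np.array([0.0, 0.0])], [np.array([1.0, 1.0])]]
print(federated_average(clients, client_sizes=[100, 300]))  # -> [array([0.75, 0.75])]
```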
The principal objective of this paper is to compare a federated machine learning model with a non-federated machine learning model by applying them to the same datasets and comparing the models' prediction loss, prediction accuracy, and training time. Related work Boyi Liu et al. [1] proposed an experiment to compare the performance of federated machine learning across four popular models (MobileNet, ResNet18, MobileNet-v2, and COVID-Net) by applying them to a patient chest x-ray (CXR) dataset. These models are designed to recognize COVID-19 pneumonia, and the authors used the same parameters for all models. After 100 rounds, the authors found that the ResNet18 model was the fastest and gave the highest accuracy rates (96.15%, 91.26%), and that MobileNet-v2 had the same loss value as COVID-Net and MobileNet. Non-federated learning was conducted on the same data, and it was found that the loss convergence rate decreased slightly when federated learning was used. Junjie Pang et al. [8] proposed a federated learning framework based on digital city twin concepts to study the effect of different city prevention plans on preventing a COVID-19 outbreak. By building a federated model to predict these effects, they traced infection numbers from multiple cities over time in their digital city twin systems. They were also able to trace the effectiveness of each prevention plan and to build local models on each digital city twin system, which sent the model parameters or updates to federated sites to maintain data privacy. They compared the prediction accuracy and loss of the federated model and the traditional one. Weishan Zhang et al. [4] proposed a novel dynamic fusion-based federated learning approach to enhance federated learning model performance metrics. They found that all the recent studies on federated learning used the default federated learning settings, which may introduce huge communication overhead and underperform when there is data heterogeneity between clients. They proposed an approach that determines the interaction between clients and servers with a dynamic fusion-based function, deciding which clients participate in each round by uploading their local model updates. They defined a maximum waiting time, set by the platform owner, for each client to participate in a server round. They applied this architecture to several models: GhostNet, ResNet50, and ResNet101 were used on COVID-19 datasets, and they found that the proposed approach achieves better accuracy than the default settings and can reduce the communication overhead and training time for ResNet50 and ResNet101; however, these results did not apply to GhostNet. Parnian Afshar et al. [9] proposed a modeling framework based on capsule networks (COVID-CAPS) to identify positive COVID-19 cases from x-ray images, overcoming the drawbacks of CNN-based models in handling small datasets. They tuned the model parameters to perform well and compared COVID-CAPS with the traditional network, finding that the COVID-CAPS model performed better than the traditional model in terms of accuracy, sensitivity, and specificity. Chaoyang He et al.
[10] proposed an experimental study on automating federated learning (AutoFL) using the Neural Architecture Search (NAS) algorithm and proposed a Federated NAS (FedNAS) algorithm to find the optimal design settings of local machine learning models, improving the performance and effectiveness of the local models that share their model updates. They found that the default settings of local machine learning models did not fit the nature of the federated environment, as the clients contain non-identically and non-independently distributed data (i.e., non-IID clients). The experiment was conducted on the CIFAR10 dataset and found that FedNAS can search for a better architecture, reaching 81.24% accuracy in only a few hours, compared to 77.78% for FedAvg. Amir Ahmad et al. [11] presented a detailed literature review of state-of-the-art taxonomies used in COVID-19 case prediction, categorizing them into four categories. The authors built a comprehensive review to provide suggestions to machine learning practitioners on improving the accuracy of their machine learning models and on the challenges they may face. Nikos Tsiknakis et al. [12] introduced a study on COVID-19 classification using transfer learning, which achieves better AUC performance; their study proposed a deep learning-based COVID-19 classification system based on x-rays with better performance compared to state-of-the-art methodologies. Mwaffaq Otoom et al. [13] proposed a real-time COVID-19 case detection and monitoring system. Their study used an IoT device for data collection and monitoring during quarantine; they evaluated seven machine learning algorithms, conducted an experiment with each, and compared the results, finding that five of the machine learning algorithms had greater than 90% prediction accuracy. Thanh Thi Nguyen et al. [14] proposed a survey of AI methods used in various applications for fighting COVID-19, covering areas including data analytics, data mining, and natural language processing (NLP). The authors identified previous problems and their solutions based on COVID-19 AI methods applied to chest x-ray image datasets. Fatima M Salman et al. [15] proposed a machine learning model to identify COVID-19 cases from patients' chest x-ray images by implementing a convolutional neural network (CNN) machine learning algorithm; they used a chest x-ray dataset containing 130 images of COVID-19 cases and 130 images of normal cases, and their prediction machine learning model gives 100% prediction accuracy. N Narayan Das et al. [16] proposed a machine learning model to identify COVID-19 cases from patients' chest x-ray images by implementing the Inception (Xception) machine learning model. By using patients' chest x-ray images, they overcame the time and cost issues of RT-PCR kits in identifying COVID-19 cases; their models outperform competing models. AKMB Haque et al. [17] proposed a study on how to detect COVID-19, pneumonia, and normal chest cases from patients' chest x-ray images by implementing different pre-trained convolutional neural network models (VGG16, VGG19, Xception, InceptionV3, and ResNet50). They found that VGG16 and VGG19 showed high performance and prediction accuracy; they also investigated the effects of weather factors, including temperature, humidity, sun hours, and wind speed, and found that temperature had a great effect on deaths caused by COVID-19. Himadri Mukherjee et al.
[18] proposed a machine learning model to identify COVID-19 cases from patients' chest CT scans or CXR images by implementing a Convolutional Neural Network (CNN)-tailored Deep Neural Network (DNN) machine learning algorithm; they found that the proposed model achieves higher overall accuracy than other models such as InceptionV3, MobileNet, and ResNet. Ike FIBRIANI et al. [19] proposed a machine learning model to identify COVID-19 cases from patients' chest x-ray images by implementing a multi-layer Convolutional Neural Network (CNN) machine learning algorithm; they created a multi-CNN classifier architecture to minimize errors and found that, with majority voting, the proposed model achieves high accuracy. Harsh Panwar et al. [20] proposed a machine learning model to identify COVID-19 cases from patients' chest x-ray images by implementing a deep learning neural network-based method, nCOVnet, and found that the machine learning model gives high prediction accuracy. Shashank Vaid et al. [21] proposed a machine learning model to uncover the hidden patterns that exist among COVID-19 cases in order to predict potential infection. They used their model to identify the key parameters used to detect the hidden patterns between cases (dimensionality reduction) and then applied their model using an unbiased hierarchical Bayesian estimator. Rodrigo M Carrillo-Larco et al. [22] proposed a machine learning model to group countries with shared COVID-19 infection profiles. They used unsupervised machine learning algorithms (k-means), collected COVID-19 case data from 155 countries, and implemented the k-means clustering algorithm and principal component analysis (PCA) to group the countries. Fadoua Khmaissia et al. [23] proposed an unsupervised machine learning model to find similarities between zip codes in New York City in order to study COVID-19 inside the city. They used feature selection and clustering techniques to find similarities based on mobility, socioeconomic, and demographic features together with the COVID-19 trends. Akib Mohi Ud Din Khanday et al. [24] proposed an unsupervised machine learning model to classify textual clinical reports into four classes in order to study the behavior of COVID-19. They used term frequency/inverse document frequency (TF/IDF), bag of words (BOW), and report length to generate features, fed these features to traditional machine learning algorithms, and found that this gives better testing accuracy. R Manavalan et al. [25] proposed a study to explore the association between COVID-19 transmission rates and meteorological parameters by implementing a gradient boosting model (GBM) on Indian data. The GBM was optimized by tuning its parameters. Sina F Ardabili et al. [26] proposed a study comparing machine learning and soft computing models for predicting the COVID-19 outbreak and built a comparative analysis, which found that the multi-layered perceptron (MLP) and the adaptive network-based fuzzy inference system (ANFIS) show promise. Sara Hosseinzadeh Kassan et al. [27] proposed a study comparing the most popular deep learning-based feature extraction frameworks, such as MobileNet, DenseNet, Xception, ResNet, InceptionV3, InceptionResNetV2, VGGNet, and NASNet, applied to COVID-19 patients' chest x-rays to help in automatic COVID-19 detection. They found that the DenseNet121 feature extractor with a bagging tree classifier achieved the best performance. Iwendi, Celestine, et al.
[28] proposed a system for classifying and analyzing predictions obtained from COVID-19 symptoms using the Adaptive Neuro-Fuzzy Inference System (ANFIS), which helps detect coronavirus disease early. The authors found that the support vector machine (SVM) algorithm gave the best prediction accuracy among all classifiers. Javed, Abdul Rehman, et al. [29] presented a generalized collaborative framework named the collaborative shared healthcare plan (CSHCP) for assessing people's cognitive health and fitness; the proposed framework showed promising outcomes compared to existing studies. Bhattacharya, Sweta, et al. [30] summarized state-of-the-art research on deep learning applications in COVID-19 medical image processing and provided an overview of deep learning applications in healthcare over the last decade. Finally, they discussed the challenges of deep learning applications in COVID-19 medical image processing. Manoj, Mk, et al. [31] proposed an incentive-based approach to channel help to people in need during these tough times, together with a blockchain-based solution to prevent information tampering. Reddy, G. Thippa, et al. [32] proposed an experiment using an adaptive genetic algorithm with fuzzy logic (AGAFL) model to predict heart disease, helping practitioners diagnose heart disease early; they applied the proposed model to the UCI heart disease dataset and found that it outperformed current methods. Anwaar Ulhaq et al. [33] introduced a theoretical framework called differential privacy by design (dPbD) that helps in designing scalable and robust federated machine learning systems for COVID-19 data privacy. Privacy by design, introduced in [34], embeds privacy directly into the system design. The authors found that previous studies focused on the trade-off between privacy and utility while ignoring system scalability (the number of attached clients) and robustness (the performance of the system against attacks), so they defined a seven-step theoretical framework to be applied when using federated machine learning. Materials and methods This section describes the tools and methodology applied to both the federated and the traditional models, which predict recovery based on patient features. TensorFlow with the Keras API was used to build the federated and traditional models; the following steps were used to build them: The federated learning model • Data Loading (data were loaded using the pandas package, which returns a DataFrame object). • Drop Unique-Value Columns (all unique, primary-key, and distinct-value columns were dropped before model training). • Replace Null Values (null values were replaced with mode values to simplify data training). • Label Encoding (categorical and text labels were replaced with normalized values). • Data Repetition (data were repeated to simulate the number of clients). • Data Shuffling (data were shuffled to avoid obtaining the same results). • Data Batching (data were grouped into batches to enhance performance). • Data Prefetching (data were cached in memory for better performance). • Create Deep Learning Model (a sequential deep learning model was built using the Keras API). • Create Federated Learning Model (the Keras deep learning model was wrapped via from_keras_model to build a federated learning model). • Create a Federated Averaging Process (local model gradients and updates are collected and sent to the global model).
• Model Initializing and Training (the iterative process was initialized and training started). • Model Evaluation (model performance was evaluated by printing the evaluation metrics). • Return the machine learning model accuracy and loss for each round. The traditional machine learning model • Data Loading (data were loaded using the pandas package, which returns a DataFrame object). • Drop Unique-Value Columns (all unique, primary-key, and distinct-value columns were dropped before model training). • Replace Null Values (null values were replaced with mode values to simplify data training). • Label Encoding (categorical and text labels were replaced with normalized values). • Create Deep Learning Model (a sequential deep learning model was built using the Keras API). • Model Evaluation (model performance was evaluated by printing the evaluation metrics). • Return the machine learning model accuracy and loss for each round. Federated learning model on patient's chest X-ray images As shown in Fig 2, the proposed federated model was built with the following steps (a minimal code sketch of this pipeline is given at the end of this subsection): • Data Loading. The CV2 package was used to read the chest X-ray images from the dataset download directory and load them into memory. The images were resized to 244 × 244 × 3 as color images. • Data Normalizing. Image data were divided by 255 to normalize them between 0 and 1. • Data Reshaping. Each image object, an array of shape (244, 244, 3), was flattened into a vector of length 178,608. • Creating Sample Data Dictionary. After flattening, a dictionary instance was created for each image sample to hold the image data (features) and its label. • Creating Sample and Label Keras Tensor Objects. To build the Keras dataset, Keras tensor objects were built for the features and for the labels. • Data Repetition. Data were repeated to simulate the number of clients. • Data Shuffling. Data were shuffled to avoid obtaining the same results. • Data Batching. Data were grouped into batches to enhance performance. • Data Prefetching. Data were cached in memory for better performance. • Create Keras Deep Learning Model. A sequential deep learning model was built using the Keras API. • Create Federated Learning Model. The Keras deep learning model was wrapped using from_keras_model. • Create a Federated Averaging Process. Local model gradients and updates are collected and sent to the global model. • Model Initializing and Training. The iterative process was initialized and training started. • Model Evaluation. Model performance was evaluated by printing the evaluation metrics. Federated model on patient's descriptive data. As shown in Fig 3, applying the same model to the patient's descriptive dataset requires some modifications. Data normalization is not needed because the features are not all of the same type, and data reshaping is not needed because the data are already flat. All of the features are categorical, so a new step must be added after creating the dataset to transform the categorical features into binary vectors. The model modifications can therefore be summarized as follows. Steps to be removed: • Data Normalization (the data are not all of the same type). • Data Reshaping. Steps to be added: • Feature One-Hot Encoding (convert categorical feature values to binary vectors). We modified the proposed model in this way before training it on the patient's descriptive data.
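To make these steps concrete, here is a minimal, hedged sketch of the image pipeline and the federated wrapper. It assumes the TensorFlow Federated simulation API of roughly the 2020–2021 era (newer releases expose tff.learning.algorithms.build_weighted_fed_avg instead); the file paths, layer sizes, and client construction are illustrative, not the exact configuration used in our experiments.

```python
import cv2
import numpy as np
import tensorflow as tf
import tensorflow_federated as tff

IMG_SIZE = 244
FLAT_LEN = IMG_SIZE * IMG_SIZE * 3  # 178,608 features after flattening

def load_image(path):
    # Read one chest X-ray, resize to 244 x 244 x 3, scale to [0, 1], flatten.
    img = cv2.imread(path)
    img = cv2.resize(img, (IMG_SIZE, IMG_SIZE)).astype(np.float32) / 255.0
    return img.reshape(FLAT_LEN)

def make_client_dataset(features, labels, batch=20):
    # Repetition, shuffling, batching, and prefetching, as in the steps above.
    ds = tf.data.Dataset.from_tensor_slices((features, labels))
    return ds.repeat(5).shuffle(100).batch(batch).prefetch(tf.data.AUTOTUNE)

def model_fn():
    keras_model = tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation="sigmoid", input_shape=(FLAT_LEN,)),
        tf.keras.layers.Dense(2, activation="softmax"),
    ])
    # Wrap the Keras model so TFF can use it.
    return tff.learning.from_keras_model(
        keras_model,
        input_spec=(tf.TensorSpec([None, FLAT_LEN], tf.float32),
                    tf.TensorSpec([None], tf.int32)),
        loss=tf.keras.losses.SparseCategoricalCrossentropy(),
        metrics=[tf.keras.metrics.SparseCategoricalAccuracy()])

# Federated averaging: client updates are aggregated into the global model.
process = tff.learning.build_federated_averaging_process(
    model_fn,
    client_optimizer_fn=lambda: tf.keras.optimizers.SGD(learning_rate=0.02))
state = process.initialize()
# federated_data = [make_client_dataset(X_c, y_c) for each simulated client]
# for _ in range(num_rounds):
#     state, metrics = process.next(state, federated_data)
```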
Traditional model on patient's chest X-ray images. As shown in Fig 4, the proposed traditional model was built with the following steps: • Data Loading. The CV2 package was used to read the chest X-ray images from the dataset download directory and load them into memory. The images were resized to 244 × 244 × 3 as color images. • Data Normalizing. Image data were divided by 255 to normalize them between 0 and 1. • Creating Sample Data Dictionary. A dictionary instance was created for each image sample to hold the image data (features) and its label. • Creating Sample and Label List Objects. List objects were built for the features and labels, with the labels converted into a matrix of binary vectors representing their categorical values. • Create Keras Deep Learning Model. A sequential deep learning model was created using the Keras API. • Model Initializing and Training. The iterative process was initialized and training started. • Model Evaluation. Model performance was evaluated by printing the evaluation metrics. Traditional model on patient's descriptive data. As shown in Fig 5, applying the same model to the patient's descriptive dataset requires the following modifications. Steps to be removed: • Data Normalization. • Data Reshaping. Steps to be added: • Feature One-Hot Encoding (convert categorical feature values to binary vectors; a small sketch of this step appears at the end of this section). Patient's descriptive COVID-19 datasets The patient's descriptive COVID-19 datasets contained COVID-19 case information; after training, the two proposed models were used to predict the patient recovery rate. We found that: • The proposed federated model had higher prediction accuracy than the proposed traditional model, as shown in Fig 6 and Table 1. • The proposed federated model had lower prediction loss than the proposed traditional model, as shown in Fig 6 and Table 1. • The proposed federated model had a higher training time than the proposed traditional model, as shown in Fig 6 and Table 1. Patient's chest X-ray radiography (CXR) image datasets Binary classifier. After training, the federated and traditional models were used to predict the outcome for a patient (COVID-19, pneumonia) based on the chest X-ray image. We found that: • The proposed federated model with the SGD optimizer had higher prediction accuracy than the proposed traditional model, as shown in Fig 7 and Table 2. • The proposed federated model with the SGD optimizer had lower prediction loss than the proposed traditional model, as shown in Fig 7 and Table 2. • The proposed federated model had a higher training time than the proposed traditional model, as shown in Fig 7 and Table 2. Ternary classifier. After training, the federated and traditional models were used to predict the patient status (COVID-19, pneumonia, normal) based on the chest X-ray image. We found that: • The proposed federated model with the SGD optimizer had higher prediction accuracy than the proposed traditional model, as shown in Fig 8 and Table 3. • The proposed federated model with the SGD optimizer had lower prediction loss than the proposed traditional model, as shown in Fig 8 and Table 3. • The proposed federated model with the SGD optimizer had a training time equal to or slightly greater than that of the proposed traditional model, as shown in Fig 8 and Table 3. Hardware specifications Our experiments were conducted on the machine described in Table 4. Discussion Table 5 describes the dataset columns and the actions taken to prepare the data for machine learning model training.
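The added one-hot encoding step for the descriptive data can be sketched with pandas as follows; the file name and column names (case_id, recovered) are hypothetical placeholders, not the dataset's actual schema.

```python
import pandas as pd

df = pd.read_csv("covid_cases.csv")        # hypothetical file name
df = df.drop(columns=["case_id"])          # drop unique/primary-key columns
df = df.fillna(df.mode().iloc[0])          # replace null values with the mode
X = pd.get_dummies(df.drop(columns=["recovered"]))   # categories -> binary vectors
y = df["recovered"].astype("category").cat.codes     # label encoding
```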
Results discussion The model parameters were modified multiple times to achieve maximum accuracy and minimum loss. These modifications included: Activation function. The sigmoid activation function was more accurate than the relu activation function. Model optimizer. Using SGD provided better model accuracy and loss than Adam (a short code sketch of this tuning appears at the end of this section). Conclusion We applied the proposed federated learning model to COVID-19 datasets and found that: • The proposed federated learning model gives better prediction accuracy than the traditional deep learning model. • The proposed federated learning model gives a lower loss than the traditional machine learning model. • The proposed federated learning model takes a higher training time than the traditional machine learning model. The model's parameters were changed many times to achieve maximum accuracy, minimum loss, and minimum training time, and we found that: • Activation function. The softmax activation function was more accurate than the relu and sigmoid activation functions when applied to the chest X-ray (CXR) image dataset and the patient's descriptive data. • Model optimizer. Using SGD provided better model accuracy and loss than Adam when applied to the patient's descriptive data and the patient's chest X-ray (CXR) image dataset. • Learning rate. Changing the learning rate had a slight effect on model accuracy and model loss when applied to the patient's descriptive data and the chest X-ray (CXR) image dataset. • Number of rounds. Increasing the number of rounds had a good effect on reducing the loss but had no impact on the model's accuracy when applied to the patient's descriptive data and the chest X-ray (CXR) image dataset. • Data size. Increasing the data size did not affect the model loss or model accuracy when applied to the patient's descriptive data and the chest X-ray (CXR) image dataset. Swarm intelligence algorithms will be used in the future to optimize the proposed federated model for global optimization and to reduce the communication overhead. The hybrid model should also be tested on chest X-ray radiography (CXR) and chest computed tomography (CT) images.
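A sketch of the tuning loop described above, assuming a plain Keras workflow; the layer width and the feature/class counts are illustrative, not the exact configuration of the proposed models.

```python
import tensorflow as tf

N_FEATURES, N_CLASSES = 20, 2  # illustrative sizes only

def build_model(activation="sigmoid", optimizer="sgd"):
    # Same architecture each time; only the tuned hyperparameters change.
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation=activation,
                              input_shape=(N_FEATURES,)),
        tf.keras.layers.Dense(N_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer=optimizer,
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Grid over the settings compared in the text:
# for act in ("relu", "sigmoid"):
#     for opt in ("sgd", "adam"):
#         build_model(act, opt).fit(X_train, y_train, epochs=10)
```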
6,349
2021-06-08T00:00:00.000
[ "Computer Science", "Medicine" ]
Triple-core-hole states produced in the interaction of solid-state density plasmas with a relativistic femtosecond optical laser Extremely exotic dense matter states can be produced in the interaction of a relativistic femtosecond optical laser with solid-density matter. Here we theoretically investigate triple-core-hole (TCH) states produced by an intense polychromatic x-ray field formed by hot electrons in the interaction of a relativistic femtosecond optical laser with a thin silver foil. X-ray emission spectra of solid-density silver plasmas show unambiguously the production of TCH states at an electron temperature of a few hundred eV and a radiative temperature of 1–3 keV of the polychromatic x-ray field. Practical calculations show that the emissivity originating from the TCH states exceeds that from the single- and double-core-hole states in Ne-like Ag37+ at an electron temperature of ~500 eV and a radiative temperature of ~1500 eV. For the neighbouring ionization stages Ag36+ and Ag38+, the TCH emissivity is roughly equivalent or comparable to that from the single- and double-core-hole states. The present work deepens our insight into the properties of extremely exotic states, which is important in high energy density physics, astrophysics and laser physics. In theory, one has to employ a detailed level accounting method to obtain the correct line intensities and positions in the spectral modeling [35,36]. This can further increase the difficulty of high-Z plasma spectral simulation. In this work, the radiative properties of TCH states are investigated for solid-density silver plasmas produced in the interaction with a relativistic femtosecond optical laser. An atomic kinetic calculation using a detailed level accounting method showed that the emissivity of TCH states exceeds that of SCH and DCH states for Ne-like Ag37+ and is comparable for the nearby Na-like Ag36+ and F-like Ag38+ ionization stages at an electron temperature of 500 eV and a radiative temperature of 1500 eV. The intense x-ray radiation field is produced by fast electrons refluxing in the interaction of a relativistic optical laser with a thin silver foil [16,19]. Multiple-core-hole state emission properties of Ag are investigated systematically at electron temperatures of 30–1000 eV and radiative temperatures of 1–3 keV, and the optimum conditions to produce TCH states are identified. Results In Fig. 1 we show the emissivity contributed by the multiple-core-hole states of different ionization stages in a solid-density silver plasma at an electron temperature of 500 eV and in a Planck radiation field with a temperature of 1500 eV. The charge state distribution and emissivity of the plasma are obtained by solving a rate equation that connects the involved quantum states [37]. A fraction of 1% hot electrons with a temperature of 10 keV is included in the calculation, and the thickness of the silver foil is 0.5 μm. At the given plasma condition, the charge state distribution is shown in Fig. 2 for the dominant ionization stages Ag33+–Ag40+. Ag37+ has the highest population fraction of TCH states among all ionization stages, accounting for 6.3%; the population fractions of SCH and DCH states of Ag37+ are 3.7% and 11.6%, respectively. The TCH states originate from 1s²(L)⁻³ nl n'l' n''l'' configurations (n, n', n'' ≥ 3), for example 1s²2s²2p³3s²3p, 1s²2s²2p³3s²3d, 1s²2s2p⁴3s²3p, 1s²2s2p⁴3s²3d, 1s²2s⁰2p⁵3s²3p, 1s²2s⁰2p⁵3s²3d, etc.
The ionization stages with the next highest TCH population fractions are Ag38+ and Ag36+, which account for 3.3% and 1.6%, respectively. The corresponding population fractions of SCH and DCH states are 5.6% and 10.1% for Ag38+, and 7.4% and 8.3% for Ag36+, respectively. The emission lines shown in Fig. 1 originate dominantly from 3d–2p and 3p–2s bound-bound transitions. Contributions from higher transition arrays such as 4d–2p are much smaller. In this photon energy region, the continuous emission from free-bound and free-free processes is weaker by at least two orders of magnitude. The emission lines from a particular transition array are clearly separated into two groups due to the relativistic orbital splitting [38]. The emissivity contributed by the TCH states is surprisingly larger than the sum of the SCH and DCH contributions for Ne-like Ag37+. For this ion, SCH states emit only two narrow lines, whereas DCH and TCH states provide a wide quasi-continuum emission spanning a photon energy range of more than 100 eV. The intensity of TCH emission is comparable to that of SCH and DCH emission for Na-like Ag36+ and F-like Ag38+. We can also observe the production of quadruple-core-hole (QCH) states in the interaction of the laser with the silver material, although the population of the QCH states is much smaller than that of the TCH states. The only possible origin for the production of TCH and QCH states is the intense polychromatic x-ray field formed by hot electrons produced in the interaction with the ultra-intense laser [16–19]. The first ionization potential of Ag37+ is 5558 eV [39], which is much higher than the thermal energy at the given electron temperature (500 eV). As a result, these free electrons cannot effectively ionize the material to such a high ionization stage as Ag37+. Hot electrons have enough energy and can indeed ionize the atoms and ions in the plasma, yet their ionization efficiency for the inner-shell 2p and 2s electrons is much smaller than that of the x-ray radiation field [18]. It is found that TCH states can be effectively produced in a Planck radiation field with a temperature of 1500 eV at the given electron temperature of 500 eV. Such a radiation field has its largest intensity (and photon population) at a photon energy of 4230 eV, which can effectively photo-excite 2p electrons to the 3d orbital and 2s electrons to the 3p orbital, with excitation energies of about 3500 eV and 3700 eV for Ag37+. The efficiency of producing TCH states in radiation fields of different temperatures can be further seen from Fig. 3, which shows the emissivity of the plasmas produced in the interaction of the laser with silver foils. The emissivity contributed by the SCH, DCH, TCH, and QCH states is given separately to evaluate their respective contributions. Here we fix the electron temperature of the Ag plasma at 500 eV for all considered cases. At a radiative temperature of 1000 eV, only a small fraction (0.9%) of DCH states can be produced and most of the production belongs to SCH states. The fraction of DCH and TCH states increases very quickly with increasing radiation temperature. At a radiative temperature of 1500 eV, TCH and DCH emissions exceed those of SCH states, and in addition to TCH, QCH states begin to appear in the plasma. As the radiative temperature increases further to 2000 eV, DCH states contribute the largest emissivity and the TCH emission is stronger than the SCH emission.
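As a quick cross-check of the quoted 4230 eV peak (an illustrative sketch, not the authors' code): the Planck spectral intensity per unit photon energy, B(E) ∝ E³/(exp(E/kT) − 1), peaks at E = x·kT, where x solves 3(1 − exp(−x)) = x.

```python
import math
from scipy.optimize import brentq

# Planck intensity per unit photon energy, B(E) ~ E**3 / (exp(E/kT) - 1),
# peaks where 3*(1 - exp(-x)) = x, with x = E/kT (Wien's law in energy form).
x_peak = brentq(lambda x: 3.0 * (1.0 - math.exp(-x)) - x, 1.0, 5.0)
print(x_peak * 1500.0)  # ~4232 eV for kT = 1500 eV, matching the quoted ~4230 eV
```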
Therefore we can conclude that the favorable condition to produce TCH states in Ag plasmas at an electron temperature of 500 eV is a radiation field with a temperature of ~1500 eV. The production of TCH states is also closely related to the electron temperature of the plasma. In Fig. 4, we show the emissivity of plasmas at electron temperatures from 30 eV to 1000 eV. The radiation field temperature is assumed to be 2000 eV (left), 2500 eV (middle) and 3000 eV (right), respectively. For a particular radiation field temperature, we can see that the production of TCH states is sensitive to the electron temperature. First let us look at the case of a radiative temperature of 2000 eV. At the lowest electron temperature (30 eV), the emissivity of SCH states dominates and the TCH emissivity is much weaker. With the electron temperature increasing to 300 eV, TCH emission becomes the most important contribution and even QCH emission appears. As the electron temperature increases further, the contribution of TCH emission decreases. At this radiative temperature, the favorable electron temperature to produce TCH states is around 300 eV. Similar characteristics of the TCH emission can be found for the cases of radiative temperatures of 2500 eV and 3000 eV. The optimized electron temperature to produce TCH states decreases as the radiative temperature increases, being around 100 eV and 30 eV at radiative temperatures of 2500 eV and 3000 eV, respectively. By utilizing these features of the emission spectra, we can deduce the plasma conditions produced in interactions with ultra-intense lasers. Discussion In what follows we explain the underlying physical processes that determine the charge state distribution and emissivity of Ag plasmas. From inspection of Fig. 4, we find that the emissivity contributed by TCH states at an electron temperature of 1000 eV is always small regardless of the radiative temperature. This is because the high electron temperature keeps the Ag plasma in a highly ionized state. The electrons in the plasma effectively ionize the matter into the ionization stages around Ag37+, and the introduction of the radiation field further ionizes the matter into charge states above Ag37+. As a result, the fraction of TCH states is negligibly small. With decreasing electron temperature, the plasma electrons can no longer ionize the matter into a highly charged state and therefore the radiation field plays a more important role in producing the TCH states. A radiation field with the temperatures considered in this work can effectively ionize or excite the 2s and 2p electrons, yet the photons do not have enough energy to ionize the 1s electrons. On the other hand, the photoionization rate of outer-shell electrons such as 3s and 3p is much smaller. This is a favorable situation for producing multiple-core-hole states in the plasma, in particular for Ag37+. This ion has the closed-shell structure 1s²2s²2p⁶, and hence the singly excited states 1s²2s²2p⁵nl are bound states with a relatively long natural lifetime. However, the multiple-core-hole states of charge states below Ag37+ have a shorter natural lifetime because they decay via Auger as well as radiative processes, unlike the excited states 1s²2s²2p⁵nl of Ag37+, which decay only radiatively. As a result, there is an optimized electron temperature to produce TCH states at a given radiative temperature.
The present work utilizes a large-scale rate equation approach to determine the level populations, and thus the convergence of our results can be guaranteed [40]. In all our calculations, we have included ionization stages from Ag21+ to the bare ion. For each ionization stage, in addition to the valence electron excitation states, we have further included the quantum states of singly, doubly, triply and quadruply excited configurations from the L shell (2s and 2p) to ensure the completeness of the atomic model. Take Ne-like Ag37+ as an example to illustrate the scale of the included quantum states. The quantum states belonging to the following electronic configurations have been included in our calculations: (1)²(2)⁸, (1)²(2)⁷nl, (1)²(2)⁶nln'l', (1)²(2)⁵(3)³, (1)²(2)⁵(3)²nl, (1)²(2)⁴(3)⁴ and (1)²(2)⁴(3)³nl. Here the notation (N)ᴹ means that M electrons occupy the orbital shell with principal quantum number N, so the designation (2)⁴ in the configurations means that four electrons have been excited out of the 2s and 2p orbitals. The maximal principal quantum numbers n and n' are determined by the plasma conditions. For the solid-density Ag plasma at an electron temperature of 500 eV, the maximum is calculated to be 5 for Ag37+ and 6 for Ag39+ due to the ionization potential depression. The convergence trend can be seen from Fig. 5, which shows the results for different scales of excitation; nearly converged results are obtained when triple excitations are included. The ionization potential depression is significant for the solid-density Ag plasma, and it makes the excited states with a principal quantum number larger than 5 merge into the continuum for Ag37+. In this work, we have considered the effects of the screening potential caused by the plasma environment on the atomic structure and atomic processes. Briefly, the plasma screening potential originates from the interaction with surrounding free electrons and ions in the plasma, especially free electrons. In this work, we used the screening potential based on the average atom model [41]. It is determined by the plasma micro-field of the free electrons, which follow a Thomas-Fermi distribution. The screening potential is added to the self-consistent potential of the isolated ion and therefore influences the energy levels, ionization energies and spectral line widths [42]. The plasma screening has a more pronounced effect on the energy level shifts and ionization potentials, while the effect is much smaller for the transition energies and probabilities. A detailed quantitative demonstration and discussion are beyond the scope of the present work. In summary, we investigated the production of exotic atomic TCH states in the interaction of a relativistic optical laser with a thin silver foil. We predicted that the TCH emissivity exceeds the SCH and DCH emissivity for Ag37+ and is comparable for Ag36+ and Ag38+ at an electron temperature of 500 eV and a radiative temperature of 1500 eV. The optimized electron temperature to produce TCH states decreases with increasing radiative temperature, being around 300 eV, 100 eV and 30 eV for radiative temperatures of 2000 eV, 2500 eV and 3000 eV, respectively. The intense x-ray radiation is generated by fast electrons accelerated by the ultra-intense relativistic optical laser in the interaction with a thin silver foil.
We estimated that the femtosecond optical laser intensity should be about 10²¹ W/cm², according to previous investigations on aluminium and silicon [16–19]. Extremely exotic atomic states with four or even more core holes could be produced by irradiating thin high-Z element foils with ultra-intense ultrafast optical lasers. Our results should be useful for understanding and interpreting related experiments. Rate equation. In a non-local thermodynamic equilibrium (NLTE) plasma, the population distribution of the different quantum states is determined by the relevant microscopic atomic processes. The population n_i of state i can be obtained by solving a rate equation [43,44], dn_i/dt = Σ_{j≠i} (n_j R_ji − n_i R_ij), where R_ji and R_ij represent the populating rate from state j to i and the de-populating rate from state i to j, respectively. In the calculations, the following microscopic atomic processes are included: photoexcitation, photoionization, electron impact excitation, electron impact ionization and autoionization, as well as their inverse processes. In this work, the rate equation is solved under the steady-state assumption, where the left-hand side of the rate equation equals zero. The opacity effect is treated by an escape factor approximation [45,46]. We utilized a hybrid method to obtain high-precision x-ray emission spectra. Firstly, a detailed relativistic configuration accounting model is applied in the rate equation to obtain the population distributions over the relativistic configurations. Then the populations of the fine-structure levels belonging to each particular relativistic configuration are determined by assuming that the fine-structure levels are in equilibrium within the relativistic configuration to which they belong. The fine-structure level populations are obtained by the formula N_l = n_C g_l exp(−E_l/kT_e) / Σ_{l'∈C} g_{l'} exp(−E_{l'}/kT_e), where g_l and E_l are the statistical weight and energy of the fine-structure level l belonging to the relativistic configuration C, and N_l and n_C are the populations of the fine-structure level l and the relativistic configuration C, respectively. Finally, the emission properties are obtained by using the radiative transition data in the level-to-level formalism and the population distribution of the fine-structure levels. Atomic code DLAYZ used for modeling. We use the code DLAYZ to perform the calculations [43]; it is a versatile code for investigating the population kinetics and radiative properties of NLTE plasmas. The complete set of atomic data, including radiative transition probabilities, microscopic cross sections for photoionization, electron impact excitation and electron impact ionization, and autoionization rates, is obtained using the Flexible Atomic Code (FAC) [47]. In FAC, atomic orbital wave functions are obtained by solving the Dirac-Fock equation and are used to construct the configuration state wave functions. The atomic state wave functions are expanded in configuration state wave functions of the same parity and angular momentum [47]. The continuum wave functions are obtained with relativistic distorted wave methods. The basic atomic data can be calculated after the atomic state wave functions are obtained.
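To illustrate the steady-state rate equation above, here is a minimal numerical sketch with a hypothetical 3-state rate matrix (this is not the DLAYZ implementation; the rates are made up for illustration):

```python
import numpy as np

# Hypothetical 3-state system: R[i, j] is the total rate from state i to j.
R = np.array([[0.0, 2.0, 0.5],
              [1.0, 0.0, 0.3],
              [0.2, 0.4, 0.0]])

# Steady state: sum_j n_j R_ji - n_i sum_j R_ij = 0, i.e. A n = 0,
# with A = R^T - diag(row sums of R).
A = R.T - np.diag(R.sum(axis=1))
b = np.zeros(R.shape[0])
A[-1, :] = 1.0   # replace one redundant equation by the normalization sum_i n_i = 1
b[-1] = 1.0
n = np.linalg.solve(A, b)
print(n)         # steady-state population fractions
```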
3,920
2018-07-23T00:00:00.000
[ "Physics" ]
3D vs 2D laparoscopic radical prostatectomy in organ-confined prostate cancer: comparison of operative data and pentafecta rates: a single cohort study Background Currently, men are younger at the time of diagnosis of prostate cancer and more interested in less invasive surgical approaches (traditional laparoscopy, 3D laparoscopy, robotics). The outcomes of continence, erectile function, cancer cure, positive surgical margins and complications are collectively captured by the pentafecta rate. However, no comparative studies between 4th-generation 3D-HD vision system laparoscopy and standard bi-dimensional laparoscopy have been reported. This study aimed to compare the operative and perioperative data and the pentafecta rates between 2D and 3D laparoscopic radical prostatectomy (LRP) and to identify the actual role of 3D LRP in urology. Methods From October 2012 to July 2013, 86 patients with clinically localized prostate cancer [PCa: age ≤ 70 years, prostate-specific antigen (PSA) ≤ 10 ng/ml, biopsy Gleason score ≤ 7] underwent laparoscopic extraperitoneal radical prostatectomy (LERP) and were followed for approximately 14 months (range 12–25). Patients were selected for inclusion via hospital record data, and their records were then analyzed. Patients were randomized into two groups: the former, 2D-LERP (43 pts), operated on with the use of a 2D-HD camera; the latter, 3D-LERP (43 pts), operated on with the use of a 4th-generation 3D-HD vision system. The operative and perioperative data and the pentafecta rates of the 2D-LERP and 3D-LERP groups were compared. Results The overall pentafecta rates at 3 months were 47.4% and 49.6% in the 2D- and 3D-LERP groups, respectively. The pentafecta rates at 12 months were 62.7% and 67%, respectively. The 4th-generation 3D-HD vision system provides advantages over the standard bi-dimensional view with regard to the intraoperative steps. Our data suggest a trend of improvement in intraoperative blood loss and postoperative recovery of continence while respecting oncological safety. Conclusions Use of the 3D technology by a single surgeon significantly enhances the possibility of achieving better intraoperative results and pentafecta in all patients undergoing LERP. Potency was the most difficult outcome to reach after surgery, and it was the main factor leading to pentafecta failure. Nevertheless, further studies are necessary to better understand the role of 3D-LERP in modern urology. Background Prostate cancer is the most common tumor in men aged over 50 and is the second leading cause of cancer death in Europe and in the United States. Worldwide, nearly 900,000 men were estimated to have been diagnosed with prostate cancer during 2008 and 258,000 men died of the disease [1]. Incidence in Western countries is higher than in less developed ones, where it is slowly increasing. Furthermore, there have been recent significant decreases in prostate cancer mortality in Europe and in the United States, whereas mortality rates have increased in other countries [2,3]. The decrease in mortality rates is mainly due to earlier diagnosis and improved treatment. Laparoscopic radical prostatectomy (LRP) has become an established treatment for organ-confined prostate cancer and is increasingly performed at selected centers worldwide, even though open radical retropubic prostatectomy (RRP) is widely considered the treatment of choice.
In 1992, Schuessler carried out the first LRP in order to transfer the well-known advantages of the laparoscopic technique to the most common open surgical treatment for prostate cancer [4]. Only years later did Guillonneau and Valencien improve the technique, obtaining results similar to those of open surgery; but, because of the steep learning curve, laparoscopic radical prostatectomy has risen in popularity only slowly [5]. The advent of robotic surgery has further helped to confine laparoscopic surgery to a special niche. The shorter learning curve and three-dimensional view, as well as the ease of movement offered by the Da Vinci® operating arms, have made robot-assisted laparoscopic prostatectomy (RALP) more reproducible despite the higher costs. RALP is therefore easier to learn and is now the surgical treatment of choice in most centers of excellence in the United States [6]. Nowadays, laparoscopic surgery could be revitalized by the introduction of a high-resolution three-dimensional (3D) view. 3D techniques have improved in comparison with the first generation of 3D vision systems introduced in the 90s and can even replace the classic bi-dimensional view [7]. The 3rd-generation three-dimensional view was introduced about 10 years ago, but few experiences were reported in the literature, probably due to some limits of this technique: the use of a rather heavy helmet with a head-mounted display caused surgeon fatigue [8,9]. The 4th-generation three-dimensional system uses more ergonomic glasses and improved technology. Better knowledge of pelvic anatomy and improvements in surgical technique have led to improved oncological results and fewer adverse functional outcomes. Historically, the outcomes of continence, erectile function, and oncologic control were the major surgical achievements and were called the 'trifecta' outcomes. Nowadays, patients with a diagnosis of prostate cancer are younger and healthier and have higher expectations of advanced minimally invasive surgical technologies. Hence, the 'pentafecta' was proposed as a new method of outcome analysis by adding early complications and positive surgical margins (PSMs) to the trifecta [10]. According to this approach, the pentafecta has become a new cornerstone in the analysis of urological surgery results. In this pilot randomized study, we aim to highlight the differences between the standard two-dimensional (2D) view and the 4th-generation three-dimensional (3D) view applied to laparoscopic extraperitoneal radical prostatectomy (LERP), in order to assess whether 3D visualization of the operative field can really improve the intraoperative and perioperative steps and the pentafecta outcomes. Patients and technologies From October 2012 to July 2013, all patients with a clinical T1c prostate tumor belonging to the low/intermediate D'Amico risk group were included in the study. 86 consecutive patients who met these criteria underwent LERP. Patients were selected for inclusion via hospital record data. The data were collected in a database and retrospectively analyzed. The Fondazione PTV - Policlinico Tor Vergata Ethics Committee approved our clinical study and data collection. In accordance with our institution's Ethics Committee, informed and signed consent was obtained from each patient prior to treatment. A statement of ethical approval covered permission to access patient records and use them for study purposes. All patients constituting the cohort had at least 1 year of follow-up.
Patients were randomized into two groups: the former, 2D-LERP (43 pts), operated on with the use of a 2D-HD Storz® camera with a 10 mm 0° laparoscope; the latter, 3D-LERP (43 pts), operated on with the use of a 3D-HD Viking® camera with a 10 mm, 0°-lens double-channel stereolaparoscope. The 3D view is achieved with the help of a 3D-HD Viking® screen and the use of polarized glasses. The glasses are filtered: each lens lets through light of only one polarization, so that each eye receives a different perspective of the image, giving a three-dimensional vision. Procedures: surgery and rehabilitation All 86 patients were operated on by the same surgeon (P.B.) following the same LERP surgical technique. A 1.5 cm cutaneous incision is made 1 cm below the inferior margin of the umbilicus, a dilator device is inserted into the preperitoneal space, and about 300 ml of air is insufflated to develop the space of Retzius (pneumo-Retzius). Four secondary trocars are then placed under laparoscopic view (two in each iliac fossa, right and left) in the inverted fan configuration. The endopelvic fascia is incised on each side and the bladder neck is dissected and isolated using the "bladder neck sparing" technique. Once the bladder neck is opened close to the prostate, the posterior lip of the bladder neck is lowered to provide access to the interprostatorectal plane. The vasa deferentia and seminal vesicles are isolated and dissected. Then, the prostatic pedicles are incised in an anterograde fashion with preservation of the neurovascular bundle when indicated. Finally, meticulous preparation of the urethral stump precedes the vesicourethral anastomosis, which is completed with interrupted sutures [11]. The drain is left in place while leakage is observed, and it is normally removed on the second postoperative day. Urinary fistula is defined as prolonged drainage beyond postoperative day 10. The catheter is normally removed between 7 and 10 days after surgery; in case of urinary fistula, cystography is carried out on the 14th and 21st postoperative days, and both the drain and the catheter are removed upon complete closure of the anastomosis. Baseline sexual and urinary function was assessed before LERP with self-administered, validated questionnaires: the International Index of Erectile Function 6 (IIEF-6) and the Incontinence Quality of Life (I-QoL) [12,13]. Pelvic floor muscle exercises were recommended for all patients immediately after catheter removal in order to facilitate continence recovery. After catheter removal, all patients received phosphodiesterase type 5 (PDE5) inhibitors at least three times a week and began penile rehabilitation no later than three weeks after radical surgery, using intracavernous pharmacotherapy (ICP) with prostaglandin E1 (alprostadil). Data collection Operative and perioperative data. Operative time (OT, from skin incision to skin closure), anastomosis time (AT, time to complete the anastomosis up to catheter insertion), number of stitches used (NuS), estimated blood loss (EBL) and any intraoperative complications were recorded. Perioperative data include: days of drainage (DD), days of catheterization (DC), hospital stay (HS), pathological staging and complications. Histopathologic staging was performed according to the 2002 TNM system [14]. Pentafecta. The five outcomes included in the analysis of the pentafecta are complications, positive surgical margins (PSMs) and the trifecta outcomes (urinary continence, sexual potency, and biochemical recurrence (BCR)-free survival).
Pentafecta is achieved if there were no complications, negative surgical margins, and if the patient was continent, potent and BCR-free. Statistical analysis Fisher's exact test was used to analyze non-parametric data as appropriate. Student's t-test was used to analyze parametric data such as patients' characteristics and the intraoperative and remaining perioperative data. Results were considered significant if the p value was ≤ 0.05. Results There were no significant differences between the two groups in terms of age, body mass index (BMI), preoperative PSA level and biopsy Gleason score. Patients' characteristics are summarized in Table 1. Operative and perioperative data Operative and perioperative data are presented in Table 2. Median OT for 3D-LERP was significantly shorter than that for 2D-LERP (162 versus 241 minutes, p = 0.01). Moreover, in the 3D-LERP group, the median OT for the first 3 cases was significantly longer than for the remaining cases, due to the operator's initial learning curve. Statistically significant differences were also recorded in median AT (24 versus 32 minutes, p = 0.03) and median NuS (5.65 versus 6.45, p = 0.018). Median EBL did not differ significantly between the two groups, with two patients requiring transfusion in the 2D group and one patient in the 3D group. No conversion to open surgery was necessary and no complications occurred requiring early re-intervention. Median HS was 7.6 and 5.5 days for the 2D-LERP and 3D-LERP groups, respectively (p = 0.180). Median DD was 5 days in the 2D-LERP group and 4.5 days in the 3D-LERP group (p = 0.925). Median DC was 10.55 and 10.75 days for the 2D and 3D groups, respectively (p = 0.880). Complications Complications can be considered a perioperative outcome. We discuss complications separately from the other perioperative data in order to underline their role in the pentafecta. The modified Clavien grading system was used to classify complications occurring during the surgical procedure or within 90 days after surgery (early complications) [15]. Twenty-three of 86 patients experienced complications. More specifically, perioperative complications were reported in 15 (34.8%) cases in the 2D-LERP group and in 8 (18.6%) cases in the 3D-LERP group. 2D-LERP and 3D-LERP complications are summarized in Table 3. Minor complications (Clavien grades 1 and 2) represented 80% and 63%, respectively, of all those reported in the two groups; major complications (Clavien grade ≥ 3) constituted 20% and 37%, respectively. There were no complications of grade 4a, 4b or 5 according to the modified Clavien grading system. The data are depicted in Table 3. Oncologic outcomes: biochemical recurrence and positive surgical margins Oncological results are presented in Tables 4 and 5. The distribution of pathological stage and Gleason score was similar in the two groups. The overall PSM rate was 9% in the 2D-LERP group and 4% in the 3D-LERP group. When stratified by pathological stage, the PSM rate in pT2c/pT3 disease differed significantly between groups (halved in the 3D-LERP group compared with the 2D-LERP group). Urinary continence According to the European Association of Urology Guidelines (EAU Guidelines, 2013), urinary incontinence is a postoperative complication that persists after 1 year in 7.7% of patients who undergo radical prostatectomy [17]. The American Urological Association Guidelines (AUA Guidelines, 2007, updated in 2011) report a rate of postoperative urinary incontinence that ranges between 3% and 74% [18].
Urinary continence was assessed with the self-administered, validated Incontinence Quality of Life (I-QoL) questionnaire. The definition of continence was based on a specific question appropriate to reflect the range of incontinence severity: "How many pads per day did you usually use to control urine leakage during the last 4 weeks?". We considered patients "dry" if they had no loss of urine (no pads/day) or used one safety pad/day. The overall continence rates did not reach a statistically significant difference, although the trend clearly favors the 3D-LERP group (89% and 92% vs 83% and 88% of patients were continent at 3- and 12-month follow-up in the 3D- and 2D-LERP groups, respectively). The I-QoL questionnaire showed a significant quality-of-life improvement at the first month in the 3D group (mean score 90.45) compared with the 2D-LERP group (mean score 81.8) (p = 0.01). These positive results were also confirmed at the third (93.3 vs 83.6, p = 0.01) and twelfth (95.4 vs 88.1, p = 0.03) month of follow-up in the 3D group compared with the 2D-LERP group. Pre- and postoperative urinary continence data are depicted in Table 6. Erectile function It is widely recognized that preoperative erectile function (EF) is an important prognostic factor for erectile dysfunction recovery after radical prostatectomy [19]. Several other factors are predictive of EF recovery after surgery: age, type of surgery, pre- and post-RP libido, adjuvant treatments, comorbidities, urinary continence, availability of a partner and sound mental health. Therefore, it is essential to determine the EF baseline. The International Consultation on Sexual Medicine (ICSM) Committee recommends the use of validated psychometric instruments such as the IIEF. In our experience, the potency rate was assessed using the IIEF-6. After surgery, erectile function rehabilitation was recommended for all patients (scheme reported above) in order to preserve the functional smooth muscle tissue of the corpora cavernosa and to avoid the effects of surgery-related neurapraxia [20]. Preoperative potency was defined as the ability to achieve and maintain a satisfactory erection for sexual activity, or as an IIEF-6 score ≥ 17 (without pharmacological or mechanical support). Postoperative potency was defined as the ability to achieve and maintain erections firm enough for sexual intercourse in more than 50% of attempts, with or without the use of PDE5 inhibitors and with ICP if needed (IIEF-6 score ≥ 17). Patients underwent bilateral or unilateral nerve-sparing surgery (NSS). The overall potency rates were 60% and 67% at 3 months and 67% and 72% at 12 months in the 2D- and 3D-LERP groups, respectively. Type of surgery, presurgical evaluation of erectile function and potency outcomes are summarized in Tables 7 and 8. Pentafecta outcomes The overall pentafecta rate at 3 months was 47.4% and 49.6% in the 2D- and 3D-LERP groups, respectively. The pentafecta rate at 12 months was 62.7% and 67%, respectively. Discussion In the past decade, a dramatic shift towards lower-stage tumors has become evident. Currently, men are younger at the time of diagnosis and more interested in less invasive surgical approaches (e.g., laparoscopy, robotics) than in the traditional approach [21]. At the same time, and more importantly, normal continence and preserved sexual function are fundamental, but not the only, primary goals of radical prostatectomy.
Patients want to know whether the treatment option will render them cancer-free with a minimum of complications and the shortest possible convalescence, while preserving continence and potency [10]. These observations highlight two main topics: on one hand, the possibility of considering a minimally invasive surgical approach, with its innovative technical characteristics, whenever possible and, on the other hand, the necessity of adopting a more comprehensive method of reporting peri- and postoperative outcomes. By adopting the laparoscopic technique with adherence to established oncological principles, the aim is to duplicate the open surgical method in its entirety. LRP has slowly risen in popularity and has become, in some centers, the surgical approach of choice for the treatment of localized prostate cancer because of its advantages. The lower blood loss and transfusion rate associated with the laparoscopic approach, together with shorter hospital stay, reduced catheterization time, better pain control and faster return to everyday activities, seem to be the most encouraging improvements obtained [22]. Unfortunately, classic laparoscopic surgery is limited by a two-dimensional vision that does not allow perception of the operative field as in open surgery. The lack of depth perception has repercussions both on the learning curve, which still constitutes a major obstacle to the development of laparoscopy [23], and on the surgeon's ability to maneuver the instruments with an accuracy comparable to that of the equivalent open operation. Even if the experienced surgeon is able over time to regain some depth perception, this will never be optimal [24]. For this reason, and given the increasing popularity of laparoscopy, a three-dimensional display system was introduced in the early 90s, with the expectation that this technique could make laparoscopic interventions safer and faster [23]. Up to now, just a few studies on three-dimensional laparoscopy have been published, without any definite conclusion about its utility. Some articles describe better results with the 3D laparoscopic technique than with the 2D system, both in surgical training exercises and in different surgical procedures. Exercises such as linear cutting and suturing, curved cutting and suturing, tubular suturing and dorsal vein complex suturing simulation have been performed, and it has been suggested that the new-generation 3D system could be helpful in laparoscopy [25–27]. In the 90s, comparative studies were organized to evaluate the improvement and superiority of vision between the traditional 2D and (3rd-generation) 3D systems in terms of kidney dissection, securing of the renal vessels and laparoscopic suturing, but the authors found no differences between the two vision systems, either with respect to the accuracy and speed of surgical execution or with regard to the learning curve [28–32]. Gynecologists and general surgeons have described similar studies in the field of 3D laparoscopic surgery, with discordant conclusions [33–35]. Robotic surgery has benefited greatly from the three-dimensional view. The advent of the Da Vinci® has further helped to confine laparoscopic prostatectomy to a special niche. The shorter learning curve and three-dimensional view, as well as the ease of movement offered by the operating arms, make robot-assisted laparoscopic prostatectomy (RALP) more reproducible despite the higher costs. In this vein, Robertson et al.
recently underlined that RALP is easier to learn and is now the surgical treatment of choice in most centers of excellence in the United States [22]. This is the first study reported in the urologic literature that aims to establish, twenty years after the first 3D model was introduced, the utility of the 4th-generation 3D vision system during LERP in terms of feasibility and potential advantages over the 2D vision system with regard to operative and perioperative data and the pentafecta outcomes. Only one work, reported by Good et al. in 2013, analyzed the pentafecta learning curve for laparoscopic radical prostatectomy [36]. The transition from the 2D to the 3D vision system requires an initial period of adaptation. This is demonstrated by the longer operative time and the incidence of postoperative urinary fistula at the very beginning of our experience with the 3D vision system. This short learning curve is related to a new perception of the depth of the operative field, which requires a different spatial assessment of instrument positioning, rather than to an initial difficulty in recognizing anatomical landmarks and avoiding possible complications. Once adaptation to the 3D view is reached, a more realistic visualization of the surgical field allows greater speed and precision in the movement of the surgical instruments. This translates into better preparation of the bladder neck and the urethral stump, reducing the anastomosis time. Although not resulting in a statistically significant difference, the easier identification of small vessels using 3D vision may reduce blood loss. Despite the necessary adaptation from 2D to 3D vision by an expert laparoscopist, the 3D vision may, on the other hand, offer significant advantages in teaching laparoscopic skills to inexperienced individuals [37]. The meticulous handling and tissue dissection obtained with the aid of the 3D view allowed earlier continence recovery. This could be mainly related to less trauma and greater preservation of the sphincter structures [38], as demonstrated by a better I-QoL score and a decreased number of pads per day in the 3D-LERP group. One of the operative steps that benefits most from the 3D view is the dissection of the seminal vesicles, vasa deferentia and prostatic pedicles; for the dissection of these delicate structures, 3D vision is very effective. The higher accuracy at these operative stages might favor an earlier and better recovery of erectile function. These encouraging results obtained with the 3D vision system were associated with numbers of positive surgical margins and postoperative complications comparable in both groups, demonstrating good oncological and functional efficacy. From our point of view, some problems related to the prolonged use of 3D vision, such as headaches, fatigue and nausea, already reported in previous studies, remain unresolved, but they are not an important limitation to its use [39,40]. Statistically significant differences were recorded for all intraoperative steps, and the data suggest a trend of improvement in intraoperative blood loss and postoperative recovery of continence and potency, while respecting oncological safety, for the 4th-generation 3D-HD vision system used in 3D-LERP over the standard bi-dimensional view used in 2D-LERP. One of the advantages of this study is that the comparison between the 2D and 3D surgical procedures was performed by a single surgeon, making it more reliable and avoiding possible bias.
Despite this fact, the extensive experience of the surgeon may have influenced the results and complication rates of our study and, as a result, the outcomes cannot be generalized. However, this study has several limitations. First of all, being a pilot study with a small number of procedures and a relatively short follow-up, it does not allow the definitive role of this technique to be established. Data analysis was retrospective. Some data may not reach statistical significance between groups because the study was not powered to identify these differences; nevertheless, a trend of improvement in surgical and functional outcomes has been shown. Furthermore, we included both bilateral and monolateral NSS procedures. Another limitation is that we did not use the Expanded Prostate Cancer Index Composite (EPIC) questionnaire to better assess urinary symptoms. Finally, the study is limited by the short follow-up, which can affect BCR-free and functional outcomes. Nowadays, all laparoscopic prostatectomies in our department are performed with the aid of the 3D video system. If these preliminary data are confirmed in a greater number of patients with longer follow-up, 4th-generation 3D laparoscopy may play an important role in the treatment of prostate cancer. Conclusions This preliminary study has shown that the 4th-generation 3D-HD vision system provides advantages over the standard bi-dimensional view with regard to the intraoperative steps. Our data suggest a trend of improvement in intraoperative blood loss and early postoperative recovery of continence, while respecting oncological safety. The pentafecta was reached with a higher score in the 3D-HD LERP group. Given the large number of men diagnosed with prostate cancer and the exponential growth of medical costs at a global level, it would be preferable for treatment options to be not only effective but also less expensive. In this context, 3D laparoscopy may be an intermediate step between standard 2D laparoscopy and robot-assisted laparoscopy, combining the low cost of the former with the 3D technology of the latter. Further studies are necessary to better understand the role of 3D-LERP in modern urology.
5,608.4
2015-02-21T00:00:00.000
[ "Engineering", "Medicine" ]
Neural Network Study of Hidden-Charm Pentaquark Resonances Very recently, the LHCb experiment announced the observation of the hidden-charm pentaquark states $P_c(4312)$, $P_c(4440)$, and $P_c(4457)$ near the $\Sigma_c \bar{D}$ and $\Sigma_c \bar{D}^\ast$ thresholds, respectively. In the present work, we studied these pentaquarks in the framework of the nonrelativistic quark model with four types of potential. We solved the 5-body Schr\"odinger equation by using an artificial neural network method and made predictions for the parities of these states, which have not yet been determined experimentally. The mass of another possible pentaquark state near the $\bar{D}^\ast \Sigma_c^\ast$ threshold with $J^P=5/2^-$ is also calculated. I. INTRODUCTION In recent years, several experimental states or resonances have been announced as candidates beyond the conventional quark-antiquark and three-quark configurations. Most of these particles have not been confirmed with high statistics and better resolution. Besides that, with the exception of X(3872) [1], they were seen only in one experiment, such as X(5568) [2,3], or in one type of experiment, such as the B factories. The observation of X(3872) was a milestone for the era of so-called exotic states. Exotic states are beyond the description of the conventional quark model, and the pentaquark is an example: it consists of four quarks ($qqqq$) and one antiquark ($\bar{q}$) bound together. Very recently, the LHCb collaboration updated the results of Ref. [4], reporting the observation of new narrow pentaquark states [31] with masses and widths as follows: $P_c(4312)$: $M = (4311.9 \pm 0.7^{+6.8}_{-0.6})$ MeV, $\Gamma = (9.8 \pm 2.7^{+3.7}_{-4.5})$ MeV. It is interesting to note that the old peak of $P_c(4450)$ splits into two peaks, $P_c(4457)$ and $P_c(4440)$, and the old broad state $P_c(4380)$ has given way to a narrow peak at 4312 MeV. The masses of $P_c(4440)$ and $P_c(4457)$ are close to the $\Sigma_c \bar{D}^\ast$ threshold, and the mass of $P_c(4312)$ is very close to the $\Sigma_c \bar{D}$ threshold. As pointed out in [32], the central mass of the $P_c(4312)$ state is ≈6 MeV below the $\Sigma_c^+ \bar{D}^0$ threshold and ≈12 MeV below the $\Sigma_c^{++} D^-$ threshold. For $P_c(4440)$, it is ≈20 MeV below the $\Sigma_c^+ \bar{D}^{\ast 0}$ and ≈24 MeV below the $\Sigma_c^{++} D^{\ast -}$ thresholds. In the case of $P_c(4457)$, it is ≈3 MeV below the $\Sigma_c^+ \bar{D}^{\ast 0}$ and ≈7 MeV below the $\Sigma_c^{++} D^{\ast -}$ thresholds. Isospin-violating processes can occur when the width of a resonance is small and its mass is below the corresponding thresholds; this can be the case for these pentaquarks. The observation of these pentaquarks attracted attention immediately [33–40]. In this paper, we use the constituent quark model in order to obtain the spectrum and quantum numbers. As mentioned in Ref. [26], the constituent quark model has often been employed for exploratory studies in QCD and has paved the way for lattice simulations and QCD sum rule calculations. The main task in the constituent quark model is to obtain a solution of the Schr\"odinger equation with a specific potential. For mesons and baryons, this can be done effectively, and one can obtain reliable results in comparison with experiments. But pentaquark structures are multiquark systems, and due to the complex interactions among quarks, solving the 5-body Schr\"odinger equation is a challenging task. For this purpose, we solved the Schr\"odinger equation via an artificial neural network (ANN). Besides their use in other fields, ANNs can be utilized as an alternative strategy for solving differential equations and quantum mechanical systems [41,42].
ANNs provide several advantages compared to standard numerical methods [43,44]:
• The solution is continuous over the whole domain of integration.
• The computational complexity does not increase significantly with the number of sampling points or the dimensionality of the problem.
• The rounding-off error propagation that affects standard numerical methods does not influence the neural network solution.
• The method requires fewer model parameters and therefore does not demand large memory space on a computer.
The most prominent advantage of using an ANN to solve differential equations is that coordinate transformations are not needed [45]. For example, Jacobi coordinates are commonly used to simplify many-body systems in physics. The paper is organized as follows. In Section II, the model and method used for the calculations are described. In Section III, the obtained results are discussed, and in Section IV, we sum up our work. A. The Model The Hamiltonian of Ref. [46] consists of a nonrelativistic kinetic term and pairwise potentials; A and B are constant parameters, κ and κ' are parameters, r_ij is the interquark distance |r_i − r_j|, σ_i are the Pauli matrices, and λ_i are the Gell-Mann matrices. Four types of potential are considered; the related parameters are given in Table I. These potentials were developed within the nonrelativistic quark model (NRQM) and used for exploratory studies. Each is composed of a 'Coulomb + linear' or 'Coulomb + 2/3-power' term and a strong but smooth hyperfine term. For further details of these potentials, see Ref. [46]. B. The Method Nowadays, machine learning is one of the most popular research fields of modern science. The fundamental ingredient of machine learning systems is the artificial neural network (ANN), since the most effective form of learning is achieved by ANNs. An ANN is a computational model motivated by the biological nervous system. It is made up of computing units, called neurons. A schematic diagram of an ANN is given in Fig. 1. FIG. 1: A model of multilayer neural networks. In this work, we use a multilayer perceptron neural network (MLPN). An MLPN contains more than one layer of artificial neurons. Each layer is connected to the next, but there are no connections among the neurons within the same layer. MLPNs are ideal tools for solving differential equations [47]. Feed-forward neural networks, which are used in the present study, are the most widely used architectures because of their structural flexibility, good representational capabilities, and the wide range of training algorithms available [47]. All input signals are summed together as z, and the nonlinear activation function determines the output signal σ(z). We use a sigmoid function as the activation function, since all derivatives of σ(z) can be expressed in terms of σ(z) itself. Information flows in one direction only in feed-forward neural networks, from the input layer(s) to the output layer(s). The input-output properties of the neurons can be written as follows, where i, j, and k label the input, hidden, and output layers, respectively. The input n_i to the perceptrons is the input signal to the neural network, where N_i and N_j represent the numbers of units belonging to the input and hidden layers, respectively, ω_ij is the synaptic weight parameter connecting neurons i and j, and θ_j is the threshold parameter for neuron j [48]. The overall response of the network follows from composing these layers. One can obtain the derivatives of o_k with respect to the network parameters (weights and thresholds) by differentiating Eqn. (10).
In order to obtain the spectra of the pentaquark states, we consider the application of the ANN to a quantum mechanical system. We follow the formalism of [41]. Consider a differential equation of the form $H\Psi(r) = f(r)$, where H is a linear operator, f(r) is a function, and Ψ(r) = 0 at the boundaries. To solve this differential equation, one can write a trial function of the form $\Psi_t(r) = A(r) + B(r,\lambda)N(r,p)$, which embeds a neural network with parameter vector p and a parameter λ, both to be adjusted later. The parameter p stands for the weights and biases of the neural network. A(r) and B(r, λ) should be specified so that Ψ_t(r) satisfies the boundary conditions regardless of the values of p and λ. In order to solve Eqn. (15), the collocation strategy can be utilized, turning the equation into a minimization problem. With the boundary condition Ψ(r) = 0, the trial solution can be written in the form Ψ_t(r) = B(r, λ)N(r, p), where B(r, λ) = 0 at the boundaries for a variety of λ values. By discretizing the domain of the problem, Eqn. (17) can be transformed into a minimization problem with respect to the parameters p and λ, where E is the error function, measuring how well the trial function satisfies the equation at the collocation points. Consider a multilayer neural network with n input units, one hidden layer with m units, and one output. For a given input vector r = (r_1, ..., r_n), the output of the network is $N(r,p) = \sum_{i=1}^{m} \nu_i\,\sigma(z_i)$ with $z_i = \sum_{j=1}^{n} \omega_{ij} r_j + u_i$. Here, ω_ij is the weight from input unit j to hidden unit i, ν_i is the weight from hidden unit i to the output, u_i is the bias of hidden unit i, and σ(z) is the sigmoid function, Eqn. (3). The derivatives of the output can be written as $\partial^k N/\partial r_j^k = \sum_{i=1}^{m} \nu_i\,\omega_{ij}^k\,\sigma_i^{(k)}$, where σ_i = σ(z_i) and σ^(k) is the k-th order derivative of the sigmoid. To obtain the desired results, the first thing the ANN has to do is learn. The learning mechanism is the most important property of an ANN. In this work, we used a feed-forward neural network with a back-propagation algorithm, also known as the delta learning rule. This learning rule is valid for a continuous activation function such as Eqn. (3). The algorithm is as follows [49]:
Step 1: Initialize the weights w from the input layer to the hidden layer and the weights v from the hidden layer to the output layer. Choose the learning parameter (between 0 and 1) and the error tolerance E_max. Initially the error is set to 0.
Step 2: Train the network.
Step 3: Compute the error value.
Step 4: Compute the error signal terms of the output layer and the hidden layer.
Step 5: Compute the components of the error gradient vectors.
Step 6: Check whether the weights have been properly modified.
Step 7: If E < E_max, terminate the training session. If not, go to Step 2 with E → 0 and initiate a new training cycle.
We parametrize the trial function as in Eqn. (26), where N denotes a feed-forward artificial neural network with one hidden layer and m sigmoid hidden units. The minimization problem then takes the form of Eqn. (27). We solved the Schrödinger equation in the interval 0 < r < 1 fm using 250 equidistant points with m = 10. III. RESULTS AND DISCUSSION As a first step, we calculated the masses of heavy mesons and baryons with all the potentials. The results are given in Table II. One interesting point is that the potential of Eqn. (2), which has a simple form (no many-body or tensor forces), reproduced the masses of the observed states quite well. Motivated by these results, we obtained mass values for the newly observed pentaquark states according to their quantum numbers. Table III shows the results for the J^P = 1/2^- case and Table IV for the J^P = 3/2^- case.
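Before turning to the table-level comparison, the collocation scheme of Section II can be made concrete. The following sketch solves a one-dimensional Schrödinger problem with a neural-network trial function in the spirit of [41]. It is an illustration only, not the authors' code: the harmonic placeholder potential, the trial form Ψ_t(r) = r(1−r)N(r,p) enforcing zero boundary values, dimensionless units, and the use of SciPy's generic minimizer in place of the delta-rule training are all assumptions made for the example.

```python
# Sketch: NN trial-function solution of a 1-D Schrodinger equation by collocation.
import numpy as np
from scipy.optimize import minimize

m = 10                                # hidden units, as in the paper
r = np.linspace(0.004, 0.996, 250)    # 250 equidistant collocation points in (0, 1)

def V(r):
    return 0.5 * r**2                 # placeholder potential (illustration only)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def unpack(p):
    w, u, v = p[:m], p[m:2*m], p[2*m:3*m]
    return w, u, v, p[3*m]            # lam: eigenvalue as an extra trainable parameter

def network(r, w, u, v):
    """N(r,p) and its first two derivatives, using the closed-form sigmoid rules
    sigma' = sigma(1 - sigma), sigma'' = sigma'(1 - 2 sigma)."""
    z = np.outer(r, w) + u            # shape (n_points, m)
    s = sigmoid(z)
    s1 = s * (1 - s)
    s2 = s1 * (1 - 2 * s)
    return s @ v, (s1 * w) @ v, (s2 * w**2) @ v

def error(p):
    w, u, v, lam = unpack(p)
    N, dN, d2N = network(r, w, u, v)
    # Trial function psi_t = r (1 - r) N(r, p) vanishes at both boundaries.
    B, dB, d2B = r * (1 - r), 1 - 2 * r, -2.0
    psi = B * N
    d2psi = d2B * N + 2 * dB * dN + B * d2N
    H_psi = -0.5 * d2psi + V(r) * psi          # H acting on the trial function
    return np.sum((H_psi - lam * psi)**2) / np.sum(psi**2)

rng = np.random.default_rng(0)
p0 = np.concatenate([rng.normal(size=3 * m), [1.0]])
res = minimize(error, p0, method="BFGS")
print("estimated eigenvalue:", unpack(res.x)[3])
```

The normalization by the squared norm of the trial function keeps the error scale-invariant, so the minimizer cannot trivially shrink Ψ_t to zero.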
It can be seen from Tables III and IV that, for all four potentials, the mass of $P_c(4312)$ favours the quantum number assignment $J^P = \frac{1}{2}^-$ over $J^P = \frac{3}{2}^-$. On the other hand, the masses of $P_c(4440)$ and $P_c(4457)$ favour the assignment $J^P = \frac{3}{2}^-$ over $J^P = \frac{1}{2}^-$. All the potentials reproduced the experimental data rather well. The ANN method for solving differential and eigenvalue equations includes a trial function [50]. A trial function can be written as a feed-forward neural network with adjustable parameters (weights and biases), and the eigenvalue is refined towards the existing solutions by training the neural network. As mentioned in Ref. [27], if a trial wave function yields for a multiquark configuration an energy of, say, E = 100 MeV below the lowest threshold, it can represent the exact solution of the system. Conversely, an energy E = 100 MeV above one of the thresholds casts doubt on the wave function and on the model describing the system. The relevant thresholds were calculated in Ref. [26] as 4329 MeV for $\bar{D}\Sigma_c$ with $I(J^P) = \frac{1}{2}(\frac{1}{2})^-$ and 4483 MeV for $\bar{D}^*\Sigma_c$ with $I(J^P) = \frac{1}{2}(\frac{3}{2})^-$. Our mass values lie on the order of 50 MeV below the relevant thresholds, which means that the trial function of this work represents the 5-body structure quite well. The LHCb result could be an important sign for understanding heavy quark spin symmetry (HQSS). In the limit where the masses of the heavy quarks are taken to infinity, the spin of the heavy quark decouples from the dynamics, meaning that the strong interactions in the system are independent of the heavy quark spin. This implies that states differing only in the spin of the heavy quark, i.e. states in which the rest of the system has the same total angular momentum, should be degenerate. This is also the case for single heavy baryons like $\Sigma_c^*$ and $\Sigma_b^*$ and is called the heavy quark spin (HQS) multiplet structure. It is shown in Refs. [39,40] that the HQS multiplet structure predicts a state near the $\bar{D}^*\Sigma_c^*$ threshold with $J^P = 5/2^-$; this threshold was calculated in Ref. [26] as 4562 MeV. Our mass estimation for this state is shown in Table V. It should also be noted that a $5/2^-$ $\bar{D}^*\Sigma_c^*$ state does not couple to $J/\psi p$ in S-wave; therefore it is not expected to produce a peak in the LHCb data [40]. IV. SUMMARY Inspired by the recent observation of the hidden-charm pentaquark states, we solved the 5-body Schrödinger equation in the nonrelativistic quark model framework, using the potentials proposed in [46]. These potentials reproduced the experimental ground-state masses of some mesons and baryons, which serves as a demonstration of the method. We used the ANN method to obtain the solution of the 5-body Schrödinger equation. The main advantage of using an ANN for such few-body Schrödinger equations is that there is no need for coordinate transformations. We gave predictions of quantum numbers for the newly observed pentaquarks. The quantum number assignments for $P_c(4312)$, $P_c(4440)$, and $P_c(4457)$ in this work are in agreement with [33,34,36,38]. Since the spin and parity quantum numbers are not determined in the LHCb report, other $J^P$ assignments cannot be excluded. For example, the $P_c(4440)$ and $P_c(4457)$ states can be explained as $5/2^+$ and $5/2^-$ $\bar{D}^*\Sigma_c$ states [51]. A partial wave analysis of the experimental data is critical to elucidate the internal structures of these exotic states.
We also calculated the mass of the $5/2^-$ $\bar{D}^*\Sigma_c^*$ state, which is a prediction of the heavy quark spin multiplet structure. The average of the four potential estimates lies roughly 95 MeV below the relevant threshold. Searching for this missing HQS partner, or partners, is an important task for future experiments.
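As a quick numerical cross-check of the threshold distances quoted in the introduction, the arithmetic can be run in a few lines. The hadron masses below are approximate PDG values inserted for the example (assumed inputs, not taken from the paper); small deviations from the quoted figures reflect rounding of the input masses.

```python
# Distances of the Pc states from the nearby two-hadron thresholds (MeV).
masses = {"Sigma_c+": 2452.9, "Sigma_c++": 2454.0,
          "Dbar0": 1864.8, "D-": 1869.7, "Dbar*0": 2006.9, "D*-": 2010.3}

pc = {"Pc(4312)": 4311.9, "Pc(4440)": 4440.3, "Pc(4457)": 4457.3}

thresholds = {
    "Sigma_c+ Dbar0":  masses["Sigma_c+"] + masses["Dbar0"],
    "Sigma_c++ D-":    masses["Sigma_c++"] + masses["D-"],
    "Sigma_c+ Dbar*0": masses["Sigma_c+"] + masses["Dbar*0"],
    "Sigma_c++ D*-":   masses["Sigma_c++"] + masses["D*-"],
}

for state, m in pc.items():
    for channel, thr in thresholds.items():
        if 0 < thr - m < 30:   # report only nearby thresholds the state sits below
            print(f"{state} lies {thr - m:5.1f} MeV below {channel} ({thr:.1f} MeV)")
```

Running this reproduces the pattern in the text: roughly 6 and 12 MeV for $P_c(4312)$, 20 and 24 MeV for $P_c(4440)$, and 3 and 7 MeV for $P_c(4457)$.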
3,598
2019-04-22T00:00:00.000
[ "Physics" ]
The Economic Aspects of the Ferry Operator Activity - Selected Issues The ferry industry is well developed within the Baltic Sea. Ferry operation is a type of liner shipping where passengers and cargo form one specific market. The costs incurred by ferry companies are typical for regular shipping, whereas revenue is generated in two different segments. The aim of the article is to analyse the structure of costs and income of ferry carriers. Two prime Baltic companies operating cruise ferries have been analysed. Introduction Ferry shipping is a type of liner trade where passengers and cargo form one market. Ferries can be defined as ships with passenger accommodation and space for wheeled cargo. Ro-ro technology is used for loading/discharging vehicles. Ferries sail on regular routes (Market 2012: 12, 16). Some ferry companies operate pure ro-ros as well. Stopford states that ships operating in ferry markets share many common characteristics, such as car decks, accommodation for passengers, and entertainment facilities, but there are so many combinations of these basic characteristics that the ferry fleet is a diverse phenomenon (Stopford 2009: 501). The main types of vessels used in the ferry industry to be considered in this article are cruise ferries and ro-paxes. The purpose of this paper is to present the structure of costs and revenue of ferry companies operating within the Baltic Sea and to analyse these two items influencing the financial performance of the company. The research question focuses on the comparison of the two main types of ships involved in the industry within the Baltic area - cruise ferries and ro-paxes. The paper is organized as follows: Section 1 elaborates on the ferry market in the Baltic Sea. Section 2 gives an analysis of the running costs of the ships. Section 3 presents the revenue of ferry companies. Conclusions are addressed in the last section. The methodology used in this study is based on the analysis of statistics and companies' financial reports as well as deductive reasoning. Baltic Sea ferry market The Baltic Sea is one of the prime ferry markets. In 2014 the total Baltic traffic amounted to 240 million passengers, 93.7 million cars and 11.8 million trailers. These figures include all main and local routes between islands in the whole region. As regards the international and main domestic services (a few Danish and Swedish domestic routes), the ferries carried nearly 49 million passengers, 9.5 million cars and 3.5 million cargo units (Market 2015: 25, 180-198). Pure ro-ro cargo traffic is not included. In relation to the international market, ferry shipping within the Baltic is consolidated. Seventeen ferry companies work in this area, operating about 120 ferries of different types (cruise-trailer, ro-pax, cargo, high speed). For cargo, the most convenient vessels are ro-paxes and ro-ro ferries with space for drivers. They operate in services where lorries and trailers dominate. Cruise-trailer ships also have a large capacity for wheeled cargo, but they are deployed on markets with strong passenger demand, for example Finland - Estonia, Sweden - Finland, Norway - Germany. The biggest concentration of the ferry industry occurs in the western Baltic and the Danish Straits. This market serves 60% of the total number of passengers and 70% of the cargo carried within the Baltic. Cargo dominates in the Sweden - Germany and Sweden - Denmark markets.
This region is served by 49 vessels, operated primarily by Stena Line, TT-Line, DFDS Seaways and Color Line. The second region is the eastern Baltic, with services from Finland to Sweden and Estonia. This market has a 32% share in passenger traffic and 15% in cargo turnover. The number of ferries employed in this area amounts to 31. The main operators are Tallink Group (Silja Line and Tallink) and Viking Line. The major services are Stockholm - Helsinki, Stockholm - Turku and Helsinki - Tallinn. Passenger traffic and package tours are very popular in this region, so cruise-trailer ferries prevail. The southern Baltic contains services from Sweden to Poland, Lithuania and Latvia. The market has an 8% share in passenger traffic and 15% in cargo. The traffic between Poland and Sweden dominates in this area and is growing every year. The leading companies in this region are Unity Line, Stena Line and TT-Line. Ferry services are generally operated by large companies. These companies compete in servicing the same routes or lines to the same destinations (Stopford 2009: 501). Thus, price policy and quality of services are the basic issues, as are the levels of costs and income influencing financial performance. Costs of the ferry company Ferry operators incur costs generated in several areas of activity. Generally, the costs of running a ferry company are a combination of three areas: costs related to fleet operation, costs of company maintenance, and costs of marketing and land services designed for passengers and cargo owners. The expenses generated by ferry operation are fundamental, as they determine the financial performance of the business. The methodology of cost classification in the shipping industry is not unified, a fact which makes it difficult to analyse and compare different categories of costs. In general, costs can be classified into six categories (Stopford 2009: 221):
- operating costs constituting the ship-running expenses, such as crew, stores and maintenance, incurred with the ship trading,
- voyage costs including fuel expenses, port charges and canal dues: these are the specific voyage-related costs,
- periodic maintenance costs incurred to keep the ship seaworthy, such as the costs of surveys, dry docking, repairs and insurance,
- cargo handling costs including loading and discharging operations and stevedoring expenses,
- capital costs resulting from the way of financing the ship, including depreciation, interest and capital payments, etc.,
- other costs, i.e. administrative costs.
The structure and proportions of costs depend on several factors, such as ship type and size, age, flag, and mode of operation. The costs in ferry shipping are unique, as they comprise expenses typical for freight shipping and for passenger transportation by sea. The structure of ferry ship costs is as follows (Kizielewicz, Urbanyi-Popiołek 2015: 182-183):
- operating costs - crew costs, cost of goods sold, other operating costs (expenses for water, sanitary stores, collection of waste),
- voyage costs - bunker, port fees,
- handling costs - loading and discharging of wheeled cargo, private cars and buses, embarkation and disembarkation of passengers, services at ferry terminals,
- costs of maintenance - repairs, dry docking, surveys, insurance,
- capital costs - depreciation, charter hire, interest, capital payments on debt finance.
The operating costs constitute 50-65% of the total cruise ferry costs. The largest are the expenses for the purchase of goods for shops and restaurants.
These expenses amount to approximately 30% of the total cruise ferry costs, and to about 50% of costs on ships plying routes in the eastern Baltic and connections with Norway, where duty-free sales are provided. They also comprise the expenses for passenger servicing, such as entertainment, spa & wellness and gambling. Thus, the above costs are the expenses associated with direct passenger services on board. The crew costs include all the charges incurred in relation to crewing the ferry, such as salaries and wages, social insurance, pensions and victuals. Crew costs of cruise ferries constitute 20-25% of all operating costs. The number of hotel staff members increases the crew expenses. This item is lower on ro-paxes, as the crew is smaller. The other factor is the wage level. Some of the Baltic ferries are registered in national registers of shipping: for example, Viking Line ships serve under the Swedish and Finnish flags, Tallink Group ferries are registered under the Estonian and Finnish flags, whereas Color Line sails under the Norwegian one. The terms of employment under a national ensign increase the crew costs. Other operating costs comprise the expenses for hotel stores, the engine and deck departments, water supply and waste disposal. This group constitutes 10-15% of the total operating costs. The voyage costs, the second group, include the expenses for the purchase of fuels and port fees. Fuel costs depend primarily on fuel consumption and marine fuel prices. The fuel expenses are estimated at 12-20% of the running costs. Some carriers have implemented slow steaming (reduced vessel speeds) in recent years so as to improve fuel efficiency. On the other hand, the Baltic Sea has become a SECA area, hence operators are forced to use low-sulphur fuels (or implement alternative solutions such as scrubbers, LNG or methanol). The port dues include various fees levied against the ferries, such as tonnage dues, wharfage, quay dues, light dues, dockage and passenger dues. These expenses are charged according to port tariffs. Ferries calling at ports regularly are charged lower fees, depending on the number of calls over a definite period of time. Other items in this cost group are pilotage, towage and mooring. Ferries are exempt from these obligatory services when they are fitted with thrusters and their captains have a pilot's licence. The handling costs are composed of two items. First, there are the expenses for loading and discharging lorries, trailers and other wheeled cargo, as well as cargo claims. Then, there are the expenses incurred in embarking and disembarking passengers and those comprising all terminal operations. It is estimated that total port expenses amount to 7-15% of cruise ferry operating costs. Yet another cost group refers to ship maintenance and comprises protection and indemnity insurance (P&I), hull and machinery insurance (H&M), as well as periodic and routine maintenance expenses. The latter covers the costs of dry docking and the special surveys that determine the ship's seaworthiness. Further, this cost group includes maintaining the main engine and auxiliary equipment, etc. It is estimated that the above-mentioned expenses amount to 15% of the cruise ferry running costs. The last cost group is the capital expenses, such as depreciation (depending on the ferry value and the method of writing off), payment of interest and repayment of loans. They account for 5-10% of the overall cost.
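As a toy illustration of how such a breakdown can be assembled and checked, the snippet below sums a hypothetical annual cost structure and reports each group's share. The figures are invented to fall within the ranges quoted above; they are not taken from any operator's accounts.

```python
# Hypothetical annual cost breakdown for a cruise ferry (EUR per year).
cruise_ferry_costs = {
    "operating (goods, crew, stores)":   55.0e6,   # within the quoted 50-65%
    "voyage (bunker, port fees)":        15.0e6,   # within 12-20%
    "handling (cargo, terminals)":       10.0e6,   # within 7-15%
    "maintenance and insurance":         12.0e6,   # near the quoted ~15%
    "capital (depreciation, interest)":   8.0e6,   # within 5-10%
}

total = sum(cruise_ferry_costs.values())
for group, value in cruise_ferry_costs.items():
    print(f"{group:36s} {value / total:6.1%}")
```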
Figure 1 presents the typical structure of cruise ferry costs. The high share of goods sold is typical for the Baltic cruise ferries. On the lines where duty- and tax-free sales on board exist, such as Sweden - Finland and Sweden - Estonia via the Aland Islands, and on services outside the European Union, mini cruises and various packages are common. In contrast, on ro-pax vessels the expenses for the purchase of goods amount to 12-20% of total costs. Table 2 presents the cost structure of two leading companies operating cruise ferries. The choice of Tallink - Silja Line and Viking Line was deliberate. Firstly, both carriers operate only cruise-trailer ferries and have no ro-pax or cargo ferries in their fleets, so the data are not distorted by the results of different types of ferry vessels in consolidated financial statements. Secondly, both operators apply a similar methodology of cost classification, so it is possible to compare the value and share of cost items. Other operators, e.g. Stena Line and DFDS, operate ro-paxes as well as pure ro-ros, and all their data are presented in aggregate. Furthermore, their cost items are calculated in different ways; e.g., DFDS includes fuel and port operations in operating costs. Analysing the data presented in Table 2, it is noticeable that the similar cost share of goods sold - nearly one third of costs - is related to the services on board. The higher level of crew costs at Viking Line results from the nationality of the crew members - Swedes and Finns are employed on Viking ships, whereas Tallink Group workers are mainly Estonian. Ro-paxes and car ferries with limited space for passengers present a different cost structure. The main expenses are fuel and crew costs, approximating 25% and 20% respectively. The cost of goods sold, as mentioned above, constitutes 12-20% of the total cost. Revenue of the ferry company Service sales are the main source of ferry operator revenue. Differently from other types of shipping, ferry operators get their income from two different areas - passengers and cargo. The revenue of the ferry company can be classified as follows:
- sales of tickets,
- sales of on-board services,
- other passenger revenue,
- sales of cargo transport,
- income from the charter of vessels.
Ticket sales are the first item of passenger revenue and include transport, sales of cabins and transport of private vehicles. The ferry operator uses passenger tariffs specifying which traveller expenses are covered by the ticket. The accommodation expenses depend on the cabin category. The passenger tariff also includes the price of transportation of cars, minibuses, caravans, motorbikes, etc. The above category also comprises sales of package trips and conferences on board. The prices of mini cruises represent integrated rates and include carriage, accommodation and sometimes also a stay in a hotel onshore. Sales of on-board services include sales in restaurants and shops as well as entertainment. This item is the prime revenue stream for companies operating services built around the cruise philosophy. It is estimated that on-board sales generate 30-55% of the total revenue of a cruise ferry. In the case of ro-paxes, this category is of minor importance in relation to cargo transportation and comprises 20-30% of the total earnings. Other passenger revenue contains the income which is not recognised as on-board sales, such as sales of packages by tour operators, passenger transfers, marketing, sales of hotel accommodation onshore, etc.
The second important item is cargo segment revenue. This income is a significant part of ro-pax revenue, where cargo transportation dominates (up to 70-80% of the total income). Lorries, trailers and other wheeled cargo are carried under freight tariffs. The basic rates are charged per length of the vehicle or per unit. Typically for liner service pricing, ferry companies charge freight additionals, like the Bunker Adjustment Factor (BAF), the Low Sulphur Surcharge (LSS), or the charge for vehicles containing dangerous goods. In practice, operators frequently use service contracts with major customers, offering discounts or other concessions. The income from charters is a minor item concerning operators chartering out free tonnage instead of selling it; e.g., Finnlines and Tallink - Silja Line charter out ferries which are not employed on the companies' route networks. The income of a cruise ferry operation differs in structure from that of a ro-pax. On-board services generate on average as much as 40% of the total revenue. Catering and entertainment capture the expenditures of passengers, which are the primary source of cruise ferry income. The cargo segment amounts on average to 15-20% of this ferry type's revenue. Table 3 presents the revenue of selected companies operating cruise fleets. Tallink - Silja Line's passenger service is the primary source of income. On-board sales generate more than half the revenue of the group. In total, this item together with ticket sales gives 85% of the yearly income. Freight has an 11% share due to the high cargo turnover on the Tallinn - Helsinki route. Viking Line presents consolidated data. The passenger segment generates 93% of the income and reflects the tourist orientation of the company's business. The carrier offers low ticket prices and concentrates on sales in shops and restaurants. Taking into consideration the level of costs related to the purchase of goods for on-board services, one can assume that the structure of the operator's income from the passenger segment is the same as for Tallink Group. Summary Costs and revenue are variables significantly influencing the financial results and performance of the ferry industry. The position of a ferry company is inflexible due to the operation of the vessels on fixed routes according to sailing lists. The carrier incurs costs irrespective of the utilization of a ferry's passenger and cargo capacity. The majority of costs in the ferry business should be regarded as fixed items. From the above analysis we may draw the following conclusions. For companies operating cruise ferries, on-board services are fundamental; for those operating ro-paxes, the cargo segment is obviously essential. The management of costs and revenue is the key issue for the financial performance of a ferry operator.
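As a coda to the revenue discussion, the liner-style freight pricing described above (a base rate per metre of vehicle length plus the percentage additionals, BAF and LSS, and a flat dangerous-goods charge) reduces to a few lines of arithmetic. All rates below are invented placeholders, not any operator's actual tariff.

```python
def freight_charge(length_m, base_rate_per_m=25.0, baf=0.12, lss=0.05,
                   dangerous_goods=False, dg_surcharge=80.0):
    """Freight for one wheeled cargo unit: per-metre base rate, percentage
    additionals (BAF, LSS), and a flat dangerous-goods charge (all EUR)."""
    base = length_m * base_rate_per_m
    charge = base * (1 + baf + lss)
    if dangerous_goods:
        charge += dg_surcharge
    return charge

# A 16.5 m articulated lorry carrying dangerous goods:
print(f"{freight_charge(16.5, dangerous_goods=True):.2f} EUR")
```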
3,684.8
2015-01-01T00:00:00.000
[ "Economics" ]
Early circulating strain of SARS-CoV-2 causes severe pneumonia distinct from that caused by variants of concern To analyze the molecular pathogenesis of SARS-CoV-2, a small animal model such as the mouse is needed: human ACE2 (hACE2), the receptor of SARS-CoV-2, needs to be expressed in the respiratory tract of mice. We conferred SARS-CoV-2 susceptibility on mice by using an adenoviral vector expressing hACE2 driven by an EF1α promoter with a leftward orientation. In this model, severe pneumonia resembling human COVID-19 was observed in SARS-CoV-2-infected mice, which was confirmed by dramatic infiltration of inflammatory cells in the lung with efficient viral replication. An early circulating strain of SARS-CoV-2 caused the most severe weight loss when compared to SARS-CoV-2 variants of concern, although histopathological findings, viral replication, and cytokine expression characteristics were comparable. We found that lungs infected with an early circulating strain showed a distinct proteome, characterized by elevated complement activation and blood coagulation, which were mild in the other variants and can contribute to disease severity. Unraveling the specificity of early circulating SARS-CoV-2 strains is important in elucidating the origin of the pandemic. In a previously reported design of this adenoviral vector, when the expression unit was inserted in the rightward orientation, a viral pIX gene located downstream of the inserted unit was co-expressed with the transgene, and a fusion protein containing the N-terminal part of the transgene product was expressed. These pIX products may be one of the main causes of adenovirus-induced immune responses. Interestingly, the EF1α promoter did not activate the pIX promoter in this adenoviral vector 17. The EF1α promoter with a leftward orientation resulted in a reduced antiviral response and maintained prolonged transgene expression 17. Thus, we generated an adenoviral vector expressing hACE2 under the EF1α promoter with a leftward orientation (rAd5 pEF1α-hACE2-L) (Fig. 1a). We first examined whether intranasal administration of rAd5 pEF1α-hACE2-L affects the body weight of BALB/c mice, and found that administration at 5 × 10^7 or 2.5 × 10^8 focus-forming units (FFU) per mouse did not cause any decrease in body weight (Fig. 1b). Next, BALB/c mice were intranasally administered rAd5 pEF1α-hACE2-L at 1 × 10^7, 5 × 10^7 or 2.5 × 10^8 FFU/animal. Five days after administration, the mice were further intranasally inoculated with an early circulating strain of SARS-CoV-2 isolated in Japan (originating from Wuhan; Wu-2020 strain) at 1 × 10^5 plaque-forming units (PFU) per mouse (Fig. 1c). As a result, the 5 × 10^7 FFU rAd5/mouse group showed a marked reduction in body weight, which peaked at 5-6 days post-infection (dpi) with SARS-CoV-2 Wu-2020. Although the 2.5 × 10^8 FFU rAd5/mouse group showed weight loss until 4 dpi of SARS-CoV-2 infection, this group showed recovery of body weight after 5 dpi. The 1 × 10^7 FFU rAd5/mouse group showed a slight reduction of weight followed by rapid recovery. Therefore, we determined that 5 × 10^7 FFU rAd5/animal was the most suitable dose for assessing disease severity. Variants of concern (VOCs) have emerged showing evidence of altered virus characteristics 18.
VOCs have been associated with increased transmissibility, evasion of immunity from infection and vaccination, and reduced susceptibility to antibody therapies [19][20][21]. We compared the growth of the strains in Vero E6/TMPRSS2 cells at a multiplicity of infection (MOI) of 0.001 and found that the growth kinetics were almost comparable among strains (Fig. 2a). To analyze their replication in mouse lungs, we inoculated rAd5 pEF1α-hACE2-L-administered BALB/c mice with SARS-CoV-2 via the intranasal route. Viral replication in the lungs was examined by quantitative real-time RT-PCR (qRT-PCR) for detection of the SARS-CoV-2 genome, and by plaque assay using Vero E6/TMPRSS2 cells. A clear increase in viral replication was observed, with a peak at 2 dpi for all strains (Fig. 2b and 2c), followed by a gradual decrease towards 7 dpi. The B.1.351 strain showed reduced genome copy numbers (significantly at 7 dpi), but the other strains showed comparable genome copy numbers throughout the course of infection. Macroscopically, in Wu-2020- and B.1.1.7-strain-infected lungs, multiple dark red and brown lesions appeared on the surfaces of all lung lobes from 4 to 7 dpi (Fig. 2d, arrows). In contrast, the discolored lesions were restricted to the upper left lobe in P.1- and B.1.351-strain-infected lungs at 7 dpi. The loss of body weight in mice did not correspond to the extent of viral replication and lung lesions (Fig. 2e). The B.1.351 strain, an exception, showed less viral replication and fewer lesions in the lungs, so the degree of weight loss was lower throughout the course of infection. Although viral replication was similar for the B.1.1.7 and P.1 strains, the appearance of lung lesions was more pronounced for B.1.1.7; however, weight loss was relatively more pronounced for P.1. Mice infected with all four strains continued to lose weight until 5 dpi, but mice infected with the three VOCs began to recover thereafter. Although viral replication and the extent of lung lesions were comparable in B.1.1.7- and Wu-2020-infected mice, mice infected with Wu-2020 did not regain weight, resulting in significant weight loss at 7 dpi compared with mice infected with the VOCs (Fig. 2e). Histopathological analysis demonstrated that severe pneumonia with thickened alveolar walls, inflammatory cell infiltration, hemorrhaging, and thrombus formation was remarkable in the dark red and brown lesions of the lung (Fig. 3a, hematoxylin and eosin staining; HE). Even in areas where discoloration was not macroscopically obvious, thickened alveolar walls and mild infiltration of inflammatory cells were observed (Supplementary Fig. 1, HE). Immunohistochemistry using an antibody against the SARS-CoV-2 nucleocapsid (N) protein showed that the viral antigen was present in the lung epithelial cells (Fig. 3a and Supplementary Fig. 1). The viral antigen was stained most prominently at 2 dpi. Few viral antigens were found in lesions with remarkable cellular infiltration. Next, we examined cytokine expression in the lung using a multiplex bead array. Inflammatory cytokines, such as IL-6, were significantly elevated for all strains compared to mock-infected animals (Fig. 3b). Furthermore, IL-1β, IFN-γ, IL-12, MIP-1β, MIP-2, LIF, KC, IL-10, MCP-1, M-CSF, G-CSF, and GM-CSF were significantly elevated, and VEGF was decreased, for at least one strain compared to mock-infected mice. In B.1.351-infected lungs, the expression of some cytokines, such as MIP-2, LIF, and MCP-1, was relatively low compared to the other strains. This correlates with the low viral replication and mild weight loss observed for this strain.
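The genome-copy readout used in the replication experiments above (qRT-PCR on 50 ng of lung RNA, reported as copies per µg of total RNA; see Methods) amounts to a standard-curve conversion. A hedged sketch follows; the slope and intercept are invented placeholders, not fitted values from the paper.

```python
# Converting a qRT-PCR Ct value to SARS-CoV-2 genome copies per ug total RNA.
SLOPE, INTERCEPT = -3.32, 38.0   # assumed standard-curve fit: Ct = slope*log10(copies) + intercept

def ct_to_copies_per_ug(ct, rna_input_ug=0.05):
    """Copies per 1 ug total RNA, given the RNA mass per reaction (50 ng here)."""
    copies_per_reaction = 10 ** ((ct - INTERCEPT) / SLOPE)
    return copies_per_reaction / rna_input_ug

print(f"{ct_to_copies_per_ug(24.0):.3e} copies/ug")
```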
For infection by the other strains, there was no clear evidence of a relationship between cytokine levels and disease severity. The clinical severity of COVID-19 is not always associated with increased levels of pro-inflammatory cytokines and other inflammation markers 25. To survey the molecules associated with disease severity, tandem mass tag (TMT) peptide labeling combined with mass spectrometry (MS) quantitative proteomics was performed on mouse lung at 7 dpi with SARS-CoV-2. The TMT-based quantitative proteomic method was previously applied for comparison of protein levels across multiple organs in human COVID-19 autopsy cases 26. TMTpro 12-plex MS revealed distinct lung proteomes associated with infection by the SARS-CoV-2 strains (Fig. 4a and Supplementary Fig. 2a). Gene ontology (GO) enrichment analysis of significantly (p<0.05) up- and down-regulated (2-fold) proteins showed that the proteome of Wu-2020-infected lung was distinct from those of the other variants (Fig. 4b). Immune-response-related factors, such as regulation of complement activation and immune effector process, as well as platelet degranulation and regulation of blood coagulation, were enriched among the proteins that changed significantly in the proteome of Wu-2020-infected lungs (Fig. 4b). In contrast, the proteomes of VOC-infected lungs were associated with structural organization, such as the development of extracellular structures and changes in matrix organization, as well as nuclear DNA replication. Up-regulation of proteins associated with complement activation, e.g., the C3a anaphylatoxin chemotactic receptor, complement components and complement factors, was prominent in Wu-2020-infected lungs (Fig. 4c). The complement system has been shown to be involved in the severity of human COVID-19 27,28. Up-regulation of proteins involved in platelet degranulation and blood coagulation, e.g., kininogen, fibronectin, plasminogen activator inhibitor-1, coagulation factor XII and plasma kallikrein, was also remarkable in Wu-2020-infected lung tissue (Fig. 4d and f). These factors are considered to work in concert and contribute to COVID-19 pneumonia via dysregulation of thrombus formation. Up-regulation of minichromosome maintenance complex components (MCM) 2, 3, 4, 5, 6, and 7, which are related to nuclear DNA replication, was observed in SARS-CoV-2 infection regardless of strain (Fig. 4g). MCM2-7 act as replicative DNA helicases that unwind the DNA duplex template as a hetero-hexameric complex 29. The involvement of the MCM family in immune responses against viral infection is still poorly characterized. However, MCM up-regulation is correlated with proliferation and maintenance of leukocytes 30,31, suggesting that the MCM family is involved in the activation of infiltrating cells in the COVID-19 pneumonia observed in lungs infected with all of the virus strains. Structural-organization-related proteins, such as collagens, were down-regulated in VOC-infected lungs (Fig. 4e). Collagen deposition is a hallmark of lung fibrosis 32 and has been confirmed in the lungs of COVID-19 patients 33. It is considered that collagen deposition may be correlated with mild disease onset, based on the recovery of body weight in VOC-infected mice. Pathway analysis based on WikiPathways (https://www.wikipathways.org/index.php/WikiPathways) supports the enrichment of complement and coagulation cascades, as well as the blood-clotting cascade, in Wu-2020-infected lungs (Supplementary Fig. 4).
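For readers reproducing the Fig. 4a-style selection, the thresholds quoted above (p < 0.05 combined with a log2 fold change of at least 1 in either direction) amount to a simple filter. The sketch below assumes a per-protein table with invented column names; it is not the authors' pipeline.

```python
# Flagging significantly up- and down-regulated proteins in a fold-change table.
import numpy as np
import pandas as pd

def classify_proteins(df, lfc_col="log2_fc", p_col="p_value"):
    """Label proteins 'up' (log2 fc >= 1), 'down' (log2 fc <= -1), or 'ns',
    requiring p < 0.05, i.e. the thresholds quoted in the text."""
    sig = df[p_col] < 0.05
    return df.assign(
        regulation=np.select(
            [sig & (df[lfc_col] >= 1), sig & (df[lfc_col] <= -1)],
            ["up", "down"], default="ns"))

# Demonstration on random data:
rng = np.random.default_rng(1)
demo = pd.DataFrame({"log2_fc": rng.normal(0, 1.2, 1000),
                     "p_value": rng.uniform(0, 1, 1000)})
print(classify_proteins(demo)["regulation"].value_counts())
```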
We established a system to recapitulate COVID-19-like pneumonia in mice infected with SARS-CoV-2 after inducing hACE2 with rAd5 pEF1α-hACE2-L. When rAd5 pEF1α-hACE2-L was used, there were few abnormalities in protein expression (Supplementary Fig. 2a), suggesting that this adenoviral vector has low cytotoxicity. Mice infected with the Wu-2020 strain developed diffuse pneumonia. Histopathologically, thickened alveolar walls, hemorrhaging, and infiltration of inflammatory cells were prominent. The SARS-CoV-2 N antigen was found in alveolar epithelial cells, not in the lesions infiltrated by inflammatory cells; rather, the antigen was concentrated in areas that retained relatively normal alveolar structure. These findings are consistent with human COVID-19 autopsy cases from early 2020 34,35. SARS-CoV-2 Wu-2020, an early circulating strain, was shown to be highly pathogenic in mouse lung. The Wu-2020, B.1.1.7 and P.1 strains have comparable replication potentials in both Vero E6/TMPRSS2 cells and mouse lung (Fig. 2a-c). In addition, these strains induced a marked cytokine response (Fig. 3b), and infection led to similar histopathological findings (Fig. 3a and Supplementary Fig. 1). However, there was a clear difference in the lung proteome (Fig. 4b) between the Wu-2020 strain, which induced prolonged weight loss, and the other strains, which induced weight loss followed by recovery (Fig. 2e). The findings showed that proteins involved in the complement system were elevated most markedly in cases of Wu-2020 infection (Fig. 4b and c). The release of pro-inflammatory complement peptides helps to recruit leukocytes to the lung and aids in the assembly of the terminal complex that damages vascular endothelium 27,28,36. Increased levels of complement fragments are related to disease severity in COVID-19 patients 27,37, which suggests that they are well suited for use as a marker of serious injury in Wu-2020-infected lung. The altered blood coagulation system, manifested by the up-regulation of thrombosis-associated proteins such as tissue factor (TF), coagulation factor XII and plasma kallikrein (Fig. 4f), can also be involved in disease severity in Wu-2020-infected lungs. TF initiates the extrinsic coagulation pathway to form thrombin in response to tissue injury and inflammation 38-41. Coagulation factor XII is activated by polyphosphates released from platelets and initiates an intrinsic coagulation cascade [42][43][44], which occurs with disease onset in acute respiratory distress syndrome 45. Factor XII also activates plasma kallikrein, thereby increasing the formation of the pro-inflammatory peptide bradykinin 46. Simultaneously, several inhibitory factors for complement activation, such as complement factor H and vitronectin (Fig. 4c), and for coagulation, such as plasminogen, which activates fibrinolysis (Fig. 4d), were up-regulated in Wu-2020-infected lungs; these factors may play a role in the prevention of excessive tissue injury. These results indicate that molecular events in pneumonia lesions are altered in ways that cannot be detected by morphological observation. Even in these cases, the lung damage associated with elevated proteins related to complement activation, platelet degranulation and blood coagulation may result in the manifestation of severe symptoms, such as the unrecovered weight loss after Wu-2020 infection. In this study, levels of pro-inflammatory cytokines could not be used as markers of disease severity.
Rather, the findings showed that complement-related and blood coagulation factors may be key factors associated with COVID-19 severity. In addition, we observed 35-fold and 9-fold up-regulation of metallothionein-2 (Mt2) and Mt1, respectively, in Wu-2020-infected lung (Supplementary Fig. 3). Mt1/2, which are potently induced by heavy metals, other sources of oxidative stress, and cytokines, facilitate metal binding and detoxification 47. In response to GM-CSF, macrophages express Mts (Mt2 rather than Mt1), which are involved in antimicrobial responses and contribute to the production of reactive oxygen species 48. We observed a correlation between disease severity and Mt1/2 amounts, suggesting that Mt1/2 may act as markers for COVID-19 severity. Additionally, we identified other potential biomarkers that may be correlated with disease severity, including tenascin, membrane-spanning 4-domains subfamily A member 6C and stefin-1/3 (Supplementary Fig. 3), whose associations with COVID-19 have not been studied to date. Furthermore, the abundance of the SARS-CoV-2 N protein in lungs infected with Wu-2020 was markedly higher than that in the other strains (Supplementary Fig. 3). The SARS-CoV-2 N protein has been shown to promote NLRP3 inflammasome activation 49, and it is possible that SARS-CoV-2 N protein remaining in the lung may stimulate excessive inflammation. The amount of residual SARS-CoV-2 N protein in lesions may also be indicative of lung injury. It is possible that a comparison of SARS-CoV-2 strains that exhibit different pathogenicity may reveal the existence of novel biomarkers for disease severity. Our findings revealed that there is a difference in the manifestation of symptoms associated with SARS-CoV-2 strains, and that an early isolated strain was highly pathogenic in the lung. The major difference between our findings and previous studies 55,56 is that the respiratory-specific pathogenesis of SARS-CoV-2 was recapitulated in the mouse model using rAd5 pEF1α-hACE2-L in this study. SARS-CoV-2 infection involves extra-respiratory manifestations, including cardiac, gastrointestinal, hepatic, renal, and neurological symptoms. Disease severity in K18 mice infected with the B.1.1.7 and B.1.351 strains may be due to these extra-respiratory symptoms, as shown by the presence of neurological pathogenesis 57,58. Comparative studies of human autopsy cases for each variant have not yet been performed. Some autopsy cases of patients infected with the B.1.1.7 and P.1 strains revealed no significant morphological or histopathological differences compared to early circulating strains 59,60. However, our findings showed that the elevation of several potent biomarkers may affect lung pathogenesis. Understanding the overall differences in an organ's proteome can help to unravel the pathology of emerging variants. With the accumulation of autopsy cases infected with VOCs, these changes in pathology will be revealed. In conclusion, we demonstrated that an early circulating SARS-CoV-2 strain specifically induces the manifestation of severe symptoms and is associated with dramatically altered host responses. How pathogenicity was transformed from the initial strains to that observed in the VOCs needs to be elucidated. Detailed analyses of the pathogenicity of the early circulating strains will lead to a better understanding of the origin of the pandemic. Declarations Y.M., F.Y. and M.K. conceived, designed, coordinated and performed the study, and contributed to data interpretation, data presentation and manuscript writing.
T.S., T.M., K.Y. and N.Y. assisted with the animal experiments. N.Y. assisted with the qRT-PCR experiments for quantitation of the viral genome. A.T. assisted with the generation of the adenoviral vector. Y.M., A.E., K.Y. and Y.S. performed the proteome analyses and contributed to data presentation. Ethics statement All experiments using mice were approved by the Tokyo Metropolitan Institute of Medical Science Animal Experiment Committee and were performed in accordance with the animal experimentation guidelines of the Tokyo Metropolitan Institute of Medical Science. Cells and viruses Vero E6/TMPRSS2 cells, which constitutively express human TMPRSS2 24, and human embryonic kidney 293 (HEK293) cells were maintained in Dulbecco's modified Eagle's medium (DMEM) supplemented with 10% fetal bovine serum (FBS), penicillin and streptomycin, and G-418 (1 mg/mL, only for Vero E6/TMPRSS2 cells). All cells were cultured at 37°C in 5% CO2. Generation of rAd5 pEF1α-hACE2-L An E1- and E3-deleted adenovirus derived from human adenovirus type 5, encoding expression units with a leftward orientation, was used in this study, as described previously 17,61. The hACE2 gene was cloned in the antisense orientation into pAxCAwtit2, the adenoviral cosmid vector that contains the left end of adenovirus type 5 with the E1 region substituted by an expression cassette containing the EF1α promoter and a multicloning site, using an Adenovirus Dual Expression Kit (Takara Bio, Tokyo, Japan). The rAd5 pEF1α-hACE2-L was generated by transfecting pAxCAwtit2 encoding hACE2 into HEK293 cells using the CalPhos Mammalian Transfection Kit (Takara Bio). The rAd5 pEF1α-hACE2-L was purified using two rounds of cesium chloride gradient centrifugation, and the titers of the concentrated and purified virus stocks were determined using HEK293 cells and an Adeno-X Rapid Titer Kit (Takara Bio) according to the manufacturer's instructions. Plaque formation assay Vero E6/TMPRSS2 cells in six-well plates were washed with DMEM-GlutaMAX, inoculated with serially diluted SARS-CoV-2, and incubated at 37°C for 60 min with rocking every 15 min. After removal of the virus, cells were washed with DMEM-GlutaMAX and overlaid with agarose medium. After incubation of the cells at 37°C for 2 days, the plaques were visualized by crystal violet staining and counted. Viral RNA quantification The left lung lobe from each mouse was homogenized in nine volumes of Leibovitz's L-15 medium (Thermo Fisher Scientific, Waltham, MA, USA) using a Multi-Bead Shocker (Yasui Kikai, Osaka, Japan). Total RNA samples were extracted from 50 μL of the supernatant of lung homogenates using Isogen LS (Nippon Gene, Tokyo, Japan) according to the manufacturer's instructions. Fifty nanograms of total RNA was used for quantitating the SARS-CoV-2 genome. Viral RNA was quantified using a one-step reverse transcription qRT-PCR, as described previously 62. Viral loads were calculated as copies per 1 μg of total RNA. Immunohistochemistry The mouse lungs were fixed in 10% neutral buffered formalin, embedded in paraffin, sectioned at a thickness of 4 μm, stained with HE, and subjected to routine histological examination. Paraffin block sections were also used for staining of the SARS-CoV-2 N protein. Antigen retrieval was performed by autoclaving sections in 10 mM citrate buffer (pH 6.0) for 10 min, and then the sections were immersed in 0.3% hydrogen peroxide in methanol at room temperature for 30 min to inactivate endogenous peroxidase.
The sections were blocked with BlockAce (DS Pharma Biomedical, Osaka, Japan) for 15 min and incubated overnight at 4°C with 2 mg/mL of rabbit anti-SARS-CoV-2 N protein monoclonal antibody [HL344] (GeneTex, Inc., CA, USA). Secondary labeling was performed by incubation at RT for 30 min with EnVision+ System-HRP labeled Polymer Anti-Rabbit (Dako Denmark A/S, Glostrup, Denmark), followed by color development with ImmPACT DAB Peroxidase Substrate (Vector Laboratories, Burlingame, CA, USA) at RT for 10 min. Nuclear staining was performed with hematoxylin solution. TMTpro 12-plex MS analysis Lysates extracted from left lung lobes were processed and digested using an EasyPep Mini MS Sample Prep kit (Thermo Fisher Scientific) according to the manufacturer's protocol. Three mouse lungs were pooled, and 25 µg of peptides from each sample were labeled with 0.25 mg of TMTpro mass tag labeling reagent (Thermo Fisher Scientific) according to the manufacturer's protocol. After TMT labeling, the 8 sample channels were combined in equal proportions, dried using a speed-vac, and resuspended in 0.1% TFA. Samples were fractionated into 8 fractions using a High pH Reversed-Phase Peptide Fractionation Kit (Thermo Fisher Scientific) according to the manufacturer's protocol. One microgram of peptide from each fraction was analyzed by LC-MS/MS on an EASY-nLC 1200-connected Orbitrap Fusion Lumos Tribrid mass spectrometer (Thermo Fisher Scientific) equipped with a FAIMS-Pro ion mobility interface (Thermo Fisher Scientific). Peptides were separated on an analytical column (C18, 1.6 µm particle size × 75 µm diameter × 250 mm, IonOpticks) using 4-hr gradients (0% to 28% acetonitrile over 240 min) at a constant flow of 300 nL/min. Peptide ionization was performed using a Nanospray Flex Ion Source (Thermo Fisher Scientific). FAIMS-Pro was set to three phases (−40, −60, and −80 CV), and a '1 sec cycle per phase' data-dependent acquisition method was used, in which the most intense ions in every 1 sec were selected for MS/MS fragmentation by HCD. MS raw files were analyzed using the Sequest HT search program in Proteome Discoverer 2.4 (Thermo Fisher Scientific). MS/MS spectra were searched against the SwissProt reviewed mouse reference proteome (UniProt). TMTpro-based protein quantification was performed using the Reporter Ions Quantifier node in Proteome Discoverer 2.4. Statistical analysis Statistical analyses were performed with Prism software (version 9.1.2; GraphPad, San Diego, CA, USA). Statistical significance was assigned when p values were <0.05. Inferential statistical analysis was performed by one-way analysis of variance (ANOVA), followed by Tukey's test. Figure 3 Histopathological analyses and cytokine levels of mouse lung infected with SARS-CoV-2 variants (a) Histopathologic findings with HE staining and detection of SARS-CoV-2 N protein in mouse lungs (left lobe) infected with SARS-CoV-2. (b) Left lung homogenates were used for measurement of multiplex cytokines and chemokines using the Bio-Plex suspension array system. Data represent means and SD, n=5. *p<0.05, **p<0.01. The colors of the asterisks indicate the following: black (vs mock) and gray (vs Wu-2020). Figure 4 Proteomic landscape of SARS-CoV-2-infected mouse lungs (a) Volcano plots for the mouse lung proteome of the indicated group compared with adenovirus-infected/SARS-CoV-2 non-infected (mock) mice. Up-regulated (Log2 ≥ 1) and down-regulated (Log2 ≤ −1) proteins and p value < 0.05 indicate the threshold lines. The numbers of up- and down-regulated proteins are 403 and 411, respectively.
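The inferential statistics named in the Methods (one-way ANOVA followed by Tukey's test) can be reproduced outside Prism. A small sketch with SciPy and statsmodels on invented cytokine measurements:

```python
# One-way ANOVA followed by Tukey's HSD, mirroring the stated analysis.
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(2)
groups = {"mock": rng.normal(10, 2, 5), "Wu-2020": rng.normal(30, 4, 5),
          "B.1.1.7": rng.normal(25, 4, 5), "B.1.351": rng.normal(15, 3, 5)}

f, p = stats.f_oneway(*groups.values())
print(f"one-way ANOVA: F = {f:.2f}, p = {p:.4f}")

values = np.concatenate(list(groups.values()))
labels = np.repeat(list(groups.keys()), [len(v) for v in groups.values()])
print(pairwise_tukeyhsd(values, labels, alpha=0.05))
```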
5,121.2
2022-01-26T00:00:00.000
[ "Biology", "Medicine" ]
Optimization of Thick-Walled Viscoelastic Hollow Polymer Cylinders by Artificial Heterogeneity Creation: Theoretical Aspects A theoretical solution of the problem of thick-walled shell optimization by varying the mechanical characteristics of the material over the thickness of the structure is proposed, taking into account its rheological properties. The optimization technique is considered by the example of a cylindrical shell made of high-density polyethylene with hydroxyapatite subjected to internal pressure. Radial heterogeneity can be created by centrifugation during the curing of the polymer mixed with the additive. The nonlinear Maxwell-Gurevich equation is used as the law describing polymer creep. The relationship of the change in the additive content along the radius r, at which the structure is equally stressed following the four classical criteria of fracture, is determined in an elastic formulation. Moreover, it is shown that a cylinder with equal stress at the beginning of the creep process ceases to be equally stressed during creep. Finally, an algorithm for defining the relationship of the additive mass content on the coordinate r, at which the structure is equally stressed at the end of the creep process, is proposed. The developed algorithm, implemented in the MATLAB software, allows modeling both equally stressed and equal-strength structures. Introduction Thick-walled cylindrical shells are widely used in the gas, oil refining, chemical, petrochemical, and food industries, in the form of pipes, tanks, high-pressure vessels, and others. From the solution of the Lamé problem, it is known that for a homogeneous thick-walled cylinder under internal pressure, the maximum circumferential stresses occur at the inner surface. Thus, in this case, the strength of the material is not fully utilized in these types of structures. When an artificial inhomogeneity of the material is created, the stress-strain state in thick-walled cylindrical shells subjected to internal pressure can change significantly. The analysis of the stress-strain state of radially inhomogeneous thick-walled cylinders with different laws of variation in the modulus of elasticity along the radius, including exponential, power-law, etc., was carried out in [1][2][3][4][5]. This analysis showed that, in contrast to homogeneous structures, maximum stresses do not necessarily occur at the inner surface of the shell. For optimal use of the material strength, it is necessary to ensure that the limiting state occurs simultaneously at all points, that is, to create an equal-strength structure. For example, if the elastic modulus is reduced at the points of a thick-walled cylinder with higher stresses, then the stresses in them decrease, and vice versa [6][7][8]. Thus, by changing the modulus of elasticity of the material through the structure's thickness according to a specific law, it is possible to achieve a constant equivalent stress according to any failure criterion. In this case, the structure is equally stressed. An equally stressed structure can be of equal strength if the strength of the material does not change when the elastic modulus changes. The described idea is based on the inverse method of structure optimization. The essence of the approach is to find such laws of variation in the material characteristics for which the stress-strain state of the structure is as prescribed [9].
In [10], a technique for achieving constant hoop stress throughout the thickness of a cylinder subjected to hydrostatic boundary loads is proposed. In [9,11,12], solutions are presented for the problem of finding the law of change in the modulus of elasticity of a material under which thick-walled cylinders and spheres subjected to the action of internal pressure are equally stressed according to the criterion of maximum shear stresses and the maximum elastic distortional energy criterion. In [13], the solution to this problem is presented based on Mohr's failure criterion. It is shown that from the solution based on Mohr's theory, it is possible to obtain, as special cases, solutions based on three classical failure criteria: the criterion of maximum normal stresses, the criterion of maximum deformations, and the criterion of maximum shear stresses. The works [14,15] consider the model of an equally stressed cylinder based on the Balandin failure criterion. In [16], the plane strain problem for a functionally graded cylinder subjected to both normal and tangential nonuniform external pressure is solved. Both the power and exponential laws of the shear modulus were considered. In addition, the authors managed to identify a radial variation pattern in which a linear combination of the radial and the hoop stress can follow a given distribution. In articles [17,18], in addition to concentrated loads, temperature effects are taken into account when solving optimization problems. In [19,20], the technique of varying the material's mechanical characteristics is considered for creating equal-strength bar structures. The practical manufacture of an equally stressed cylinder can be performed according to the method proposed in [21]. First, the polymer mass is mixed with a finely dispersed mineral filler. Then the composite is placed into a cylindrical mold that rotates as the polymer cures. In this case, the solid phase is displaced toward the periphery under the action of inertial forces and becomes nonuniformly distributed along the cylinder radius; as a result, the modulus of elasticity varies. By changing the type of filler, its percentage, and the speed of rotation of the centrifuge, it is possible to bring the variation function of the modulus of elasticity closer to the required one. This method is widely used in the production of centrifuged concrete [22][23][24][25]. The mechanical properties of some polymers can also be modified by exposing them to light of different intensities [26]. For example, for a fiber-reinforced composite, the volume fraction of the fibers and their orientation in the thickness direction can be varied to obtain a suitable modulus gradation [27]. In all the works above, the solution of optimization problems is performed in a linear setting. There are few publications on the analysis of heterogeneous thick-walled shells taking nonlinearity into account. In [28], the analysis of dilatation deformations of a functionally graded material (FGM) second-order elastic thick-walled spherical shell is carried out. The material is assumed to be isotropic and incompressible. In [29], a closed-form solution for a hollow multilayer sphere made of transversally isotropic and hyperelastic FGM is obtained. The axisymmetric problem for a nonlinear elastic hollow sphere is also considered in [30]. In [31], the same methods are applied to a similar problem for a thick-walled cylinder.
In [32], a nonlinear finite element analysis of the thermo-elasticity of a thick-walled FGM cylinder is carried out, taking into account the dependence of the material properties on temperature. In [33], the analysis of thermal loads on a thick-walled cylinder is carried out, taking into account nonlinear kinematic hardening; the load consists of a constant internal pressure and cyclic temperature gradient loading. An essential aspect in the calculation of radially inhomogeneous cylinders is the experimental verification of the deformation models. The paper [34] presents experimental tests of hollow bamboo cylinders under internal pressure; bamboo is a natural material with radial inhomogeneity, and the results presented in [34] confirm the reliability of the theoretical solutions considered above. Additionally, in [35], experimental studies of radially inhomogeneous cylinders made of epoxy resin with a diabase flour filler were carried out, which showed good agreement between experiment and theory. Many materials exhibit creep, which can significantly affect the stress-strain state. However, there are relatively few works in the literature on the creep of inhomogeneous structures in the form of thick-walled cylinders and spheres [36,37], and optimization problems that take creep into account have not been considered previously. Therefore, the aim of this work is to solve the problem of optimizing a thick-walled cylinder taking the material creep into account. Materials and Methods The optimization algorithm is considered below using the example of a thick-walled cylinder made of high-density polyethylene (HDPE) with the addition of hydroxyapatite. A cylinder with an inner radius a and an outer radius b under the action of an internal pressure p_a is in the condition of plane strain (Figure 1).
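For the axisymmetric plane-strain problem sketched in Figure 1, the stresses must satisfy the standard equilibrium and boundary relations (textbook identities, stated here because they are used repeatedly below):

dσ_r/dr + (σ_r − σ_θ)/r = 0,  σ_r(a) = −p_a,  σ_r(b) = 0,

so that once the radial stress σ_r(r) is known, the hoop stress follows as σ_θ = σ_r + r·dσ_r/dr.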
For many types of polymers, the generalized Maxwell–Gurevich equation shows good agreement with experimental data [38]. In the case of a triaxial stress state it has the form

∂ε*_ij/∂t = f*_ij/η*_i,  f*_ij = (3/2)·(σ_ij − p·δ_ij) − E_∞·ε*_ij,  1/η*_i = (1/η*_0)·exp(|f*_i|_max/m*),  i, j = r, θ, z,

where ε*_ij is the creep strain, E_∞ is the high-elasticity deformation modulus, η*_0 is the initial relaxation viscosity, δ_ij is the Kronecker symbol, p = (σ_r + σ_θ + σ_z)/3 is the average stress, m* is the velocity modulus, and the indices correspond to the directions of the principal stresses. A detailed study of the effect of a hydroxyapatite (HA) additive on HDPE properties was presented in [39]. In [38], the creep curves of the modified HDPE were processed to obtain the dependence of the physical and mechanical parameters of the material on the HA content; in particular, the instantaneous modulus and the high-elasticity modulus grow linearly with the additive content,

E = 694 + 1251·HA MPa,  E_∞ = 228.9 + 1093·HA MPa,

where HA is the hydroxyapatite mass fraction (so that 30 wt.% corresponds to HA = 0.3). Thus, when 30% hydroxyapatite is added to high-density polyethylene, the elastic modulus increases by a factor of up to about 1.5. The optimization algorithm in the elastic setting is as follows:
• At the first stage, a homogeneous structure with E = const is calculated numerically, by the finite difference method or by the finite element method, and the equivalent stresses are determined according to the chosen strength theory. For the finite-difference determination of the stress-strain state of the cylinder, the resolving Equation (3) from [38] can be used; it is a second-order differential equation in the radial stress whose coefficients contain E and its derivative E′ (the dash denotes the derivative with respect to r; when E = const, E′ is equal to zero). The boundary conditions are σ_r = −p_a at r = a and σ_r = 0 at r = b, and the stresses σ_θ are then defined from equilibrium as σ_θ = σ_r + r·σ_r′.
• The modulus of elasticity is corrected at each node by the formula E_i,new = E_i·σ_0/σ_eqv,i (6), where σ_eqv,i is the equivalent stress at the i-th node and σ_0 is the equivalent stress on the inner surface at r = a; in this way, the elastic modulus at the inner surface remains constant.
• The calculation is performed with the corrected values of the modulus of elasticity, using Equation (3) or the finite element method, and the equivalent stresses are determined again.
Steps 2–3 are repeated until the difference between the elastic modulus values at the outer surface at two successive steps becomes less than a predetermined error. When creep is taken into account, minor adjustments are made to this algorithm, as discussed below. Optimization Results in Linear Elastic Setting The calculation is performed with the following initial data: a = 15 cm, b = 22 cm, ν = 0.3, p_a = 1 MPa. The initial value of the elastic modulus of HDPE without additives is E_0 = 694 MPa. Figure 2 shows the dependence of the modulus of elasticity on the radius for an equally stressed cylinder at the initial moment. Four classical failure criteria were used: the maximum stress criterion, the maximum strain criterion, the Tresca criterion of maximum shear stress, and the von Mises criterion of maximum elastic distortional energy.
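The elastic-stage iteration is easy to prototype. The following Python sketch is not the authors' MATLAB implementation: it assumes the multiplicative correction rule of formula (6), uses the maximum stress criterion (for this load case σ_eqv = σ_θ), and replaces the resolving Equation (3) with an equivalent finite-difference solve of the displacement form of the equilibrium equation; the HA inversion at the end uses the linear fit quoted above.

```python
import numpy as np

# Geometry and load (values from the paper's elastic example)
a, b = 0.15, 0.22        # inner/outer radius, m
p_a = 1.0e6              # internal pressure, Pa
nu = 0.3                 # Poisson's ratio (held constant here)
E0 = 694.0e6             # modulus of unfilled HDPE, Pa

N = 401                  # grid nodes
r = np.linspace(a, b, N)
h = r[1] - r[0]

def solve_stresses(E):
    """Plane-strain axisymmetric solve for variable E(r), constant nu.

    Displacement form of the equilibrium equation:
      (1-nu)*A*u'' + (1-nu)*(A' + A/r)*u' + (nu*A'/r - (1-nu)*A/r**2)*u = 0,
    with A = E/((1+nu)(1-2nu)), sigma_r(a) = -p_a, sigma_r(b) = 0.
    """
    A = E / ((1 + nu) * (1 - 2 * nu))
    dA = np.gradient(A, r)
    M = np.zeros((N, N))
    rhs = np.zeros(N)
    for i in range(1, N - 1):       # central differences at interior nodes
        c2 = (1 - nu) * A[i]
        c1 = (1 - nu) * (dA[i] + A[i] / r[i])
        c0 = nu * dA[i] / r[i] - (1 - nu) * A[i] / r[i] ** 2
        M[i, i - 1] = c2 / h**2 - c1 / (2 * h)
        M[i, i]     = -2 * c2 / h**2 + c0
        M[i, i + 1] = c2 / h**2 + c1 / (2 * h)
    # sigma_r = A[(1-nu)u' + nu*u/r]; second-order one-sided u' at the walls
    M[0, 0] = A[0] * ((1 - nu) * (-3 / (2 * h)) + nu / r[0])
    M[0, 1] = A[0] * (1 - nu) * (4 / (2 * h))
    M[0, 2] = A[0] * (1 - nu) * (-1 / (2 * h))
    rhs[0] = -p_a
    M[-1, -1] = A[-1] * ((1 - nu) * (3 / (2 * h)) + nu / r[-1])
    M[-1, -2] = A[-1] * (1 - nu) * (-4 / (2 * h))
    M[-1, -3] = A[-1] * (1 - nu) * (1 / (2 * h))
    rhs[-1] = 0.0
    u = np.linalg.solve(M, rhs)
    du = np.gradient(u, r)
    sig_r = A * ((1 - nu) * du + nu * u / r)
    sig_t = A * (nu * du + (1 - nu) * u / r)
    return sig_r, sig_t

# Inverse-method iteration, maximum-stress criterion (sigma_eqv = sigma_theta)
E = np.full(N, E0)
for k in range(200):
    _, sig_t = solve_stresses(E)
    if k == 0:
        print(f"homogeneous max hoop stress: {sig_t.max()/1e6:.2f} MPa")
    E_new = E * sig_t[0] / sig_t     # formula (6); E at r = a stays fixed
    if abs(E_new[-1] - E[-1]) / E[-1] < 1e-8:   # outer-surface convergence test
        E = E_new
        break
    E = E_new

_, sig_t = solve_stresses(E)
print(f"equal-stress hoop stress: {sig_t.mean()/1e6:.2f} MPa")
print(f"E(b)/E(a) = {E[-1]/E[0]:.3f}")
HA = (E - 694e6) / 1251e6            # invert E(HA) = 694 + 1251*HA (MPa)
print(f"required additive fraction at the outer wall: {HA[-1]*100:.1f} wt.%")
```

The two printed stress values can be checked against the closed-form limits discussed in the next paragraphs: about 2.73 MPa for the homogeneous cylinder and about 2.14 MPa for the equally stressed one.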
A comparison was made with the analytical solutions presented in [9,11,12,40] for all the curves obtained; the discrepancy between the results is insignificant. The presented graphs show that the largest difference between the elastic moduli on the inner and outer surfaces is obtained with the maximum shear stress criterion, and the smallest with the maximum stress criterion; the criteria of maximum shear stress and maximum elastic distortional energy give relatively close results. If the dependence of the elastic modulus on the hydroxyapatite content is known, the required content can be found by inverting it; for the linear dependence used here, HA = (E − E_0)/1251, with E in MPa. Figure 3 shows the dependence of the hydroxyapatite content on the radius for an equally stressed cylinder according to each of the four failure criteria. It can be seen from these graphs that, except for the maximum stress criterion, the required content of hydroxyapatite goes beyond the limits of the experimental data [9,11,12], exceeding 30% at the outer surface. A smaller difference between the moduli on the inner and outer surfaces would be required for a thinner shell, but the effect of creating the artificial inhomogeneity would then also be smaller. As a result of creating an artificial inhomogeneity, there is a noticeable decrease in the maximum stresses.
Figure 4 shows the distribution of the hoop stresses σ_θ along the radius for a homogeneous cylinder and for one equally stressed according to the maximum stress failure criterion. The maximum stresses decreased from 2.73 to 2.14 MPa, i.e., by a factor of 1.28. The change of the stress-strain state during creep in a cylinder that initially has an equal-stress state is discussed below. In a homogeneous cylinder under a purely static load, the stresses σ_θ first relax during creep and then return to the elastic solution (Figure 5). The explanation is as follows. In [41], it is shown that to obtain the solution at the end of the creep process using the one-term version of the Maxwell–Gurevich equation, the instantaneous constants E and ν can be replaced in the elastic solution by the long-term ones determined by the formulas

Ē = E·E_∞/(E + E_∞),  ν̄ = (E/2 + ν·E_∞)/(E + E_∞).   (8)

Since the stress distribution in the solution of the Lamé problem does not depend on the elastic constants, the distribution at the end of the creep process is the same as at the beginning. The cylinder was then calculated with the hydroxyapatite content varying according to Figure 3 (maximum stress criterion). It was found that a cylinder that is equally stressed at the initial moment ceases to be equally stressed during creep. The distributions of the stresses σ_θ along the radius at the beginning and at the end of the creep process are shown in Figure 6. At the inner surface the stresses decrease over time, while at the outer surface they increase, as shown in Figure 7. This is explained by the fact that the modulus of elasticity and the modulus of high elasticity depend differently on the hydroxyapatite content.
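Both headline numbers, as well as the claim below that the long-term Poisson's ratio approaches 0.5, can be verified directly from the data already given. For the homogeneous cylinder, the Lamé formula gives

σ_θ,max = p_a·(a² + b²)/(b² − a²) = 1·(15² + 22²)/(22² − 15²) ≈ 2.74 MPa.

For a uniform hoop stress, integrating the equilibrium equation d(r·σ_r)/dr = σ_θ between the two boundary conditions gives

σ_θ = p_a·a/(b − a) = 15/7 ≈ 2.14 MPa,

and 2.74/2.14 ≈ 1.28. For unfilled HDPE, formulas (8) give Ē = 694·228.9/922.9 ≈ 172 MPa and ν̄ = (347 + 0.3·228.9)/922.9 ≈ 0.45.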
Optimization of the Cylinder Considering Creep The optimization problem can now be stated as follows: find the distribution of the additive content through the thickness for which the structure is equally stressed at the end of the creep process. The optimization algorithm is similar to the one outlined above, with some differences. Instead of the values E and ν, the long-term constants Ē and ν̄ are used. At the first stage, a homogeneous structure is calculated with Ē = const, ν̄ = const. The long-term modulus is then adjusted according to the formula in (6), and the corrected values of Ē are used to determine the required hydroxyapatite content. Based on the formulas given earlier and (8),

Ē = E·E_∞/(E + E_∞) = (694 + 1251·HA)(228.9 + 1093·HA)/(922.9 + 2344·HA).

For a known value of Ē, this relation is a quadratic equation in HA, from which the hydroxyapatite content is easily found. Then, using the known values of E and E_∞, the long-term Poisson's ratio at each node is determined from the second formula in (8). Thus, at the second and subsequent optimization steps, the long-term modulus of elasticity and the long-term Poisson's ratio vary along the radius. Equation (3) can still be used to determine the stress-strain state, but the functions φ(r) and ψ(r) entering it must now be computed from the variable long-term constants Ē(r) and ν̄(r). The finite element method can also be used to calculate the stress-strain state of the inhomogeneous cylinder. Figure 8 shows the dependence of the hydroxyapatite content along the radius for a cylinder equally stressed according to the maximum stress failure criterion at the end of the creep process. In contrast to Figure 3, the maximum additive content is significantly lower.
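Inverting this relation numerically is straightforward. The snippet below is an illustrative helper (not from the paper): it expands the quadratic in HA for a prescribed long-term modulus and selects the physically meaningful root.

```python
import numpy as np

def ha_from_long_term_modulus(E_bar):
    """Solve E_bar = (694 + 1251*HA)*(228.9 + 1093*HA)/(922.9 + 2344*HA)
    for the hydroxyapatite mass fraction HA (all moduli in MPa).

    Rearranged: 1251*1093*HA^2 + (694*1093 + 1251*228.9 - 2344*E_bar)*HA
                + (694*228.9 - 922.9*E_bar) = 0.
    """
    c2 = 1251.0 * 1093.0
    c1 = 694.0 * 1093.0 + 1251.0 * 228.9 - 2344.0 * E_bar
    c0 = 694.0 * 228.9 - 922.9 * E_bar
    roots = np.roots([c2, c1, c0])
    # keep the real root in the experimentally covered range 0..0.3
    real = roots[np.isreal(roots)].real
    valid = real[(real >= 0.0) & (real <= 0.3)]
    return valid[0] if valid.size else None

# Example: the long-term modulus of unfilled HDPE should map back to HA = 0
print(ha_from_long_term_modulus(694 * 228.9 / 922.9))   # ~0.0
```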
The distribution of the stresses σ_θ along the radius at the beginning and at the end of the creep process is shown in Figure 9, and Figure 10 shows the variation of the hoop stresses at the inner and outer surfaces over time. It can be seen from these graphs that at the initial moment the stresses at the inner surface are higher than at the outer one; during creep the stresses decrease at r = a and increase at r = b, which corresponds to the approach to an equal-stress state. Figure 11 shows the dependence of the hydroxyapatite content on the radius for cylinders equally stressed at the end of the creep process according to the criteria of maximum deformation, maximum shear stress, and maximum elastic distortional energy. It follows from Figure 11 that cylinders equally stressed at the end of the creep process according to all the considered failure criteria can be created practically without exceeding a 30% hydroxyapatite content. The difference between the results based on the maximum shear stress and maximum elastic distortional energy failure criteria is insignificant. This can be explained by the fact that the long-term Poisson's ratio is close to 0.5, and at ν = 0.5 these theories lead to the same result in the case of plane strain: for an incompressible material σ_z = (σ_r + σ_θ)/2, so the von Mises equivalent stress reduces to (√3/2)·|σ_θ − σ_r|, which is proportional to the Tresca equivalent stress |σ_θ − σ_r|. Discussion It should be noted that the proposed models of equally stressed structures are, in general, not of equal strength, since the strength of the resulting composite changes with the additive content. After a minor refinement, the algorithm developed in this article also allows modeling structures of equal strength; however, this requires knowing how the strength depends on the content of the additive. Additionally, the proposed technique makes it possible to take into account the discreteness of the spectrum of polymer relaxation times.
This requires experimental data on the dependence of the rheological parameters of the material on the additive content for two or more members of the spectrum. For further research, it is of practical interest to construct models of equal-strength and equally stressed reinforced concrete structures, taking the material's rheological properties into account. Conclusions An iterative algorithm is proposed for constructing models of equally stressed polymer cylinders with a finely dispersed mineral filler, taking the material's rheological properties into account. The optimization problem is solved theoretically by varying the content of the additive along the radius on the basis of four classical failure criteria: the criterion of maximum stresses, the criterion of maximum deformations, the criterion of maximum shear stresses, and the maximum elastic distortional energy (von Mises) criterion. It was found that a cylinder that is equally stressed in the elastic stage ceases to be equally stressed during creep. Furthermore, it is shown that the maximum shear stress and von Mises criteria lead to practically identical results. The creation of artificial heterogeneity can noticeably decrease the maximum stresses through the thickness of the structure.