The role of dosimetry and biological effects in metastatic castration–resistant prostate cancer (mCRPC) patients treated with 223Ra: first in human study
223Ra is currently used for the treatment of bone metastases in metastatic castration-resistant prostate cancer (mCRPC) patients, with a fixed standard activity. Individualized treatments, based on the absorbed dose (AD) to target and non-target tissues, are absolutely needed to optimize efficacy while reducing the toxicity of α-emitter targeted therapy. This is a pilot, first-in-human clinical trial aimed at correlating dosimetry, clinical response and biological side effects to personalize 223Ra treatment. Out of 20 mCRPC patients who underwent standard 223Ra treatment and dosimetry, in a subset of 5 patients the AD to target and non-target tissues was correlated with clinical effects and radiation-induced chromosome damage. Before each 223Ra administration, haematological parameters, PSA and ALP values were evaluated. Additional blood samples were obtained at baseline (T0) and at 7 days (T7), 30 days (T30) and 180 days (T180) to evaluate chromosome damage. After each administration, whole-body (WB) planar 223Ra images were obtained at 2–4 and 18–24 h. Treatment response and toxicity were monitored with clinical evaluation, bone scan, 18F-choline-PET/CT, PSA and ALP values, while haematological parameters were evaluated weekly after 223Ra injection and 2 months after the last cycle. 1. a correlation between the AD to target and clinical response was evidenced, with a threshold of 20 Gy as a cut-off to obtain tumor control; 2. the AD to red marrow was lower than 2 Gy in all patients, with no apparent correlation between dosimetry and clinical toxicity; 3. a marked, dose-dependent increase in the number of dicentrics and micronuclei during the course of 223Ra therapy was observed, and a linear correlation was found between the blood AD (BAD) and the number of dicentrics. This study provides some interesting preliminary evidence to be further investigated: dosimetry may be useful to identify a more appropriate 223Ra administered activity by predicting the AD to target tissue; a dose-dependent complex chromosome damage occurs during 223Ra administration, and this injury is more evident in heavily pre-treated patients; dosimetry could be used for radioprotection purposes.
Trial registration: The pilot study was approved by the Ethics Committee of Regina Elena National Cancer Institute (N: RS1083/18-2111).
Keywords: Dosimetry, Biological effects, 223Ra
Background Radium-223 (223Ra) dichloride (Alpharadin®) is the first targeted α-therapy approved by the FDA for the treatment of metastatic castration-resistant prostate cancer (mCRPC) patients with symptomatic bone metastases and no known visceral metastatic disease. 223Ra targets bone metastases with high linear energy transfer (LET), short-range (< 100 μm) α-particles.
Several clinical studies published in recent years suggest that 223Ra may provide a new standard of care for patients with mCRPC and bone metastases, improving overall survival and reducing the time to the first symptomatic skeletal event with very low toxicity [1][2][3].
The approved regimen used worldwide consists of a course of six 223Ra injections with a standard activity of 55 kBq/kg every 4 weeks [4][5][6].
However, mCRPC represents a very broad spectrum of disease, and a standard activity may not be appropriate in all cases. Radiopharmaceutical treatment cannot be considered as a conventional pharmacological treatment, and schedules based on body weight are not suitable for achieving the best therapeutic ratio [7]. Fixed schedules may result in over- or under-dosage, limiting efficacy or increasing toxicity, particularly in α-emitter targeted therapy, which is potentially highly effective but also potentially quite toxic [8].
Efficacy and toxicity are due to the absorbed dose (AD), which is related to the individual 223Ra biodistribution in target (i.e. bone metastases) and non-target tissues.
The pharmacokinetics and dosimetry of 223Ra in selected patients have been reported with promising results [9,10]. Wide differences in radium uptake and biodistribution have been observed in clinical use, while the retention of 223Ra in the body 30 days after the first administration has been reported to range from 11 to 70%.
To date, however, there are no published studies aiming at modifying the administered activity according to patient and tumor features using dosimetry. Moreover, several studies indicated that in experiments with alpha particles, more cells were damaged than were traversed by alpha particles [11,12]. Radiobiological mechanisms of α-emitters could therefore play a relevant role in haematological toxicity or secondary radiation-induced tumors and should be investigated in greater depth.
Understanding the physical and biological factors that impact response and toxicity in non-target tissues is essential to avoid the risk that α-emitters may be abandoned before they have been properly tested in the clinic. This is a first-in-human translational prospective pilot study aimed at improving knowledge of α-emitter radiobiological models and at demonstrating the potential applicability of dosimetry to evaluate health risks associated with α-particle exposure. For this purpose, the AD to target and non-target tissues of standard 223Ra treatment in a group of 5 mCRPC patients was correlated with clinical effects and with the radiation-induced chromosome damage in peripheral blood lymphocytes (PBLs), representing a non-target tissue.
Study design
The study design is an observational, prospective, first-in-human clinical trial evaluating the relationships between dosimetry and the efficacy and safety of standard 223Ra treatment in a cohort of 20 mCRPC patients. Preliminary data observed in a subset of 5 patients, in which biological effects were also tested, are reported in this paper. The clinical trial was conducted in accordance with the Declaration of Helsinki and good clinical practice guidelines, and each patient provided specific written informed consent. The protocol was approved by the Ethics Committee of Regina Elena National Cancer Institute, Rome, Italy (number: RS1083/18-2111).
The primary endpoint of this pilot study was to evaluate the predictive value of dosimetry on target tissues. Further endpoints included the evaluation of the following parameters: the patient-based and lesion-based response; safety and haematological toxicity; the chromosome damage, in terms of dicentrics (DC) and micronuclei (MN) induced in PBLs during the course of therapy, for assessing non-target tissue effects; the predictive value of dosimetry on non-target tissues with respect to haematological toxicity; and the correlation between chromosome damage in PBLs and haematological clinical toxicity.
The timeline of the study includes different steps and activities: 1. patient enrollment based on pre-treatment images; 2. treatment with six 223Ra injections and related blood sample collections and image acquisitions; 3. dosimetry; 4. chromosome damage evaluation in PBLs; 5. post-treatment imaging and follow-up. These activities are summarized in the graphical abstract and detailed in the following paragraphs.
Patients
All patients were previously evaluated for entry to the study in a multidisciplinary setting. Baseline examination included history, clinical examination and baseline blood tests to evaluate PSA, alkaline phosphatase (ALP) and haematological parameters (white blood cell (WBC), red blood cell (RBC), haemoglobin (Hgb) and platelet (PLT) values), as well as imaging: 18F-choline PET/CT (FchPET), 99mTc-methylene diphosphonate (99mTc-MDP) bone scan (BS) and CT of chest, abdomen and pelvis. Eligibility required at least two documented symptomatic bone metastases under androgen ablation therapy; an Eastern Cooperative Oncology Group score of 0-2; life expectancy > 6 months; age > 18 years; adequate haematological function; and the availability of nuclear imaging, i.e. FchPET and BS performed in our Institute less than 1 month before enrolment.
Treatment
Each enrolled patient received monthly i.v. 223Ra injections with a standard activity of 55 kBq/kg for a maximum of six cycles. Before each 223Ra administration, baseline blood samples were collected.
An additional series of blood samples was also obtained before the first 223Ra administration (T0) and afterwards at 7 days (T7), 30 days (T30) and 180 days (T180) to evaluate chromosome damage in PBLs.
Image acquisition and dosimetry
The 223Ra activity was measured using the radionuclide calibrator PET DOSE (Comecer) following the procedures in [13,14]. For each patient, activity-time curves were determined using antero-posterior and postero-anterior 30-min planar images acquired at 2-4 h, 18-24 h and 7 days after each 223Ra administration. Technical details about acquisition and the correction factors applied to the images are reported in the supplementary data (S1: Fig. S1 and S2: supplementary notes).
Planar 223Ra images (Fig. 1a) were co-registered with the baseline 99mTc-MDP bone scan study (Fig. 1b, c) and with the whole-body scans using MIM 6.1.7 (MIM Software Inc., Ohio) (Fig. 1d). Regions of interest (ROIs) were delineated and transferred onto the 223Ra static images and onto the calculated transmission images (digitally reconstructed radiographs, i.e. DRR) (Fig. 1e). Both 99mTc-MDP and 223Ra planar images were also visually compared with the activity distribution in co-registered FchPET images before and after treatment (Fig. 1f) to identify the functional target volume. The exposure rate was measured at 5 cm, 1 m and 2 m from the patient surface, as described in D'Alessio et al. [15], at 1, 2-4, 24 and 48 h, and at 7 days post-injection. The time scheduling was modified according to patient compliance.
The blood samples for dosimetry were collected at 2-4 and 18-24 h after injection and measured using an Atomlab™ Gamma Counter well counter (Biodex Medical Systems, Inc.).
Tumor and red marrow (RM) AD were calculated using IDAC Dose 2.1 [16]. The time-integrated activity curve (TIAC) for RM was included in IDAC Dose 2.1.
An RBE value of 5.5 was used to convert the absorbed dose to the equivalent dose for tumor and organs at risk (OARs), in agreement with [17,18], while a relative biological effectiveness (RBE) of 20 was assumed for the AD to blood, according to [11].
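For illustration only, the following minimal sketch applies these RBE factors to convert an absorbed dose into an RBE-weighted (equivalent) dose; the input dose values are hypothetical examples, not patient data.

```python
# Minimal sketch of the AD-to-equivalent-dose conversion using the RBE values quoted above.
RBE_TUMOR_OAR = 5.5   # used for tumor and organs at risk
RBE_BLOOD = 20.0      # assumed for the absorbed dose to blood

def equivalent_dose(absorbed_dose_gy: float, rbe: float) -> float:
    """Return the RBE-weighted dose (absorbed dose in Gy multiplied by the RBE factor)."""
    return absorbed_dose_gy * rbe

# Example: a lesion receiving 30.1 Gy (the median target dose reported in the Results)
print(equivalent_dose(30.1, RBE_TUMOR_OAR))  # 165.55
# Hypothetical blood AD of 0.5 Gy weighted with RBE = 20
print(equivalent_dose(0.5, RBE_BLOOD))       # 10.0
```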
Chromosome damage evaluation
The blood samples for evaluation of chromosome damage in PBLs were collected: 1) before the first administration (T0); 2) at day 7 after administration (T7); 3) at day 30, immediately before the second administration (T30); 4) at day 180, after the end of treatment (T180). Chromosome damage, in terms of DC and MN induction, was evaluated at each time point according to standard protocols [19].
Evaluation of DC and MN in exposed individuals' PBLs is the most commonly used biological dosimetry approach.
DC originate from an asymmetric exchange between the centromeric pieces of two broken chromosomes, which in its complete form is accompanied by a fragment composed of the acentric pieces of these chromosomes. More than 50 years after its development, this method is still considered the "gold standard" of biological dosimetry, since dicentric induction is considered radiation-specific.
The in vitro cytokinesis-block micronucleus assay has been used for biological dosimetry since 1985 [20]. MN are small nuclei that form whenever a whole chromatid/ chromosome or chromatid/chromosome fragments are not incorporated into one of the daughter nuclei during cell division. MN are not as radiation specific as dicentrics, since they may be induced either by clastogenic chemicals or aneugenic agents.
As for the dicentric assay, 200 metaphases were scored for each experimental point: both DC and centric rings were included in the scoring of chromosomal aberrations. An average of 3000 binucleated cells were analyzed for the MN induction.
These results on chromosome damage were analyzed versus the AD to blood to establish a possible correlation between the 223 Ra therapy and the genetic damage induced in a non-target tissue.
Treatment response and toxicity
Treatment response was monitored with clinical evaluation, BS, FchPET, PSA and ALP values, both interim (before the fourth cycle) and at end of treatment, within 3 months after the last cycle. Haematological parameters were monitored every week after 223Ra injection and 2 months after the last cycle of 223Ra treatment to evaluate toxicity. Clinical follow-up every 3 months was also extended until progression or death. Efficacy was evaluated both on a per-patient analysis (overall clinical response, OCR) and on a per-lesion analysis (target tissue response, TTR) within 3 months after the end of treatment. OCR was evaluated in a multidisciplinary setting and graded as responder if a combination of measured parameters improved (PSA, ALP, imaging and clinical condition).
Lesion-based response at 3 months was also evaluated to define TTR on imaging, distinguishing between complete response (CR), partial response (PR), stable disease (SD) and progressive disease (PD) according to PERCIST criteria [21]. The occurrence of post-treatment grade 3/4 haematological toxicities up to 6 months after the last administration of 223Ra was recorded as adverse events or serious adverse events (SAEs) according to the National Cancer Institute Common Terminology Criteria for Adverse Events (NCI-CTCAE), version 4.03. Non-haematological toxicity was also evaluated, and skeletal-related events, fatigue, general health deterioration and spinal cord compression were reported.
Statistical analysis
Categorical variables were presented as number with percentage in descriptive tables, while continuous variables were presented as median (range) or mean and standard deviation as appropriate. Parameters without a normal distribution were logarithmically converted.
Two independent groups were compared by Student's t-test or the Mann-Whitney test as appropriate. Correlations between variables were investigated by the Pearson correlation coefficient. Prognostic dosimetric and clinical/pathological variables for tumor control as well as haematological toxicity were analyzed. The agreement between the number of dicentrics and the AD to blood was evaluated using Bland-Altman analysis. A p value < 0.05 was considered statistically significant.

Fig. 1 a) anterior planar 223Ra image; b) anterior and c) posterior planar 99mTc-MDP images; d) 223Ra and 99mTc-MDP co-registered images; e) ROIs identified on the calculated DRR; f) target volumes on FchPET images before and after treatment with 223Ra
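For illustration, the sketch below reproduces the two comparisons described in the statistical analysis paragraph above (Pearson correlation and Bland-Altman limits of agreement) on hypothetical placeholder arrays; it is not the study's analysis code and the numbers are not patient data.

```python
# Minimal sketch of the statistical comparisons described above, using NumPy/SciPy.
import numpy as np
from scipy import stats

blood_ad = np.array([0.2, 0.4, 0.6, 0.9, 1.1])          # hypothetical blood AD values (Gy)
dicentrics = np.array([0.08, 0.10, 0.15, 0.22, 0.27])   # hypothetical DC frequencies

# Pearson correlation between blood AD and dicentric frequency
r, p = stats.pearsonr(blood_ad, dicentrics)

# Bland-Altman agreement: mean difference and 95% limits of agreement
diff = dicentrics - blood_ad
mean_diff = diff.mean()
loa = (mean_diff - 1.96 * diff.std(ddof=1), mean_diff + 1.96 * diff.std(ddof=1))

print(r, p)          # correlation coefficient and p value
print(mean_diff, loa)  # bias and limits of agreement
```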
Treatment response and toxicity
The baseline clinical characteristics of the 5 mCRPC patients are reported in Table 1. The median age was 72 years (range 60-84 yrs.). The median basal PSA was 42.30 ng/mL (range 6.74-757 ng/mL) while median ALP value was 89 U/L (range 51-418 U/L).
Patients who had been treated with chemotherapy prior to 223 Ra appeared to have poorer baseline characteristics than those who had not (Table 1), presenting a higher ECOG performance status, a higher bone involvement with > 20 metastatic lesions, higher median levels of PSA (132.0 vs 40.2 ng/mL) and ALP (162.0 vs 115.0 U/L).
All patients received at least 3 cycles of 223 Ra and three patients (60%) received all 6 planned cycles (pt. 1, 2 and 5). Only two out of 5 patients were considered clinical responders ( Table 2).
Median survival from the last treatment was 13 months, while median time to progression from the last treatment was 6 months, ranging from 3 up to 18 months. At progression, two patients received only palliative care (pt. 3 and 4), one patient was re-challenged with chemotherapy (pt. 2), one patient received external beam radiotherapy (EBRT, pt. 5) and one patient needed no further treatment until death (pt. 1).
A decrease in ALP was observed in three patients, while PSA decreased in all patients but one (pt. 3), see Fig. 2.
The enrolled patients showed a wide difference in clinical presentation in terms of markers of disease burden (PSA and ALP) and hematopoietic impairment (WBC, RBC, HB, PLT). On per-patient analysis, the overall clinical response was observed only in two patients (pt. 1 and 5) in which both performance score, PET imaging at 3 months and PSA/ALP values improved. The other three patients were considered non-responders (Table 2).
Severe anemia requiring blood transfusions after three cycles of treatment was observed in the two patients who had received prior chemotherapy (patients 3 and 4). Mild and transient thrombocytopenia was also observed in one patient (pt. 3), while mild leukopenia with lymphopenia was observed in patient 1. Haematological values during treatment are shown in Fig. 2. No patient experienced any symptomatic skeletal event during or after treatment, and all but one patient reported a decrease in pain from the pre-treatment baseline.
Objective response and target dosimetry
To define the lesion-based response, 20 target lesions were identified in the 5 patients. The 3-month lesion-based response at PET imaging was PD (61%), SD (28%), PR (28%), CR (17%). No statistically significant correlation was observed between the AD to the lesions and either the administered activity or the activity administered per kg.
Based on PET imaging, target volumes ranged from 0.58 to 40 ml. Target doses ranged from 0.001 Gy to 43.7 Gy (median 30.1 Gy). An AD-response relationship for target lesions was observed, with a threshold of 20 Gy. The RBE-corrected AD versus the target objective response is shown in Fig. 3a. The SUV (standardized uptake value) variation according to the lesion response is also reported in Fig. 3b.
Toxicity and non-target dosimetry
The AD to RM was < 2 Gy in all patients. Figure 4 shows the RM AD at the first cycle in patients with (2 patients) and without (3 patients) any haematological toxicity (i.e. anemia). No statistically significant correlation between the RM absorbed dose and haematological toxicity was observed. In addition, haematological toxicity was observed in 2/5 patients although the activity administered per kg was the same in the whole group of patients. The two patients manifesting grade 3 haematological toxicity had previously been treated with chemotherapy (CHT).
Chromosome damage
The average dicentric frequency found at T0 was very high (0.073 ± 0.008) compared to the general background of control subjects (0.001-0.002). This is likely due to previous treatments (in particular, 2 patients received external beam radiotherapy before 223Ra treatment). In all patients, the average DC frequencies show a dose-dependent increase during the course of 223Ra treatment (T7 = 0.105 ± 0.013; T30 = 0.146 ± 0.032), reaching the highest level at the completion of therapy (T180 = 0.27 ± 0.118) (Fig. 5a). In patients 1 and 3, a sharp increase in DC was observed between T7 and T30, even though no additional 223Ra administration occurred in this interval.
Moreover, a progressive increase of the complex chromosome damage (number of cells with 2 or more DCs) is registered over the course of the 223 Ra therapy in all patients (Fig. 5c).
Overall, the results show an increase in the average frequency of MN as treatment time increases, up to a maximum of 0.126 ± 0.061 at T180, with a slight decrease at T30 (0.0731 ± 0.023), corresponding to the interval between the first two administrations (Fig. 5b).
The DC and MN frequencies observed in PBLs (non-target tissue) have been plotted against the AD to blood (Fig. 6a-b). A linear correlation has been found between the AD to blood and the number of DC (Pearson's product-moment correlation, cor = 0.658, p-value = 0.003) but not with the number of MN (cor = 0.41, p-value = 0.14).
Bland-Altman plots of difference in DC and MN frequencies versus the AD to blood (Fig. 6c-d) showed good correlation and agreement between the two methods, with few samples falling outside the 95% limits of agreement for each comparison (average difference ± 1.96 standard deviation of the difference).
The samples falling outside the 95% limits correspond to doses calculated after therapy, so the disagreement could be due to the fact that we calculated the dose at the end of therapy using the images obtained at the first cycle. This can be considered a limitation of our approach. In addition, as noted above, no statistically significant correlation between the RM absorbed dose and haematological toxicity was registered.
Discussion
Targeted α-emitter therapy represents the future of nuclear medicine therapy and many novel radiotracers are under evaluation [22]. Improving knowledge of the AD and the biological effects of these treatments, on both target and healthy tissues, is therefore mandatory. The aim of this study is to correlate dosimetry, clinical response and biological side effects to optimize and personalize the 223Ra treatment schedule.
Firstly, our results evidenced a correlation between target dose and clinical response. A threshold of 20 Gy was identified as a cut-off to obtain tumor control. This supports the possibility of improving treatment efficacy through dosimetric estimation of the activity to administer and personalization of 223Ra schedules, fulfilling the requirements of Directive 2013/59. Moreover, the observed correlation between SUV variation in FchPET and response demonstrates that 223Ra treatment needs multimodal imaging to identify the biological target volume.
The second point addressed was the analysis of clinical toxicity and the related AD and biological factors. Our results confirm that anaemia is the most common adverse event related to 223Ra treatment and the main reason for treatment interruption [23]. The AD apparently does not correlate with clinical toxicity. Indeed, although the AD to RM was < 2 Gy in all patients, two of them presented severe anemia.
Also, it is worth noting that patients 3 and 4 did not complete treatment due to severe side effects, although the dose estimated after the first cycle was similar to that received by the other patients. This suggests that a lower dose threshold for haematological toxicity should be adopted for heavily chemotherapy-pre-treated patients to prevent toxicity from α-emitters. Further studies are needed to clarify this issue. Looking at Fig. 5 of the Andersson paper [24], based on Taprogge et al. [25], the RM absorbed dose is ~13 and ~11 mGy/MBq with and without considering the progeny biokinetic model, respectively. This means that our calculation might overestimate the RM dose by about 18% when the progeny biokinetic model is not considered. Moreover, the RM absorbed dose for intravenous 223Ra based on ICRP Publication 137 [26] for a male worker is ~30 mGy/MBq including all 223Ra progeny, while it was 34 and 92 mGy/MBq based on Lassmann and Nosske [27] and Yoshida et al. [28], respectively. The equivalent dose to OARs per injected activity was calculated considering an RBE factor of 5, which falls within the range of RBE values obtained from in vitro and in vivo studies of other α-particle therapies [29]. Stephan [11], using 224Ra, suggested a radiation weighting factor of 20 for α-radiation for radioprotection purposes. The appropriate RBEs to be used for estimating response, toxicity and secondary effects are still to be determined, and this requires more clinical data.
The prevention of toxicity is of paramount importance for this treatment, which is the only one of its kind for which an improvement in OS has been reported. Regardless of the adopted RBE value, lower RM AD constraints for patients who have undergone chemotherapy should be highlighted.
Finally, this is the first in human study evaluating biological effects and chromosome damage after 223 Ra administration. Unfortunately, the chromosome damage induced by internal radiation exposure is a difficult field of investigation. Internal exposures are generally more complex to manage than external exposures. As highlighted by a recent review [30], the local absorbed dose rates (generally higher in the tumor and lower in the OARs) follow complex patterns which depend on the physical, chemical and metabolic properties of the radionuclide(s) and on patients' anatomical characteristics. The irradiation of the body is spatially inhomogeneous, potentially prolonged over large periods and variable over time; thus, internal exposures become particularly problematic for biological dosimetry methods.
Therefore, even if the induction of chromosome aberrations is generally observed in PBLs of subjects internally contaminated, many factors must be considered to derive a meaningful estimate of radiation dose to the whole body or to specific organs.
In our study, the chromosome damage assessed showed a marked, dose-dependent increase in the number of DC and MN during the course of therapy. This increase appears more evident in patients completing all six 223Ra cycles.
In addition, a large increase in DCs was unexpectedly observed in two patients between T7 and T30, which was not due to a supplementary 223Ra dose. This result suggests that PBLs could be exposed to an extra dose by the radiation emitted from the target organs, highlighting possible adverse effects on non-target organs related to this type of therapy.
This hypothesis seems to be supported by the progressive increase in the number of cells with two or more DC observed in all patients. These data indicate that chromosome damage accumulates in PBLs over time and reaches the highest complexity after the end of therapy (T180), suggesting a persistent emission of alpha particles from the target to non-target organs.

Fig. 6 Number of DC (a) and MN (b) observed in PBLs (non-target tissue) plotted against the AD to blood. The solid line represents the fitted curve and the grey area the 95% confidence interval. Bland-Altman plots of the difference in the number of (c) DC and (d) MN versus the AD to blood. The red line represents the average between the two methods and the blue dotted lines indicate the 95% limits of agreement.
It is noteworthy that the average background frequencies of chromosome damage (especially dicentrics) found in the PBLs of patients at T0 are very high compared to the background values found in healthy control subjects. This is more evident in patients previously treated with radiotherapy, suggesting that patients treated with α-emitters who have previously undergone radiotherapy require particular attention regarding side effects on healthy tissues. In these patients, the radiation-induced biological effect could, in fact, persist and accumulate in PBLs for several years. T-lymphocytes are long-lived circulating cells that can be considered circulating dosimeters and, among them, the population of long-lived lymphocytes has a half-life of 3.5 years or more [24].
The number of DC and MN observed in PBLs has been plotted against the AD to blood. A linear correlation has been found between the AD to blood and the number of DC, as reported for other radionuclides [11,18,31].
A proper biological dosimetry approach, which is currently lacking, could improve the treatment of patients with α-emitters in relevant aspects of radiation protection, decreasing stochastic radiation-induced effects and reducing the rate of secondary tumors.
However, the correlation between the dose delivered to blood and the information provided by biological assays strictly depends on the selection of appropriate calibration curves, which are usually generated in vitro using only external photon radiation. Thus, an in vitro dose-response curve for DC and MN induced by 223Ra should be developed, in order to compare the DC and MN frequencies with the AD to blood at the corresponding times during therapy [32].
The average background frequency of MN at T0 is also very high (0.064 ± 0.016) compared with the background frequency in the healthy population (quite variable, from 0 to 0.040, depending on factors such as diet, age and gender). On the other hand, these data are in line with the work of Lee et al. [33], in which the background frequency of MN in PBLs from prostate cancer patients undergoing radiotherapy was 0.057 ± 0.008. The background frequency observed in this study could therefore be due to previous treatments, as observed for dicentrics.
The transient decrease in MN frequency observed at T30 could indicate a partial recovery of the damage, which was however not maintained during subsequent treatments.
Finally, 223 Ra 3D-images could potentially improve the estimation of AD distribution and local RBE.
Conclusions
The results of this study, despite the small sample of patients, highlight some interesting ideas that need to be further investigated: dosimetry may be useful to identify a more appropriate 223Ra administered activity by predicting the AD to target tissue; a dose-dependent complex chromosome damage occurs during 223Ra administration that is more evident in heavily pre-treated patients; the AD to blood could be used for radioprotection purposes.
"Medicine",
"Biology",
"Physics"
] |
Mouth Morphometry and Architecture of the Freshwater Catfish Mystus vittatus Bloch (1794) (Siluriformes, Bagridae) in Relation to its Feeding Habit
Mouth morphology and architecture of the freshwater catfish Mystus vittatus were studied in relation to its food and feeding habits. The fish has a small mouth and preys mainly on small-sized prey. It possesses a terminal mouth equipped with villiform teeth on both the lower and upper jaws. The lower jaw also bears molariform teeth in addition to villiform teeth, to grasp and prevent the escape of prey. The lack of papilliform teeth and of prominent microridges suggests plankton-feeding habits and poor taste sensation of captured prey.
Introduction
Mystus vittatus is a common freshwater fish that dwells in canals, ditches, rivers, ponds, lakes, etc. and is widely distributed throughout India, Bangladesh, Pakistan, Sri Lanka and Thailand. The body of the fish is silver in colour with a golden tinge and bears 5 narrow black bands, above and below the lateral line, and a distinct black shoulder spot on each side of the body. The mouth is small and terminal, with 4 pairs of barbels. The fish dwells mainly on muddy bottoms rich in macro-zooplanktonic food, insect larvae, etc. Like in other catfishes, its mouth morphology and architecture play a significant role in searching, capturing and collecting food into the alimentary canal. The mouth morphology of a few catfishes, such as Ictalurus punctatus [1], Clarias gariepinus [2], two African catfishes, Andersonia (Amphiliidae) and Siluradon (Schilbeidae) [3], and Rita rita [4], has been well studied. Recently, Gamal et al. [5] performed scanning electron microscopic studies on the morphological adaptation of the buccal cavity of the omnivorous catfish Clarias gariepinus in relation to its feeding habits.
Recent studies indicate that there exists a strong relationship between mouth architecture and feeding habits in fish. Herbivorous fishes like Oreochromis niloticus and surgeonfishes have mouth architectures that correlate with their feeding habits [6,7]. However, the mouth morphology and architecture of M. vittatus have hardly received any attention. The present study, therefore, aims to examine the mouth morphology and architecture of M. vittatus to gain a better understanding of its feeding habits.
Collection of fish and morphometric analysis
M. vittatus specimens (n = 35) were collected from freshwater ponds in and around Bolpur, West Bengal, India throughout February 2013 and preserved in 10% formalin solution. Morphometric analysis was performed in the laboratory using a standardized scale and a digital balance (Table 1). Vertical and horizontal mouth openings were measured and the mouth area (MA) was calculated [8].
Condition factor
The condition factor (K) was determined to verify the relative condition of the fishes. Mathematically, K = (W/L³) × 100, where W is the weight in g and L the length in cm.
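The calculations can be illustrated with a short sketch; the weight and length values below are hypothetical, and the elliptical approximation of the mouth area is our assumption (the paper computes MA following [8]).

```python
# Minimal sketch of the morphometric calculations described above (hypothetical input values).
import math

def condition_factor(weight_g: float, length_cm: float) -> float:
    """Fulton's condition factor K = (W / L^3) x 100."""
    return (weight_g / length_cm ** 3) * 100

def mouth_area(vmo_cm: float, hmo_cm: float) -> float:
    """Mouth area approximated as an ellipse from vertical and horizontal openings (assumption)."""
    return math.pi * (vmo_cm / 2) * (hmo_cm / 2)

print(condition_factor(6.5, 8.8))   # ~0.95, within the reported K range of 0.55-1.18
print(mouth_area(0.737, 0.783))     # ~0.45 cm^2, close to the reported 0.453 cm^2
```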
Scanning electron microscopic (SEM) study
Freshly collected M. vittatus (n = 2) were washed with 1 M phosphate buffer (pH 7.4) and treated with 0.1 M sucrose solution for 15-20 minutes to remove mucus. After repeated washing, the samples were kept in 2.5-3% glutaraldehyde in cacodylate buffer for 4 hours at 4 °C. Thereafter, the samples were dehydrated through a graded series of ethanol, followed by critical point drying and sputter-coating with gold, and then examined under a scanning electron microscope.
Mouth architecture
Fig. 1 shows details of the SEM studies of the mouth of the fish. The upper lip is thick and more prominent than the lower lip. The upper jaw bears numerous needle-like, long and conical villiform teeth, while the lower jaw is equipped with a combination of villiform and molariform teeth.
Discussion
The fish has a dorsoventrally flattened head, with a head length of nearly 2 cm and a head depth of half the head length. The average total length of the fish was 8.8 cm, and the maxillary barbel extended up to 60% of the total length of the fish. In general, barbels in fish are outgrowths of the gustatory (taste) system, and the ratio of total length to barbel length is important as it indicates the searching ability of the fish through the gustatory arrangements of the body. This ratio was constant in M. vittatus across all sizes, indicating continuous tactile feeding behaviour throughout its growth. McCormick [9], working on the tropical goatfish Upeneus tragula (Mullidae), found that food availability influences the relationship between barbel length and fish size: slower-growing fishes have longer barbels relative to their body length. In that respect, M. vittatus is a moderately growing fish. The presence of four pairs of barbels indicates a strong gustatory ability in searching for food at the bottom. The edges of the jaws in M. vittatus end in fleshy and blunt cartilaginous lips. It has a strong upper jaw and a slightly wider lower jaw, intended for preliminary crushing of the hard armature of its prey. A flattened sub-terminal mouth with narrow vertical and horizontal openings results in a smaller mouth area (0.453 cm²), which reflects the limited feeding regime of this fish on smaller prey.
Most catfishes have either cardiform or villiform teeth. M. vittatus, however, has numerous strong, small and sharp teeth on the lower mandibular and upper maxillary jaws. The presence of teeth on the jaws is required to hold or grasp prey items and to prevent them from escaping the mouth. The maxillary teeth in M. vittatus are sharp, pointed and straight. The mandibular teeth comprise villiform and molariform types and are located on the curved band of the jaw, not on the palatine. Exclusively carnivorous fishes bear teeth on the jaws, tongue, roof of the mouth and pharynx [4]. All these help in the seizure, grasping and grinding of prey. Interestingly, M. vittatus has no canine or vomerine teeth on the jaws. Further, the absence of papilliform teeth on the jaws confirms that M. vittatus does not feed by seizure. The restriction of molariform and villiform teeth to the jaw region helps in catching and grasping activity, and therefore indicates the moderately carnivorous, filter-feeding nature of M. vittatus on zooplankton. In addition, the edentulous palatine (Figure 1) suggests that M. vittatus feeds on soft-bodied food or, if on shelled organisms, not on those with too hard a shell (e.g. molluscs). Azadi et al. [10] reported that M. vittatus is a plankton feeder and feeds on copepods, cladocerans, rotifers, ostracods, insect larvae, oligochaetes, chlorophyceae, bacillariophyceae and debris. By food composition, it is 43% a zooplankton feeder, with calanoids and copepods forming the majority in the stomach. Zoobenthos contributes 22% to its diet, with insect larvae as the major component [11]. By composition, it prefers crustaceans (24%), protozoans (13%) and insects (11%) [12]. Shafi and Quddus [13] also reported algae (22%) along with zooplankton (27%) in its gut. None of these workers reported mollusc-like food in its gut.
M. vittatus bears poorly distributed microridges in its mouth. The functional significance of microridges has been considered to be serving as a secretory source of lubricant, facilitating the movement of materials over the cell surface and protecting the plasmalemma from damage by abrasion, especially from hard food substances. As M. vittatus has a feeding regime limited to soft-shelled zooplanktonic organisms, microridges are not an essential architectural structure in the mouth for feeding activity. The lack of prominent or compact microridges further suggests its inability to adopt taste-based (gustatory) foraging on selected prey items. As in most freshwater fishes, the presence of traces of microridges may be an evolutionary remnant, but without prominent functions.
The mouth morphometry and architecture describe the functional ecology and ethology of the feeding regimes of fish [14,15]. The shape of the body and mouth, the dentition system and the barbels in M. vittatus confirm its carnivorous feeding on small prey, such as zooplankton, without strong taste sensation and with poor predation on hard prey items.
Fig. 1. Scanning electron micrographs of the mouth architecture of M. vittatus. (A) Dentition in the upper jaw. The black star indicates the upper lip; V, vellum; the double-headed black arrow indicates villiform dentition. (B) Magnified portion of the upper jaw showing villiform dentition. (C) Dentition in the lower jaw. The black star indicates the lower lip. (D) Magnified portion of the lower jaw showing villiform and molariform dentition; mf, molariform teeth; Vf, villiform teeth.
Table 1. Definition of morphometric measures recorded for M. vittatus (all lengths in cm).
HL Distance on a straight line between the anterior most part of snout and posterior most edge of the opercular bone
3.1. Mouth morphology

The mean mouth morphometric measures of M. vittatus (with K values ranging between 0.55 and 1.18) are presented in Table 2. The mean HL and HD of the fish were 1.94 cm and 1.1 cm, respectively. The mean lengths of the upper and lower jaws did not differ (0.60 cm each). It has a slightly protruding snout of 0.70 cm in length. The mouth bears four pairs of unequal barbels, viz. maxillary (5.44 cm), long mandibular (2.29 cm), short mandibular (1.48 cm) and nasal (1.08 cm). VMO and HMO were of almost equal length (0.737 cm and 0.783 cm, respectively).
"Biology",
"Environmental Science"
] |
Adapting Multilingual Models for Code-Mixed Translation
The scarcity of gold-standard code-mixed to pure-language parallel data makes it difficult to train translation models reliably. Prior work has addressed the paucity of parallel data with data augmentation techniques. Such methods rely heavily on external resources, making systems difficult to train and scale effectively for multiple languages. We present a simple yet highly effective two-stage back-translation based training scheme for adapting multilingual models to the task of code-mixed translation, which eliminates dependence on external resources. We show a substantial improvement in translation quality (measured through BLEU), beating existing prior work by up to +3.8 BLEU on code-mixed Hi → En, Mr → En, and Bn → En tasks. On the LinCE Machine Translation leaderboard, we achieve the highest score for code-mixed Es → En, beating the existing best baseline by +6.5 BLEU, and our own stronger baseline by +1.1 BLEU.
Introduction
As code-mixing (Diab et al., 2014; Winata et al., 2019; Khanuja et al., 2020; Aguilar et al., 2020) becomes widespread in an increasingly digitized bilingual community, it becomes important to extend translation systems to handle code-mixed input. A major challenge for training code-mixed translation models is the lack of parallel data. Recent work on generating synthetic parallel data using available non-code-mixed parallel data depends on language-specific tools for transliteration, word alignment, and language identification (Gupta et al., 2021). This makes the approach difficult to scale to new languages and increases software complexity. Back-translation (BT) is another effective and popular strategy to handle the non-availability of parallel data (Sennrich et al., 2016; Edunov et al., 2018). However, for the code-mixed to English translation task, simple BT is not an option since we cannot assume the presence of an English to code-mixed translation model.
Meanwhile the mainstream translation community is converging on frameworks based on multilingual models for translation between multiple language pairs (Johnson et al., 2017; Aharoni et al., 2019; Arivazhagan et al., 2019; Zhang et al., 2020; Fan et al., 2021). Going forward, code-mixed translation needs to be integrated within these frameworks to impact practical systems.
We propose a novel two-stage back-translation methodology called Back-to-Back Translation (B2BT), targeted at adapting multilingual models to code-mixed translation. Our approach is simple and integrates easily with existing multilingual translation models without any need for special models or language-specific tools. We compare B2BT with six other baselines on both standalone and mBART-based models across four benchmarks and show significant gains. For example, on code-mixed Hindi to English translation B2BT improves state-of-the-art accuracy by +3.8 BLEU and by +6.3 BLEU over default back-translation. We analyze the reasons for the gains via both human evaluation and impact on downstream models. We release a new dataset and will publicly release our code.
Our Approach
Our objective is to train a model that can translate a sentence from the code-mixed language C, which contains words from English and an additional language S, to monolingual English E. Following (Myers-Scotton, 1997) we refer to S as the matrix language, as it lends its grammar in a code-mixed utterance, and English as the embedded language, since it lends only its words. We are given a parallel S to English corpus (S, E) ⊂ (S, E) and a non-parallel code-mixed corpus C ⊂ C. Since code-mixing appears more in domains like social media, which differ from formal domains like news in which parallel data (S, E) is available, we additionally use an in-domain monolingual English corpus E M D.

Training Base Multilingual Model The first step is to train a multilingual model (M) on the parallel matrix language to English corpus (S, E) in both directions and on non-parallel data in English E M , matrix language S M , and code-mixed C. Following Johnson et al. (2017), we prefix source sentences with one of <2en>, <2cm>, and <2xx>, directing the target to be English, CM, or S respectively. For the non-parallel corpora, we train the model to copy the source to the target by masking out 20% of tokens in the source, as in (Song et al., 2019b).
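For concreteness, a minimal sketch of the tag prefixing and source-side masking used in this stage is given below; the helper names and the example sentence are our own illustrations, while <2en>, <2cm>, <2xx> and <M> follow the description above.

```python
# Illustrative data-preparation sketch for the base multilingual model (not the authors' code).
import random

def prefix(source: str, target_lang_tag: str) -> str:
    """Prepend the target-language tag (<2en>, <2cm>, or <2xx>) to a source sentence."""
    return f"{target_lang_tag} {source}"

def mask_tokens(sentence: str, mask_prob: float = 0.2, mask_token: str = "<M>") -> str:
    """Randomly mask a fraction of tokens, as done for the copy task on non-parallel data."""
    tokens = sentence.split()
    return " ".join(mask_token if random.random() < mask_prob else t for t in tokens)

# Parallel S -> E example: source carries the <2en> tag, target is the English sentence.
src_parallel = prefix("yeh movie bahut achhi thi", "<2en>")
# Non-parallel code-mixed example for the copy task: masked source, original sentence as target.
src_copy = prefix(mask_tokens("yeh movie bahut achhi thi"), "<2cm>")
print(src_parallel)
print(src_copy)
```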
The above training exposes M to all three languages in both encoder and decoder, and a baseline is to just use this bidirectional model for our task. We will show that such a model provides marginal gains over a simple S → E model. However, we adapt M further using synthetic parallel data for the C → E task. Back-translation (BT) of E to C using M to generate synthetic parallel data provides very poor quality, as we show in Section 4. This motivates our two-stage BT approach. A key insight of the B2BT method is that M trained with parallel S → E data gives better-quality outputs when translating C to E than the reverse. The reason is that C shares the grammar structure of S and M is trained to handle noise in the input. We describe the two-step BT next.
Fine-tune for E → C Here we prepare M to back-translate pure English sentences to code-mixed sentences, so that the resulting synthetic parallel data can be used to train a better code-mixed to English translation model. We first back-translate the monolingual code-mixed corpus C to English E B using M. The back-translation is done by prefixing <2en> to the code-mixed input and sampling English output from M. This provides us with a synthetic English to code-mixed parallel corpus (E B , C). We fine-tune M on (E B , C) to produce a model M ′, where source sentences are prefixed with <2cm>. Since the target distribution C is preserved during training, we can now generate high-quality in-domain code-mixed sentences using M ′.
Fine-tune for C → E In the final step we realise our objective of C → E translation. We start by back-translating the in-domain monolingual English corpus E M D to code-mixed C B using M ′. This is done by prefixing English sentences with the <2cm> tag, and sampling code-mixed outputs from M ′. We now have a synthetic code-mixed to English parallel corpus (C B , E M D ). We fine-tune M to obtain our final model M * on this synthetic parallel corpus, where all the source sentences in C B are prefixed with the <2en> token. The full pipeline is summarized in the sketch below.

The biggest challenge in translation of code-mixed sentences is the lack of large parallel training data (Mahesh et al., 2005; Menacer et al., 2019; Nakayama et al., 2019; Srivastava and Singh, 2020). Gupta et al. (2021) propose to create synthetic parallel CM data via these two steps: (1) train an mBERT model to identify a word set W to switch in a sentence from S to E, effectively creating a sentence from C; (2) align parallel sentences from (S, E) and replace words in W with their aligned English words. We call this the mBertAln method in this paper. This pipeline for a new language S requires the following four external tools: (1) mBERT pre-trained on S, (2) a language identifier tool to spot English tokens in a CM sentence, (3) a word alignment model, and (4) a translator E → S for BT. For low-resource languages such tools may not exist. In contrast, B2BT is totally standalone. Even when external tools exist, we show empirically that the synthetic sentences thus generated tend to be of lower quality than ours because of errors in either of the two steps. The CALCS 2021 workshop (Solorio et al., 2021) also released a shared task for CM translation, but the submissions so far are straightforward applications of multilingual BART models, with which we also compare our method.
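Putting the three stages together, the sketch below outlines the B2BT pipeline; train_model, finetune_model and translate_corpus are hypothetical placeholder helpers standing in for the actual (mBART/transformer-based) training code, not functions from any specific library.

```python
# High-level sketch of the B2BT pipeline with placeholder helpers (illustrative only).

def train_model(**kwargs):                 # placeholder for base multilingual training
    return {"stage": "base", **kwargs}

def finetune_model(model, **kwargs):       # placeholder for a fine-tuning run
    return {"stage": "finetuned", "parent": model, **kwargs}

def translate_corpus(model, corpus, target_tag):  # placeholder for tag-prefixed sampling
    return [f"{target_tag} translation of: {s}" for s in corpus]

def b2bt(parallel_S_E, mono_corpora, mono_C, in_domain_E):
    # Stage 1: base multilingual model M on parallel S<->E plus masked copy tasks.
    M = train_model(parallel=parallel_S_E, monolingual=mono_corpora)
    # Stage 2: back-translate C -> E with M, then fine-tune the E -> C direction to get M'.
    E_B = translate_corpus(M, mono_C, "<2en>")
    M_prime = finetune_model(M, src=E_B, tgt=mono_C, src_tag="<2cm>")
    # Stage 3: back-translate in-domain E -> C with M', then fine-tune C -> E to get M*.
    C_B = translate_corpus(M_prime, in_domain_E, "<2cm>")
    return finetune_model(M, src=C_B, tgt=in_domain_E, src_tag="<2en>")

model = b2bt(parallel_S_E=[("namaste duniya", "hello world")],
             mono_corpora=["hello world", "namaste duniya"],
             mono_C=["yeh movie great thi"],
             in_domain_E=["this movie was great"])
print(model["stage"])  # 'finetuned'
```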
Related Work
B2BT is reminiscent of dual learning NMT methods (He et al., 2016; Artetxe et al., 2018; Hoang et al., 2018; Cheng et al., 2016), but these methods were designed for two generic languages, whereas B2BT for code-mixed translation handles three languages related in specific asymmetric ways. We exploit that asymmetry to design our training schedule. For example, since C → E translations are more accurate than the reverse, we insert the intermediate BT stage.
Experiments
We use the notation SoEn→En to indicate translation from a code-mixed matrix language with code 'So' to English. We evaluate on four code-mixed datasets: Hindi (HiEn→En) from Gupta et al. (2021), Spanish (EsEn→En) on the LinCE leaderboard1, Bengali (BnEn→En) from Gupta et al. (2021) but augmented with the newly released Samanantar data to create a stronger baseline (evaluation is done on the splits released by the authors), and a new Marathi (MrEn→En) dataset that we introduce2. A summary of the training data used and our model setup is in Appendices A and B.

Baselines We compare our method, B2BT, against the mBertAln model (Gupta et al., 2021) and these baselines: (1) the base bilingual S → E model, (2) the base model fine-tuned with E → S BT on domain data E M D, (3) the base multilingual model M obtained after the first stage of B2BT, (4) M fine-tuned with E → S BT on domain data E M D, (5) M fine-tuned with E → C BT on E M D.

Results Table 1 compares the B2BT approach against these baselines on HiEn→En, BnEn→En, and MrEn→En. Observe how B2BT significantly outperforms mBertAln and the multilingual model adapted with existing single-step back-translation across all language pairs. We also see substantial improvements on the two adversarial subsets ST-OOV and ST-Hard. This establishes the importance of our two-stage back-translation approach. Note in particular that when we fine-tuned with

Our approach can also complement existing multilingual pre-trained models such as mBART. Table 2 presents results with the base multilingual model M trained by fine-tuning an mBART checkpoint.
Here again we observe gains beyond simple BT-based fine-tuning of the multilingual model.
Why does B2BT outperform mBertAln?
We hypothesize that the reason our model performs substantially better is that the synthetic data generated by our model is of higher quality. To test this hypothesis we replace the synthetic code-mixed parallel data of B2BT with synthetic data from mBertAln (Gupta et al., 2021), while keeping the rest of the training of M * unchanged. Table 3 presents this result. It is important to note that all the fine-tuning sets have the exact same size and all fine-tuning is performed on the same multilingual base model, M. The only difference is in the method used to create the synthetic side of the fine-tuning dataset. The improvement of almost +4.9 BLEU points on ST-Test over using mBertAln data clearly shows that the synthetic data from our model has better quality.
To directly quantify this fact, we performed human evaluation of data quality. Human raters were asked to rate fluency and intent preservation for source-target pairs (similar to Wu et al. (2016)) on a scale of 0 (irrelevant) to 6 (perfect). Across 500 examples, we observe that synthetic data from B2BT is rated 4.27 out of 6 on average, compared to 3.74 for mBertAln. In 39% of examples B2BT is rated higher than mBertAln, in 45% of examples the two get the same score, and in only 17% of examples is mBertAln better (Table 4). In mBertAln the quality of synthetic data could suffer because of poor back-translation, mBERT failing to capture the code-switching pattern, or the alignment model failing to predict the aligned English token. Figure 2 presents examples of synthetic sentences generated by B2BT vs mBertAln. The mBertAln method produces word repetitions like "open" in row 2, which could be an alignment mistake, and word omissions like "box" in row 1, which could be caused by poor back-translation or alignment.
Finally, we compare code-mixing statistics between the synthetic data generated by B2BT and mBertAln in Table 4. The data generated from B2BT is closer to the test data in terms of the Code-Mixing Index, the fraction of English tokens common to the source and target, and the average probability of switching at a given word.
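For reference, a sketch of the per-utterance Code-Mixing Index in its commonly quoted form (Gambäck and Das, 2016) is given below; the exact implementation used for Table 4, e.g. the handling of language-independent tokens, may differ in detail.

```python
# Sketch of the per-utterance Code-Mixing Index (CMI); an approximation of the metric cited above.
from collections import Counter

def cmi(token_langs):
    """token_langs: list of per-token language tags, e.g. ['hi', 'en', 'hi', 'univ'].
    'univ' marks language-independent tokens (punctuation, named entities, ...)."""
    n = len(token_langs)
    tagged = [lang for lang in token_langs if lang != "univ"]
    u = n - len(tagged)
    if n == u:                      # no language-tagged tokens -> no mixing
        return 0.0
    max_wi = max(Counter(tagged).values())
    return 100.0 * (1.0 - max_wi / (n - u))

print(cmi(["hi", "en", "hi", "hi", "univ"]))  # 25.0: one of four tagged tokens is switched
```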
Varying degree of code-mixing Following Gupta et al. (2021), we also evaluate the effectiveness of our model across different splits of the test set with varying Code-Mixing Index (Gambäck and Das, 2016) (CMI). Figure 3 presents the improvements from our model on the three splits of the test set. We see improvements across all splits, but the largest improvements are on the split with the highest degree of code-mixing. On the high CMI split, we see about a +8.7 BLEU point improvement over the mBERT approach, and a +14.5 BLEU point improvement over the baseline.

Masking during fine-tuning in B2BT A distinctive property of code-mixed translation is word overlap between the source and target sentences. Such overlap makes the fine-tuned model overly biased towards the easier copy action. We alleviate this bias by introducing random masking of words in the source sentence (with masking probability 0.2). Unlike prior work (Song et al., 2019b), which applies such masking only for pre-training with monolingual corpora, we propose to mask tokens even when training with parallel data. We evaluate the impact of this source-side masking in B2BT's fine-tuning stages. Table 5 compares model performance with and without source-side masking when fine-tuning. We observe noticeable gains, with the highest for BnEn at +1.5.
Conclusion
We present a simple two-stage back-translation approach (B2BT) for adapting multilingual models for code-switched translation. B2BT shows remarkable improvements on four datasets compared to recent methods and default back-translation baselines. Our approach fits naturally with existing multilingual translation frameworks, which is crucial in expanding coverage to low-resource languages without building per-language-pair models. We demonstrate with ablation studies and human evaluations that the synthetic data created through the two-step process in B2BT is objectively of higher quality than the one used by existing work.
Limitations
Our method depends on code-mixed monolingual data, which may not always be available. Additionally, for low-resource languages, we might not have access to enough non-code-mixed parallel data, which also forms a crucial component of our approach.
Standalone Multilingual Models For training all non-mBART models, we use the standard transformer architecture from Vaswani et al. (2017) with six encoder and decoder layers. In the data pre-processing step, we first tokenize with the Indic-NLP (Kunchukuttan, 2020) tokenizer for Indic-language and code-mixed sentences and the Moses tokenizer6 for pure English sentences. Next, we apply BPE with codes learned jointly on the monolingual English and monolingual non-code-mixed datasets, for 20,000 operations (the resulting dictionary is manually appended with the special tokens <2en>, <2xx>, <2cm> and <M>). We use the Adam optimizer with a learning rate of 5e-4 and 4000 warmup steps. We train all models for up to 100 epochs and select the best checkpoint based on the loss on the validation split. For the two BT-based fine-tuning stages in B2BT we use a constant learning rate of 1e-4 and use a random 2K subset of the BT data as the validation split.
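The hyperparameters listed above can be summarized in a single configuration sketch; the dictionary keys below are our own naming and do not correspond to the authors' actual training scripts.

```python
# Illustrative configuration summary of the standalone-model setup described above.
standalone_model_config = {
    "architecture": "transformer",            # Vaswani et al. (2017)
    "encoder_layers": 6,
    "decoder_layers": 6,
    "bpe_merge_operations": 20_000,            # learned jointly on monolingual datasets
    "special_tokens": ["<2en>", "<2xx>", "<2cm>", "<M>"],
    "optimizer": "adam",
    "learning_rate": 5e-4,
    "warmup_steps": 4000,
    "max_epochs": 100,                         # best checkpoint chosen by validation loss
    "bt_finetune_learning_rate": 1e-4,         # constant LR for the two BT stages
    "bt_validation_subset_size": 2000,         # random subset of the BT data
}
print(standalone_model_config)
```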
Pre-trained mBART-based Multilingual Models
The mBART models are trained by fine-tuning the CC25 mBART checkpoint. The model has 12 encoder and decoder layers, with a model dimension of 1024 and 16 attention heads (∼610M parameters). We modify the existing SentencePiece model by adding the three special tokens <2en>, <2xx> and <2cm>, so they are not tokenized, and also add them to the dictionary by replacing three tokens of a language we are not currently experimenting with. The multilingual model is trained for 100K steps, while the fine-tuning stages of B2BT are trained for up to 25K steps.
Figure 1: B2BT training pipeline, showing the two-stage back-translation based adaptation of an initial multilingual model. (•) indicates source-side masking during training.
Figure 2: Examples of synthetic sentences from mBertAln vs B2BT. English translations of Devanagari words are provided.
Figure 3: Improvements in BLEU with B2BT against the mBERT-based model and the domain-adapted bilingual model baseline across three splits of the test set with varying degree of code-mixing in the source.
Table 5: Comparing BLEU on ST-Test between masked vs un-masked fine-tuning to train M* in the B2BT approach.
"Computer Science"
] |
Expressibility and trainability of parameterized analog quantum systems for machine learning applications
Parameterized quantum evolution is the main ingredient in variational quantum algorithms for near-term quantum devices. In digital quantum computing, it has been shown that random parameterized quantum circuits are able to express complex distributions intractable by a classical computer, leading to the demonstration of quantum supremacy. However, their chaotic nature makes parameter optimization challenging in variational approaches. Evidence of similar classically-intractable expressibility has recently been demonstrated in analog quantum computing with driven many-body systems. A thorough investigation of the trainability of such analog systems is yet to be performed. In this work, we investigate how the interplay between external driving and disorder in the system dictates the trainability and expressibility of interacting quantum systems. We show that if the system thermalizes, the training fails at the expense of a large expressibility, while the opposite happens when the system enters the many-body localized (MBL) phase. From this observation, we devise a protocol using quenched MBL dynamics which allows accurate trainability while keeping the overall dynamics in the quantum supremacy regime. Our work shows the fundamental connection between quantum many-body physics and its application in machine learning. We conclude our work with an example application in generative modeling employing a well-studied analog many-body model of a driven Ising spin chain. Our approach can be implemented with a variety of available quantum platforms including cold ions, atoms and superconducting circuits.
I. INTRODUCTION
The recent achievement of quantum supremacy [1], the ability of quantum systems to compute tasks that are intractable by a classical computer, stands as an important milestone for noisy intermediate-scale quantum (NISQ) devices [2]. A common approach to operate NISQ devices is to implement variational quantum algorithms (VQAs), where a classical feedback loop is used to passively correct the noise in the quantum device [3][4][5]. VQAs have been implemented to tackle a wide range of problems, from quantum chemistry [6][7][8][9][10][11], machine learning [12,13], quadratic binary optimization [14][15][16], to high energy physics [17].
One of the key questions for NISQ devices is whether they can provide provable quantum advantage for real-world problems. A hint to answer this question lies in the ability of NISQ devices to efficiently explore Hilbert space. For example, in quantum chemistry, NISQ devices can produce highly-entangled variational ansatzes, such as unitary coupled clusters, that cannot be efficiently represented on a classical computer [18]. In machine learning, quantum circuits have been proven to have more 'expressive power' than any classical neural networks [19][20][21]. This means that those circuits can produce complex distributions that are intractable for classical models.

Similar to classical variational algorithms, VQAs rely on 'good' ansatzes that can efficiently capture the answer of a given problem. In the case when such an ansatz is not known or implementable, it is desirable to exploit the high expressibility of some NISQ devices to generate an unbiased guess. The latter is known as 'hardware efficient' [8]. A common feature of this approach is to exploit chaotic dynamics, which allows the system to quickly explore the entire Hilbert space. However, this chaoticity also makes it difficult, if not impossible, to classically optimize the system, since it is highly sensitive to any small changes in the parameters. In the digital case, hardware-efficient ansatzes suffer from the barren plateaus problem [22,23], where the landscape of the cost function becomes exponentially flat as the number of qubits increases. Hence, finding the right VQA for a given problem is an emerging art of balancing expressibility, implementability and trainability of the NISQ devices.
Analog quantum simulators stand out from their digital counterparts when it comes to implementability [24][25][26]. Here, a quantum device is built to mimic a specific Hamiltonian, which requires significantly less control than universal quantum circuits. State-of-the-art quantum simulators have already been able to produce dynamics intractable by existing classical algorithms [27]. Quantum supremacy in analog simulators has also been proven in 2D Ising lattices [28,29], cluster states [30], and more recently in periodically-driven quantum many-body systems [31]. Hybrid analog-digital approaches for VQAs have been explored in Refs. [7-9, 13, 14, 32-34].
In this work, we analyze the expressibility and trainability of analog quantum devices, focusing on parameterized driven quantum many-body systems. We show that these properties are intimately related to the phases of the system. We focus on four generic phases depending on whether the dynamics is thermalized or many-body localized (MBL) [35,36] and whether a continuous drive is applied. As an example, we consider the standard Ising chain, globally driven by an external magnetic field. We find that, evolving under the dynamics resulting from a series of quenches between randomized disorder configurations, the system in all four phases is capable of reaching the quantum supremacy regime, illustrating its high expressibility beyond a classical computer. We then devise a simple sequential training protocol to train the system for generative modeling tasks in machine learning. We show that the chaoticity in the thermalized phase prevents the training, as in the digital case. However, the integrability of the MBL dynamics within each quench drastically increases the trainability of the system. The final learning accuracy depends solely on the phase of the system.
II. DRIVEN ANALOG QUANTUM SYSTEMS AND THEIR STATISTICS
In this section, we study the many-body dynamics of generic parameterized quantum systems and the different statistics associated with their phases. We then analyze a specific example of a driven quantum Ising chain, which will be used for the analysis of the expressibility and trainability in the following sections.
General framework
We consider fully general quenched quantum many-body systems |ψ(Θ_M)⟩ = Û(Θ_M)|ψ_0⟩, where |ψ_0⟩ is an initial product state, Θ_M is a vector containing all variational parameters during the evolution and M is the number of times the system is quenched. The unitary time evolution is

Û(Θ_M) = Û(θ_M) ··· Û(θ_2) Û(θ_1),

where Θ_M = {θ_m}_{m=1}^M and each quench/layer is obtained from a time-dependent Hamiltonian Ĥ(θ_m, t), i.e.

Û(θ_m) = 𝒯 exp( −i ∫_0^T Ĥ(θ_m, t) dt ),

with m ∈ {1, 2, ..., M}, 𝒯 being the time-ordering operator and T being the evolution time during each layer. The Hamiltonian is further decomposed as

Ĥ(θ_m, t) = Ĥ_0(θ_m) + f(t) V̂,

where Ĥ_0(θ_m) is a static Hamiltonian and V̂ is the driving Hamiltonian, such that [Ĥ_0(θ_m), V̂] ≠ 0. The modulation f(t) is an oscillating function with the period T. We require that the time-averaged Hamiltonian Ĥ_ave(θ_m) = (1/T) ∫_0^T Ĥ(θ_m, t) dt is many-body [37].
We can now define the four regimes or 'phases' of Û(θ_m) in the above sense according to whether the dynamics is thermalized or MBL and whether f(t) is zero or non-zero. To allow non-trivial dynamics within each layer, we require 2π/T to be smaller than a typical energy gap of Ĥ_ave(θ_m). We assume that all Û(θ_m)'s in Û(Θ_M) belong to the same phase for simplicity.
Let us explore the various statistics associated with the four phases, starting with the f(t) = 0 case, in which Ĥ_eff(θ_m) = Ĥ_ave(θ_m). For the thermalized dynamics, the statistics of Ĥ_ave(θ_m) follows the Gaussian orthogonal ensemble (GOE) [47]. This is the ensemble of matrices whose entries are independent normal random variables subjected to the orthogonality constraint. This randomness is a signature of quantum chaos, which is a crucial ingredient for thermalization [39]. A large disorder can prevent the system from thermalizing, leading to MBL dynamics. In this case, the eigenenergies of Ĥ_ave(θ_m) follow the Poisson (POI) statistics, indicating that they are uncorrelated.
In the driven case, i.e. f(t) ≠ 0, the statistics are defined at the level of the unitary operator Û(θ_m), as it is generally not possible to have access to Ĥ_eff. For the driven thermalized dynamics, the statistics of Û(θ_m) follows the circular orthogonal ensemble (COE) [48]. This is the ensemble of matrices whose entries are independent complex normal random variables subjected to the orthogonality and unitarity constraints. Unlike the GOE, the COE is intimately related to the infinite-temperature ensemble and is not possible to obtain without a drive [48]. As before, a large disorder can prevent thermalization even with f(t) ≠ 0, leading to the POI statistics of the quasi-energies (to be defined later) [49,50]. A summary of all the statistics is given in Table I.

TABLE I. A summary of statistics, expressibility, and trainability in the four regimes, defined by whether Û(θ_m) is thermalized or MBL and whether f(t) = 0 or f(t) ≠ 0. The symbol '-' indicates that the statistics is not defined.

                                          Thermalized, f(t)=0 | MBL, f(t)=0 | Thermalized, f(t)≠0 | MBL, f(t)≠0
Statistics of Ĥ_ave(θ_m)                          GOE         |     POI     |          -          |      -
Statistics of Û(θ_m)                               -          |      -      |         COE         |     POI
High expressibility (quantum supremacy)           yes         |     yes     |         yes         |     yes
Trainability for generative modeling              no          |     yes     |         no          |     yes (best)
Driven disordered quantum Ising chains
To illustrate the four generic phases, we will work on a specific example of driven quantum Ising chains, with drive f(t) = −(F/2) cos(ωt), ω = 2π/T and θ_m = {θ_i,m}_{i=1}^L, where L is the number of spins, {X̂_i, Ẑ_i} are Pauli operators acting on site i, J is the interaction strength, h is a static magnetic field and F is the driving amplitude. The parameters {θ_i,m} are 'varied' by randomly drawing them from a uniform distribution in the range [0, W], where W is the disorder strength. This allows us to vary the parameters without changing the phase of the system. The dimension of the Hilbert space is N = 2^L. The initial state |ψ_0⟩ is prepared as a product state with each spin pointing along the +z direction. This simple model has been implemented in various quantum platforms, including Rydberg atoms [51], trapped ions [52] and superconducting circuits [16].
The standard way to analyze the statistics of the system is to define the level statistics Pr(r_α) as the normalized distribution of the spacing ratios r_α = min(∆_α, ∆_{α+1}) / max(∆_α, ∆_{α+1}), where ∆_α = E_{α+1} − E_α is the level spacing with E_{α+1} > E_α and α = 1, 2, ..., 2^L − 1. In the f(t) = 0 case, {E_α} are eigenenergies of Ĥ_ave(θ_m). In the f(t) ≠ 0 case, {E_α} are quasi-energies, defined such that {exp(−iE_α T)} are eigenvalues of Û(θ_m). Not only does Ĥ_eff differ from Ĥ_ave in the driven case, but the quasi-energies are also defined in the limited range E_α ∈ [0, 2π). This energy folding has a profound impact on the resulting statistics.
In Fig. 1, we show the level statistics for F = 0 and F = 2.5J, with W = 1J and W = 20J. For a small disorder W = 1J, the level statistics of Ĥ_ave(θ_m) and Û(θ_m) agree with the predictions from the GOE and the COE, respectively. For a large disorder W = 20J, the level statistics of both Ĥ_ave(θ_m) and Û(θ_m) follow the POI distribution, as expected.
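As a concrete illustration of this diagnostic, the sketch below builds a small disordered Ising chain in Python, diagonalizes the undriven Hamiltonian and computes the spacing ratios r_α. Since the explicit Hamiltonian of the model is not reproduced in the text above, the mixed-field form used here (ZZ coupling J, transverse field h, on-site disorder θ_i) is only an assumed, illustrative parameterization; the reference values ⟨r⟩ ~ 0.53 (GOE) and ⟨r⟩ ~ 0.39 (Poisson) are the standard benchmarks.

```python
import numpy as np

# Pauli matrices and identity
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def site_op(op, i, L):
    """Embed a single-site operator `op` at site i of an L-spin chain."""
    mats = [I2] * L
    mats[i] = op
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

def ising_hamiltonian(theta, J=1.0, h=2.5, L=9):
    """Assumed disordered Ising chain: ZZ coupling J, transverse field h,
    and site-dependent longitudinal disorder theta_i (illustrative form only)."""
    H = np.zeros((2**L, 2**L), dtype=complex)
    for i in range(L - 1):
        H += J * site_op(Z, i, L) @ site_op(Z, i + 1, L)
    for i in range(L):
        H += h * site_op(X, i, L) + theta[i] * site_op(Z, i, L)
    return H

def spacing_ratios(energies):
    """Level-spacing ratios r_alpha = min(d_a, d_{a+1}) / max(d_a, d_{a+1})."""
    d = np.diff(np.sort(energies))
    return np.minimum(d[:-1], d[1:]) / np.maximum(d[:-1], d[1:])

rng = np.random.default_rng(0)
L, W = 9, 20.0                         # W = 20J puts the assumed model in the strongly disordered regime
theta = rng.uniform(0.0, W, size=L)
E = np.linalg.eigvalsh(ising_hamiltonian(theta, L=L))
r = spacing_ratios(E)
print(f"<r> = {r.mean():.3f}  (GOE ~ 0.53, Poisson ~ 0.39)")
```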
III. EXPRESSIBILITY OF DRIVEN QUANTUM-MANY BODY SYSTEMS
In this section, we show that, given a large number of quenches M , the overall dynamics described by Û(Θ M ) for all four phases is capable of reaching the quantum supremacy regime, implying high expressibility of our system beyond a classical computer.
Expressibility and quantum supremacy
Expressibility is the term used in machine learning to describe the range of the resulting functions that a model can compute [53]. In the context of quantum computing, expressibility relates to how much a quantum system can explore the Hilbert space [54]. For example, product-state ansatzes have a lower expressibility than tensor-network ansatzes, due to their inability to capture entangled states [55].
The concepts of quantum supremacy and expressibility are interconnected. In random quantum circuit proposals for quantum supremacy, a universal set of quantum gates is designed such that the system is chaotic and quickly explores the entire Hilbert space over time [56]. Consequently, it is impossible for a classical computer to efficiently reproduce its output distribution, unless the polynomial hierarchy collapses. Hence, random quantum circuits with L ≳ 100 qubits have higher expressibility than any possible model implementable on a classical computer.
Let us consider the task of approximating p(z; Θ_M) up to additive error, i.e.

Σ_z |q(z) − p(z; Θ_M)| ≤ ν,

where ν is a positive constant, {z} are output bitstrings measured in the computational basis, p(z; Θ_M) is the exact output probability, and q(z) is the approximated value obtained from a classical/quantum device. In principle, a quantum device can satisfy this condition by directly implementing Û(Θ_M) in the hardware and measuring the output multiple times to construct q(z). To show that a classical computer cannot do the same efficiently unless the polynomial hierarchy collapses, one needs to show that (i) it is #P-hard to approximate p(z; Θ_M) up to multiplicative error [57], i.e. |q(z) − p(z; Θ_M)| ≤ γ p(z; Θ_M), and (ii) the output distribution anti-concentrates, i.e.

Pr_z[ p(z; Θ_M) ≥ δ/N ] ≥ γ,

where δ, γ are some constants. We refer interested readers to Refs. [28,59,60] for the derivation of how these two conditions lead to the proof of quantum supremacy.
Achieving quantum supremacy with quenched quantum many-body systems
The #P-hardness of approximating p(z; Θ_M) up to multiplicative error has been shown (for the worst-case instance) in the case where it results from a unitary evolution that follows the circular unitary ensemble (CUE) statistics [60,61]. The CUE is the ensemble of matrices whose entries are independent complex normal random variables subject to the unitarity constraint [62]. Such statistics can be probed from both the previously defined level statistics Pr(r_α) and the distribution Pr(c = |⟨z|E_α⟩|²) of the eigenstates |E_α⟩ of Û(Θ_M). Fig. 2(a) and (b) show the statistics of the eigenstates and the quasi-energies of Û(Θ_M) in the four regimes at M = 400, respectively. It can be seen that in all cases the results match the CUE statistics, indicating the #P-hardness of approximating the resulting p(z; Θ_M) up to multiplicative error. Our finding agrees with Ref. [63], which shows that random quenches in atomic Hubbard and spin models with long-range interactions lead to the n-design property. The n-design ensemble produces the CUE when n → ∞, which happens in the long-time limit [64].
In Fig. 2(c), we plot the Kullback-Leibler (KL) divergence of the output distribution Pr(p) from the Porter-Thomas distribution, Pr_PT(p) = N e^(−Np). The latter implies that the system explores the entire Hilbert space. (Here, we drop the argument Θ_M for brevity.) The Porter-Thomas distribution satisfies the anti-concentration condition [61]. From Fig. 2(c), it can be seen that the system in all four phases reaches the Porter-Thomas distribution over time, with different timescales. The thermalized case with F = 2.5J reaches it first, at M ∼ 10. The thermalized case with F = 0 and the MBL case with F = 2.5J have a similar convergence rate and saturate at M ∼ 100. The MBL case with F = 0 has the slowest rate and saturates at M ∼ 250. This is expected, as MBL dynamics localizes the system, while the drive F 'heats up' the system, leading to de-localization.
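A minimal sketch of this Porter-Thomas diagnostic: given the 2^L output probabilities of a state, it histograms Pr(p) and estimates its KL divergence from Pr_PT(p) = N e^(−Np). The binning and the Haar-random test state below are illustrative choices, not the procedure of the original paper.

```python
import numpy as np

def kl_from_porter_thomas(probabilities, n_bins=50):
    """Estimate KL( Pr(p) || Pr_PT(p) ) from a set of output probabilities p(z).

    `probabilities` are the 2^L computational-basis probabilities of a state;
    Pr_PT(p) = N exp(-N p) with N = len(probabilities)."""
    N = len(probabilities)
    bins = np.linspace(0.0, probabilities.max(), n_bins + 1)
    counts, edges = np.histogram(probabilities, bins=bins, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    pt = N * np.exp(-N * centers)            # Porter-Thomas density at bin centers
    mask = (counts > 0) & (pt > 0)
    width = np.diff(edges)[mask]
    return np.sum(counts[mask] * width * np.log(counts[mask] / pt[mask]))

# Example: a Haar-random state should already be close to Porter-Thomas
rng = np.random.default_rng(1)
L = 9
amps = rng.normal(size=2**L) + 1j * rng.normal(size=2**L)
amps /= np.linalg.norm(amps)
p = np.abs(amps) ** 2
print(f"KL from Porter-Thomas: {kl_from_porter_thomas(p):.3f}")
```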
Fig. 2(a)-(c) provides evidence that |ψ(Θ_M)⟩ cannot be efficiently approximated by a classical computer. This suggests that, for a large number of qubits, our system in all phases has higher expressibility than any classical model.
IV. TRAINABILITY OF DRIVEN ANALOG QUANTUM-MANY BODY SYSTEMS
In the context of machine learning, having a model with large expressibility is necessary but not sufficient, as the model also needs to be trainable. We here address the interplay between expressibility and trainability for the four generic phases of driven analog many-body systems discussed so far. Interestingly, we show that the external drive and the temporal correlations between different quenches in the MBL phase are the key ingredients to combine these two crucial characteristics.
Generative modeling in classical machine learning
As a testbed to analyse the trainability of our model, we solve a generative modeling problem in machine learning [65]. The latter is an unsupervised task, meaning that the training data are unlabelled. The goal is to find the unknown probability distribution, Q(z), underlying the training data. Here, the data is a set of binary vectors {z}_data = {z_1, z_2, ...}. For example, it can represent the opinions of a group of customers on a set of L different products, as depicted in Fig. 3(a). The opinion of customer i is represented by a binary vector z_i = [z_i1, z_i2, ..., z_iL], where z_ij = 1 if he/she likes product j and −1 otherwise. After knowing Q(z), the company can generate new data from this distribution and recommend products with a +1 score to new customers.
In this section we use an artificial dataset as a working example. To assure the generality of the data, we assume that Q(z) is the Boltzmann distribution of classical Ising spins with all-to-all connectivity, i.e., Q(z) = exp(−E(z)/k_B T_0)/Z, where Z = Σ_z exp(−E(z)/k_B T_0) is the partition function, k_B is the Boltzmann constant, T_0 plays the role of a temperature, and E(z) = Σ_i a_i z_i + Σ_{i<j} b_ij z_i z_j, with a_i, b_ij being random numbers between ±J/2. This model is known as the Boltzmann machine, which is one of the standard types of artificial neural networks used in machine learning and has been shown to capture a wide range of real-world data [66]. Its quantum version has been studied in [67,68].
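The artificial dataset described above can be generated as in the following sketch, which enumerates all 2^L spin configurations (feasible for the L = 9 chain used here), computes the Boltzmann weights of the all-to-all Ising energy and samples from the resulting Q(z). The restriction of the couplings b_ij to i < j is an assumption of this sketch.

```python
import numpy as np
from itertools import product

def boltzmann_target(L=9, J=1.0, kT=1.0, n_samples=3000, seed=0):
    """Generate training data {z} from an all-to-all classical Ising Boltzmann
    distribution Q(z) = exp(-E(z)/kT)/Z, with E(z) = sum_i a_i z_i + sum_{i<j} b_ij z_i z_j
    and a_i, b_ij drawn uniformly from [-J/2, J/2] (illustrative reconstruction)."""
    rng = np.random.default_rng(seed)
    a = rng.uniform(-J / 2, J / 2, size=L)
    b = np.triu(rng.uniform(-J / 2, J / 2, size=(L, L)), k=1)   # keep only i < j couplings

    configs = np.array(list(product([-1, 1], repeat=L)))        # all 2^L spin strings
    energies = configs @ a + np.einsum('ki,ij,kj->k', configs, b, configs)
    weights = np.exp(-energies / kT)
    Q = weights / weights.sum()                                 # exact target distribution

    idx = rng.choice(len(configs), size=n_samples, p=Q)         # draw the dataset
    return configs[idx], Q

data, Q = boltzmann_target()
print(data.shape)   # (3000, 9); Q sums to 1
```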
Sequential training scheme using an analog quantum model
Classically, the distribution of {z}_data can be obtained by first guessing a model P_model(z; Θ), such as the Poisson or the Boltzmann distribution, which has some variational parameters Θ. The 'training' is done by minimizing the cost function, which is the KL divergence of P_model(z; Θ) from Q(z), using either gradient descent or gradient-free optimization algorithms. Here, Q is the normalized histogram of {z}_data.
In our case, we show how the distribution of {z}_data can be recovered as the output probability p(z; Θ_M) of the driven quantum Ising chain. This approach is also known as the Born machine [21]. Our goal here is to guide or train the quantum system to a specific point in the Hilbert space such that p(z; Θ_M) = Q(z). Our training protocol, depicted in Fig. 3(b), consists of the following steps:

1. Prepare the initial product state |ψ(Θ_0)⟩ = |ψ_0⟩ and evaluate the cost function C, the KL divergence of p(z; Θ_0) from Q(z).

2. Evolve the system by one layer, |ψ(Θ_{m+1})⟩ = Û(θ_{m+1})|ψ(Θ_m)⟩ with Θ_{m+1} = {θ_{m+1}} ∪ Θ_m, and then measure p(z; Θ_{m+1}) to compute C.
3. Repeat step (2) D times with different disorder realizations θ_{m+1}. In the thermalized case, the system will randomly explore the entire Hilbert space in this step. However, in the MBL case, the system will only explore the Hilbert space locally near |ψ(Θ_m)⟩, allowing systematic optimization; see Fig. 3(c).
4. Choose the disorder realization in step (3) that minimizes C, then update m → m + 1. This will 'move' the state in the most promising direction in the Hilbert space.
We note here three characteristics of our training protocol. First, it is sequential, since not all parameters in Θ are updated at the same time, which makes the classical optimization easier. Second, although the parameters are randomly drawn during the training, our optimization is done systematically in the Hilbert space. This makes an important difference to the usual optimization approaches, which are done in the parameter space [22,67]. Third, a large fraction of results is 'thrown away' in step (3). Although in principle this data could be utilized to improve the training efficiency, it is our goal to keep the training protocol as simple as possible, so that the focus is on the distinct learning behaviors displayed by each phase.
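The loop described in the steps above can be summarized as in the following sketch; apply_layer and draw_theta are placeholders standing in for the quench unitary Û(θ_{m+1}) (on hardware or in a simulator) and for a random draw of a disorder realization, and the greedy selection over D candidates mirrors steps (2)-(4).

```python
import numpy as np

def kl_cost(p_model, q_target, eps=1e-12):
    """KL divergence of the Born distribution p_model from the target q_target."""
    p_model = np.clip(p_model, eps, None)
    q_target = np.clip(q_target, eps, None)
    return np.sum(q_target * np.log(q_target / p_model))

def train_sequentially(psi0, Q, apply_layer, draw_theta, n_layers=100, D=200, seed=0):
    """Greedy layer-by-layer training in Hilbert space (sketch of the protocol above).

    apply_layer(psi, theta) -> new state vector after one quench U(theta)|psi>
    draw_theta(rng)         -> one random disorder realization theta_{m+1}"""
    rng = np.random.default_rng(seed)
    psi, costs = psi0, []
    for m in range(n_layers):
        best_cost, best_psi = np.inf, None
        for _ in range(D):                       # try D candidate disorder realizations
            theta = draw_theta(rng)
            trial = apply_layer(psi, theta)
            cost = kl_cost(np.abs(trial) ** 2, Q)
            if cost < best_cost:                 # keep the most promising move
                best_cost, best_psi = cost, trial
        psi = best_psi                           # accept the best layer and continue
        costs.append(best_cost)
    return psi, costs
```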
Training results
The training results are shown in Fig. 4(a). As expected, the system in the thermalized phase cannot be trained. The cost function for the thermalized case with F = 2.5J saturates at C ∼ 2 already at the first layer. In the F = 0 case, the cost function starts at around C ∼ 3.5 and then falls to saturate at C ∼ 2, the same value as the driven case, when M ∼ 50. For the MBL case with F = 0, during M ≲ 10² the cost function steadily decays to ∼0.7. Then, during 10² ≲ M ≲ 10³, the cost function continues to decay at a slower rate. Interestingly, after M ≳ 10³, the cost function increases and saturates at ∼0.7 when M ∼ 10⁴. In contrast, for the MBL case with F = 2.5J, the cost function goes down steadily when M ≲ 10. Then, the cost function further decays monotonically at a slower rate, saturating at C ∼ 0.1 at M ∼ 10⁴. These results show that the learning behavior changes qualitatively depending on the phase and the timescale of the system. The best learning accuracy is obtained in the MBL phase with F = 2.5J.
In Fig. 4(b), we plot the final learning results as a function of W for F = 0 and F = 2.5J. For comparison, in Fig. 4(c), we also plot the averaged level spacing r_α as a function of W for both cases. In the F = 2.5J case, the final learning accuracy shows a transition between the trainable and the untrainable regimes, which corresponds roughly to the phase transition between the CUE and the POI statistics. In the F = 0 case, the system moves towards the trainable regime as W approaches 30J. However, we stop our calculation here, as the training takes too long to converge when W > 30J [69]. Nevertheless, our present results are sufficient to conclude that the drive leads to a better learning accuracy on this timescale. We conjecture that, once the system in the undriven case fully reaches the trainable regime, the learning accuracy should monotonically decrease with M until saturation.
Temporal correlations enabled by MBL
To understand the different learning accuracies in the different phases, we calculate the KL divergence between p(z; Θ_M) and p(z; Θ_{M+δm}) to measure the temporal correlations or the 'memory' between outputs at different layers. In Fig. 2(d), we plot this KL divergence as a function of δm, averaged over various M's. In the thermalized phase, we find that there are no temporal correlations between layers. This is expected, as each layer has chaotic dynamics which is highly sensitive to any small changes introduced to the system. In contrast, in the MBL phase, the system displays a short-term memory that decays with δm. The MBL dynamics with f(t) = 0 has the longest memory. This memory was exploited during the training to improve the trainability of the system.
V. CONCLUSIONS
In this work, we have thoroughly analyzed the expressibility and trainability of parameterized analog quantum many-body systems. We show that both thermalized and MBL dynamics, with and without the modulation f(t), are capable of reaching the quantum supremacy regime, indicating high expressibility beyond any classical models. In the context of generative modeling, we show that chaoticity prevents systematic optimization of the system. However, the latter can be qualitatively improved by the MBL dynamics. In the future, it would be interesting to analyze the scalability and generalizability of our models, as well as more complex training protocols for efficient optimization.
FIG. 2. Statistics of parameterized analog quantum many-body evolution: (a) and (b) show the eigenstate distribution Pr(Nc) and the level statistics Pr(r_α) for the four phases of Û(θ_m), respectively, with M = 400. The shaded areas are the predictions from the CUE statistics. (c) The KLD of the output distribution from the Porter-Thomas distribution as a function of M. (d) The KLD of p(z; Θ_{m+δm}) from p(z; Θ_m) as a function of δm. The KLD is averaged over M ∈ [378, 400) for a given δm. The thermalized and the MBL phases are obtained with W = 1J and W = 20J, respectively. (L = 9, ω = 8J, h = 2.5J, 500 disorder realizations.)
FIG. 3. Machine learning with a driven analog quantum processor: (a) A table demonstrating a real-world application of generative modeling tasks in machine learning. Each customer is asked to rate whether he/she likes (+1) or dislikes (−1) a given product. (b) A sketch of the optimization loops used in the training protocol. (c) A diagram showing the movement of the system in the Hilbert space during the training in the MBL phase.
FIG. 4. Training analog quantum systems in the Hilbert space: (a) The lowest cost function at each training step M for F = 0 and F = 2.5J. The thermalized and the MBL phases are obtained with W = 1J and W = 20J, respectively. The shaded areas represent standard deviations. (b) The cost function at M = 10⁴ as a function of W. The results are averaged over 10 datasets, i.e., 10 realizations of {a_i, b_ij} in Eq. (11). Each dataset consists of 3000 samples. (c) The averaged level spacing r_α at M = 10⁴ as a function of W. (L = 9, ω = 8J, h = 2.5J, k_B T_0 = J and D = 200.) | 5,784.8 | 2020-05-22T00:00:00.000 | ["Computer Science", "Physics"] |
Purification and In Vitro Evaluation of an Anti-HER2 Affibody-Monomethyl Auristatin E Conjugate in HER2-Positive Cancer Cells
Simple Summary Antibody-drug conjugates (ADCs) represent an innovative class of anticancer agents specifically aimed at targeting cancer cells, reducing damage to healthy tissues but showing some weaknesses. A promising approach for the development of high-affinity tumor targeting ADCs is the use of engineered protein drugs, such as affibody molecules. Our aim was to develop a more efficient purification method for the cytotoxic conjugate ZHER2:2891DCS-MMAE that targets human epidermal growth factor receptor 2 (HER2)-positive breast cancer cells. The conjugate is based on ZHER2:2891 affibody and a drug conjugation sequence (DCS), which allowed for site-specific conjugation of the cytotoxic auristatin E molecule (MMAE) to the affibody. We tested the in vitro efficacy of ZHER2:2891DCS-MMAE on several parameters, such as cell viability, proliferation, migration, and apoptosis. Our results confirmed that the cytotoxic conjugate efficiently interacts with high affinity with HER2 positive cancer cells, allowing the selective and specific delivery of the cytotoxic payload. Abstract A promising approach for the development of high-affinity tumor targeting ADCs is the use of engineered protein drugs, such as affibody molecules, which represent a valuable alternative to monoclonal antibodies (mAbs) in cancer-targeted therapy. We developed a method for a more efficient purification of the ZHER2:2891DCS affibody conjugated with the cytotoxic antimitotic agent auristatin E (MMAE), and its efficacy was tested in vitro on cell viability, proliferation, migration, and apoptosis. The effects of ZHER2:2891DCS-MMAE were compared with the clinically approved monoclonal antibody trastuzumab (Herceptin®). To demonstrate that ZHER2:2891DCS-MMAE can selectively target HER2 overexpressing tumor cells, we used three different cell lines: the human adenocarcinoma cell lines SK-BR-3 and ZR-75-1, both overexpressing HER2, and the triple-negative breast cancer cell line MDA-MB-231. MTT assay showed that ZHER2:2891DCS-MMAE induces a significant time-dependent toxic effect in SK-BR-3 cells. A 30% reduction of cell viability was already found after 10 min exposure at a concentration of 7 nM (IC50 of 80.2 nM). On the contrary, MDA-MB-231 cells, which express basal levels of HER2, were not affected by the conjugate. The cytotoxic effect of the ZHER2:2891DCS-MMAE was confirmed by measuring apoptosis by flow cytometry. In SK-BR-3 cells, increasing concentrations of conjugated affibody induced cell death starting from 10 min of treatment, with the strongest effect observed after 48 h. Overall, these results demonstrate that the ADC, formed by the anti-HER2 affibody conjugated to monomethyl auristatin E, efficiently interacts with high affinity with HER2 positive cancer cells in vitro, allowing the selective and specific delivery of the cytotoxic payload.
Introduction
The human epidermal growth factor receptor 2 (HER2) is a tyrosine kinase receptor that belongs to the family of the epidermal growth factor receptors (EGFRs).
Amplification of HER2 gene is observed in 20-30% of human cancers, especially breast and ovarian cancers [1], and in about 30% of feline mammary carcinomas (FMCs) [2], while its overexpression is correlated with poor prognosis and worse clinical outcomes [3]. The overexpression of HER2 in tumor cells leads to the activation of various signaling pathways involved in cellular proliferation, migration, and apoptosis suppression [4]. Thus, HER2 represents an important pharmacological target for HER2-positive breast cancer therapy. The most used drug targeting HER2 is trastuzumab (Herceptin ® ). This is a humanized IgG1 monoclonal antibody (mAb) that binds to the extracellular domain of the human HER2 protein and is currently used in patients with metastatic breast or gastric cancer characterized by HER2 overexpression [5]. Trastuzumab seems to exert its therapeutic effect through different mechanisms, including the activation of antibody-dependent cellular cytotoxicity [6], the inhibition of the MAPK and PI3K/AKT pathways [7], leading to cell cycle arrest, and by blocking the shedding of the HER2 extracellular domain [8].
Trastuzumab, when combined with chemotherapy, improves overall survival in patients with HER2-positive breast cancer [9]. However, the clinical efficacy of trastuzumab is limited, due to the development of resistance to the drug in a significant number of women with HER2 overexpressing tumors [10].
Target therapy using specific antibodies conjugated with a cytotoxic drug represents an innovative strategy in cancer treatment and a valid alternative to naked antibodytargeted therapy [11]. Antibody-drug conjugates (ADCs) combine the highly specific targeting of mAbs with the potent cytotoxic activity of small molecule agents. The Food and Drug Administration (FDA) approved two ADCs, brentuximab vedotin (Adcetris ® ) and trastuzumab emtansine (Kadcyla ® ), for the treatment of patients with Hodgkin lymphoma and HER2 metastatic breast cancer, respectively [12]. Brentuximab vedotin consists of an anti-CD30 antibody linked to the potent antimitotic drug monomethyl auristatin E (MMAE) [13], whereas trastuzumab emtansine combines trastuzumab with another antimitotic cytotoxic agent, derivative of maytansine (DM1), via a chemical linker [14]. Interestingly, a recent paper described the use of anti-HER2 mAbs and ADCs as a new targeted therapy for feline FMC [15]. Although several ADCs are currently used in clinics, they show some limitations due to their large size (mAbs are 150 kDa and over), which reduces their ability to penetrate solid tumors, and associated high production cost [16].
To overcome these limitations, a new class of affinity ligands based on non-antibody scaffolds has become an attractive alternative to mAbs, due to their smaller size of~6.5 kDa compared to whole antibodies or antibody fragments (~20-150 kDa), a rapid blood clearance that allows for faster penetration and distribution into tissues, and cost-efficient production in prokaryotic hosts (Escherichia coli), in contrast to mAbs that are mainly produced in mammalian cells [17].
These small molecules called affibodies are derived from mutagenesis at specific amino acid residues of the B domain of staphylococcal protein A to increase their chemical stability. The resulting engineered variant is called the Z domain [18]. This Z domain consists of 58 amino acids, and 13 of these surface amino acid residues were randomized to generate affibody libraries, followed by phage display selection against different target proteins, including HER2, EGFR, and amyloid-β peptide [19,20].
In the present study, we developed a rapid and simple method for Z HER2:2891 DCS purification that, in contrast to the previously described procedure [21], does not require protein tagging. According to previous studies, mAbs conjugated with MMAE show selective antitumor activity in patients with solid tumors [22]. For this reason, we decided to conjugate our affibody to auristatin E, a synthetic analogue of the natural product dolastatin 10 that acts by inhibiting cell division and blocking the polymerization of tubulin [23,24]. MMAE is extremely cytotoxic and is not tumor-specific, for these reasons it cannot be used as a drug itself. However, it is clinically used as payload in ADCs such as brentuximab vedotin and polatuzumab vedotin-piiq [13,25]. Abdollahpour-Alitappeh et al. and Sochaj-Gregorczyk et al. demonstrated by MTT and Alamar Blue assay that free MMAE is cytotoxic in several breast and kidney cancer cell lines tested, with an IC 50 value in the nanomolar range [21,26].
In a previous study, Sochaj-Gregorczyk et al. used an affinity chromatography approach for the purification of the anti-HER2 affibody Z HER2:2891 DCS, since Z HER2:2891 DCS was tagged with GST (Z HER2:2891 DCS-GST) [21]. However, this purification method gave only 1-5 mg of protein from 1-litre culture. Therefore, we set up another, more efficient and faster method to purify Z HER2:2891 DCS and then characterized its in vitro binding to HER2 and how it might affect cancer cells.
Generation of the pDEST15-GST Removed-Z HER2:2891 -DCS
To remove the sequence that encodes the GST tag from the pDEST15-Z HER2:2891 DCS construct, inverse PCR with 5′-phosphorylated primers (Table 1) was performed. Subsequently, the PCR product was subjected to ligation using T4 DNA ligase (Thermo Fisher Scientific). The resulting construct, pDEST-GST removed-Z HER2:2891 DCS, was verified by sequencing (LGC Genomics), which confirmed that the sequence encoding GST had been removed.
Affibody Purification
Bacterial pellets were resuspended in an ion exchange buffer (50 mM HEPES buffer, pH 8.1) and sonicated to disrupt the cells. Following centrifugation (50,000× g, 1 h, 4 • C) and filtration with a 22 µm syringe filter unit, cell lysate containing untagged Z HER2:2891 DCS was subjected to ion exchange chromatography. The chromatographic separation was performed using an ÄKTA chromatography system (GE Healthcare, Chicago, IL, USA) with a weak cation exchanger column, HiTrap CM Fast Flow (GE Healthcare).
The elution of the affibody was performed with a salt gradient (from 10 mM to 1 M NaCl). Then, the HEPES buffer pH 8.1 was exchanged for the conjugation buffer (25 mM phosphate, 150 mM NaCl, 0.5 mM EDTA, pH 6.8) using a HiTrap Desalting 1 × 5 mL column (GE Healthcare) prepacked with Sephadex G-25 Superfine. Aliquots of the affibody were stored at −80 °C.
After the conjugation reaction, the mixture was purified by hydrophobic interaction chromatography high-performance liquid chromatography (HIC-HPLC) using an Agilent Eclipse XDB-C18 column and an Agilent 1200 HPLC Liquid Chromatography System (Santa Clara, CA, USA). To elute the affibody we used increasing concentrations of acetonitrile from 20% of buffer A (dH 2 O, 0.1% TFA) to 50% buffer B (acetonitrile, 0.1% TFA). The peak containing Z HER2:2891 DCS-MMAE conjugates was collected and lyophilized.
Cells were cultured in 100 mm dishes at 37 • C in a humidified atmosphere containing 5% CO 2 , and when they reached confluence, cells were passaged using a Trypsin-EDTA solution.
Cell Viability Assay
Cells were seeded in 24-well plates at a density of 1 × 10^5 cells/well and 7 × 10^4 cells/well, respectively, and then left to grow for 24 h at 37 °C. At confluency, cells were treated with increasing concentrations of trastuzumab and Z HER2:2891 DCS-MMAE. Z HER2:2891 DCS not conjugated with MMAE was used as a negative control.
Cell viability upon treatments was evaluated by the MTT method. IC 50 values were calculated using GraphPad Prism software (GraphPad Prism software, San Diego, CA, USA).
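IC 50 values were computed in GraphPad Prism; an equivalent estimate can be obtained by fitting a four-parameter logistic dose-response curve to the MTT data, for example with SciPy as sketched below (the concentrations and viability values shown are invented placeholders, not data from this study).

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(conc, bottom, top, ic50, hill):
    """Four-parameter logistic dose-response curve."""
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** hill)

# Hypothetical viability data (% of untreated control) vs. concentration (nM)
conc = np.array([1, 5, 10, 50, 100, 500], dtype=float)
viability = np.array([98, 90, 78, 55, 42, 20], dtype=float)

params, _ = curve_fit(four_pl, conc, viability, p0=[10, 100, 50, 1], maxfev=10000)
bottom, top, ic50, hill = params
print(f"Estimated IC50 ~ {ic50:.1f} nM (hill slope {hill:.2f})")
```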
Cell Proliferation Assay
Cells were seeded in 24-well plates at a density of 2 × 10^6 cells/well. After 24 h, media were removed, and cells were incubated for 72 h with medium containing 0.4% FBS to synchronize cells at the G0 phase of the cell cycle. After 72 h, control dishes were counted with a Coulter Counter (Beckman Coulter, Life Scientific, Milan, Italy) and this was considered the "basal" number of cells at T0. Subsequently, cells were treated with 5, 100, and 500 nM and 1.25 µM of trastuzumab, or with 5, 100, and 500 nM of Z HER2:2891 DCS-MMAE or Z HER2:2891 DCS not conjugated with MMAE, in medium supplemented with 10% FBS for 24, 48, and 96 h. Cell numbers were measured and compared to the zero time point.
In Vitro Directional Migration (Wound Healing Assay)
Cells were plated in 24-well plates and grown to confluence. Cell monolayers were scratched with a 200 µL pipet tip in a straight line. Thereafter, the cell monolayer was washed with growth medium to remove detached cells. Cells were then incubated with medium containing 0.4% FBS and 5, 100, and 500 nM of Z HER2:2891 DCS-MMAE or 5-100-500 nM and 1.25 µM of trastuzumab. Images of the wounded area were taken at the same spot at different time points (0, 6, 24, 48, and 72 h) using an inverted microscope (Axiovert 200; Carl Zeiss, 10× objective lens) equipped with a digital camera. Quantification of the wound area was performed using ImageJ, and cell migration was expressed as a percentage of wound areas at different time-points compared to initial wound area (T0).
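The conversion from the ImageJ area measurements to the migration readout described above (wound area at each time point expressed as a percentage of the initial T0 area) amounts to a simple normalization, for example:

```python
def residual_wound_percent(areas_by_time, t0_key="T0"):
    """Express wound area at each time point as % of the initial (T0) wound area.

    `areas_by_time` maps time-point labels to wound areas (e.g. ImageJ pixel counts)."""
    t0 = areas_by_time[t0_key]
    return {t: 100.0 * a / t0 for t, a in areas_by_time.items()}

# Hypothetical measurements (arbitrary units) from one well
areas = {"T0": 12500, "6h": 11800, "24h": 7400}
print(residual_wound_percent(areas))   # {'T0': 100.0, '6h': 94.4, '24h': 59.2}
```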
Apoptosis Analysis
Apoptosis was assessed using an Annexin V-FITC Apoptosis Detection kit (Sigma-Aldrich) according to the manufacturer's instructions. Briefly, SK-BR-3 and MDA-MB-231 cells were seeded in 24-well plates and grown to confluence. Cells were then treated for 10 min followed by drug removal and an additional 48 h of incubation in medium alone, or for 48 h of continuous exposure to Z HER2:2891 DCS-MMAE (5-500 nM) and trastuzumab (5 nM-1.25 µM). Next, cells were washed with PBS, resuspended in 300 µL of binding buffer (25 mM CaCl2, 1.4 M NaCl, and 100 mM HEPES/NaOH, pH 7.5), and incubated in the dark for 10 min with 5 µL of Annexin V-FITC conjugate and 10 µL of propidium iodide (PI). The percentage of apoptotic cells was evaluated with a flow cytometer (ACEA Biosciences NovoCyte, San Diego, CA, USA).
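The apoptotic percentage described above (early, AnV-positive/PI-negative, plus late, AnV/PI double-positive cells) can be computed from exported per-cell fluorescence values as in this sketch; the gate thresholds are hypothetical and would normally be set on unstained and single-stained controls.

```python
import numpy as np

def apoptotic_fraction(annexin, pi, annexin_gate=1e3, pi_gate=1e3):
    """Percentage of apoptotic cells = early (AnV+/PI-) + late (AnV+/PI+) apoptosis.

    `annexin` and `pi` are per-cell fluorescence intensities; the gate values are
    placeholder thresholds."""
    annexin = np.asarray(annexin)
    pi = np.asarray(pi)
    early = (annexin > annexin_gate) & (pi <= pi_gate)
    late = (annexin > annexin_gate) & (pi > pi_gate)
    return 100.0 * (early | late).mean()

# Hypothetical example with 5 cells: 3 apoptotic out of 5 -> 60.0%
print(apoptotic_fraction([200, 5e3, 8e3, 150, 2e3], [100, 300, 5e3, 80, 90]))
```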
RNA Isolation and qRT-PCR
SK-BR-3 and MDA-MB-231 cells were seeded in 24-well plates at a density of 1 × 10^5 cells/well and 7 × 10^4 cells/well, respectively. After 24 h, cells were treated with 5, 100, and 500 nM of Z HER2:2891 DCS-MMAE and incubated for 24, 48, and 96 h. At each time point, cells were washed once with PBS and RNA extraction was performed using a Direct-zol RNA MiniPrep Plus kit (Zymo Research, Milan, Italy). cDNA was generated by reverse transcription of RNA with an iScript gDNA Clear cDNA Synthesis Kit (Bio-Rad). qRT-PCR was performed using iTaq Universal SYBR Green Supermix (Bio-Rad). Samples were analyzed with a CFX Connect Real-Time detection system (Bio-Rad). Results were normalized to β-actin gene expression and the relative quantification was determined by the 2^−ΔΔCT method. The primer sequences are listed in Table 2. For the preparation of total cell lysates, cells were washed with ice-cold PBS and lysed with lysis buffer (150 mM NaCl, 50 mM Tris pH 7.6, 0.5% Nonidet P-40 and protease inhibitors (Merck, Milan, Italy)).
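Relative quantification by the 2^−ΔΔCT method, normalized to β-actin as described, follows the calculation sketched below (the Ct values are hypothetical).

```python
def relative_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """2^-ddCt relative quantification.

    ct_target / ct_ref: Ct of the gene of interest / reference gene (beta-actin) in the
    treated sample; *_ctrl: the same Cts in the untreated control sample."""
    d_ct_sample = ct_target - ct_ref
    d_ct_control = ct_target_ctrl - ct_ref_ctrl
    dd_ct = d_ct_sample - d_ct_control
    return 2.0 ** (-dd_ct)

# Hypothetical Ct values: HER2 and beta-actin in treated vs. untreated cells
print(f"Fold change = {relative_expression(24.0, 18.0, 22.5, 17.5):.2f}")  # 0.50
```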
Protein concentration was determined using a Pierce BCA Protein Assay Kit (Pierce, Rockford, IL, USA) and samples were run on SDS-PAGE.
Densitometric analysis was performed using the ImageJ program.
Statistical Analysis
Data are presented as the mean ± SEM of 3 experiments performed in triplicate. Comparisons between 2 groups were analyzed by independent t-test, and differences between 3 or more groups were determined by one-way ANOVA followed by Dunnett's post hoc test. Results were considered statistically significant for p-values < 0.05.
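The same analysis can be reproduced programmatically, for example with SciPy: an independent t-test for two groups, and one-way ANOVA followed by Dunnett's test against the control for three or more groups (scipy.stats.dunnett is available in recent SciPy releases; the data below are invented placeholders).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
control = rng.normal(100, 10, size=9)     # e.g. untreated viability, 3 experiments in triplicate
treat_a = rng.normal(80, 10, size=9)
treat_b = rng.normal(55, 10, size=9)

# Two groups: independent (unpaired) t-test
t, p = stats.ttest_ind(control, treat_a)
print(f"t-test control vs. A: p = {p:.4f}")

# Three or more groups: one-way ANOVA, then Dunnett's test against the control
f, p_anova = stats.f_oneway(control, treat_a, treat_b)
dunnett = stats.dunnett(treat_a, treat_b, control=control)   # SciPy >= 1.11
print(f"ANOVA p = {p_anova:.4f}; Dunnett p-values = {dunnett.pvalue}")
```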
Expression and Purification of the Z HER2:2891 -DCS
In a previous study, since the affibody was fused with a GST tag, the anti-HER2 affibody Z HER2:2891 -DCS was purified by affinity chromatography followed by size exclusion chromatography [21].
Since this method only allowed us to obtain a very low yield (1-5 mg of affibody from 1 L of bacteria culture), we developed a faster and more efficient method to purify the affibody, after GST tag removal.
In order to remove the sequence that encodes the GST tag from the pDEST15-Z HER2:2891 DCS vector, an inverse PCR with 5′-phosphorylated primers was performed. The removal of the GST tag was confirmed by agarose gel electrophoresis. As shown in Figure 1, we observed a difference in size between the plasmid with GST (used as control) and without GST.
For the purification of the untagged anti-HER2 affibody Z HER2:2891 DCS, we performed ion exchange chromatography. The isoelectric point of our affibody, 9.38, was calculated using the ExPASy program. We used HEPES buffer pH 8.1 because, at this pH, the affibody is positively charged and therefore interacts with a cation-exchanger resin, such as carboxymethyl cellulose, which is negatively charged (Figure 2).

All the fractions collected were analyzed by SDS-PAGE. As we can observe in Figure 3, each fraction contained untagged Z HER2:2891 DCS. Subsequently, the fractions were pooled together and subjected to desalting (Figure 3) to exchange the HEPES buffer of pH 8.1 for a phosphate buffer of pH 6.8. We changed the pH because the conjugation to MMAE via the thiol-maleimide reaction occurs at a pH between 6.8 and 7.4.
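The theoretical pI of 9.38 was obtained with the ExPASy tool; a comparable estimate can be computed programmatically, for instance with Biopython's IsoelectricPoint class. The sequence below is only a placeholder, not the actual Z HER2:2891 DCS sequence.

```python
from Bio.SeqUtils.IsoelectricPoint import IsoelectricPoint

# Placeholder sequence -- substitute the real Z(HER2:2891)DCS amino acid sequence here
placeholder_seq = "ACDEFGHIKLMNPQRSTVWYACDEFGHIKLMNPQRSTVWY"

protein = IsoelectricPoint(placeholder_seq)
print(f"Theoretical pI ~ {protein.pi():.2f}")
# A positive net charge at pH 8.1 is what allows binding to the CM cation exchanger
print(f"Net charge at pH 8.1 ~ {protein.charge_at_pH(8.1):.2f}")
```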
Conjugation of MMAE to Z HER2:2891 -DCS
Affibody-MMAE conjugation was obtained according to the method described in Sochaj-Gregorczyk et al. [21]. After the conjugation reaction, the mixture was analyzed by HIC-HPLC. This chromatography allowed us to separate unconjugated proteins from the affibody conjugated to the hydrophobic auristatin E. The HPLC chromatogram described by Sochaj-Gregorczyk et al. confirmed the presence of a peak corresponding to Z HER2:2891 DCS conjugated with MMAE [21].
Cells were incubated with increasing concentrations (from 1 nM to 500 nM) of Z HER2:2891 DCS-MMAE and Z HER2:2891 DCS not conjugated with MMAE, used as negative control, in two different ways: for 10 min followed by drug removal and an additional 48 h of incubation in medium alone, or 48 and 96 h of continuous exposure to the drugs.
As shown in Figure 4A, 10 min exposure with Z HER2:2891 DCS-MMAE was sufficient to reduce cell viability in a concentration-dependent and statistically significant manner in both HER2 expressing cell lines, with an IC 50 value of 80.2 nM in SK-BR-3 cells.
A stronger effect was observed after 48 h of continuous exposure to Z HER2:2891 DCS-MMAE, with a 50% reduction of cell viability at a concentration of 5.33 nM ( Figure 4B), whereas the longest exposure time (96 h) reduced cell viability close to 0 at a concentration of 500 nM with an IC 50 of 7.13 nM ( Figure 4C). Z HER2:2891 DCS-MMAE also reduced ZR-75-1 cell viability, although it was less effective ( Figure 4A-C) and reached its IC50 of about 500 nM after 48 h of incubation ( Figure 4C).
To evaluate if non-conjugated Z HER2:2891 DCS could affect SK-BR-3 and ZR-75-1 cell viability, we treated the cells in the same experimental conditions. As shown in Figure 4A-C, affibody not conjugated to MMAE did not affect cell viability at all time points considered.
As expected, the Z HER2:2891 DCS-MMAE displayed only a weak in vitro cytotoxic effect on the MDA-MB-231 cells that express a basal level of HER2 at all time points, with a 15% reduction of cell viability only at the highest concentration used, and after 96 h of incubation ( Figure 4A-C).
Since trastuzumab is used in patients with HER2-overexpressing metastatic breast cancer, we decided to use it as a reference compound. Therefore, we incubated both SK-BR-3 and ZR-75-1 cells with increasing concentrations of trastuzumab. As shown in Figure 4B,C, at all time points and concentrations tested, trastuzumab showed a lower cytotoxic effect on these cell lines compared to Z HER2:2891 DCS-MMAE. Of note, not even the additional higher concentration of trastuzumab tested (1.25 µM) reduced cell viability by at least 50%.
ZHER2:2891DCS-MMAE Negatively Regulates HER2 Expressing Cell Line Proliferation
Next, we investigated whether treatment with ZHER2:2891DCS-MMAE would result in a decreased in vitro proliferation rate of HER2 overexpressing breast cancer cells.
As shown in Figure 5 (all panels), trastuzumab significantly inhibited cell proliferation in both SK-BR-3 and ZR-75-1 cell lines by more than 50% only at the highest concentration tested (1.25 µM).
ZHER2:2891DCS-MMAE Inhibits SK-BR-3 Migration
Next, we evaluated whether ZHER2:2891DCS-MMAE treatment would affect SK-BR-3 and MDA-MB-231 cell motility. A low percentage of serum (0.4% FBS) was used to minimize cell proliferation. The results of the wound healing experiments show that ZHER2:2891DCS-MMAE did not significantly affect wound reclosure after 6 h (Figure 6A). On the contrary, after 24 h of treatment, ZHER2:2891DCS-MMAE significantly inhibited SK-BR-3 cell migration in a concentration-dependent manner compared with the untreated cells. After 48 and 72 h of incubation, cells were partially detached from the well surface, which made it impossible to measure the wounded area at these time points (data not shown).
In MDA-MB-231 cells, both the untreated group and those treated with ZHER2:2891DCS-MMAE showed a similar migration ability, which led to an almost complete reclosure of the wounded area after 48 h of cell incubation (Figure 6B).
As shown in Figure 6C, the migration rate of SK-BR-3 cells was significantly inhibited compared to the control by trastuzumab at all concentrations tested, ranging from 5 nM to 1.25 µM.
ZHER2:2891DCS-MMAE Induces Apoptosis of SK-BR-3 Cells
To further investigate the cytotoxic effects of ZHER2:2891DCS-MMAE, we next assessed if the compound induced cell death in HER2-positive SK-BR-3 cells. Therefore, we analyzed by flow cytometry the expression of phosphatidylserine, a marker of apoptosis, together with DNA staining, as a readout of cell death due to membrane permeability, using annexin V-FITC and PI, respectively. Treatment with ZHER2:2891DCS-MMAE increased the percentage of SK-BR-3 cells undergoing apoptosis (calculated as the sum of cells in early apoptosis, positive for AnV, and late apoptosis, double positive for AnV and PI).
After a 10 min exposure followed by drug removal and an additional 48 h of incubation in medium alone, Z HER2:2891 DCS-MMAE (100 and 500 nM) induced a significant increase (40% each) of apoptotic cells compared to the control. After 48 h of treatment, even the lowest concentration of 5 nM significantly increased the rate of apoptosis, by 40% ( Figure 7A). In contrast, after 48 h, no significant effect on apoptosis was observed following trastuzumab treatment in SK-BR-3 cells (Figure 7C).
As expected, 48 h of treatment with ZHER2:2891DCS-MMAE did not induce apoptosis in the MDA-MB-231 cell line compared to the control ( Figure 7B). Moreover, as shown in the bottom right quadrant of each plot, treatment with ZHER2:2891DCS-MMAE or trastuzumab did not induce necrosis in either cell line.
As shown in Figure 8, HER2 mRNA levels were significantly reduced (p < 0.001) by ZHER2:2891DCS-MMAE in a concentration and time-dependent manner. The lowest concentration of 5 nM significantly decreased HER2 transcript by 50% within 24 h. The strongest effect was observed after 96 h of treatment, with an 80% inhibition of HER2 mRNA levels. Whereas, as shown in Figure 9, HER2 mRNA levels in MDA-MB-231 were not affected by ZHER2:2891DCS-MMAE.
To confirm the data obtained by RT-PCR, the expression of HER2 in SK-BR-3 cells was evaluated by Western blot analysis. As shown in Figure 10, after 24 h, the treatment with 5 and 100 nM of ZHER2:2891DCS-MMAE did not significantly affect HER2 expression. By contrast, 500 nM of ZHER2:2891DCS-MMAE significantly decreased HER2 expression by 90%. The treatment with 100 and 500 nM of ZHER2:2891DCS-MMAE after 96 h was cytotoxic and we could not analyze HER2 protein expression.
Discussion
ADCs represent a successful class of anticancer agents that combine the selectivity of mAbs with the cytotoxic potency of a chemotherapeutic agent [29].
Tissue penetration and biodistribution are important factors, which most of the time seriously limit the response to treatment. One of the major limitations of these molecules is their large size (150 kDa), which limits their ability to penetrate solid tumors [16]. In addition, due to their long serum half-life and slow blood clearance, they are not suitable for radioimmunotherapy or imaging purposes [16]. Another limitation is represented by the fact that many ADCs, including brentuximab vedotin and trastuzumab emtansine, still have a variable drug-to-antibody ratio and variable sites for drug conjugation, thus leading to the formation of heterogeneous species, each with different pharmacokinetic and efficacy profiles [30,31]. Therefore, a promising approach is represented by small carrier proteins able to interact specifically and with a high affinity (in the picomolar to the nanomolar range) with several targets overexpressed in tumor cells such as HER2, EGFR, and IGF-1R [32,33].
Affibody molecules are made of 58 amino acids (with a molecular weight of approximately 6.5 kDa) folded into a three-helical bundle and devoid of cysteines in their structure. Thus, they can be site-specifically modified by introducing one or more cysteine residues into the scaffold, permitting a site-specific conjugation of a cytotoxic payload. In the present work, a DCS that contains a single cysteine residue was introduced at the C-terminus of Z HER2:2891 , which allowed for site specific conjugation of the cytotoxic MMAE molecule to the affibody via thiol-maleimide chemistry.
Eigenbrot et al. demonstrated that these small molecules have remarkable biophysical properties, such as high thermal stability (Tm = 67 °C), rapid folding, and high solubility in aqueous solutions [34]. The favorable properties of the Z HER2 affibody molecule have led to its employment in diagnostic and therapeutic applications.
Affibodies represent a promising approach in terms of imaging because of their rapid biodistribution and rapid blood clearance, due to their small size. Affibodies against several cancer markers, such as HER2, have been developed for tumor imaging [17]. The first radiolabeling of an affibody was investigated by Orlova et al. for imaging of HER2 expression. In this study, DOTA-Z HER2:342 (ABY-002) was efficiently labelled with indium-111. A biodistribution study of 111In-benzyl-DOTA-Z HER2:342 was performed in nude mice bearing LS174T xenografts. In vivo, 111In-benzyl-DOTA-Z HER2:342 demonstrated effective tumor uptake 4 h post-injection [35].
For therapeutic applications, several groups have developed and characterized affibody molecules interacting with HER2 [36,37]. Zielinski et al. constructed a conjugate based on a modified version of the exotoxin A derived from Pseudomonas aeruginosa (PE38) fused to the Z HER2:342 and Z HER2:2891 affibodies. These constructs efficiently bind and kill cancer cells expressing HER2 after 1 min of exposure [36]. In addition, in vivo studies were carried out using xenograft HER2-overexpressing BT-474, SKOV3, and NCI-N87 tumors. HER2 affitoxin treatment resulted in a 60% volume reduction in BT-474 tumors after the first injection and a significant slowing down of tumor growth in mice bearing SKOV3 and NCI-N87 tumors [36]. Consistent with this, Gräslund et al. investigated the therapeutic efficacy of the Z HER2:2891 affibody conjugated with the cytotoxic maytansine derivate mcDM1 in a xenograft of HER2-overexpressing SKOV3 tumors. Treatment with Z HER2:2891-ABD-E 3 -mcDM1 led to a significant reduction in tumor size, with a complete tumor regression in some animals at the end of the study [37]. These results demonstrate valuable evidence for the development of an anti-HER2 affibody in cancer-targeted therapy, which represents a promising alternative as a therapeutic agent in clinical practice and also in the veterinary field. In fact, similarly to human breast cancer, FMC is the third most common tumor type in cats. The feline homologue of HER2 is overexpressed in about 30-60% of FMC and is associated with aggressive behavior and poor prognosis [2]. A recently published paper by Ferreira et al. demonstrated that combined treatment with mAbs or an ADC targeting HER2 and the tyrosine kinase inhibitor lapatinib had a synergistic antiproliferative effect in feline cell lines [15].
The aim of our project was to develop a method for the efficient purification and characterization of the Z HER2:2891 DCS affibody that specifically targets HER2. In contrast to the exotoxin and maytansine derivate used in previous studies, we used MMAE, a potent antimitotic agent that inhibits cell division by blocking the polymerization of tubulin.
In a previous study [21], the Z HER2:2891 DCS affibody was purified fused with a GST tag. However, this method turned out to be complicated and inefficient. In the present study, the GST tag was removed from the pDEST15-Z HER2:2891 DCS construct using an inverse PCR, which allowed us to express the affibody without any tag.
Subsequently, Z HER2:2891 DCS was subjected to an ion exchange chromatography that enabled us to obtain a one-step affibody purification compared to the previously published method, which was carried out in three steps (affinity chromatography, cleavage, gel filtration) [21]. The presently proposed purification method turned out to be simpler and faster, it did not require the removal of the tag, and it was 10 times more efficient than the previous one. The purity of our sample was estimated to be about 90%, and we obtained a much higher yield (25 mg from 1 L of culture).
To gain more insights into the pharmacological effect of Z HER2:2891 DCS-MMAE, we tested its in vitro effect on tumor cell growth, migration, and apoptosis pathways. As a reference molecule, the clinically approved monoclonal antibody, trastuzumab (Herceptin ® ), was included in the study. Z HER2:2891 DCS not conjugated with MMAE was used as a negative control.
The MTT assay showed that Z HER2:2891 DCS-MMAE had a concentration-dependent and significant toxic effect in both the HER2 overexpressing cell lines, even at the lowest concentration tested of 5 nM (Figure 4). On the contrary, Z HER2:2891 DCS not conjugated with MMAE did not affect cell viability. The fact that this toxic effect was due to a specific binding to the HER2 was demonstrated by the lack of any significant effects on cell viability in MDA-MB-231 (cells that have only a basal expression of HER2). Due to their small size, affibody molecules have a very short half-life (T 1/2 < 20 min), since they undergo a rapid renal excretion. As observed by Zielinski et al., the NCI-N87 gastric cell line exposed for 1 min to HER2-affitoxin, followed by an additional 72 h of incubation with medium, resulted in 90% cell death [36]. Our results on cell viability show that exposure to Z HER2:2891 DCS-MMAE for 10 min, followed by drug removal and an additional 48 h of incubation with medium alone, is sufficient to reduce both SK-BR-3 and ZR-75-1 cell viability by 60% and 40%, respectively, at the highest concentration used.
The cytotoxic effect of Z HER2:2891 DCS-MMAE was confirmed by measuring cell death by flow cytometry. The total cells undergoing cell death by apoptosis was significantly increased (up to 40%) in SK-BR-3 cells exposed for 10 min to 100 nM of Z HER2:2891 DCS-MMAE. These findings support the evidence that the high affinity for HER2 receptor allows the affibody to selectively target the cytotoxic payload to HER2 positive cancer cells, thus exerting a cytotoxic activity.
As shown by the MTT assay and cell apoptosis analysis, our reference compound, trastuzumab, displayed only a low cytotoxic effect in SK-BR-3 and ZR-75-1 cells compared to Z HER2:2891 DCS-MMAE. Consistently, Abdollahpour-Alitappeh et al. showed that different concentrations of trastuzumab (from 1 to 1000 ng/mL) exhibited only a weak cytotoxic effect in SK-BR-3 cells [40].
Since HER2 overexpression is directly involved in the overstimulation of cell proliferation and migration, we evaluated the in vitro effects of Z HER2:2891 DCS-MMAE on these parameters. As shown in Figure 5, Z HER2:2891 DCS-MMAE induced cell death in SK-BR-3 and ZR-75-1 cells starting from 24 h of treatment, with the strongest effect observed after 48 h. Interestingly, the antiproliferative effects of Z HER2:2891 DCS-MMAE were less evident after 96 h of treatment, which was probably due to the very short half-life of the affibody molecule. Alternatively, these data might be explained by a reduced cell surface expression of HER2 induced by the affibody, either by stimulating HER2 internalization or by reducing HER2 recycling once internalized.
We next utilized a wound healing assay to evaluate the impact of Z HER2:2891 DCS-MMAE on cell migration. We found that Z HER2:2891 DCS-MMAE strongly inhibited SK-BR-3 cell migration, as evidenced by a concentration-dependent decrease in wound area reclosure within 24 h of treatment. Images of the wounded area were also taken after 48 and 72 h (data not shown), but we were not able to measure the lesioned area because of the cell death induced by Z HER2:2891 DCS-MMAE and consequent cell detachment from the well surface.
In addition, SK-BR-3 treated with increasing concentrations of trastuzumab showed a significant concentration and time-dependent inhibition of cell growth and migration rate. Our results revealed that trastuzumab strongly inhibits cell proliferation, up to 70% at the highest concentration tested, and significantly suppresses SK-BR-3 migration rate compared with untreated cells. Consistently, Emlet et al. reported that trastuzumab, alone and in combination with erlotinib and bevacizumab, exerts a significant growth inhibition on HER2 overexpressing cell lines [41].
Subsequently, we evaluated whether treatment with Z HER2:2891 DCS-MMAE might affect HER2 expression. Our results demonstrated that, after 24 h of treatment, HER2 mRNA levels were significantly reduced (up to 50%) in SK-BR-3 cells using the lowest concentration of Z HER2:2891 DCS-MMAE, compared to the control. Furthermore, as shown in Figure 10, 24 h of treatment with 500 nM of Z HER2:2891 DCS-MMAE drastically decreased HER2 protein expression, by 90%, while after 48 and 96 h the compound was cytotoxic and we could not see any protein on the gel. As expected, the presence of HER2 in MDA-MB-231 was too low to be detected. The fact that Z HER2:2891 DCS-MMAE can strongly downregulate HER2 expression suggests a potential use of HER2 affibody in the treatment of HER2 positive tumors.
Conclusions
In conclusion, our experimental data demonstrate that the cytotoxic conjugate formed by the anti-HER2 affibody and monomethyl auristatin E efficiently interacts with HER2-expressing cancer cells in vitro, allowing for a selective and specific delivery of the cytotoxic payload. This demonstrates that affibodies may be used to target HER2-expressing cells. This approach might avoid some of the problems encountered when using trastuzumab in the clinic, such as poor tissue penetration due to its high molecular weight. In addition, the innovative purification procedure applied to isolate the affibody will permit much better yields and reduce production costs. | 9,835 | 2021-08-01T00:00:00.000 | [
"Biology",
"Chemistry"
] |
Response to the reviewer comments Title : Comparison of dealiasing schemes in large-eddy simulation of neutrally-stratified atmospheric boundary-layer type flows
Reviewer general comment: The paper compares various approaches for the dealiasing of the non-linear terms in large eddy simulation of atmospheric flows. Given that spectral methods are widely used in studying such flows under idealized conditions since they offer higher speed and better accuracy, the general theme is of interest to GMD readers. The paper does a good job in presenting the fundamentals of the problem, the proposed solutions, and how they compare when implemented in an actual code. But major revisions are needed. In particular, the study is a valuable comparison of the methods that is not available (to the best of my knowledge) in the literature and the authors should not try to conclude that one is more optimal than the others. They can simply present their findings and let the users determine which method is suitable for their needs.
Based on the comments from both reviewers, we have added a new discussion section to improve the readability of the manuscript. We have also improved the results section by running additional simulations at resolutions of 192 × 192 × 128 and 192³, as well as running the 256³ case long enough to be able to report converged statistics. As a result, Figures 7, 8 and 9 have been updated, and a new figure (Figure 10) has been included, which is discussed at the end of the results section.
Changes in manuscript:
The new discussion section is reported here: "In the development of this manuscript, focus has been directed to the study of the advantages and disadvantages of different dealiasing methods. In this regard, throughout the analysis we have tried to keep the structure of the LES configuration as simple and canonical as possible, to remove the effect of other add-on complexities. Additional complications might arise when considering additional physics; here we discuss the potential effect that the different dealiasing methods could have on them. One such element of added complexity is, for example, the use of more sophisticated subgrid-scale models based on dynamic approaches to determine the values of the Smagorinsky constant (Germano et al., 1991; Bou-Zeid et al., 2005). In most of these advanced subgrid models, information from the small-scale turbulent eddies is used to determine the evolution of the subgrid constant. However, in both the FT and FS methods the small turbulent scales are severely affected, and hence the use of dynamic subgrid models could be severely hampered unless these are accordingly modified and adjusted, for example via filtering at scales larger than the usual grid scale. Another element of added complexity consists in using more realistic atmospheric forcing, considering for example the effect of the Coriolis force with flow rotation as a function of height and velocity magnitude. In this case, we hypothesize that the FT method could lead to stronger influences on the resultant flow field, as this dealiasing technique affects not only the distribution of energy in the small turbulent scales but also in the large scales (as apparent from Fig. 2), the latter being potentially more affected by the Coriolis force. This represents a strong non-linear effect that is hard to quantify, and hence further testing, including realistic forcing with a geostrophic wind and Coriolis force, would be required to better quantify these effects. Also, in LES studies of atmospheric flows one is often interested in including an accurate representation of scalar transport (passive/active). In this case the differential equations do not include a pressure term, and hence most of the computational cost is linked to the evaluation of the convective term. As a result, the benefit of using alternative, cheaper dealiasing techniques (FT or FS) will be even larger, yet the total gain is not trivial to evaluate a priori, and the effect on the scalar fields should also be further evaluated.
In general, we believe that it is not fair to advocate for one or the other dealiasing method based on the results of this analysis. Note that the goal of this work is to provide an objective analysis of the advantages and limitations of the different methods, leaving the readers the ultimate responsibility to choose the option that best suits their application. For example, while having exact dealiasing (3/2-rule) might be better in studies focusing on turbulence and dispersion, one might be well-off using a simpler and faster dealiasing scheme to run the traditionally expensive warm-up runs, or to evaluate surface drag in flow over urban and vegetation canopies, where most of the surface force is due to pressure differences (Patton et al., 2016)."
Specific responses
Major comments: 1. Reviewer comment: While the FS method seem to be giving an acceptable performance as the authors argue, I wonder whether the ABL LES community should be going in a direction of saving computing time rather than maximizing the accuracy of the computation. We push for higher resolution to gain better accuracy and, with increasing computing power, I wonder whether a 20% drop in simulation time is worth it. We use dynamic SGS models that increase the computing time by 20% all the time. The plots in Fig 7 do not indicate that the FS method is as good as the 3/2 method. So in general I think the authors should not focus on the conclusion that the FS method is a good surrogate. They should present the information and findings, which will help modelers decide on the trade offs they want (on my end this convinces me that using a 3/2 method is indeed worth it.).
Authors response: Thanks, we indeed agree with the reviewer's point. We believe that gaining 20% in computational time can be of interest on certain occasions, for example during warm-up periods. However, as the reviewer mentions, the strength of this manuscript should reside in conveying the facts about using different dealiasing methods, allowing the corresponding end users to decide what is best for them according to their application.
Changes in manuscript:
To clarify this point we have added a discussion section (reported above) and rewritten the conclusion section. The new conclusion section emphasizes the trade-offs of each method. The modified conclusion is reported here: "The Fourier-based pseudo-spectral collocation method (Orszag, 1970; Orszag and Pao, 1975; Canuto et al., 2006) remains the preferred "work-horse" in simulations of wall-bounded flows over horizontally periodic regular domains, often used in conjunction with centered finite-difference or Chebyshev polynomial expansions in the vertical direction (Shah and Bou-Zeid, 2014; Moeng and Sullivan, 2015). This approach is often used because of the high-order accuracy and the intrinsic efficiency of the fast-Fourier-transform algorithm (Cooley and Tukey, 1965; Frigo and Johnson, 2005). In this technique, the aliasing that arises when evaluating the quadratic non-linear term in the NS equations can severely deteriorate the quality of the solution and hence needs to be treated adequately. In this work a performance/cost analysis has been developed for three well-accepted dealiasing techniques (3/2-rule, FT and FS) to evaluate the corresponding advantages and limitations. The 3/2-rule requires a computationally expensive padding and truncation operation, while the FT and FS methods provide an approximate dealiasing by low-pass filtering the signal over the available wavenumbers, which comes at a reduced cost.
The presented results show compelling evidence of the benefits of these methods as well as some of their drawbacks. The advantage of using the FT or the FS approximate dealiasing methods is their reduced computational cost (∼15% for the 128³ case, ∼25% for the 256³ case), with an increased gain as the numerical resolution is increased. Regarding the flow statistics, results illustrate that both the FT and the FS methods yield less accurate results when compared to those obtained with the traditional 3/2-rule, as one could expect.
Specifically, results illustrate that both the FT and FS methods over-dissipate the turbulent motions in the near wall region, yielding an overall higher mass flux when compared to the reference one (3/2-rule). Regarding the variances, results illustrate modest errors in the surface-layer, with local departures in general below 10% of the reference value across the considered resolutions. The observed departures in terms of mass flux and velocity variances tend to reduce with increasing resolution. Analysis of the streamwise velocity spectra has also shown that the FT method redistributes the energy unevenly across the available wavenumbers, leading to an over-estimation of the energy of some scales by up to 100%. By contrast, the FS method redistributes the energy evenly, yielding a modest +13% energy magnitude throughout the available wavenumbers. Compared to the 3/2-rule, these differences in flow statistics are the result of the sharp low-pass filter applied in the FT method and the smooth filter that characterizes the FS method." 2. Reviewer comment: How do the FT and FS methods influence the potential use of dynamic models that require good accuracy on the smallest resolved scales? If, as the spectra show, they damp these scales, then that would preclude using dynamic models and would be a significant disadvantage of FT and FS. The authors have some dynamic models in their code; they could perform the dynamic computations while still using the Static Smagorinsky (compute a dynamic Cs but don't use it).
Authors response: Thank you, this is a great point. In this regard, we have now run some additional simulations using the dynamic Smagorinsky model. Results are illustrated in Figure 1. In this figure, it can be observed that the dynamic model fails to compute the Smagorinsky constant when using either the FS or the FT methods as traditionally implemented. In this case the FT method strongly suppresses turbulence, resulting in the laminarization of the flow. Alternatively, while the consequences of using the FS method are less dramatic, the flow also exhibits a large acceleration at the top of the domain. As mentioned by the reviewer, these results are not surprising given that the dynamic models use a relation between the small scales to compute the Smagorinsky constant. We believe that these results could nonetheless be slightly improved by using information from scales larger than the traditional filtering scale, if the reader was really interested. Yet this remains outside the scope of this manuscript. Therefore, it is advisable not to use the FT and FS methods with dynamic SGS models. In this regard, we have added additional text in the new discussion section that relates to the use of dynamic models and the fact that they probably require some modification to run with the FT and FS methods.
Changes in manuscript:
A discussion related to this comment has been added to the discussion section (see new discussion and conclusion section in comment #1).
3. Reviewer comment: Fig 8d and the associated sentence "Interestingly, results of the vertical flux (or stress, resolved and SGS) of stream-wise momentum (figure 8(d)) illustrate a good agreement between the different scenarios." The authors should be careful in this interpretation. The constant pressure gradient forcing requires and forces the stress profile to be linear. Regardless of how the turbulence ends up looking, the turbulent fluxes have to adjust to balance the mean ∂P/∂x. What this figure indicates is that the SGS fraction is not strongly affected by the choice of dealiasing method, which is a good thing.
Authors response: Thank you for bringing this to our attention. As mentioned above the results section has been adjusted where this is taken care of.
4. Reviewer comment: Figure 9, and more generally: I would have liked to see a direct comparison of the largest scales (by filtering all simulations at n∆, where n corresponds to the start of the damping or cutoff in figure 1) to see if the differences are only on the smallest scales or not (although given the mean velocity profiles, I suspect they are not).
Authors response: Thank you very much, this is a very interesting and important point.
In order to answer this comment and clarify the effect of the FT and FS methods on the spectra, we have developed an additional analysis using the spectra presented in the paper. Although the effect of the FT and FS methods on the small scales can be clearly observed on the spectra, their effect on the large scales cannot be directly assessed from the figure, as pointed out by the reviewer. To compute a direct comparison scale by scale, the following ratio was used for the 128³, 192³, and 256³ simulations, where E_{u,k} denotes the power spectral density of the u velocity component at wavenumber k and XX stands for the dealiasing method, FT or FS. Hence, if ρ(k) < 0 energy is removed at that scale, and if ρ(k) > 0 energy is added at that scale. Figure 2 presents the ratio ρ(k) for both methods, where it can be observed that the effect of the FT method is very large at all scales. The large scales (0 ≤ k/k_max ≤ 0.2) are affected with a reduction of energy of ∼25%. The mid-range scales (0.2 ≤ k/k_max ≤ 0.6), corresponding to the inertial subrange, exhibit an overestimation of their energy of about 50% on average. Therefore, this method redistributes the energy of the small scales into the inertial sub-range scales. On the contrary, in the FS method, the energy from the filtered small-scales is redistributed more or less uniformly throughout with an averaged overall variation of less than 13%.
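For concreteness, the block below sketches a plausible form of the scale-by-scale ratio described in this response; the exact expression used in the revised manuscript is not reproduced here, so this definition is an assumption consistent with the stated sign convention (negative when the FT/FS run carries less energy than the 3/2-rule reference).

```latex
% Assumed definition of the spectral ratio: relative difference between the
% spectrum of an FT or FS run and that of the 3/2-rule reference at wavenumber k.
\rho(k) \;=\; \frac{E^{XX}_{u,k} \;-\; E^{3/2}_{u,k}}{E^{3/2}_{u,k}},
\qquad XX \in \{\mathrm{FT},\,\mathrm{FS}\}
```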
Changes in manuscript:
We have added figure 2 and its interpretation in the manuscript. This now reads as: "Although the effect of the FT and FS methods on the small scales can be clearly observed in figure 9, their effect on the large scales also needs to be quantified. To compute a direct comparison scale by scale, the following ratio was used (equation 2) for the 128³, 192³, and 256³ simulations, where E_{u,k} denotes the power spectral density of the u velocity component at wavenumber k and XX stands for the dealiasing method, FT or FS. If ρ(k) < 0 the energy density at that given wavenumber (k) is less than the corresponding one for the run using the 3/2 rule, and vice versa if ρ(k) > 0. Figure 2 presents the ratio ρ(k) for both methods.
When using the FT method, energy at the low wavenumbers is underpredicted, whereas energy at the large wavenumbers is overpredicted. Departures are in general larger with decreasing resolution, with an excess of up to 100% for the 128 3 -simulations in the wavenumber range close to the cutoff wavenumber. On the contrary, when using the FS method, the energy from the filtered (dealiased) small-scales is redistributed quasi-uniformly throughout the spectra with an averaged overall variation of less than 13%." Minor comments: 1. Reviewer comment: Title is long and too descriptive: how about replacing the wordy "atmospheric boundary-layer type flows" with "atmospheric flows". One in fact could foresee using such methods for cloud resolving LES outside the ABL. Same on last line of abstract: why restrict the applications only to ABL flows?
Authors response: Thank you for this comment. We have adapted these very interesting suggestions. The new title is included below, as well as the new abstract.
Changes in manuscript:
Title: "Comparison of dealiasing schemes in large-eddy simulation of neutrally-stratified atmospheric flows" Abstract: "Aliasing errors arise in the multiplication of partial sums, such as those encountered when numerically solving the Navier-Stokes equations, and can be detrimental to the accuracy of a numerical solution. In this work, a performance/cost analysis is proposed for widely-used dealiasing schemes in large-eddy simulation, focusing on a neutrallystratified, pressure-driven atmospheric boundary-layer flow. Specifically, the exact 3/2 rule, the Fourier truncation method, and a high order Fourier smoothing method are inter-compared.
Tests are performed within a newly developed mixed pseudo-spectral collocation -finite differences large-eddy simulation code, parallelized using a two-dimensional pencil decomposition. A series of simulations are performed at varying resolution and key flow statistics are inter-compared among the considered runs and dealiasing schemes. Both the Fourier Truncation and the Fourier Smoothing method correctly predict basic statistics. However, they both prove to provide less accurate flow statistics when compared to the traditional 3/2-rule. The accuracy of the methods is dependent of the resolution. The biggest advantage of both of these methods against the exact 3/2-rule is a notable reduction in computational cost with an overall reduction of 15% for a resolution of 128 3 , 17% for 192 3 and 21% for 256 3 ." miss-understanding.
Changes in manuscript:
The text now reads as: "Note that the molecular viscous term has been neglected within the flow. However, the effect of the molecular viscosity at the surface is modeled using the logarithmic law, where the surface drag is parameterized through the surface roughness." 15. Reviewer comment: Page 6 line 25: what does "module" mean? Do they mean modulus?
Authors response: Indeed. We have clarified this in the manuscript.
16. Reviewer comment: Should equation 12 include f i to be consistent ?
Authors response: Thanks, this has been corrected.
17. Reviewer comment: Page 8, lines 10-15: Authors should clarify this is with the baseline 3/2 dealiasing I presume. Also how does the parallelization method impact these numbers?
Authors response: Thank you for pointing this out, we used the 3/2-rule as a baseline. In addition, another comment in this regard was also made by another reviewer, and hence we have included some clarification in the manuscript.
Here is our response to the question regarding the parallelization and the pressure solver.
We have noticed that the cost breakdown for the resolution of the convective term and the Poisson solver is also influenced by the pencil decomposition.
When treating the convective term with the pencil decomposition, the communication cost increases with respect to the traditional slice parallelization. In this case, a total of nine transpositions are needed to compute the convective term, significantly increasing the computational cost.
Conversely, the Poisson solver becomes faster when using the pencil decomposition in comparison to the slice parallelization. Note that in the pseudo-spectral method the horizontal directions (x and y) are treated in Fourier space and only the vertical direction (z) remains in physical space; therefore the modes in k_x and k_y become independent of each other. In this case the system of equations, originally of size n_x × n_y × n_z, becomes n_x × n_y systems of n_z equations, making each vertical line in the domain independent. The pencil decomposition can take full advantage of this fact, making the resolution of the Poisson equation faster. Specifically, once the domain is transposed into the Z-pencil (square pipe aligned with the z-coordinate), the process of solving each of the n_x × n_y systems does not require any communication, making it very efficient, and limiting its cost to the transposition between the different pencils.
Changes in manuscript:
We realize that this is an important detail that should be also mentioned in the manuscript. For this reason we have included a couple of lines in section 3.3.
The text now reads as: "In addition, it is important to note that the low computational cost of the Poisson solver is related to the use of the pencil decomposition, which takes full advantage of the pseudo-spectral approach. Specifically, the Z-pencil combines with the horizontal treatment of the derivatives to make the implementation of the solver very efficient." 18. Reviewer comment: Page 10, lines 1-2: The z0 they impose is 1 cm, which corresponds more to a grass field than to a sparse forest or to a farmland. I suggest they check Brutsaert's books rather than Stull for z0.
Authors response: We realized that there was a mistake in the actual value of z0. It is actually 0.1 m, which corresponds to a sparse forest according to Stull and Brutsaert. We added the reference to Brutsaert's book in the manuscript.
19. Reviewer comment: Figure 5 and others are difficult to read. Why not use colors for the online version (Color is free with EGU, no?) Authors response: Because we couldn't find any information in this regard on the publisher web page we decided to go with black & white. If the editor confirms that color figures are free of charge, then we would be happy to change them. 20. Reviewer comment: Page 11, lines 9-11: 30% drop in the convective term cost is good but I would not say it is significant. It would only be equivalent to about 20% drop in total computing time (given Fig 2), which would only be equivalent to a 5% reduction in the resolution. So I would remove "significantly" on line 9.
Authors response:
We have removed the "significantly" in the text. Additional detail in this regard has been provided earlier in comment 17.
Authors response: Thank you for pointing out the repetition.
23. Reviewer comment: Page12, line 9: correct the misspelling of "stream-wise" Authors response: Thank you for pointing out the misspelling.
24. Reviewer comment: Page 12 line 25, and page 14 line 10: "differentiated" is an unclear word. Please remove and clarify the two sentences.
Authors response: Thank you for pointing out these two sentences. The discussion of the results have been changed and these sentences have been removed.
25. Reviewer comment: Page 14 lines 15-16. There wont be any dispersive stresses in their simulations over homogeneous terrain so why mention them?
Authors response: In order to avoid any confusion in the discussion, we removed the mention of dispersive stresses from the text | 5,035.8 | 2017-11-17T00:00:00.000 | [
"Environmental Science",
"Physics",
"Engineering"
] |
Solving the multicommodity flow problem using an evolutionary routing algorithm in a computer network environment
The continued increase in Internet traffic requires that routing algorithms make the best use of all available network resources. Most of the current deployed networks are not doing so due to their use of single path routing algorithms. In this work we propose the use of a multipath capable routing algorithm using Evolutionary Algorithms (EAs) that take into account all the traffic going over the network and the link capacities by leveraging the information available at the Software Defined Network (SDN) controller. The designed routing algorithm uses Per-Packet multipath routing to make the best use of the network’s resources. Per-Packet multipath is known to have adverse effects when used with TCP, so we propose modifications to the Multipath TCP (MPTCP) protocol to overcome this. Network simulations are performed on a real world network model with 41 nodes and 60 bidirectional links. Results for the EA routing solution with the modified MPTCP protocol show a 29% increase in the total network Goodput, and a more than 50% average reduction in a flow’s end-to-end delay, when compared to OSPF and standard TCP under the same network topology and flow request conditions.
Introduction
One of the major drawbacks of computer networks using a distributed architecture is their low efficiency caused by the lack of routing solutions aware of the entire network status. In a distributed architecture network, every routing device contains its own independent control plane; each routing device takes independent routing decisions based on information local to the device. A distributed network can improve the routing decisions taken by individual components by constantly sharing a snapshot of the current global network status. However, due to the impracticality of such a solution, distributed network architectures resort to either single path routing algorithms or very simple multipath solutions such as Equal Cost Multipath Routing (ECMP) [1]. In ECMP, flows are distributed over paths with equal cost where a hash of the packet header, not the current network state, determines the path taken [2]. Improving the network resource usage efficiency requires the use of routing algorithms with access to an accurate and up-to-date snapshot of the global network status. A centralized network cost is an abstract metric and can represent the financial cost to use a link or may also be a function of one or multiple link properties such as delay or reliability. A three-dimensional chromosome was designed where each gene is a two-dimensional matrix representing the traffic generated on each link by a given flow. To limit the search space, paths longer than four hops were excluded, and a flow was restricted to a maximum of two paths. The authors noted that their proposed three-dimensional chromosome design is not very space efficient and its size is dependent on the network size. In contrast, our chromosome size is not directly dependent on the network size, but depends only on the number of flows to route and the set of paths each flow is able to use.
Masri et al. [7] used the Ant Colony Optimisation (ACO) algorithm to solve the MCFP where each flow is restricted to use only one path. This constraint converts the MCFP to an NP-hard problem, which explains the use of the ACO [7]. ACO algorithms are a class of optimizers primarily designed to find the shortest paths within a graph and take inspiration from the behaviour of ants [13]. The ACO is set to minimize both the time required to satisfy all the requests and the network cost. In [14], Evolutionary Algorithms (EAs) were used to find a path that links a source and destination node together with the condition that the path may only pass through a domain once. A domain can be seen as a subnetwork within a larger network. The constraint of passing through a domain only once converts the problem to NP-hard, which is why Evolutionary Algorithms (EAs) are used. The EA designed in [14] only deals with the path finding problem and, unlike the work presented here, does not tackle the problem of assigning data rates to flows while taking into account link capacities and other flows using the network. Stefano et al. [15] combined SDN with an Alienated Ant Algorithm (AAA) named A4SDN to optimize for better throughput, delay and packet loss. The AAA is very similar to the ACO algorithm with the exception that ants under the AAA follow the path with the lowest pheromone trail. This modification allows for the generation of solutions with better load balancing performance as the ants do not converge to a single path. By exploiting the network status information offered by SDN and forwarding it to the AAA, A4SDN managed to decrease packet loss by 11% and increase the total network throughput by 16% when compared to the Extended Dijkstra algorithm [16]. A similar concept is used in [17], where an EA is used to find alternative routes to move video streams from congested to less congested paths in the hopes of improving the video stream performance. Initially, all video streams are routed over the shortest path using the Bellman-Ford algorithm. Periodically the SDN controller gathers the network status from the switches and, if congestion is detected, the EA algorithm is used to find alternative paths for videos currently transmitted over congested routes. The EA was designed to minimize the path's aggregate delay and remaining capacity. This technique resulted in a 20% reduction in packet loss and a Peak Signal-to-Noise Ratio (PSNR) improvement of nearly 100% when compared to the Bellman-Ford algorithm.
Not all network optimization approaches use Machine Learning (ML); two particularly interesting non-ML approaches are those by Google [5] and Microsoft [18], which serve as good reference on the practical deployment of global routing solutions on a physical network. In [18] the routing solution is found using LP iteratively on a subset of flows in order of priority, with the objective of maximizing throughput with a preference given to shorter paths. Achieving fairness between flows requires the solution of a number of LP problems, which was considered too costly so was replaced by an approximation. The work in [5] uses similar objectives to those in [18], however the LP solver was replaced by a bespoke greedy heuristic, in the interest of speed. Both [5,18] use algorithms that are based on LP, optimizing for a single objective. In contrast, in this work multiple objectives are simultaneously optimized. A summary of work closely related to this one has been presented; for an in-depth survey of the various ML algorithms developed for route optimization in SDNs, the reader is referred to [19].
Because of the limitations of LP, including the lack of multi-objective support, Evolutionary Algorithms (EAs) are used in our routing algorithm designs. LP generated solutions, which are optimal for a single objective, are used to compare with the EA generated solutions to gauge the effectiveness of the suboptimal EA algorithm, at least with respect to the objective being optimized by the LP solution. This work focuses solely on the development of the routing algorithm on an already existing network topology and a functioning centralized network architecture is assumed. The routing algorithm proposed here is optimizing for throughput maximization and delay minimization. The EA developed is designed in such a way to allow for easy addition and/or modification of the objectives presented here. The algorithms and results presented in this work assume a static flow set. This is a known limitation, with suggestions for extending our system to work with dynamic flow sets given in the conclusion.
Notation
Let G = (V, E) be a loop-free directed graph representing the network topology, where V and E are the sets of nodes and links, respectively. Each link is represented by e = (u, v) ∈ E, where u, v ∈ V are the link's source and destination node, respectively. Let ē = (v, u) ∈ E represent the reverse of link e = (u, v) ∈ E. The capacity and cost of each link e ∈ E are represented by λ_e and γ_e, respectively. The definition of link cost depends on the application, and can take a myriad of values, such as the actual financial cost to use a given link. In this work, the cost of a link is set equal to the link's delay value. The link's delay value represents the time taken for information to travel over the said link. Let F = {f_1, f_2, ..., f_n} be the set of n flows, where f_i is the i-th flow in set F. Multiple flows can exist between the same source/destination pair. Let δ_i represent the data rate requested by flow f_i ∈ F. A path is defined as the sequence of links that connect a sequence of distinct nodes from the flow's source to the destination. Let k represent the maximum number of different paths flows are allowed to take, with the actual number of paths flow f_i is allowed to use given by k_i, where k_i ≤ k. We define P_i = {p_{i,1}, p_{i,2}, ..., p_{i,k_i}} as the set of paths related to flow f_i and g_{i,j} ∈ ℝ≥0 as the data rate flow f_i transmits on path p_{i,j}. The aggregate delay value of path p_{i,j}, denoted by γ(p_{i,j}), is calculated as the sum of the delay values of the links that form the path. Finally, let α(g_{i,j}) represent the TCP acknowledgement flow generated when flow f_i transmits at a data rate of g_{i,j} on path p_{i,j}.
System overview
Fig 1 shows a high-level overview of all the modules presented in this work and the links that connect the said modules together. The network is split into two parts: the data plane and the control plane. The data plane is where the actual data packets are transmitted. The control plane is used for the bidirectional communication between the network controller, network switches and applications. Information shared on the control plane includes application transmission requests and switch table updates, to give a few examples. The network controller has access to both the network topology and its properties, and all the flows that are transmitting, or wish to start transmission, over the network. The gathered network status information is fed to the routing algorithm to generate a routing solution. The routing solution contains the paths each flow is allowed to take and the data rate at which to transmit on each path. This work assumes that the network controller has full control over the entire network.
The routing algorithm can be divided into two stages: path selection and data rate optimization. Path selection algorithms generate a set of paths for each flow in a given flow set, which is then forwarded to the data rate assignment optimization algorithm. The routing solution is relayed back to the network controller, which installs the necessary routes on the network switching devices and informs applications requesting transmission permission of the allocated data rate.
Path selection
The routing algorithms developed in this work rely on external algorithms to supply them with a set of loop-free paths that each flow is allowed to use. One of the objectives sought after by all the routing algorithms used here is delay minimization. Therefore, an obvious choice for a path selection algorithm is the k-Shortest Path (KSP). The KSP algorithm used here is a variation on Yen's KSP algorithm [20], where all the paths with a cost equivalent to the k-th path are chosen at random such that a flow will always have at most k paths. This modification is required to have an upper bound on the number of paths available to a flow. Having control over the number of paths is important as it affects the routing algorithm's complexity.
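A rough sketch of this path-selection step is shown below, assuming a NetworkX graph whose edges carry a "delay" attribute; the tie-handling shown (random sampling among paths tied with the k-th cheapest) is a simplified stand-in for the variation described above, and the function and variable names are illustrative rather than the authors' implementation.

```python
import random
from itertools import islice
import networkx as nx

def k_shortest_paths(G, src, dst, k, weight="delay"):
    """Return up to k loop-free paths from src to dst, ordered by aggregate delay."""
    # Yen-style enumeration via NetworkX: simple paths in non-decreasing cost order.
    gen = nx.shortest_simple_paths(G, src, dst, weight=weight)
    candidates = list(islice(gen, 3 * k))            # small over-sample to expose ties
    if len(candidates) <= k:
        return candidates

    cost = lambda p: sum(G[u][v][weight] for u, v in zip(p, p[1:]))
    kth_cost = cost(candidates[k - 1])
    cheaper = [p for p in candidates if cost(p) < kth_cost]
    tied = [p for p in candidates if cost(p) == kth_cost]
    random.shuffle(tied)                             # random choice among tied paths
    return (cheaper + tied)[:k]

# Example: a toy topology with per-link delay used as the path-selection cost.
G = nx.DiGraph()
G.add_edge("a", "b", delay=1.0)
G.add_edge("b", "c", delay=1.0)
G.add_edge("a", "c", delay=3.0)
print(k_shortest_paths(G, "a", "c", k=2))
```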
One shortcoming of the KSP algorithm is the lack of link diversity when used on a highly interconnected network topology, as most of the selected paths will share the vast majority of links between them. From a flow's perspective, this gives the perceived illusion of being able to transmit over multiple paths; however, allocating data rate to a path reduces the capacity of all the remaining paths that make use of that same link. From the point of view of the routing algorithm, the lack of link diversity limits the different paths a flow may be assigned to. To increase link diversity for a given flow set, the k-Shortest Edge Disjoint Path (KSEDP) [21] algorithm was considered. Contrary to the KSP algorithm, the paths returned by the KSEDP algorithm do not share any edges, but node sharing is allowed. However, the KSEDP algorithm may be too restrictive in situations where a node is connected to the rest of the network via a single link. To counter this, the k-Shortest Relaxed Edge Disjoint Path (KSREDP) algorithm is used instead, where the initial path segments that are the only means of communication between a source and destination pair are allowed to be used by multiple paths.
The implementations of both the KSP and KSREDP path selection algorithms described here are based on the algorithm developed by Szcześniak [22]. All the path selection algorithms have their cost set equal to the link's delay value; the shortest path is therefore the path with the lowest aggregate delay value.
Multipath transport protocol
The major barrier blocking Per-Packet multipath routing from global adoption is the performance penalty suffered by TCP flows. TCP is the main transport protocol used to date; a system that negatively affects TCP will not find many takers. TCP is a stream oriented transport layer protocol designed for applications seeking a reliable connection between two devices over a computer network. TCP assumes that all packets travel over the same path and builds its congestion control algorithms based on this assumption. This assumption is broken when a flow's packets are transmitted over multiple paths, as transmitting packets over different paths, with different properties, may lead to packets being received out of order. TCP mistakenly treats this as a sign of congestion and reduces the transmission rate. Due to the exclusive use of Per-Packet multipath by the designed routing algorithms, modifications to MPTCP are proposed to solve the mentioned problems.
Background. MPTCP [23] is a transport layer protocol that aggregates multiple TCP sub-flows to improve the flow's data rate and/or reliability. MPTCP, being a transport layer protocol, does not have the ability to control the path taken by each created TCP sub-flow. Therefore, MPTCP was originally targeted at, and found its first major practical use case in, multi-homed devices. Apple first deployed MPTCP with iOS 7 to increase the reliability of the Siri voice assistant [24]. In this case, MPTCP is used to create two connections, one over Wi-Fi and another over LTE, for a seamless handover in the event a user loses the Wi-Fi connection. The lack of path selection knowledge requires MPTCP to implement a shared congestion control mechanism between all the TCP sub-flows such that multiple MPTCP sub-flows do not starve a single TCP connection of resources if they happen to share the same link [23]. The availability of SDN allows the routing algorithm to gain the intelligence required to distinguish between different MPTCP sub-flows and thus avoid routing them over the same path. Zannettou et al. [25] do just this by exploiting SDN to route MPTCP sub-flows over different paths. However, prior to the work done by Zannettou et al. in [25], the Linux kernel implementation of MPTCP was only able to create one sub-flow for a pair of Internet Protocol (IP) addresses. This limitation was addressed and fixed by Zannettou et al. [25] and added to version 0.9 of the Linux kernel MPTCP implementation. This change allows MPTCP to open more than one sub-flow for a pair of IP addresses.
Proposed modifications. MPTCP does not have the capability of transmitting a sub-flow at a specific data rate out of the box. This functionality is required to adhere to the data rates of the generated routing solution. The distribution of packets between the different sub-flows (paths) of a given flow is implemented with the help of a stochastic scheduler. A stochastic scheduler is used because of its implementation simplicity and ability to handle any arbitrary split ratio without running into scalability issues. In short, for every packet to be transmitted, a random number is generated that dictates on which sub-flow the packet will be transmitted. The size of the bin associated with each sub-flow is proportional to the data rate assigned to that path. The proposed MPTCP framework model is shown in Fig 2, which outlines the steps taken in sequence by a flow before starting data transmission over the network. A more detailed explanation of the stochastic splitter and the MPTCP modifications can be found in [26][27][28], respectively. Fig 2. The proposed MPTCP framework. The numbers represent the sequence of events, in order, when an application has data to transmit [27].
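A minimal sketch of the stochastic sub-flow scheduler described above is given below, assuming the per-path data rates from the routing solution are available at the sender; names are illustrative, and this is not the kernel MPTCP implementation.

```python
import random

def make_stochastic_scheduler(path_rates):
    """Return a function that picks a sub-flow (path) for each packet.

    path_rates maps a sub-flow identifier to the data rate assigned to it by the
    routing solution; bin sizes are proportional to these rates.
    """
    paths = list(path_rates)
    weights = [path_rates[p] for p in paths]

    def pick_subflow():
        # One random draw per packet; heavier paths get proportionally bigger bins.
        return random.choices(paths, weights=weights, k=1)[0]

    return pick_subflow

# Example: a flow assigned 5 Mbps on path 0 and 15 Mbps on path 1 (1:3 split).
scheduler = make_stochastic_scheduler({0: 5.0, 1: 15.0})
counts = {0: 0, 1: 0}
for _ in range(10_000):
    counts[scheduler()] += 1
print(counts)  # roughly {0: 2500, 1: 7500}
```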
Routing solution 1: Path constrained Max-Flow Min-Cost (LP)
In this work we target the Path Constrained Maximum-Flow Minimum-Cost (PC-MFMC) variant of the MCFP. The PC-MFMC problem is a combination of two problems, solved in succession: Maximum Flow and Minimum Cost. Both problems share the constraint where a flow is restricted to travel on a given path set. Three reasons are behind the use of the path constrained version of the Multi-Commodity Maximum-Flow Minimum-Cost (MMFMC). First, control over the paths a flow is allowed to use lets us manage the algorithm's complexity by varying the number of paths each flow is permitted to use. If the algorithm's complexity is of no concern, flows may be allowed to use all the paths that exist between a source and destination. Second, some flows can be excluded from using certain paths for multiple reasons, including legal ones. Finally, the Evolutionary Routing Algorithm (ERA) routing solution is also path-based, which allows a direct and fair comparison between the two solutions. More information on the PC-MFMC and the non path constrained alternative, referred to as the MMFMC, can be found in [29]. The problem formulation of the PC-MFMC follows.
Maximum flow. The Maximum Flow problem is solved first; its objective is given by Eq (2), subject to constraints Eqs (3)-(5), where T represents the total network flow allocated when solving the Maximum Flow solution. Constraint Eq (3) ensures that no negative data rate is assigned. Constraint Eq (4) makes sure that no flow is allocated a higher data rate than what it requested. Constraint Eq (5) guarantees that no link is used beyond its capacity, including the acknowledgement flows generated by TCP.
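The original equations are not reproduced in this text; the block below is a hedged sketch of a plausible form of Eqs (2)-(5), reconstructed from the constraint descriptions above. The exact index sets, and the assumption that the acknowledgement traffic of path p_{i,j} is carried on the reverse links ē, are ours.

```latex
% Hedged reconstruction of the Maximum Flow step; index sets and the
% reverse-link treatment of acknowledgement traffic are assumptions.
\begin{align}
\max\; T &= \sum_{i=1}^{n} \sum_{j=1}^{k_i} g_{i,j} && \text{(2)}\\
g_{i,j} &\ge 0 && \forall\, i, j \quad \text{(3)}\\
\sum_{j=1}^{k_i} g_{i,j} &\le \delta_i && \forall\, f_i \in F \quad \text{(4)}\\
\sum_{(i,j):\, e \in p_{i,j}} g_{i,j} \;+\; \sum_{(i,j):\, \bar{e} \in p_{i,j}} \alpha(g_{i,j}) &\le \lambda_e && \forall\, e \in E \quad \text{(5)}
\end{align}
```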
Minimum cost. The Minimum Cost solution is formulated as a minimization of the total network cost, such that constraints Eq (3), Eq (4), Eq (5), and Eq (7) are met. Constraint Eq (7) guarantees that the Minimum Cost solution allocates the same total network flow as that found by the Maximum Flow solution. In [29], the Minimum Cost solution is set to allocate the same total network flow as that found by the Maximum Flow solution by restricting flows to transmit at the data rate allocated by the Maximum Flow solution. The Minimum Cost solution given here improves on the one given by Szymanski [29]. The solution in [29] does not have constraint Eq (7) and replaces constraint Eq (4) with a constraint fixing each flow to its Maximum Flow allocation, where D_i represents the data rate allocated to flow f_i by the Maximum Flow solution in Eq (2). Compared to the formulations used in [29], the ones presented here do not force the Minimum Cost solution to use the flow assignment set by the Maximum Flow solution. This gives the Minimum Cost solution the freedom to adjust a flow's allocated data rate in search of a lower cost solution, as long as the same total data rate found by the Maximum Flow solution is kept. Additionally, the link capacity constraint is updated to take into account the TCP acknowledgement flows. Failing to account for these acknowledgement flows may still produce a routing solution that results in some parts of the network becoming congested.
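As above, the minimum-cost objective itself is not reproduced here; the block below is a hedged sketch of one plausible form (the carried rate of each path weighted by its aggregate cost), together with constraint Eq (7) as described in the text. The exact objective used by the authors may differ.

```latex
% Hedged sketch of the Minimum Cost step; the objective form is an assumption.
\begin{align}
\min\; \sum_{i=1}^{n} \sum_{j=1}^{k_i} \gamma(p_{i,j})\, g_{i,j}
\quad \text{s.t. Eqs.\ (3)--(5) and} \quad
\sum_{i=1}^{n} \sum_{j=1}^{k_i} g_{i,j} = T \quad \text{(7)}
\end{align}
```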
Routing solution 2: Evolutionary routing algorithm
The objective of the designed ERA is to generate a multi-objective, globally optimized, multipath-capable routing solution that maximizes the total network flow and minimizes the application's mean end-to-end delay. Unlike the LP-based solution, the ERA, being a true multi-objective solver, optimizes for all the objectives concurrently. Table 1 summarizes the objectives used by both routing solutions along with an explanation behind their design and use. The ERA presented here is based on [30,31] by the same authors and uses the NSGA-II algorithm [32]. It shares the chromosome design, crossover operator, total network flow objective, and the excess removal algorithm with the algorithm given in [30]. For completeness, a summary of these components is included in this text. Chromosome design. The design of the chromosome is the foundation of any EA as it represents the way a solution is formulated. The chromosome C is defined as the sequence C = (G_1, G_2, ..., G_n), where G_i = (g_{i,1}, g_{i,2}, ..., g_{i,k_i}) is the sequence of genes related to flow f_i. The chromosome has been carefully designed to accurately represent a routing solution, include the flow conservation constraint and scale independently of the underlying network topology. The flow conservation constraint ensures that all data transmitted from a source node must reach its destination node in its entirety with no losses incurred at the relay nodes. As each gene in the chromosome represents the data rate to transmit on a path, as long as the path starts from the source and ends at the destination node, then the flow conservation constraint is implied. Using the set-up shown in Fig 3 as an example, where Flow 1 is transmitting at 10 Mbps, Flow 2 is transmitting at 20 Mbps and both have k = 2, the chromosome representation is equal to C = ((5, 5), (5, 15)).
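A minimal sketch of this chromosome encoding, using the Fig 3 example above (two flows, k = 2), is shown below; the helper names are illustrative and not from the authors' code.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Chromosome:
    """A routing solution: one gene sequence per flow.

    genes[i][j] is the data rate (Mbps) flow i sends on its j-th candidate path,
    so flow conservation is implied by construction (each gene maps to one
    source-to-destination path).
    """
    genes: List[List[float]] = field(default_factory=list)

    def flow_rate(self, i: int) -> float:
        # Total data rate allocated to flow i across all of its paths.
        return sum(self.genes[i])

    def total_network_flow(self) -> float:
        # Sum of the allocated rates of all flows (the un-normalized O_1).
        return sum(self.flow_rate(i) for i in range(len(self.genes)))

# Example from Fig 3: Flow 1 requests 10 Mbps, Flow 2 requests 20 Mbps, k = 2.
C = Chromosome(genes=[[5.0, 5.0], [5.0, 15.0]])
print(C.flow_rate(0), C.flow_rate(1), C.total_network_flow())  # 10.0 20.0 30.0
```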
Table 1. Description and relationship between the objectives used by the ERA and the PC-MFMC problem solved using LP. ✔: Objective present in algorithm. ✗: Objective not present in algorithm.
| Objective | ERA | PC-MFMC | Comments |
|---|---|---|---|
| Flow Maximization | ✔ | ✔ | Aims at maximizing the total flow passing through the network at any given time. |
| Cost Minimization | ✗ | ✔ | Aims at minimizing the total network cost. This translates to delay minimization because the cost of a link is set equal to its delay value. |
| Estimated Mean End-to-End Delay | ✔ | ✗ | Aims at minimizing the end-to-end delay experienced by a flow. This is a more accurate representation of reality than the cost minimization objective defined in the row above. However, this objective is non-linear and therefore cannot be used with LP. |
Objective functions. The fitness of a routing solution is based on two objectives: the total network flow and the application's estimated mean end-to-end delay, represented by O_1 and O_2, respectively. Throughput maximization and delay minimization are the objectives chosen in this work as they are two of the most common and generally applicable metrics used when judging the performance of a given network. Having said this, routing algorithms that require the optimization of a highly specific network metric, for which the objectives presented here are either too generic or do not fully encompass the requirements, can either add new objectives or modify the existing ones.
Total network flow maximization. One of the fundamental requirements of a routing algorithm is to maximize the total network data rate, as this has a direct impact on the network efficiency and flow satisfaction rate. The total network flow objective, O_1, is the sum of the data rates allocated to all flows over all of their paths. This objective is normalized by dividing it by the total requested data rate across all flows, Σ_{i=1}^{n} δ_i. Estimated mean end-to-end delay minimization. When using the modified MPTCP protocol, the mean delay experienced by the application is affected by both the path delay values and the data rate transmitted on each path. An application's end-to-end delay is defined as the time taken from when the transmitting application sends a byte of data to when the receiving application receives that same byte of data. Modelling the interaction between the packets transmitted on different paths to calculate the mean end-to-end delay is not trivial. Using a simple queue model, we observed that the average end-to-end delay tends to be very close to the largest path delay value from the set of paths used. We simplify the application's end-to-end delay measurement by setting it equal to the largest delay value from the set of paths used by a given flow. Based on this simplified model and the goal of minimizing the delay experienced by a flow, this objective minimizes the transmission rate on paths with large delay values from the set of paths available to a flow. In the objective's formulation, F_i represents the objective value for flow f_i, and the set P̂_i ⊆ P_i includes all the paths p_{i,j} with g_{i,j} > 0. F_i is normalized with respect to the total flow rate that is allocated for that given solution, such that the final value is independent of the solution's total network flow value. Note that this objective is non-linear because the flow's delay value is conditionally based on which paths the flow is currently using; thus, it cannot be used with an LP solver. This objective is normalized by dividing it by the cost of the path with the largest delay from the set of all paths. Crossover. The crossover operator is used to generate new offspring (routing solutions) by mating two chromosomes, referred to as the parent chromosomes, to generate two new offspring solutions. Two parent chromosomes, C_a and C_b, are selected using dominance-based tournament selection [33]. For every crossover operation, a mixing ratio z ∈ U(0, 1) is chosen. U(0, 1) represents a random source uniformly distributed between 0 and 1. Each gene sequence in C_a = (G^a_1, G^a_2, ..., G^a_n) is swapped with its corresponding sequence in C_b = (G^b_1, G^b_2, ..., G^b_n) with probability z ∈ U(0, 1). A random mixing ratio is used to allow an offspring to inherit most of the genes from a single parent. Fig 4 shows an example of a crossover operation where the genes related to Flow 2 are swapped to create two new routing solutions.
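A minimal sketch of the crossover operator just described (uniform per-flow gene-sequence swap with a single random mixing ratio per mating) is given below; parent selection and constraint repair are omitted, and the names are illustrative rather than the authors' implementation.

```python
import random
from copy import deepcopy

def crossover(parent_a, parent_b):
    """Mate two chromosomes (lists of per-flow gene sequences) into two offspring.

    A single mixing ratio z ~ U(0, 1) is drawn per crossover; each flow's gene
    sequence is swapped between the parents with probability z, so an offspring
    can inherit most of its genes from one parent.
    """
    child_a, child_b = deepcopy(parent_a), deepcopy(parent_b)
    z = random.random()                      # mixing ratio z in U(0, 1)
    for i in range(len(parent_a)):           # one gene sequence G_i per flow f_i
        if random.random() < z:
            child_a[i], child_b[i] = child_b[i], child_a[i]
    return child_a, child_b

# Example with two flows and k = 2 paths each.
A = [[5.0, 5.0], [5.0, 15.0]]
B = [[10.0, 0.0], [0.0, 20.0]]
print(crossover(A, B))
```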
Mutation. While the crossover operator generates new routing solutions by combining chromosomes together, it does not modify any of the flows' data rate assignments, as this task is left to the mutation operator. The mutation operator works on a single chromosome, modifying the gene sequences related to a fraction μ of flows, chosen at random, within that chromosome. For every gene sequence G_i that is selected for mutation, the mutation operation selects a subset P̂_i ⊆ P_i of paths which flow f_i is allowed to use. Once P̂_i is chosen, the paths are considered in random order, transmitting as much data as possible until the flow's requested data rate is met or all paths are used. This data rate assignment does not exceed any link capacity and takes into account all the other flows' data rate assignments. The path set P̂_i is generated using one of the two methods explained next; a short sketch of both methods follows their descriptions below. Method selection is done at random with both methods having equal probability.
Minimize the maximum path delay. This method attempts to minimize the probability of including paths with high delay values in the set of paths a flow is allowed to use. For a given flow f_i, all paths p_i,j whose delay satisfies delay(p_i,min) / delay(p_i,j) ≥ z are included in P̂_i, where z ∈ U(0, 1) and p_i,min represents the path with the lowest delay from the set P_i. Paths with a higher delay value have a diminishing probability of being selected, as this has a direct impact on the flow's end-to-end delay performance.
Maximize flow. The objective of this method is to transmit as much of the flow's requested data rate as possible over all available paths; thus, P̂_i = P_i.
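A compact sketch of the two path-set selection methods is given below. The delay-ratio test used for the first method is a reconstruction of the garbled formula above and should be read as an assumption; the subsequent data-rate reassignment over the selected paths is not shown.

import random

def select_path_subset(path_delays):
    # path_delays: per-path delay values for one flow; returns the indices of
    # the paths the mutated flow is allowed to use.
    if random.random() < 0.5:
        # Maximize flow: keep every available path.
        return list(range(len(path_delays)))
    # Minimize the maximum path delay: keep path j only when
    # delay(p_min) / delay(p_j) >= z, so high-delay paths rarely survive.
    z = random.random()
    d_min = min(path_delays)
    return [j for j, d in enumerate(path_delays) if d_min / d >= z]

print(select_path_subset([5.0, 8.0, 20.0]))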
Initial population generation. The initial population is generated using the following procedure. For each flow f_i, the number of paths the flow is allowed to use, ν, is randomly selected using a uniform distribution from the set ν ∈ {0, 1, 2, ..., k_i}. Subsequently, ν paths are chosen at random from the set P_i. The fraction of the requested data rate d_i that the flow is to transmit is randomly determined as d̂_i = z · d_i, where z ∈ U(0, 1). For each of the chosen paths p_i,j, the smallest (bottleneck) link capacity along that path, r(p_i,j), is calculated, and the corresponding gene is set to g_i,j = min(r(p_i,j), d̂_i). Genes for paths that were not in the chosen subset are set to zero. This population initialization method may create solutions that break the constraints defined by the MCFP. In such instances, the chromosome is repaired using the methods described in the following section. Constraint handling. The chromosome's design already ensures that a number of constraints are met. Two additional constraints that are not implicitly satisfied remain: the flow over-provision constraint and the link capacity constraint. Chromosomes are first checked for over-provisioned flows, followed by a check for over-capacity links. The order is important because it is pointless to fix over-capacity links while over-provisioned flows may still be present in a given solution. After a crossover operation is performed, the two newly generated solutions are checked against the link capacity constraint. The crossover operation does not modify the flows' data rate assignment; therefore, there is no need to validate the flows' over-provision constraint. The mutation operation does not require any validation because the current network usage is taken into account when assigning data rate on paths, and no flow is assigned more data rate than requested. Finally, the method used to generate the initial population requires that each chromosome is checked for both the flow over-provision and link capacity constraints. Excess flow is removed from G_i using the excess removal algorithm explained below. For every link found to be over capacity, the same excess removal algorithm is used to remove the excess in an unbiased way from the genes in the set {g_i,j : e ∈ p_i,j}. Since the excess removal operation affects a whole path, links other than the one that triggered the operation may be affected. To reduce bias, links are considered and repaired in a random order. This process is terminated when no over-capacity links remain. For a detailed explanation, refer to [28,30].
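The per-flow part of the initialization can be sketched as follows; the formula d̂_i = z · d_i and the data structures (a list of candidate paths, each a list of link ids, plus a capacity map) are assumptions used only for illustration.

import random

def init_gene_sequence(paths, capacity, demand):
    # paths: list of candidate paths, each a list of link ids.
    # capacity: dict mapping link id -> link capacity.
    # demand: the flow's requested data rate d_i.
    k = len(paths)
    nu = random.randint(0, k)                     # number of paths the flow may use
    chosen = random.sample(range(k), nu)
    d_hat = random.random() * demand              # fraction of the demand to place
    genes = [0.0] * k
    for j in chosen:
        r = min(capacity[e] for e in paths[j])    # bottleneck capacity r(p_i,j)
        genes[j] = min(r, d_hat)                  # g_i,j = min(r(p_i,j), d_hat)
    return genes                                  # may break constraints; repaired later

links = {"a": 10.0, "b": 4.0, "c": 6.0}
print(init_gene_sequence([["a", "b"], ["a", "c"]], links, 8.0))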
Excess removal algorithm. The excess removal algorithm is used to remove a known excess amount from a set of values while being as fair as possible about how much is removed from each element of the given set. Let G = {g_1, g_2, ..., g_κ} represent the sequence of κ genes, determined by the flow over-provision constraint or the link capacity constraint, from which we need to remove an excess value of τ. That is, we want to determine an updated sequence of genes G' = {g'_1, g'_2, ..., g'_κ}. Let ξ_i represent the amount to remove from gene g_i, such that g'_i = g_i − ξ_i. We randomly determine each ξ_i subject to the constraints imposed by G, τ, and the previously determined ξ_j, j < i: each ξ_i is chosen uniformly at random within the range that satisfies both constraints (no gene is reduced below zero, and the removed amounts must still be able to sum to τ). To avoid introducing a bias in the evolutionary algorithm, the genes in G are considered in a random order.
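A possible code form of the excess removal algorithm is shown below. The exact sampling range for each ξ_i is not stated explicitly in the text, so the bounds used here (never reduce a gene below zero, and always leave enough removable rate in the genes not yet visited) are an assumption that satisfies both constraints.

import random

def remove_excess(genes, tau):
    # genes: the gene values g_1..g_k selected by the violated constraint.
    # tau:   the total excess that must be removed (tau <= sum(genes)).
    order = list(range(len(genes)))
    random.shuffle(order)                      # visit genes in random order to avoid bias
    out = list(genes)
    remaining = tau
    for pos, i in enumerate(order):
        rest = sum(out[j] for j in order[pos + 1:])   # rate still removable afterwards
        lo = max(0.0, remaining - rest)        # must remove at least this much now
        hi = min(out[i], remaining)            # cannot remove more than the gene holds
        xi = random.uniform(lo, hi)            # xi_i drawn uniformly within the feasible range
        out[i] -= xi
        remaining -= xi
    return out

# Example: remove 3 Mbps of excess spread over three path assignments.
print(remove_excess([4.0, 2.0, 1.5], 3.0))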
Complexity analysis
ERA. Let ε represent the number of links in a given graph, where e_i ∈ E represents the i-th link. Let θ represent the total number of paths (the chromosome size), given by θ = sum_{i=1}^{n} |P_i|, where |P_i| = k_i represents the cardinality of the set P_i. Let M and P represent the number of objectives and the population size, respectively. The complexity of each function used by the ERA is given in Table 2, where functions have been reduced to their most dominant term and constants removed. Let χ represent the number of generations, ω the crossover probability, and ψ the mutation probability. Using the equations in Table 2, and assuming the worst-case scenario where ω = ψ = μ = 1, the complexity of the developed ERA is equal to O(P(θε² + nk) + χ(θε² + nθε + nk + Pθ + P²)). (18) The derivations for the complexity equations in Table 2 are given in [28]. LP. The GNU Linear Programming Kit (GLPK) library does not provide information on the complexity and scalability of the algorithms used to solve LP formulations. Therefore, empirical evidence is used to determine the scalability of the developed LP routing algorithm that solves the PC-MFMC problem. Fig 5 shows the time taken by the LP solver to find a solution to the PC-MFMC problem as the number of variables increases, grouped by the network load. In this context, variables refer to all the g_i,j variables that the LP solver needs to find a suitable value for. Using curve fitting, it is clear that, in general, the LP solver used in this work scales quadratically with the number of variables. Note that there are particular instances where the time required to find a solution is more than double the time required by solutions with the same number of variables. The reason behind these outlier values needs to be investigated further. To the best of the authors' knowledge, the most recent work on how to solve LP problems efficiently is that by Cohen et al. [34], which still scales quadratically with the number of variables.
ERA vs LP. Solving the PC-MFMC problem using LP scales quadratically with the number of variables. The number of variables is equivalent to the total number of paths in a given solution, which is equal to the chromosome size (θ). On the other hand, the developed ERA scales quadratically with the number of links in a given topology (ε) and the population size (P). Assuming that the ERA is used over a fixed network topology and the population size is kept constant, the designed ERA scales linearly with the chromosome size, which is an improvement over the LP solution. The ERA proposed here makes use of non-linear objectives as they offer a more accurate representation of the network behaviour compared to their linear counterparts. Even though the ERA is inherently suboptimal, when the system to be modelled has non-linear properties, as is the case in this work, solvers other than LP need to be sought. Reformulating a non-linear problem so that it can be solved with LP can turn it into an NP-hard one [7].
Network topology
The 2017 GÉANT network topology is used as it models an actual network topology and has a dense network core with multiple paths between any two locations. Such a feature is important to this work as it allows us to test the multipath capability of our routing algorithms. Detailed information on the topology, link capacity and delay values used can be found in [28,31].
Flow setup
In this work we simplify the problem by using a static flow set, meaning that no flows enter or exit the network for the duration of the simulation. Three network loads are used: Low, Medium and High. The number of flows ranges from 50 to 300 in steps of 50, except for the high network load scenario, which is capped at 150 flows because network capacity is already exceeded at that point, with only 70% of the total requested flow rate being allocated. Five flow sets are generated for each network load. The flow data rate is generated using a normal distribution with mean (standard deviation) for the low and high network sets of 5 Mbps (0.25 Mbps) and 25 Mbps (2.5 Mbps), respectively. The flow data rate values for the low and high network load setups are chosen to represent High Definition (HD) and Ultra High Definition (UHD) video transmission, respectively. The medium load setup has an equal number of flows with a low and a high network load profile. Under all scenarios considered here, the flows' source and destination nodes are selected randomly with the selection probability directly proportional to the node's total outgoing or incoming capacity, respectively. Flows with identical source and destination nodes are not allowed.
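The flow-set construction described above can be sketched as follows; the node-capacity dictionaries and the function name are hypothetical and serve only to make the selection rules explicit.

import random

def generate_flows(n_flows, out_capacity, in_capacity, mean_mbps, sd_mbps):
    # out_capacity / in_capacity: node id -> total outgoing / incoming capacity.
    nodes = list(out_capacity)
    out_w = [out_capacity[v] for v in nodes]     # source picked proportionally to outgoing capacity
    in_w = [in_capacity[v] for v in nodes]       # destination picked proportionally to incoming capacity
    flows = []
    while len(flows) < n_flows:
        src = random.choices(nodes, weights=out_w)[0]
        dst = random.choices(nodes, weights=in_w)[0]
        if src == dst:                           # identical source and destination not allowed
            continue
        rate = max(0.0, random.gauss(mean_mbps, sd_mbps))   # e.g. 5 (0.25) or 25 (2.5) Mbps
        flows.append((src, dst, rate))
    return flows

caps = {"A": 100.0, "B": 40.0, "C": 60.0}
print(generate_flows(4, caps, caps, 5.0, 0.25))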
Solver frameworks
Solutions to the LP formulations are found using the GLPK [35] library accessed through the LEMON [36] library interface. LEMON's GLPK interface has been updated to run the glp_exact function after running the glp_simplex function to improve numerical stability. The use of glp_exact is required because, otherwise, flows may be allocated very small negative data rates, even though a constraint is set such that solutions are only allowed to use positive values. The ERA designed in this work is implemented using the Distributed Evolutionary Algorithms in Python (DEAP) v1.3 [37] library.
Network simulations
Network simulations are carried out using the Network Simulator version 3.29 (ns-3) [38]. Custom devices are developed to simulate the required functionality of an SDN switch and the Per-Packet Flow Splitting (PPFS) switch developed in [26]. All switches are assumed to have unlimited buffers to eliminate the effect of packet loss caused by buffer overflow. Although this is unrealistic, this assumption simplifies the analysis of the network performance results. All flows are assumed to transmit at a Constant Bit Rate (CBR) with a data packet size of 590 bytes including all the necessary headers, with each TCP acknowledgement packet being 54 bytes long. Methods of shaping bursty traffic to have a profile similar to a CBR exist, with Szymanski [29] presenting one such method. Using the above packet sizes, and the assumption that TCP transmits an acknowledgement packet for every two data packets received [39], the TCP acknowledgement rate is α(g_i,j) = 0.0458 × g_i,j. The NewReno TCP congestion control mechanism is used [40]. Except for Open Shortest Path First (OSPF), applications transmit at the rate assigned by the routing algorithm, not the requested rate. Data rate transmission modification is possible because SDN allows for bidirectional communication between the routing algorithm hosted on the network controller and the application. OSPF lacks such functionality; thus, OSPF results are generated by setting the flows to transmit at their requested data rate. For each TCP connection/sub-flow created by the MPTCP protocol, the TCP transmit and receive buffer size is automatically adjusted such that it is large enough to support transmitting at the data rate assigned by the routing algorithm on that given path. The buffer size in bytes is calculated using the bandwidth-delay product [39], RTT × g_i,j / 8, where the Round Trip Time (RTT) is given in seconds and g_i,j is in bits per second. A minimum buffer size of 4096 bytes is set to match the value used by the TCP implementation in the Linux kernel. Under all scenarios presented here, the routing tables are populated before packet transmission starts, eliminating any routing protocol overhead. Due to the lack of a native ns-3 MPTCP protocol implementation, the TCP sub-flow generator and scheduler were developed, as these two blocks are sufficient to test the performance of the updated MPTCP protocol. The TCP sub-flow generator is the module that creates a number of TCP sessions, where the number of sessions to open, and which port numbers to use on each session, is specified by the routing algorithm. The developed MPTCP stochastic scheduler distributes packets between the different TCP sessions (each TCP session is equivalent to a path), where the split ratios are given by the routing algorithm. Due to time constraints, the shared congestion control algorithm is not implemented, and the MPTCP receiver applications are assumed to have infinite receiver buffers to avoid the need to implement MPTCP's acknowledgement mechanism to recover from packet losses caused by receiver buffer overflow. We do not see any reason why the shared congestion control used by MPTCP should negatively impact performance, given the use of a globally optimized routing solution. Packet loss and congestion caused by the dynamic network environment are handled by the underlying TCP sub-flows and by our developed MPTCP model.
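The per-sub-flow buffer sizing and the acknowledgement-rate bookkeeping reduce to two one-line calculations, sketched below. Writing the bandwidth-delay product as rate × RTT / 8 (bytes), floored at 4096 bytes, is a reconstruction of the equation lost in extraction and should be treated as an assumption.

def tcp_buffer_bytes(rate_bps, rtt_s, floor_bytes=4096):
    # Bandwidth-delay product in bytes, floored at the Linux-like minimum.
    return max(floor_bytes, int(rate_bps * rtt_s / 8))

def ack_rate_bps(data_rate_bps):
    # Reverse-path acknowledgement traffic: alpha(g_i,j) = 0.0458 * g_i,j.
    return 0.0458 * data_rate_bps

# Example: a 5 Mbps sub-flow over a path with a 40 ms round-trip time.
print(tcp_buffer_bytes(5e6, 0.040))   # 25000 bytes
print(ack_rate_bps(5e6))              # 229000.0 bit/s of acknowledgement traffic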
Network simulation results were generated by allowing the simulation to run for 120 seconds of simulation time.
Results
In the results that follow, Goodput is defined as the rate at which an application is able to generate or consume data. Delay is defined as the time taken from when the transmitting application sends a byte of data, to when the receiving application receives that same byte of data. Any time used waiting to deliver a block of data to the application in its correct order is included in the delay measurements. This setup is used to accurately represent the performance of an application using the protocols under test.
EA parameter selection
Finding the optimal parameter values for an EA is itself a multi-objective optimization problem and depends heavily on the algorithm's use case. Although several tests were carried out to ensure that the values chosen work well for a wide number of cases, as demonstrated by the results presented in this work, we do not claim that these are the optimal values. The EA parameter values need to be set so as to strike a balance between the rate of convergence and an even coverage of the optimal Pareto Front. On the one hand, the improvement between generations should not be minute, as this would require a very large number of generations before reaching a good enough approximation of the true Pareto Front. On the other hand, there should not be a huge gap between the current and previous population, as this may skip over solutions that make up the true Pareto Front. In addition, the solutions found by an EA need to be evenly spread over the entire Pareto Front. Having a number of solutions either grouped in a relatively small area, or heavily biased towards one objective, is a sign that the EA is not exploring all areas equally. The ideal set of EA parameters needs to offer a steady and gradual improvement with each generation until a satisfactory Pareto Front is generated. The EA parameter values are highly dependent on the design and function of the crossover and mutation operators. Due to the wide selection of such operators, only recommendations on what values to use exist, with most EA implementations favouring a high crossover and low mutation rate. The EA parameters used here are given in Table 3. In this work, the crossover probability (ω) is set to 0.9, as this value is commonly used with the NSGA-II algorithm [32] and has been found to work well in our setup. During the EA parameter selection process, the effect of changing a single parameter value on the EA's evolution progress was thoroughly analysed and tested before taking the final decision. An in-depth explanation of how each value was chosen is given in [28].
Choosing k
Two different path selection algorithms, KSP and KSREDP, are presented in this work. Both algorithms take a single parameter, k, which represents the maximum number of paths a flow is allowed to take. When choosing a value for k, a compromise needs to be reached between the algorithm's running time and path variety. To choose k, the total assigned network flow when solving the unconstrained Maximum Flow problem is compared with the solution of the PC-MFMC problem. The solutions to the two aforementioned problems are found using LP because of LP's optimality guarantee, meaning that any performance difference is solely attributable to the change in k value. The unconstrained Maximum Flow problem can be seen as the PC-MFMC problem where k = ∞. For the formulation of the unconstrained Maximum Flow problem, the reader is referred to Eq (11) in Section III of [31]. Fig 6 shows the total network flow allocated by the PC-MFMC problem as a fraction of that found by the unconstrained version. The results in Fig 6 confirm that the KSREDP algorithm is able to find solutions with a higher total network flow for a smaller k when compared to the KSP algorithm. The KSREDP algorithm is therefore better suited to scenarios that prioritize throughput over delay. However, this comes at an increased risk of worse delay performance: the KSREDP algorithm can at best match the delay performance of the KSP algorithm, never surpass it, and its solutions are more likely to offer worse delay performance than those of KSP. Using Fig 6 as a reference, there is a marginal difference in the allocated network rate between k = 5 and k > 5; therefore, in this work we set k = 5. Reaching the same network allocation rate as the unconstrained Maximum Flow when using the PC-MFMC problem is possible; however, this would require a large k value that would increase the complexity to the point of making running and testing the ERA infeasible.
Routing algorithm performance
Data rate allocation. The performance of the developed ERA is compared with the optimal solution provided by LP. The solutions generated by LP for the PC-MFMC problem are only truly optimal for a single objective; in our case, the maximization of the total network flow. For a fair comparison between the algorithms, the highest network flow solution found by the ERA is compared with that found by LP, for all the flow sets and network loads used here. Fig 7 shows the percentage of demand the ERA managed to satisfy when compared to the LP found solution. From the results, we can observe that the solutions found by the ERA match those found by LP very closely for a lightly loaded network. As the network load and number of flows increase, the difference in the satisfied demand between the solutions found by the ERA and LP increases by at most 7% and 8% when using the KSP and KSREDP algorithms, respectively. These results serve to show that even though the ERA is suboptimal, it manages to satisfy, on average, 98% of the demand reached by the optimal solution.
Hybrid. The Hybrid algorithm is a minor modification of the developed ERA, where the LP optimal solution is added to the initial population. Note that the population size remains equal to 800, including the LP optimal solution. There are three main advantages to this inclusion. First, thanks to the elitist nature of the NSGA-II algorithm, the Pareto Front generated by the Hybrid ERA is guaranteed to contain the LP found solution, or an alternative that has the same allocated total network flow but is better in any of the other objectives. Second, the Hybrid routing algorithm is able to provide multiple valid solutions. Having multiple valid solutions at the end of a run is beneficial as it allows the algorithm to explore the search space for all objectives without any bias towards any objective. Choosing a solution from the generated pool of solutions means that a compromise between the objectives has to be made; however, this compromise is made with the entire pool of generated solutions in view. This is very different from setting biases on the objectives before the algorithm even starts, which requires very deep knowledge of the area to get such biases right. More information on this can be found in [28,33], while examples of how the provision of multiple results is beneficial in a network setting can be found in [30]. Third, the convergence rate is improved, as shown in Fig 8, where the mean Euclidean distance for the Hybrid and non-Hybrid versions are compared. The mean Euclidean distance, M_A,B, between the sets of solutions on the Pareto Front in generations A and B, respectively S_A and S_B, is computed from the Euclidean distances d(a, b), where a and b are tuples of the normalized objective values for the algorithm used. The Hybrid algorithm reports a mean Euclidean distance of zero for the first few generations because the PC-MFMC solution dominates all the generated solutions. The obvious downside of such a Hybrid algorithm is that both the LP and EA algorithms have to be run. Fig 9 shows the generated Pareto Front for both the standard ERA and the Hybrid routing algorithm. From Fig 9 it can be observed that the Hybrid algorithm found a number of solutions that dominate those found by the standard ERA. However, the final Pareto Front of the Hybrid algorithm is much tighter than the one found by the non-Hybrid version. This is not ideal, as it shows that the Hybrid algorithm is not looking in all directions equally, but rather favouring solutions with a high network flow. The Hybrid algorithm was developed and tested during the last stages of this research; therefore, there was not enough time left to identify the cause of such a constricted Pareto Front. To try to identify the cause of this problem, one may start by looking at the entire evolution of the algorithm to determine at which stage the Pareto Front starts to close up. If this behaviour is noticed even at the early stages of evolution, the insertion of the LP optimal solution in the first generation might be premature. Having an optimal solution in the initial population might require tweaks to the ERA parameters to either increase or decrease the aggressiveness of the mutation operator.
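The convergence metric can be computed as sketched below. Because the exact formula was lost in extraction, averaging the distance from every solution in S_A to its nearest neighbour in S_B is an assumption about how the pairwise distances d(a, b) are aggregated.

import math

def mean_euclidean_distance(front_a, front_b):
    # front_a, front_b: lists of tuples of normalized objective values.
    def d(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return sum(min(d(a, b) for b in front_b) for a in front_a) / len(front_a)

# Example with two-objective fronts from consecutive generations.
print(mean_euclidean_distance([(0.90, 0.20), (0.70, 0.40)],
                              [(0.90, 0.20), (0.65, 0.45)]))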
Simulated network performance
ERA vs Hybrid vs OSPF. Fig 10 presents the network simulation results for the solutions marked in Fig 9. As expected, the Hybrid Maximum Flow (MF) solution has the best overall network Goodput performance when compared to all the other points chosen here. OSPF has the worst overall performance, both in terms of Goodput and mean application delay, even though all the flows are assigned some data rate. Note that OSPF's delay performance is kept in check thanks to the TCP congestion control mechanism that tries to avoid network congestion. Observe that the ERA MF solution has an overall better delay performance when compared to the Hybrid MF solution. This means that even though the ERA's Estimated Mean End-to-End Delay objective is an approximation, it correlates well with the actual delay performance. Allocated vs actual network performance. The main hypothesis of this work is that using routing solutions generated by taking the network status into account leads to an increase in network performance, both in terms of the total throughput carried over the network and resource usage. Given that the routing solution has the network status available during its computation, the network is expected to be devoid of congestion, because data rate allocation takes into account the link capacities and all other flows using the network. Therefore, one expects the rate allocated by the routing algorithm and the actual data rate seen by the application to be very close to each other. To measure the relationship between the allocated and the actual rate, the flow satisfaction metric is used. A flow's satisfaction rate represents the fraction of Goodput received compared to that allocated by the routing algorithm. For example, a flow allocated 10 Mbps and achieving a throughput of 10 Mbps has a 100% flow satisfaction rate, whereas if it only achieves a throughput of 5 Mbps, the flow satisfaction rate drops to 50%. Fig 11 shows the distribution of the flow satisfaction rate for all the flows in a given flow set. The tighter the distribution and the closer it is to 100%, the closer the actual network performance is to the one allocated by the routing solution.
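The flow satisfaction metric is a plain ratio; the short sketch below mirrors the worked example above, with hypothetical flow names.

def satisfaction_rate(goodput_mbps, allocated_mbps):
    # Fraction of the allocated rate that the application actually received.
    return goodput_mbps / allocated_mbps if allocated_mbps > 0 else 1.0

allocated = {"f1": 10.0, "f2": 10.0, "f3": 25.0}
goodput = {"f1": 10.0, "f2": 5.0, "f3": 24.3}
print({f: satisfaction_rate(goodput[f], allocated[f]) for f in allocated})
# f1 -> 1.0 (100%), f2 -> 0.5 (50%), f3 -> 0.972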
It is well known that TCP and Per-Packet multipath routing solutions do not work well with each other. Therefore, it comes as no surprise that when using standard TCP and PPFS, the network performance is nowhere near that promised by the routing algorithm. This can be verified by looking at the columns in Fig 11 that use the PPFS method to deploy Per-Packet multipath. What may not be so obvious is the impact the inclusion of acknowledgement flows has on the flow satisfaction rate. Comparing the flow satisfaction rate distribution between the setup that considers acknowledgement rates and the one that does not, the negative effects of ignoring acknowledgements on the flows' satisfaction rates are clear. Ignoring acknowledgement flows leaves the network open to congestion as soon as acknowledgement flows start. If acknowledgement flows are not taken into consideration when generating the routing solution, not enough capacity may be left on the network links to route this additional traffic. Therefore, as soon as the acknowledgement flows start, some links may be oversubscribed. Fig 11 also shows that the modifications made to the MPTCP protocol allow TCP consumer applications to benefit from the advantages offered by a Per-Packet multipath routing algorithm, with the vast majority of flows having a 100% flow satisfaction rate. Even though only one particular flow set is shown in Fig 11, similar conclusions and distributions were seen under different flow sets. Based on these results, we have no reason to believe that the results and conclusions drawn in this section fail to apply under scenarios different from the ones considered here.
Effect of path selection algorithm. The path selection affects the properties of the routing solution and the performance that can be extracted from the network. As explained earlier, the KSP algorithm is suited for applications that prioritize delay over throughput, while the opposite is true for the KSREDP algorithm. The KSREDP algorithm is preferred by throughput oriented flows as it has been designed to increase link diversity. As the number of different links available to a flow increases, so does the probability of assigning a flow a higher data rate due to the fact that the paths do not have common links. Having said this, using the KSREDP algorithm is not guaranteed to return better performance than the KSP as this is highly dependent on the network topology and flow set. On the other hand, when using the KSP algorithm, it is guaranteed that the shortest k paths are made available to a flow. This means that paths chosen by the KSP algorithm will always outperform or at least match the paths returned by the KSREDP algorithm, in terms of delay. The paths returned by the KSREDP algorithm are not guaranteed to result in solutions that offer higher total network throughput.
The network topology used in this work is highly interconnected; therefore, there is a good chance that the paths found by the KSREDP path selection mechanism do not share many links between them. This link variation brought forward by the KSREDP algorithm allows the routing algorithm to find solutions with a higher amount of total allocated data rate when compared to their KSP counterparts. However, the higher network capacity offered by the KSREDP algorithm comes at the cost of worse delay performance when compared to the KSP algorithm. The network simulation results shown in Fig 12 confirm these statements. The network performance results shown in Fig 12 only use routing solutions to the PC-MFMC problem obtained using LP. Only LP solutions are used here due to LP's optimality guarantee, such that any performance difference is solely attributable to the different path selection methods used and not to the randomness of the algorithm. Based on the results shown in Fig 12, flows that prefer throughput to delay are better off using the KSREDP path selection algorithm. On the other hand, flows that require the best delay performance should favour the KSP algorithm. The scale of the performance difference between the two path selection algorithms relies heavily on the network topology. Developing and testing a solution where each flow gets to choose its own path selection mechanism based on its priorities is an avenue of research worth investigating further.
Conclusion
The aim of this research has been to increase the efficiency of already deployed networks by increasing throughput and minimizing latency by developing a globally optimized, multipath capable routing algorithm for networks using a centralized network architecture. As a routing algorithm is usually tasked with optimizing for multiple, often conflicting objectives, using LP to find such a solution is not possible. To overcome this limitation, a multi-objective ERA is proposed that makes exclusive use of Per-Packet multipath. Due to the well known negative side effects Per-Packet multipath exhibits when deployed on a network using TCP, TCP is replaced by a modified version of MPTCP. The combination of the modified MPTCP protocol, and the inclusion of the TCP acknowledgement flows when generating a routing solution, guarantee, with a very high probability, that a flow reaches the data rate assigned to it by the routing algorithm. Even though Evolutionary Algorithms (EAs) are inherently suboptimal, the ERA proposed in this work is able to find solutions that are, under all scenarios considered here, on average, 2% off of the optimal solution found using LP.
All the network simulations presented here assume switches are equipped with an infinite buffer size. Network simulations with finite buffers would provide a closer-to-reality depiction of the performance gap between the devised ERA and the OSPF solution. We conjecture that with the use of finite buffers, the gap between OSPF and the developed system will increase due to the packet drops caused by buffer overflow in times of network congestion, the reason being that TCP treats a packet drop as a sign of congestion and immediately reduces the transmission rate. In the current setup, due to infinite buffers, TCP is allowed to adjust to the ever-increasing RTT.
A limitation of this work is the assumption of a static flow set. For this routing algorithm to be deployed in practice, it has to be updated to handle a dynamic flow set. (Fig 12 caption: Network performance comparison between the path selection algorithms. A: Probability that a flow achieves at least a given average Received Goodput in Mbps, using network simulations; the total received Goodput achieved by each algorithm is shown in parentheses; the probabilities do not add up to one as unassigned flows are set with a delay value equal to infinity. B: Probability that a flow experiences at most a given average end-to-end delay at the application layer, using network simulations; the mean application delay achieved by each algorithm is shown in parentheses. https://doi.org/10.1371/journal.pone.0278317.g012) When using a dynamic flow set, the problem of network instability is one of the main problems that needs to be tackled. To retain network stability, an additional objective needs to be added that minimizes the number of route changes from one routing update to the next. Additionally, the ERA may be set up so that the initial population is equivalent to the final population of the last optimization run. Alternatively, one can use an entirely random population and insert the chosen solution as part of the initial population.
Although no design decision has been made on the basis of the network topology used here, the fact remains that the systems developed here have been thoroughly tested on a single network topology. Every effort has been made to ensure that the routing algorithm's design is independent of the topology it is deployed on; however, tests on other network topologies are required to confirm this statement. Having said this, the topology chosen in this work has enough complexity in terms of the number of nodes, links and paths between source-destination pairs to give us enough confidence to state that, with very high probability, the solutions and algorithms proposed here will work on other topologies without any major modifications. | 14,107.8 | 2023-04-19T00:00:00.000 | [
"Computer Science",
"Engineering"
] |
Utility of sequencing for ATP6AP1 and ATP6AP2 to distinguish between atypical granular cell tumor with junctional component and melanoma
Granular cell tumor (GCT) is an S100+ neoplasm with atypical and malignant variants. Similar to melanocytic neoplasms, these tumors make nests and can have junctional components, raising a differential diagnosis of melanoma. Nevi and melanomas may also have granular cell cytoplasm. MelanA is useful in distinguishing melanocytic from granular cell lineage, but increasingly MelanA/SOX10-negative melanomas have been recognized by correlation with molecular methods.
Granular cell tumor (GCT) is a tumor of presumed schwannian histogenesis with atypical and malignant variants. The characteristic histopathologic finding is granular cytoplasm. Neoplasms in the granular cell lineage make nests, similar to melanocytic neoplasms. Immunostains for S100 and SOX10 are typically positive, in keeping with schwannian histogenesis. More specific melanocytic markers such as MelanA and HMB45 are typically negative. Criteria have been proposed for distinguishing benign, atypical, and malignant GCTs1,2; however, these are difficult to apply consistently, and there are case reports of GCTs with benign histopathologic features going on to metastasize. A multifocal benign variant also exists.3 In addition, there are reports of atypical and malignant GCTs with junctional components that raise a differential diagnosis of melanoma.4,7-11 Melanoma-specific immunostains such as MelanA and HMB45 are usually positive in melanoma and negative in GCT,12 but also have limited sensitivity for melanoma, in the range of 75%-92%, and as low as 17%-21% in spindle cell/desmoplastic melanoma.13 MelanA can rarely be positive in GCTs.12 Microphthalmia-associated transcription factor is positive in the majority of GCTs and cannot be used as a marker to discriminate between GCTs and melanocytic tumors.12 Other markers for GCT such as PGP9.5, inhibin, and calretinin are not specific and may be positive in both GCTs and melanoma.12,15-20 Recently, recurrent loss-of-function mutations in the vacuolar ATPase complex genes ATP6AP1 and ATP6AP2, which regulate endosomal pH, have been identified in 72% of GCTs, including atypical and malignant GCT.21 These were frameshift mutations and premature stop codons. In addition to mutations in ATP6AP1 and ATP6AP2, a subsequent study found mutually exclusive mutations in six different genes encoding various components of the vacuolar ATPase complex.22 In practical terms, the potential morphologic overlap of GCT and melanoma is infrequently a diagnostic problem, as most GCTs feature banal, benign-appearing cytologic features. We have recently encountered several cases with sufficient morphologic overlap to seek additional molecular means of establishing a definitive diagnosis. This is important because an atypical GCT and a melanoma of similar thickness would be expected to have distinctly different prognosis, clinical work-up, and therapy.
| METHODS
All available cases of GCT, atypical or malignant GCT in the archives of the Dermatopathology Section, Department of Pathology, Indiana University from 2010 to 2022 were retrieved and reviewed. Immunohistochemical studies were performed on select cases with monoclonal antibodies against S100 (Dako), Sox-10 (Cell Marque), MelanA (Dako), and PGP9.5 (Cell Marque) on a Dako Omnis stainer using formalin-fixed paraffin-embedded tissue sections cut at 4 μm. DNA sequencing of 648 genes and full-transcriptome RNA sequencing were performed on the two index cases at Tempus Labs, Chicago, IL, as previously described.23 This study was conducted under an institutional IRB and did not require ethical approval.
| Index case 1
A 57-year-old man with no relevant previous history presented with a solitary lesion on his back. Biopsy demonstrated a neoplasm with a junctional component, nesting, maturation in the deep dermis, granular cytoplasm, and mitotic figures at 1/10 HPF (Figure 1A,B). Some spindling was also present. There was no necrosis. S100 (Figure 1C,D) and SOX10 immunostains were positive. However, the assay also includes RNA whole transcriptome sequencing, and, on further interrogation of the RNA data, a frameshift mutation conferring loss of function was identified in ATP6AP1 (pN406fs), with a variant allele fraction of 40%. A diagnosis of atypical GCT was therefore established, with a recommendation for re-excision.
| Index case 2
A 32-year-old woman with no relevant previous history presented with a solitary lesion on her abdomen. Biopsy revealed a neoplasm with pseudo-epitheliomatous hyperplasia, nesting, and granular cytoplasm (Figure 2A,B). There was mild nuclear pleomorphism, and mitotic figures at 1/10 HPF were visualized. Immunostaining for S100 was positive (Figure 1C) and demonstrated nests of cells closely approximating the epidermis, but not convincing junctional nests as were seen in index case 1. MelanA (Figure 1D), as well as HMB45 and tyrosinase, were negative. NGS using the Tempus XT platform was pursued; the DNA study did not identify melanoma-associated driver mutations or other pathogenic mutations. The RNA data were again further interrogated and demonstrated a premature stop codon in ATP6AP2 (pY326*).
| Additional cases of GCT with junctional component
Hematoxylin and eosin (H&E) stains and immunostains from all additional cases of cutaneous GCT diagnosed in the period 2010-2022 with available slides were reviewed. There were 19 GCTs, 3 atypical GCTs, and 1 malignant GCT, for a total of 23 cases. No additional true junctional components were identified; however, in one case, nests of granular cells were present closely approximating an epidermis showing pseudoepitheliomatous hyperplasia (Figure 3). Maturation of the dermal component was not identified in any additional case.
| DISCUSSION
Neoplasms of granular cell lineage may have atypical and malignant variants. There have been reports of junctional nested components in GCTs.4 This, along with the immunoprofile (S100+, SOX10+), can raise a differential diagnosis of melanocytic lineage. The issue can be important in the setting of atypical GCT, where a differential diagnosis of melanoma with granular cell change is possible. We encountered two atypical GCTs with junctional nesting, and with dermal maturation in one. We confirmed a diagnosis of granular cell lineage by means of RNA whole transcriptome sequencing, identifying a frameshift mutation in ATP6AP1 (pN406fs) in index case 1 and a premature stop codon in ATP6AP2 (pY326*) in index case 2. Other, more limited molecular studies with potential utility in this differential would include BRAF and NRAS sequencing or a BRAF V600E immunostain. If any of these studies were positive, it would imply a diagnosis of melanoma, while negative studies would not fully exclude melanoma.
| Specificity of ATP6AP1/2 frameshift and premature stop codon mutations
Loss-of-function mutations in ATP6AP1/2 have been identified in 72% of 82 tumors of granular cell lineage in the seminal paper of Pareja et al.21 Both of these genes are located on the X chromosome, and in females the mutation is present on the active/non-methylated X chromosome, so that a single inactivating mutation would be sufficient to cause its complete loss of function.21 In all patients tested, the mutations were detectable by RNA sequencing. A search of The Cancer Genome Atlas identified mutations in ATP6AP1 in only 0.27% of 6285 unrelated tumors, and in ATP6AP2 in only 0.25% of cases. Importantly, the mutations identified in these cases were predominantly missense single nucleotide variants (SNVs), with only 0.04% frameshift or truncating mutations for ATP6AP1 and 0.02% for ATP6AP2.21 We also searched the Cosmic database for mutations in ATP6AP1/2 in cutaneous melanoma. Of 1784 skin samples tested for ATP6AP1, there were 24 melanomas with mutations identified in ATP6AP1, but these were all SNV or silent mutations, with no frameshift or premature stop codons identified. Of 1784 skin samples tested for ATP6AP2, there were 22 melanomas with SNV or silent mutations identified. A single melanoma had a premature stop codon in ATP6AP2, pS95*, but this lesion also had a melanoma-associated driver mutation in NRAS Q61K and two pathogenic mutations in ARID2. Frameshift and premature stop codons in ATP6AP1 and ATP6AP2 therefore appear to be highly specific for granular cell lineage, and capable of excluding melanoma in the absence of known melanoma-associated driver mutations.
We identified a junctional component in GCT in our two index cases and 1 of 23 additional cases of GCT retrieved from our files.
Our literature review identified an additional report of a junctional component in three GCTs, including an atypical GCT.4 We found that the differential diagnosis with melanoma is not typically an issue in the setting of a bland granular cell morphology. The differential diagnosis with melanoma becomes an issue in the context of atypical or malignant GCT. In one of our two index cases, we found that maturation was present, further mimicking a melanocytic neoplasm. To our knowledge, this does not appear to have been commented on previously in the literature. Histopathologic criteria for atypical and malignant GCT have been proposed1,2 but are hard to apply. In addition, there are reports of benign-appearing GCTs with metastasis. For this reason, there has been interest in using NGS to help make these distinctions, with mutations identified in TGFB and MAPK pathways in malignant GCT.24 We did not identify any recurrent DNA changes distinguishing atypical GCT from the benign variant in our cases. It might be feasible to distinguish these entities based on RNA expression, but currently there are insufficient data.
Pareja et al. reported results of immunofluorescence using antibodies to ATP6AP1 and ATP6AP2 and demonstrated loss of expression in GCTs with loss-of-function mutations in these genes.21 This raises the possibility that the same antibodies could be used to demonstrate granular cell lineage by immunoperoxidase on a clinical basis.
Whether this could have utility in the differential with melanoma would need additional study.
| CONCLUSION
We present two index cases of atypical GCT with a junctional component and maturation in one case, raising a differential diagnosis of melanoma with granular cell change.We reviewed a larger series of 23 GCTs from our archives and identified a junctional component in one additional case.We show the potential utility of sequencing for ATP6AP1/2 in excluding melanoma, and the specificity of loss of function mutations in these genes in the differential diagnosis with melanoma.
S100 and PGP 9.5 (Figure 1E,F) both identified the junctional component. S100 and PGP 9.5 both demonstrate maturation, meaning smaller nests and single cells toward the deep margin of the lesion. MelanA, HMB45, and tyrosinase immunostains were negative. A differential diagnosis of melanoma with granular cell change versus atypical GCT was considered. Initial attempts to confirm or exclude the possibility of melanoma with granular cell change included sequencing BRAF and NRAS using the Therascreen platform (Qiagen). These studies were both negative, and next generation sequencing (NGS) using the Tempus XT platform was then employed. DNA sequencing demonstrated no known melanoma drivers. Drivers known to be associated with granular cell lineage in 70% of cases (ATP6AP1, ATP6AP2) are not sequenced in the DNA capture assay format of the Tempus assay.
FIGURE 3
Additional granular cell tumor identified by review of case archives, with nests of granular cells closely approximating an epidermis showing pseudoepitheliomatous hyperplasia. (A) Hematoxylin and eosin, 200×. (B) S100 immunostain, 200×. | 2,417.8 | 2023-08-11T00:00:00.000 | [
"Medicine",
"Biology"
] |
INTERPOLATING A LOW-FREQUENCY TIME TO A HIGH-FREQUENCY ONE: PROGRAMING AND ESTIMATION PROCEDURE FOR MATLAB
This study provides estimation procedures and statistical package programming for the temporal disaggregation of time series data. That is, the method is used to disaggregate low frequency data into higher frequency data. Temporal disaggregation can be performed with one or more high frequency indicator series.
INTRODUCTION
Data is a crucial part of responsible research. Whenever investigators or research teams start new research, they should be concerned about issues related to data. If you have a clear plan for your data at the beginning of the research, you save time and effort later on. Also, you are assured that the data you produce will be preserved in a clear, usable format.
Research data are an essential and costly output of the scholarly research process, across all disciplines. They are an important part of the evidence necessary to evaluate research results, and to reconstruct the events and processes leading to them. A common problem for researchers and analysts is not having a series at the preferred frequency. For instance, instead of monthly output (gross domestic product: GDP), they only have either quarterly or annual GDP; sometimes they do not even have quarterly GDP. Instead of a daily stock market index, they only have a weekly index. While there is no way to completely make up for the missing time series, there are some useful techniques. That is, using one or more high frequency data series, the low frequency series can be disaggregated into a high frequency series. For example, quarterly imports could help disaggregate annual GDP, and/or monthly investment and monthly exports could help disaggregate annual output.
In order to maintain the reliability of research, accurate data collection is necessary regardless of the field of study or the preference for defining data (quantitative, qualitative). Both the selection of an appropriate data collection/disaggregation method and clearly delineated instructions for its correct use are essential to reduce the likelihood of errors occurring. The primary motivation for preserving data integrity is to support the detection of errors in the data collection process, whether they are made intentionally or not. Most, Craddick, Crawford, Redican, Rhodes, Rukenbrod, and Laws (2003) explain 'quality assurance' and 'quality control'1 as two approaches that can preserve data integrity and ensure the scientific validity of study results. Each approach is implemented at different points in the research timeline (Whitney, Lind, and Wahl, 1998). Several researchers have considered the above-mentioned quality approaches when interpolating low frequency data into high frequency data. For example, Chow and Lin (1971) and Goldberger (1962) used the best linear unbiased interpolation method.
Although Litterman (1983), Fernandez (1981) and Chow and Lin (1971) use one or several indicators and perform a regression on the low frequency series, Litterman (1983) and Fernandez (1981) deal with non-cointegrated series, while Chow and Lin (1971) is suited for cointegrated series. Alternatively, Dagum and Cholette (2006) disaggregate a series without an indicator. They are primarily concerned with movement preservation, generating a series that is similar to the indicator series whether or not the indicator is correlated with the low frequency series. Pavía-Miralles (2010) classifies and reviews the procedures, provides an interesting discussion of the history of the methodological development in this literature, and makes it possible to identify the assets and drawbacks of each method, to comprehend the current state of the art on the subject, and to identify the topics in need of further development.
All of the above-mentioned techniques ensure that either the first or the last value (or the sum or the average) of the resulting high frequency series is consistent with the low frequency series. It has been stated that "they can deal with situations where the high frequency is an integer multiple of the low frequency (e.g. years to quarters, weeks to days), but not with irregular frequencies (e.g. weeks to months)". The interpolation methods are widely used in official statistics packages; that is, researchers employ different software packages to perform temporal disaggregation. For example, there is an R extension, Quilis (2012) provides a Matlab extension, Doan (2008) a RATS extension, and Barcellan et al. (2003) employ the Ecotrim extension.
Only a very few studies (e.g. Quilis, 2012) provide programming to interpolate low frequency data into high frequency data using Matlab software, and only for an early version of it (Matlab 7.6 [R2008a]). Therefore, in this paper we derive the best linear unbiased predictor of an individual drawing of Y (the low frequency series) given X (where X may be one or more high frequency series) in the linear regression model, using the Matlab 7.14 (R2012a) version. To that aim, we describe the estimation procedure and the manual Matlab programming. 1 Quality assurance refers to the activities that take place before data collection begins, and quality control to the activities that take place during and after data collection, whether the data are primary or secondary.
Section 2 discusses the framework of the interpolation method. Section 3 presents the estimation procedure and programming using an example. Finally, the key results are presented and discussed in Section 4.
THE INTERPOLATION METHOD
The purpose of interpolation is to find an unknown high frequency series (say Y: monthly GDP) whose averages, sums, first or last values are consistent with a known low frequency series (say annual or quarterly GDP). In order to estimate monthly GDP, one or more other high frequency indicator variables can be used. We collect these high frequency series in a matrix X. Hence, the monthly observations of a series can be estimated using either a bivariate or a multiple regression relationship. Following the Chow and Lin (1971) approach, the generalized linear regression model is given by2: Y = Xβ + u, (1) where Y is a 12n × 1 (or T × 1) vector of regressand observations, X is a 12n × K (or T × K) matrix of regressor observations, β is a K × 1 vector of coefficients, and u ~ N(0, W). Also, n is the number of years, K is the number of high frequency indicator series, the monthly sample length is T = 12 × n, and W is the 12n × 12n (or T × T) positive-definite variance-covariance matrix of the disturbances. For the purpose of statistical analysis, the indicator series X is treated as fixed in equation (1).
Equation (1) describes the relation between the regressand and the regressors over the sample period of 12n months, but we do not have the monthly series of Y; instead, we have annual data only. Therefore, to convert the 12n monthly observations into n annual observations, we need to transform this equation by pre-multiplying it by a compatibility matrix. The transformed equation is given by: CY = CXβ + Cu. (2) 2 Throughout this study we speak about estimating a monthly series given the annual data of that series and monthly data of indicator series. We also provide the Matlab program for estimating a monthly series given annual and quarterly data of that series and monthly indicator series.
where C is an n × 12n matrix. In this case the averages, first values, or sums of the monthly series are consistent with the known low frequency series (i.e. Y). Therefore, for distribution and interpolation, the C matrix can take the following forms: C_A = (1/12) I_n ⊗ (1, 1, ..., 1), C_S = I_n ⊗ (1, 1, ..., 1), and C_F = I_n ⊗ (1, 0, ..., 0), each row spanning the 12 months of a year, where C_A represents that the average monthly values are consistent with the regressand, C_S denotes that the sums of the monthly values are consistent with the regressand, and C_F indicates that the first month value is consistent with the regressand. For temporal disaggregation we can use any one of these three alternatives.
Since all the data series in equation (2) are on an annual basis, we are now able to estimate this model by the ordinary least squares (OLS) method. The estimator of β is given by β̂ = (Ẋ′Ẋ)⁻¹ Ẋ′Ẏ, where Ẋ = CX and Ẏ = CY are based on the annual data, and Ŵ = CWC′ = σ²CC′. Now the problem is how to estimate the monthly observations on the dependent variable. To that aim, assume that we estimate a vector z, which is identical with the monthly series Y in the case of temporal disaggregation. Therefore, the regression model is given by: z = X_z β + u_z, where X_z and u_z are identical with X and u of equation (1) for interpolation and distribution. Using some 12n × n matrix A, a linear unbiased estimator of z takes the form ẑ = AẎ with E(ẑ − z) = 0. After solving this, the estimated value of z is: ẑ = X_z β̂ + W_z Ŵ⁻¹ u̇, (6) where u̇ = Ẏ − Ẋβ̂ and W_z is the covariance between the monthly and the annual disturbances. For temporal disaggregation, we assume that W_z = WC′, where the definition of C is given by either C_F, C_A or C_S.
As in Chow and Lin, three assumptions were adopted so that equation (6) can be computed without difficulty. The first case is to assume that the monthly regression residuals are serially uncorrelated. In this case, to estimate the monthly observations of the dependent variable, one can assume that the term W_z Ŵ⁻¹ u̇ = C_F′ u̇ in equation (6), which assigns the regression residual for any year to the first month of that year; alternatively, W_z Ŵ⁻¹ u̇ = C_A′ u̇ in equation (6) assigns the regression residual for any year to the average values of the 12 months of that year, and W_z Ŵ⁻¹ u̇ = C_S′ u̇ in equation (6) assigns the regression residual for any year to the sum of the 12 months of that year. The second case is to assume that the monthly residuals follow a first order autoregression, u_t = αu_{t−1} + ε_t with E(ε_t ε_s) = θ_{ts} σ²; in this case, for the interpolation we need to construct W from the autoregressive parameter α. In the third case, to estimate W, we assume that although the monthly residual series is serially uncorrelated, the variances are proportional to a certain linear combination of the independent variables or to a known function of a regressor. Then W will be diagonal and proportional to a given matrix.
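To make the algebra above concrete, the following is a compact NumPy illustration of the first case (serially uncorrelated monthly residuals, W = σ²I). It is not the paper's Matlab program: the aggregation matrices follow directly from the definitions of C_A, C_S and C_F, while the estimator follows the standard Chow-Lin construction and relies on the equation forms reconstructed above, so it should be read as a sketch under those assumptions.

import numpy as np

def make_C(n, kind="first", s=12):
    # Aggregation matrix C of size n x (s*n): 'first', 'average' or 'sum' consistency.
    if kind == "first":
        row = np.zeros(s); row[0] = 1.0          # C_F: first month of each year
    elif kind == "average":
        row = np.full(s, 1.0 / s)                # C_A: average of the 12 months
    else:
        row = np.ones(s)                         # C_S: sum of the 12 months
    return np.kron(np.eye(n), row)

def disaggregate(y_annual, x_monthly, kind="first"):
    # Estimate the monthly series z given annual y and monthly indicators x.
    n = y_annual.shape[0]
    C = make_C(n, kind)
    X_dot = C @ x_monthly                        # annualized indicators
    W_dot_inv = np.linalg.inv(C @ C.T)           # (CWC')^{-1} up to sigma^2 when W = sigma^2 I
    beta = np.linalg.solve(X_dot.T @ W_dot_inv @ X_dot,
                           X_dot.T @ W_dot_inv @ y_annual)
    u_dot = y_annual - X_dot @ beta              # annual regression residuals
    return x_monthly @ beta + C.T @ W_dot_inv @ u_dot

# Tiny example: 3 years of annual GDP and two monthly indicators over 36 months.
rng = np.random.default_rng(0)
x = rng.normal(size=(36, 2))
y = np.array([10.0, 11.5, 12.2])
z = disaggregate(y, x, kind="average")
print(np.allclose(make_C(3, "average") @ z, y))  # the monthly averages reproduce the annual data

By construction, applying C to the estimated monthly series returns the observed annual series exactly, which is the consistency property that all three methods share.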
THE ESTIMATION PROCEDURE AND PROGRAMMING
Let us assume that we have annual data on gross domestic product (GDP), which is taken as the dependent variable (named y). Let x be the monthly data matrix, which contains the explanatory variables. Following the methodology described in Section 2, we now start to write the Matlab programming to interpolate the GDP data from an annual series to a monthly series.
Before starting the programming, we should have the data in a proper format. Hence, since the number of observations differs between the dependent and explanatory variables, keep two separate excel/txt data files: one for the dependent variable (the low frequency data series) and another for the explanatory variable(s) (the high frequency indicator series). First, we have to load the data files into the Matlab software. To do that, follow as below: clear all; % will clear the memory of the work file and start freshly format bank; % or you can write "format short"/ "format long" as well [num, txt, raw] = xlsread('explanatory.xlsx'); % num is initialized with all the numbers % txt is initialized with all the text % raw is a cell matrix with all the numbers & text numbers=cell2mat(raw(2:end,2:end)); % This returns the matrix with all the numbers headings=cell2mat(raw(1:1,1:end)); % This returns the headings of your matrix text=raw(2:end,1); % This returns the first column. data1=numbers(:,:); % Defining the monthly data as a matrix data2=xlsread('gdp.xlsx'); % loading the annual data to the Matlab file Second, we assign the row and column sizes for the data series. To do that, follow as below: [p1 q1]=size(data1); % defining the rows and columns for data1 [p2 q2]=size(data2); % defining the rows and columns for data2 Third, define the dependent and independent variables in matrix/vector form: x=data1(:,:); % defining explanatory variables as a matrix y=data2(:,2); % defining the dependent variable as a vector Fourth, since the variables y and x have different frequencies, we now construct the matrix "C" to convert the regression model (1) from monthly to annual, to maintain a consistent number of observations. In this case, we use the C_F matrix, in which the first month value of a particular year is consistent with the annual series.
The programming for the C_A case proceeds in the same way. The first three steps (loading the xlsx data files into Matlab, assigning the row and column dimensions of the data series, and defining the dependent and independent variables in matrix/vector form) are identical to the commands listed above. In the fourth step we instead construct the C_A matrix, in which the average of the 12 monthly values equals the low-frequency figure, to convert equation (1) into an annual-series equation; a sketch of all three conversion matrices is given below.
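The listing above stops before the conversion matrix is actually formed. As a minimal sketch of this fourth step (written in Python/NumPy rather than Matlab, and assuming the 34 annual observations of the 1978-2011 sample used in the results section), the three conversion matrices can be built as follows.

import numpy as np

p2 = 34                                   # number of annual observations (1978-2011)
I = np.eye(p2)

C_F = np.kron(I, np.eye(12)[0])           # first-month value matches the annual figure
C_A = np.kron(I, np.full(12, 1.0 / 12))   # average of the 12 months matches the annual figure
C_S = np.kron(I, np.ones(12))             # sum of the 12 months matches the annual figure

# Each matrix maps a 12*p2 monthly vector to a p2 annual vector, so the
# monthly regression model (1) can be converted into the annual model (2).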
ESTIMATION RESULTS
We used three methods, namely the first-month value, the average monthly value and the sum of the monthly values, to disaggregate the low-frequency data into high-frequency data following the Chow and Lin (1971) approach. There is indeed a one-to-one relationship between the initial data and the interpolated data for all three methods. The results are given below. Figures 1, 2 and 3 show the monthly GDP data from 1978M1 to 2011M12 after the temporal disaggregation, and the second panel of each figure represents the comparison of the original low-frequency data (i.e., the original annual GDP) with the interpolated GDP data. According to the first panel, we can observe a similar pattern across the three methods regardless of the form of the matrix C, that is, the first-month value (C_F), the average monthly value (C_A) or the sum of the monthly values (C_S), used to convert regression model (1) to (2). This implies that our estimates satisfy the best linear unbiased property shown in Chow and Lin (1971).
In our example, we actually know the true data on annual GDP, so we can compare the interpolated values to the true values. With an indicator series, the Chow-Lin procedure produces series with a one-to-one relationship to the annual data for all three methods (panel 2 of Figures 1, 2 and 3). This is, of course, due to the fact that in this example our estimates satisfy the best linear unbiased property.
CONCLUDING REMARKS
This study offers an estimation procedure and Matlab programming to disaggregate a low-frequency time series into a higher-frequency series, such that either the first value, the average or the sum of the resulting high-frequency series is consistent with the low-frequency series. Although temporal disaggregation can be performed with or without the help of one or more high-frequency indicators, here we used more than one high-frequency indicator series for the disaggregation. If good indicators and estimation procedures are at hand, the resulting series may be close to the true series. The second panels of Figures 1, 2 and 3 support this statement: we found a one-to-one relationship between the resulting series and the true series for all three methods, suggesting that empirical researchers can use any of these methods to disaggregate their data from low frequency to high frequency and proceed with their work. | 3,551.8 | 2014-12-30T00:00:00.000 | [
"Computer Science",
"Mathematics"
] |
Integrated multimodal artificial intelligence framework for healthcare applications
Artificial intelligence (AI) systems hold great promise to improve healthcare over the next decades. Specifically, AI systems leveraging multiple data sources and input modalities are poised to become a viable method to deliver more accurate results and deployable pipelines across a wide range of applications. In this work, we propose and evaluate a unified Holistic AI in Medicine (HAIM) framework to facilitate the generation and testing of AI systems that leverage multimodal inputs. Our approach uses generalizable data pre-processing and machine learning modeling stages that can be readily adapted for research and deployment in healthcare environments. We evaluate our HAIM framework by training and characterizing 14,324 independent models based on HAIM-MIMIC-MM, a multimodal clinical database (N = 34,537 samples) containing 7279 unique hospitalizations and 6485 patients, spanning all possible input combinations of 4 data modalities (i.e., tabular, time-series, text, and images), 11 unique data sources and 12 predictive tasks. We show that this framework can consistently and robustly produce models that outperform similar single-source approaches across various healthcare demonstrations (by 6–33%), including 10 distinct chest pathology diagnoses, along with length-of-stay and 48 h mortality predictions. We also quantify the contribution of each modality and data source using Shapley values, which demonstrates the heterogeneity in data modality importance and the necessity of multimodal inputs across different healthcare-relevant tasks. The generalizable properties and flexibility of our Holistic AI in Medicine (HAIM) framework could offer a promising pathway for future multimodal predictive systems in clinical and operational healthcare settings.
INTRODUCTION
Artificial intelligence (AI) and machine learning (ML) systems are poised to become fundamental tools in next-generation clinical practice and healthcare operations 1 .Such anticipated utility, particularly in AI/ML systems aimed to improve clinical efficiency and patient outcomes, will require knowledge from multiple data sources and various input modalities [2][3][4] .Multimodal architectures for AI/ML systems are attractive because they can emulate the input conditions that clinicians and healthcare administrators currently use to perform predictions and respond to their complex decision-making landscape 2,5 .A typical clinical practice uses a diverse set of information formats contained within the patient electronic health record (EHR) such as tabular data (e.g., age, demographics, procedures, history, billing codes), image data (e.g., photographs, x-rays, computerized-tomography scans, magnetic resonance imaging, pathology slides), time-series data (e.g., intermittent pulse oximetry, blood chemistry, respiratory analysis, electrocardiograms, ultra-sounds, in-vitro tests, wearable sensors), structured sequence data (e.g., genomics, proteomics, metabolomics) and unstructured sequence data (e.g., notes, forms, written reports, voice recordings, video) among other sources 6 .Recently, AI/ML models leveraging multiple data modalities have been demonstrated for the domains of cardiology [7][8][9] , dermatology 10 , gastroenterology 11 , gynecology 12 , hematology 13 , immunology 14 , nephrology 15 , neurology 16,17 , oncology [18][19][20] , ophthalmology 21 , psychiatry 22 , radiology [23][24][25] , public health 26 and healthcare operational analytics (i.e., mortality, length-of-stay, and discharge predictions) [27][28][29][30] .Furthermore, it has been shown that multimodality in most of these domains can increase the performance of AI/ML systems (accuracy: 1.2-27.7%)compared to singlemodality approaches for the same task 2 .However, developing unified and scalable pipelines that can consistently be applied to train multimodal AI/ML systems that leverage and outperform their single-modality counterparts has remained challenging 2 .This motivates the development of our Holistic Artificial Intelligence in Medicine (HAIM) framework, a modular ML pipeline (Fig. 1) that can be adapted to receive standard EHR information from multiple input data modalities (i.e., tabular data, images, time-series, and text).Our HAIM framework addressed the need for a more generalizable methodology to create this class of systems.It can leverage user-defined pre-trained featureextraction models as part of a unified processing and feature aggregation stage that allows for simple and scalable downstream modeling of a variety of clinically relevant predictive tasks.Based on this pipeline, we build and test thousands of classification models with sample EHR inputs to systematically investigate the value of adding individual data modalities to these systems.To our knowledge, this has not yet been analyzed to greater detail in prior clinical multimodal AI/ML demonstrations.We provide this work as an open-source codebase for clinicians and researchers in the hope it will allow them to train and test AI/ML systems more easily with the local datasets, pretrained feature extractors, and clinical questions of their choosing to fully leverage multimodality at their institutions.
Demonstration of HAIM framework on multimodal clinical dataset
We demonstrate the feasibility and versatility of the HAIM framework on a compiled multimodal dataset (HAIM-MIMIC-MM), which includes a total of 34,537 samples involving 7279 hospitalization stays and 6485 unique patients.We summarize the general characteristics of HAIM-MIMIC-MM (i.e., number of samples and features) in Table 1.Qualitatively, our HAIM framework appears to improve on previous work in this field 30 by including scalable patient-centric data pre-processing and enabling standardized feature extraction stages that allow for rapid prototyping, testing, and deployment of predictive models based on user-defined prediction targets.Our HAIM framework displays consistent improvement on average AUROC (Fig. 2a color gradient) across all models as the number of modalities and data sources increases.Furthermore, the trend of reducing AUROC standard deviation (SD) values also appears to follow from increasing the number of modalities and data sources (Fig. 2a greyscale gradient).We also report Receiver Operating Characteristic (ROC) curves for the best found single-modality predictive models (Fig. 2c) as compared with typical multimodal predictive models based on the HAIM framework (Fig. 2b).All 14,324 individual model AUROCs (10,230 for chest diagnosis prediction tasks, 2047 for length-of-stay and 2047 mortality prediction) are shown along with their respective SDs in Supplementary Fig. 1A-D.These results suggest that our HAIM framework can consistently improve predictive analytics for various applications in healthcare as compared with single-modality analytics.Quantitatively, Fig. 3a, b shows that our HAIM framework produces models with multisource and multimodality input combinations that improve from average performance of canonical single-source (and by extension single-modality) systems for chest x-ray pathology prediction (Δ AUROC : 6-22%), length-of-stay (Δ AUROC : 8-20%) and 48 h mortality (Δ AUROC : 11-33%).Specifically, for chest pathology prediction, the minimum per task improvements include: Fracture (Δ AUROC = 6%), Lung Lesion (Δ AUROC = 7%), Enlarged Cardio mediastinum (Δ AUROC = 9%), Consolidation (Δ AUROC = 10%), Pneumonia (Δ AUROC = 8%), Atelectasis (Δ AUROC = 6%), Lung Opacity (Δ AUROC = 7%), Pneumothorax (Δ AUROC = 8%), Edema (Δ AUROC = 10%) and Cardiomegaly (Δ AUROC = 10%).Furthermore, the average percent improvement of all multimodal HAIM predictive systems is 9-28% across all evaluated tasks (Fig. 3a).All AUROC-related results displayed in Figs.2a and 3a, b are grouped and ordered by number of modalities (range = 1-4, encompassing tabular, time-series, text, and images), number of data sources (range = 1-11, including each individual data source in HAIM-MIMIC-MM) and sample size (N) for ease of analysis.
Analysis of source and multimodality contributions on model performances
To understand how each data source and modality contributes to the final performance, we calculate Shapley values 31 of each of the 11 sources and 4 modalities as they contribute to the final AUROC test-set performance. Since our demonstrated predictive tasks are treated as binary classification problems, we assumed that the AUROC of a model with no data source is 0.5, and that the AUROC of a particular modality is the average AUROC of the models of all sources that belong to that modality. Aggregated Shapley values for all data modalities per predictive task are reported in Fig. 3c, while Shapley values for all data sources per predictive task are shown in Supplementary Fig. 2. Different tasks exhibit distinct distributions of aggregated Shapley values across data modalities and sources. In particular, we observe that vision data contributed most to the model performance for the chest pathology diagnosis tasks, but for predicting length-of-stay and 48 h mortality, the patient's historical time-series records appeared to be the most relevant. Shapley values also provide a way to monitor errors and information loss propagation during the feature extraction and model training phases of our HAIM framework. Data modalities associated with small (or negative) Shapley values indicate either an absence of extracted information or error propagation leading to detrimental local effects on downstream model performance (Fig. 3b and Supplementary Fig. 2). This situation can be potentially addressed by removing such input data modalities or by selecting different pre-trained feature extraction models specific to that data modality. Nevertheless, we see that across all tasks, in our specific sample HAIM-MIMIC-MM demonstrations, every single modality contributes positively to a monotonic trend with diminishing returns on the predictive capacity of the models (Fig. 3a and c), likely due to multimodal data redundancy. These observations attest to the potential value (and limitations) of using multimodal inputs and pretrained feature extraction modules in frameworks like HAIM, which could be used to generate predictive models for diverse clinical tasks more cost-effectively than previous strategies.
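The Shapley attribution described above can be computed exactly because only 4 modalities (or 11 sources) are involved. The sketch below is a minimal Python illustration of that computation; the subset AUROC table is made up for illustration, and in the actual analysis each coalition value would be the average test AUROC of the models whose inputs cover exactly that subset, with 0.5 assigned to the empty set.

from itertools import combinations
from math import factorial

def shapley_values(players, value):
    # Exact Shapley values for a small set of "players" (here: modalities).
    # `value` maps a frozenset of players to a payoff; following the paper's
    # convention, the value of the empty set is 0.5 (chance-level AUROC).
    n = len(players)
    phi = {}
    for p in players:
        others = [q for q in players if q != p]
        total = 0.0
        for r in range(len(others) + 1):
            for S in combinations(others, r):
                S = frozenset(S)
                weight = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                total += weight * (value(S | {p}) - value(S))
        phi[p] = total
    return phi

# Illustrative (made-up) average AUROCs for a few modality subsets of one task;
# a full table needs all 16 subsets before the commented call can be evaluated.
auroc = {
    frozenset(): 0.50,
    frozenset({"tab"}): 0.62, frozenset({"ts"}): 0.70,
    frozenset({"txt"}): 0.66, frozenset({"img"}): 0.74,
}
# shapley_values(["tab", "ts", "txt", "img"], lambda S: auroc[S])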
Fig. 1 Holistic Artificial Intelligence in Medicine (HAIM) framework. Under this framework, databases and tables sourced from specific healthcare institutions, such as HAIM-MIMIC-MM combined from MIMIC-IV and MIMIC-CXR-JPG for this work, are processed to generate individual patient files. These files contain past and present multimodal patient information from the moment of admission. For processing under the HAIM framework, every data modality is fed to independent embedding-generating streams. In this work, tabular data is minimally processed using simple transformations or normalizations to produce encodings or embedding-like categorical numerical values (E_Tabular(n,t), where n = unique stay/hospitalization/patient and t = sampling time). Selected time-series are processed by generating statistical metrics on each of the time-dependent signals to produce embeddings representative of their trends (E_TimeSeries(n,t)) from the moment of admission until the sampling time. Natural language inputs such as notes are processed using a pre-trained transformer neural network to generate text embeddings of fixed size (E_Text(n,t)). Furthermore, image inputs such as X-rays are processed using a pre-trained convolutional neural network to also extract fixed-size embeddings out of the model output probability vectors and dense features (E_Images(n,t)). While not done in this work, thanks to the modularity of the embedding extraction process in the HAIM framework, other pre-trained models or systems could be added to generate embeddings from other types of data sources if needed (E_Other(n,t)). All generated embeddings are concatenated to generate a fusion embedding, which can be used to train, test, and deploy models for predictive analytics in healthcare operations. For this work, we tested and utilized only XGBoost as a canonical type of architecture for building the downstream predictive models based on fusion embeddings. CNN convolutional neural network. A high-level schematic of the complete HAIM pipeline for training and evaluation of models throughout this work is described in Fig. 3d. The general process of HAIM-MIMIC-MM database preparation, as well as the embedding extraction and fusion that serve as input for this pipeline, can be found in Fig. 1.
DISCUSSION
Inferring latent features from rich and heterogeneous multimodal EHR information could provide clinicians, administrators, and researchers with unprecedented opportunities to develop better pathology detection systems, actionable healthcare analytics, and recommendation engines for precision medicine. Our results directly illustrate that different data modalities are more useful for different tasks, and thus that a multimodal approach is needed to construct a comprehensive pipeline for AI/ML in healthcare. In addition to leveraging multimodal inputs, our HAIM framework attempts to solve several bottleneck challenges in this kind of AI/ML pipeline for healthcare in a more unified and robust way than previous implementations, including the possibility of working with tabular and non-tabular data of unknown sparsity from multiple standardized and unstandardized heterogeneous data formats. The use of fusion embeddings obtained directly from individual patient files suggests that a HAIM framework can potentially facilitate the definition, testing, and deployment of AI/ML models that may be useful for managing complex clinical situations and day-to-day practice in healthcare systems. More specifically, if implemented across many predictive tasks while using the same patient embeddings, this approach could potentially help accelerate the advent of scalable predictive systems to improve patient outcomes and quality of care. From these observations, our work distinguishes itself from previously published systems in three main ways: (A) First, our work systematically investigates the value of progressively adding data modalities and sources to clinical multimodal AI/ML systems in much greater detail and over a larger combinatorial input space than any prior investigation of such systems. Previous works in this field assume advantageous properties of multimodality without clear validation of the dynamics of such expected performance benefits as data modalities are added. Through our investigation, conducting 14,324 model experiments with different input modalities and data source combinations, we provide strong empirical evidence that supports the potential for reaching such positive monotonic trends in performance from multimodal AI/ML systems as data modalities are added. However, our investigation also unveils previously unreported local non-monotonic and diminishing-return effects on the predictive capacity of these models under certain conditions of data source availability, error, and redundancy, which are relevant and can become interpretable through our use of aggregated Shapley values during analysis. (B) Second, our data pre-processing and modeling pipeline expands on the notion of high modularity from previously published work, which tends to employ ad-hoc multimodal architectures trained directly on fused data inputs that are usually closed and less compatible with externally developed single-modality components.
(C) Third, although our demonstration of the effects of multiple data-modality additions to our AI/ML framework was based on the MIMIC-IV dataset, this input was only used to exemplify our pipeline and to provide strong empirical evidence on the dynamics of performance from the use of different data modalities in a canonical EHR scenario. The downstream trained models generated for this investigation could potentially be used in the future by people interested in predicting the demonstrated clinical tasks within intensive care units (ICUs) using multimodal data. However, we primarily encourage users to use our codebase to process their own EHR datasets and train predictive tasks of interest to them with the help of our pipeline. We envision a broad utility for the HAIM framework and its subprocesses, focusing on driving cost-effective AI/ML activities for clinical and non-clinical operations. We hope that our HAIM framework can help reduce the time required to develop relevant AI/ML systems while efficiently utilizing human, financial, and digital resources in a more timely and unified approach than the current methods used in healthcare organizations.
Dataset
For this work, we utilize the Medical Information Mart for Intensive Care (MIMIC)-IV 32,33, an openly accessible database that contains de-identified records of 383,220 individual patients admitted to the ICU or emergency department (ED) of Beth Israel Deaconess Medical Center (BIDMC) in Boston, MA, USA, between 2008 and 2019 (inclusive). MIMIC-IV's most recent version (v1.0) improves on MIMIC-III 34 to provide public access to the EHR data of over 40,000 hospitalized patients based on the BIDMC's MetaVision clinical information system. We selected MIMIC-IV due to its large scale, detailed documentation, generalizable formatting, corroborated use in AI/ML applications 35, and prior evaluations in terms of AI/ML interpretability, fairness, and bias 36. To augment BIDMC's MIMIC-IV v1.0, we used the MIMIC Chest X-ray (CXR) database v2.0.0 37 containing 377,110 radiology images with free-text reports representing 227,835 medical imaging events that can be matched to corresponding patients included in MIMIC-IV v1.0. Both databases have been independently de-identified by deleting all personal health information, following the US Health Insurance Portability and Accountability Act of 1996 Safe Harbor requirements. After getting credentialled access from PhysioNet, we combined MIMIC-IV v1.0 and MIMIC-CXR-JPG v2.0.0 into a unified multimodal dataset (HAIM-MIMIC-MM) based on matched patient, admission, and imaging-study identifiers (i.e., subject_id, stay_id, study_id from the MIMIC-IV and MIMIC-CXR-JPG databases). We used HAIM-MIMIC-MM throughout this study to test all the presented ML use cases analyzing various combinations of structured patient information, time-series data, medical images, and unstructured text notes, as presented in the following sections.
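A minimal pandas sketch of the matching step described above is shown below; the file names, directory layout and time-alignment details are assumptions based on the public MIMIC-IV v1.0 and MIMIC-CXR-JPG v2.0.0 releases and may differ from the released HAIM code.

import pandas as pd

# Assumed file locations; paths and table layouts vary between MIMIC releases.
cxr_meta = pd.read_csv("mimic-cxr-2.0.0-metadata.csv.gz")         # subject_id, study_id, ...
admissions = pd.read_csv("mimic-iv-1.0/core/admissions.csv.gz")   # subject_id, hadm_id, ...
icustays = pd.read_csv("mimic-iv-1.0/icu/icustays.csv.gz")        # subject_id, hadm_id, stay_id, ...

# Attach stay identifiers to admissions, then keep only patients with at
# least one chest X-ray study.  The real pipeline additionally aligns each
# imaging study's timestamp with the corresponding stay interval.
stays = icustays.merge(admissions, on=["subject_id", "hadm_id"], how="inner")
haim_mimic_mm = cxr_meta.merge(stays, on="subject_id", how="inner")

print(haim_mimic_mm[["subject_id", "hadm_id", "stay_id", "study_id"]].head())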
Patient-centric data representation
We generated the individual files containing patient-specific information for single hospital admissions by querying the aggregated multimodal dataset HAIM-MIMIC-MM.Every HAIM-EHR file contains the details of current and previous patient admissions, transfers, demographics, laboratory measurements, provider orders, microbiology cultures, medication administrations, prescriptions, procedure events, intravenous and fluid inputs, sensor outputs, measurement events, radiological images, radiological reports, electrocardiogram reports, echocardiogram reports, notes, hospital billing information (e.g., diagnosis and procedure-related codes), as well as other time-stamped and charted information.The samples, therefore, include all available patient data collected within a specific admission and stay with all prior information occurring before the discharge or death time stamp.We stored all the individual patient files in HAIM-MIMIC-MM as "pickle" python-language object structures for ease of processing in subsequent sampling and modeling tasks.The code to generate the aggregated HAIM-MIMIC-MM dataset from credentialled access to MIMIC-IV v1.0 and MIMIC-CXR-JPG v2.0.0 datasets is available at our PhysioNet repository (https://doi.org/10.13026/dxcx-n572) 38 as well as our GitHub repository (https:// github.com/lrsoenksen/HAIM).In addition, samples of preprocessed pickle patient files of HAIM-MIMIC-MM can be found in our PhysioNet project page https://doi.org/10.13026/dxcx-n572) 38.A schematic of this patient-centric data representation as multimodal input for our HAIM framework is shown in Fig. 1.
Patient data processing and multimodal feature extraction
We processed each HAIM-EHR patient file individually to generate fixed-dimensional vector embeddings for each of the possible input types, including all patient information from the time of admission until the selected inference event (e.g., time of imaging procedure for pathology diagnosis or end-of-day for 48 h mortality predictions).The generated embeddings from input modalities include: tabular data such as demographics (E de = demographics), structured time-series events (E ce = chart events, E le = laboratory events, E pe = procedure events), unstructured free text (E radn = radiological notes, E ecgn = electrocardiogram notes, E econ = echocardiogram notes), single-image vision (E vp = visual probabilities, E vd = visual dense-layer features) and multi-image vision (E vmp = aggregated visual probabilities, E vmd = aggregated visual denselayer features).From these, patient signals used as time-series for embedding extraction (classified by type of event) can be found in Supplementary Table 1.We then implemented fixed embedding extraction procedures based on standard data modalities (i.e., tabular data, time-series, text, and images) to reduce its dependence on site-specific data architectures and allow for a consistent embedding format that may be applied to arbitrary ML pipelines.Note that throughout this work, we refer to data "modality" as a distinct term to data "source", where the former is used to define broad classes of data usually digitalized in different format types, while the latter simply refers to different input variables belonging to a data modality as defined in Supplementary Table 2.We extracted the embeddings based solely on tabulated demographics data (E de ) by querying normalized numerical values from the patient record.We obtained time-series embeddings using time-stamped data from the structured patient chart, laboratory, and procedure event lists (i.e., E c E le, E pe , respectively).We selected a set of key clinical signals for each type of event list and constructed the corresponding time sequences from the time of patient admission to the time-stamp allowable for each individual feature (see Supplementary Table 1).The embeddings encode the signal length, maximum, minimum, mean, median, SD, variance, number of peaks, and average time-series slope and piece-wise change over time of these metrics.The time-series signals for E ce include: heart rate (HR), non-invasive systolic blood pressure (NBP s ), non-invasive diastolic blood pressure (NBP d ), respiratory rate, oxygen saturation by pulse oximetry (SpO 2 ), Glasgow coma scales (GCS) for verbal, eye, and motor response (GCS V , GCS E , GCS M respectively).Moreover, time-series E le include: glucose, potassium, sodium, chloride, creatinine, urea nitrogen, bicarbonate, anion gap, hemoglobin, hematocrit, magnesium, platelet count, phosphate, white blood cells, total calcium, mean corpuscular hemoglobin (MCH), red blood cells, mean corpuscular hemoglobin concentration, mean corpuscular volume, red blood cell distribution width, platelet count, neutrophils, vancomycin.Lastly, time-series E pe procedures include: foley catheter, peripherally inserted central catheter (PICC), intubation, peritoneal dialysis, bronchoscopy, electroencephalogram (EEG), dialysis with continuous renal replacement therapy, dialysis with catheter, removed chest tubes, and hemodialysis.
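As an illustration of the time-series embeddings described above, the following sketch computes the listed summary statistics for a single signal; it is a simplified reconstruction (the piece-wise changes of the metrics over time are omitted), and the function name is illustrative.

import numpy as np
from scipy.signal import find_peaks

def ts_embedding(values, times):
    # Summary-statistic embedding for one clinical signal, mirroring the
    # metrics listed above (a sketch; the released HAIM code may differ).
    values = np.asarray(values, dtype=float)
    times = np.asarray(times, dtype=float)
    if values.size == 0:
        return np.zeros(9)
    slope = 0.0 if values.size < 2 else np.polyfit(times, values, 1)[0]  # average trend
    n_peaks = len(find_peaks(values)[0])
    return np.array([
        values.size, values.max(), values.min(), values.mean(),
        np.median(values), values.std(), values.var(), n_peaks, slope,
    ])

# One such embedding is computed per monitored signal (HR, NBPs, SpO2, ...)
# and the results are concatenated to form E_ce, E_le and E_pe for a stay.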
We obtained embeddings for the unstructured free text (E radn , E ecgn , and E econ ) by concatenating all available text from each of these types of notes as continuous strings and then by processing them using Clinical BERT 39 , a transformer-based bidirectional encoder model pre-trained on a large corpus of biomedical and medical text.This transformer-based model generates a single 768-dimensional vector, or embedding, per unstructured text type.We split notes longer than the maximum input token size for Clinical BERT (i.e., 512 tokens) into the smallest number of processable text chunks to generate various embeddings sequentially, all of which are averaged to produce a single 768dimensional output embedding for the entire text.
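A minimal sketch of this chunk-and-average procedure is shown below, assuming the publicly available Bio_ClinicalBERT checkpoint and [CLS]-token pooling; the authors' exact checkpoint and pooling choice are not stated beyond "Clinical BERT" and a 768-dimensional output, so these details are assumptions.

import torch
from transformers import AutoTokenizer, AutoModel

# Assumption: the public Bio_ClinicalBERT checkpoint stands in for "Clinical BERT".
tokenizer = AutoTokenizer.from_pretrained("emilyalsentzer/Bio_ClinicalBERT")
model = AutoModel.from_pretrained("emilyalsentzer/Bio_ClinicalBERT").eval()

def note_embedding(text, max_len=512):
    # Average of per-chunk [CLS] embeddings for a (possibly long) note.
    ids = tokenizer(text, add_special_tokens=False)["input_ids"]
    chunk_size = max_len - 2                       # room for [CLS] and [SEP]
    chunks = [ids[i:i + chunk_size] for i in range(0, max(len(ids), 1), chunk_size)]
    embs = []
    with torch.no_grad():
        for chunk in chunks:
            inp = tokenizer.build_inputs_with_special_tokens(chunk)
            out = model(torch.tensor([inp]))
            embs.append(out.last_hidden_state[0, 0])   # [CLS] token, 768-d
    return torch.stack(embs).mean(dim=0)               # one 768-d vector per note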
Finally, we processed vision data included in this work using a pre-trained Densenet121 convolutional neural network (CNN) previously fine-tuned on the X-ray CheXpert dataset 40 (i.e., Densenet121-res224-chex) 41 .We selected this model because the availability of at least one time-stamped chest X-ray per patient file within the HAIM-MIMIC-MM database as its core visual component.Densenet121-res224-chex is part of TorchXRayVision, a unified library, and repository of datasets and SOTA pre-trained models for chest pathology classification using X-rays 41 .While other computer vision models pre-trained on large sets of medical imaging data may be utilized to extract embeddings within the HAIM framework, for the purpose of experimentally validating our pipeline, we used Densenet121-res224-chex as a canonical method to extract visual embeddings.We obtained the single-image embeddings per HAIM-EHR patient file by rescaling each image into 224 × 224 size using a standard interpolation method with resampling using pixel area relations, and then feeding it into the selected network to extract: (a) output class probabilities and (b) final dense-layer features.The output classes per image are the 18-dimensional diagnosis probability vector generated directly by Densenet121-res224-chex, which produces the embedding E vp .The dense network features per image are the 1024-dimensional vector generated by extracting the outputs of the last dense layer of the model, which produces the embedding E vd .Multi-image embeddings are also obtained by averaging feature-wise the output class probabilities and densefeature embeddings of all available images per HAIM-EHR patient file (e.g., X-ray studies with multiple planes and past X-ray studies).This produces an aggregated multi-image diagnosis probability embedding (E vmp ) and multi-image dense-layer embedding (E vmd ) per patient that considers all available X-rays and not only the most recent one.
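The following sketch illustrates the extraction of the two visual embeddings (E_vp and E_vd) from one image, assuming TorchXRayVision's pre-trained densenet121-res224-chex weights and its normalization convention; attribute names and preprocessing details follow that library's public interface as understood here and may need adjustment.

import torch
import torch.nn.functional as F
import torchxrayvision as xrv
import skimage.io
import skimage.transform

# Assumption: TorchXRayVision's CheXpert-fine-tuned DenseNet as the feature extractor.
model = xrv.models.DenseNet(weights="densenet121-res224-chex").eval()

def xray_embeddings(path):
    img = skimage.io.imread(path)
    img = xrv.datasets.normalize(img, 255)             # map [0, 255] into xrv's value range
    if img.ndim == 3:
        img = img.mean(axis=2)                         # collapse to a single channel
    img = skimage.transform.resize(img, (224, 224))    # rescale to 224 x 224
    x = torch.from_numpy(img).float()[None, None]      # shape (1, 1, 224, 224)
    with torch.no_grad():
        probs = model(x)[0]                            # 18-d pathology outputs (E_vp)
        feats = model.features(x)                      # convolutional feature maps
        dense = F.adaptive_avg_pool2d(F.relu(feats), 1).flatten()   # 1024-d vector (E_vd)
    return probs.numpy(), dense.numpy()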
There are various advantages of using SOTA pre-trained models specific to each data modality (i.e., tabular, time-series, text, and images) such as Clinical BERT 39 and Densenet121-res224-chex 41 as feature extractors in our HAIM framework.First, every single-data pre-trained SOTA model can be user-defined and easily exchanged with updated ones, as long as their respective dense features or embeddings are accessible.This departs from other multimodal AI/ML strategies that attempt to directly fuse heterogeneous input data, which makes these systems less modular and usually incompatible with the use of highperforming open-source single-data-type models produced by other organizations and researchers 10,29 .A second advantage of using SOTA feature extractors within our framework is that users can easily generate unified input vectors to focus primarily on downstream modeling and rapid training of their predictive systems of interest, which can accelerate deployment.
In our sample demonstration of the HAIM framework using the HAIM-MIMIC-MM database, the dimensionality of each of these embeddings is E_de = 6, E_ce = 99, E_le = 242, E_pe = 110, E_radn = 768, E_ecgn = 768, E_econ = 768, E_vp = 18, E_vd = 1024, E_vmp = 18, and E_vmd = 1024, which together sum to the 4845-dimensional fusion embedding described below. Detail on the presence and handling of missing input data is provided as part of Supplementary Table 3. Once all single-modality embeddings are generated, we flatten, normalize, and concatenate them into a single one-dimensional multimodal fusion embedding per HAIM-EHR patient file, which constitutes the input for all downstream modeling tasks in our HAIM framework (see Supplementary Fig. 3 for algorithmic detail of this process). This deep patient representation in vector form can be made of fixed size within or across healthcare institutions (4845-dimensional for this work), which can allow for rapid iteration in the development of generic ML systems for relevant predictive analytics in various applications.
Modeling
After we extracted all multimodal fusion embeddings for all HAIM-EHR patient files in the HAIM-MIMIC-MM database, we generated classification models across various clinical and operational tasks, including: (a) chest pathology diagnosis, (b) length-of-stay and (c) 48 h mortality predictions.For each of these modeling tasks, we split the available embeddings randomly into training (80%) and testing (20%) sets 5 times (with 5 different splits), stratifying by patients during our experiments to avoid data leakage of patientlevel information from training to testing, compute SDs, and to ensure adequate comparison of recorded predictive values.For the chest pathology diagnosis tasks, we applied an additional stratification by pathology to balance the target ratios.We then conducted experiments to compare the effect of all different combinations of input data modalities and sources using the extracted multimodal fusion embeddings as presented in further sections.An algorithmic formulation of our HAIM framework in the context of the data processing, feature extraction, and downstream predictive task modeling stages is provided as part of Supplementary Fig. 3. Detail on the sensitivity of missing input data to downstream predictions is also provided as part of Supplementary Fig. 4.
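Grouping the split by patient is one way to realize the leakage control described above; a minimal scikit-learn sketch is given below, with illustrative variable names (the released HAIM code may implement the split differently, and the additional per-pathology stratification is not shown).

from sklearn.model_selection import GroupShuffleSplit

# X: fusion embeddings, y: task labels, patient_ids: one patient id per sample.
# Grouping by patient keeps all samples of a patient on the same side of the split.
def patient_split(X, y, patient_ids, seed):
    gss = GroupShuffleSplit(n_splits=1, test_size=0.2, random_state=seed)
    train_idx, test_idx = next(gss.split(X, y, groups=patient_ids))
    return X[train_idx], X[test_idx], y[train_idx], y[test_idx]

# Five different splits, as in the experiments described above:
# splits = [patient_split(X, y, patient_ids, seed) for seed in range(5)]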
Tasks of interest
Chest pathology diagnosis prediction.Early detection of certain pathologies in CT scans and other diagnostic imaging modalities enables clinicians to focus on early intervention rather than delayed treatment for advanced stages of relevant pathologies.Within this task of interest, we chose to target the prediction of 10 common thorax-level pathologies (i.e., fractures, lung lesions, enlarged cardio mediastinum, consolidation, pneumonia, lung opacities, atelectasis, pneumothorax, edema, and cardiomegaly) that can be typically assessed by radiologists through chest X-ray, to demonstrate that HAIM outperforms image-only approaches.The ground-truth values for each chest pathology included in HAIM-MIMIC-MM are derived from MIMIC-CXR-JPG v2.0.0,where radiology notes were processed to determine if each of these pathologies was explicitly confirmed as present (value = 1), explicitly confirmed as absent (value = 0), inconclusive in the study (value = −1), or not explored (no value).We only selected samples with 0 or 1 values, removing the rest from the training and testing data.Thus, for this specific task, we utilized the multimodal fusion embeddings as input and the ground-truth chest pathology HAIM-MIMIC-MM values as the output target to predict.From these embeddings, we only excluded the unstructured radiology notes component (E rad ) from the allowable input to avoid potential overfitting or misrepresentations of real predictive value.We trained and tested independent binary classification models for each target chest pathology and input source combination as described in the general model training setup section.Length-of-Stay prediction.Projected patient length-of-stay plays a vital role for both patients and hospital systems in making informed medical and economic decisions.An accurate forecast of patient stay enhances patient satisfaction, hospital resource allocations, and doctors' ability to make more effective treatment planning 42 .Particularly, predicting next 48 h discharges is critical for physicians to identify and prioritize patients ready for discharge and for case management teams to accelerate discharge preparations, which ultimately reduces patient burden and direct operating costs in healthcare systems 43 .To demonstrate the HAIM framework for healthcare operations tasks, we predicted whether or not a patient will be discharged without expiration during the next 48 h as a binary classification problem: discharged alive ≤48 h (1) or otherwise (0).In case of patient death, we set the class label to 0. Each sample in this predictive task corresponds to a single patient-admission EHR time point where an X-ray image was obtained (N = 45,050).
48 h mortality prediction.Due to its time and outcome-critical environments, clinicians in ICU units often need to make rapid evaluations of patient conditions to inform treatment plans 44 .However, current standards of estimating patient severity, such as the Acute Physiologic Assessment and Chronic Health Evaluation score, fail to incorporate medical characteristics beyond acute physiology 45 .Accurate mortality prediction can give clinicians advanced warnings of possible deteriorations and share the burdens of making information-heavy decisions 44 .To further demonstrate the versatility of the HAIM framework, we also built models to predict the probability that a patient will expire during the next 48 h as a binary classification problem: expired ≤48 h (1) or otherwise (0).In the case of a patient whose hospital exit status is not expiration, we set the class label to 0. It should be noted that a patient can acquire different target class labels at different time points during their stay due to changes in status and proximity to the discharge or time of death.Similar to the length-of-stay modeling, each sample in this predictive task corresponds to a single patient-admission EHR time point where an X-ray image was obtained (N = 45,050).
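For both operational tasks, the binary labels can be derived from the sampling time and the admission's discharge and death timestamps. The sketch below assumes the column semantics of MIMIC-IV's admissions table (dischtime, deathtime); the authors' exact label logic, especially for edge cases, is not spelled out, so this is an assumption.

import pandas as pd

def labels_48h(sample_times, dischtime, deathtime):
    # Binary labels for the two operational tasks at given sampling times.
    t = pd.to_datetime(sample_times)
    disch = pd.to_datetime(dischtime)
    death = pd.to_datetime(deathtime)                 # NaT if the patient survived
    horizon = pd.Timedelta(hours=48)

    died = death.notna()
    discharge_48h = ((disch - t) <= horizon) & ~died  # discharged alive within 48 h
    mortality_48h = died & ((death - t) <= horizon)   # expired within 48 h
    return discharge_48h.astype(int), mortality_48h.astype(int)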
General model training setup
We initially explored seven ML architectures, including logistic regression, classification and regression trees, random forest, multi-layer perceptron, gradient boosted trees (XGBoost), gradient boosting machines (LightGBM), as well as attentive tabular networks TabNet to heuristically decide on the best model choice for follow-up experiments.Since XGBoost supports fast computations for large-scale experiments and consistently outperformed other architectures during preliminary observations, we selected this canonical methodology for all further tests.Our XGBoostbased modeling experiments were conducted using every possible combination of input embeddings, extracted as described in previous sections, from the allowable 11 data sources (i.e., E de , E ce , E pe , E le , E ecgn , E econ , E radn , E vp , E vd , E vmp , and E vmd ) and 4 modalities (i.e., tabular, time-series, text, and images).In this process, we concatenated each data stream permutation to produce fusion embeddings and train XGBoost models using single-modality (N 1M = 52), double-modality (N 2M = 392), triplemodality (N 3M = 972) and quadruple-modality (N 4M = 630) combination of inputs.This corresponds to the generation of 2047 models (per predictive task) for the cases of length-of-stay and 48 h mortality.As previously mentioned, in the case of chest pathology diagnosis, the embeddings corresponding to all radiology notes (E radn ) are not included as part of the input fusion embeddings to allow for fair comparison with the output target, which was originally determined from examining notes in MIMIC-CXR-JPG.This reduced the total number of possible models per chest pathology diagnosis task to 1023 (N 1M = 26, N 2M = 196, N 3M = 486, N 4M = 315).Since there are ten chest pathologies, defined as binary classification problems for our experiments, we trained a total of 1023*10 = 10,230 models for chest pathology diagnosis prediction.As mentioned previously, all XGBoost models were trained five times with five different data splits to repeat the experiments and compute average metrics and SDs.
All defined models (N Models = 14,324) were trained and tested to evaluate the advantage of multimodal predictive systems, based on the HAIM framework, as compared to single modality ones for the aforementioned clinical and operational tasks.We capture average trends of model performance by reporting the average area under the receiver operating characteristic (AUROC) curve on the testing set (20%) over five consecutive iterations of randomized train-test data splitting and model training.The hyperparameter combinations of individual XGBoost models were selected within each training loop using a fivefold cross-validated grid search on the training set (80%).This XGBoost tuning process selected the maximum depth of the trees (5-8), the number of estimators (200 or 300), and the learning rate (0.05, 0.1, 0.3) according to the parameter value combination leading to the highest observed AUROC within the training loop.This model cross-validation strategy at the level of each data source combination ensures that the respective test sets are never used for model training, model selection, model comparison, or reporting across any of the 14,324 uniquely trained models.Thus, throughout this study, the test set remains unseen at the level of each model for all models, which minimizes the potential for data leakage or model selection overfitting.
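A minimal sketch of one training loop iteration, using the hyperparameter grid stated above with a five-fold cross-validated grid search scored by AUROC, is shown below; other settings (for example eval_metric) are illustrative defaults rather than the authors' reported configuration.

from sklearn.model_selection import GridSearchCV
from xgboost import XGBClassifier

# Hyperparameter grid as described above.
param_grid = {
    "max_depth": [5, 6, 7, 8],
    "n_estimators": [200, 300],
    "learning_rate": [0.05, 0.1, 0.3],
}

def train_one_model(X_train, y_train):
    search = GridSearchCV(
        XGBClassifier(eval_metric="logloss", n_jobs=-1),
        param_grid,
        scoring="roc_auc",   # select the combination with the highest training-loop AUROC
        cv=5,                # five-fold cross-validated grid search on the training set
    )
    search.fit(X_train, y_train)
    return search.best_estimator_

# model = train_one_model(X_tr, y_tr)
# test AUROC: roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])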
The aggregated test set performance metrics (fivefold test averages and SDs) of all these models, grouped by the number of data sources and modalities, can be found in Fig. 2. We conducted all embedding generation and computational experiments using a parallelization strategy on MIT's SuperCloud server (https://supercloud.mit.edu) with 30 GB RAM and 1 NVIDIA Tesla V100 Volta graphics processing unit per instance. A high-level schematic representation of the HAIM framework, from data sourcing to model benchmarking, can be found in Fig. 3.
Fig. 2
Fig.2Performance of the multimodal HAIM framework on various demonstrations for healthcare operations.a Average and standard deviation values of the area under the receiver operating characteristic (AUROC) for all demonstrations including pathology diagnosis (i.e., lung lesions, fractures, atelectasis, lung opacities, pneumothorax, enlarged cardio mediastinum, cardiomegaly, pneumonia, consolidation, and edema), as well as length-of-stay and 48 h mortality prediction.The number of modalities refers to the coverage among tabular, time-series, text, and image data.The number of sources refers to the coverage among available input data sources (10 for pathology diagnosis, while 11 for length-of-stay and 48 h mortality prediction).Thus, the position (Modality = 2, Sources = 3) corresponds to the average AUROC of all models across all input combinations covering any 2 modalities using any 3 input sources.Increasing gradients on average AUROC appear to follow from increasing the number of modalities and number of sources across all evaluated tasks.Decreasing gradients on AUROC standard deviations follow from less variability in performance as a higher number of modalities and data sources is used.b Receiver operating characteristic (ROC) curves for typical HAIM model across all use cases exhibiting input multimodal.c ROC curves for a best-performing model with single-modality inputs across the same use cases.Consistent averaged improvements across all tasks are observed in multimodality as compared to single-modality systems.AUROC Area under the curve, AUROC Area under the receiver operating characteristic curve, CM Cardiomediastinum.Dx Diagnosis, HAIM Holistic Artificial Intelligence in Medicine, Ops Operations, SD Standard deviation.
Fig. 3
Fig.3Multimodal HAIM framework is a flexible and robust method to improve predictive capacity for healthcare machine learning systems as compared to single-modality approaches.a Average percent change of area under the receiver operating characteristic curve (Avg.ΔAUROC) for all tested multimodality HAIM models as compared to their single-source single-modality counterparts.While different models exhibit varying degrees of improvement, all tested models show positive Avg.ΔAUROC percentages.The number of modalities refers to the coverage among tabular, time-series, text, and image data.The number of sources refers to the coverage among available input data sources (10 for pathology diagnosis, 11 for length-of-stay, and 48 h mortality prediction).Thus, the position (Modality = 2, Sources = 3) corresponds to the average AUROC of all models across all input combinations covering any 2 modalities using any 3 input sources.b Expanded Avg.ΔAUROC percentages for all tested multimodality HAIM models and ordered by the number of used modalities (i.e., tabular, time-series, text, or images) as well as the number of used data sources.c Waterfall plots of aggregated Shapley values for independent data modalities per predictive task.While Shapley values for all data modalities appear to be positively contributing to the predictive capacity of all models, different tasks exhibit distinct distributions of aggregated Shapley values.d High-level schematic of the HAIM pipeline developed to support the presented work.After data collection or sourcing (HAIM-MIMIC-MM for this work), a process of feature selection and embedding extraction is applied to feed fusion embeddings into a process of iterative architecture engineering (model and hyperparameter selection).After particular models are selected and trained, they can be benchmarked to test and report results.This process concludes by the selection of a model for deployment in a use case scenario.
HAIM-MIMIC-MM is a combination of MIMIC-IV and MIMIC Chest X-ray filtered to only include patients that have at least one chest X-ray performed, with the goal of validating multimodal predictive analytics in healthcare operations. The number of samples and quantities of variables are described. Demographic features correspond only to the tabular data modality, while chart, laboratory, and procedure events correspond to time-series. X-ray variables correspond to types of medical images, while text note variables correspond to the text in the radiology, electrocardiogram, and echocardiogram natural language reports. | 8,353.8 | 2022-02-25T00:00:00.000 | [
"Medicine",
"Computer Science"
] |
A New Model of Interval-Valued Intuitionistic Fuzzy Weighted Operators and Their Application in Dynamic Fusion Target Threat Assessment
Existing missile defense target threat assessment methods ignore the target timing and battlefield changes, leading to low assessment accuracy. In order to overcome this problem, a dynamic multi-time fusion target threat assessment method is proposed. In this method, a new interval valued intuitionistic fuzzy weighted averaging operator is proposed to effectively aggregate multi-source uncertain information; an interval-valued intuitionistic fuzzy entropy based on a cosine function (IVIFECF) is designed to determine the target attribute weight; an improved interval-valued intuitionistic fuzzy number distance measurement model is constructed to improve the discrimination of assessment results. Specifically, first of all, we define new interval-valued intuitionistic fuzzy operation rules based on algebraic operations. We use these rules to provide a new model of interval-valued intuitionistic fuzzy weighted arithmetic averaging (IVIFWAA) and geometric averaging (IVIFWGA) operators, and prove a number of algebraic properties of these operators. Then, considering the subjective and objective weights of the incoming target, a comprehensive weight model of target attributes based on IVIFECF is proposed, and the Poisson distribution method is used to solve the time series weights to process multi-time situation information. On this basis, the IVIFWAA and IVIFWGA operators are used to aggregate the decision information from multiple times and multiple decision makers. Finally, based on the improved TOPSIS method, the interval-valued intuitionistic fuzzy numbers are ordered, and the weighted multi-time fusion target threat assessment result is obtained. Simulation results of comparison show that the proposed method can effectively improve the reliability and accuracy of target threat assessment in missile defense.
Introduction
Target threat assessment is the third level of the JDL information fusion model, and is a core issue of missile defense command decision-making. Methods to quickly and accurately evaluate the threat level of an incoming target represent a difficult technical problem that restricts the improvement of the combat effectiveness of missile defense weapon systems. Due to numerous characteristic attributes of ballistic targets and missile defense warfare being a continuous and dynamic process, the threat assessment of a target requires comprehensive consideration of multiple factors, which can be formulated as a type of uncertain dynamic multi-attribute group decision-making problem [1,2].
Commonly used threat assessment methods include Bayesian networks [3], rough set theory [4], D-S evidence theory [5], fuzzy reasoning [6], and multi-attribute decisionmaking [7]. Although the Bayesian networks can effectively handle uncertain information, the selection of transition probability depends on expert experience, the reliability is poor and it is only applicable to situations where there is a large amount of sample data, and the solution accuracy for small sample data is poor. Thus, it is not suitable for missile defense warfare with a lack of warfare sample data. Although rough set theory does not require prior information outside the data set, it needs to build a large knowledge base to support the construction of relevant rules, and it's difficult to adapt to the high requirements of missile defense warfare for decision-making timeliness. The D-S evidence theory is weak in processing conflicting information, and when the problem space is large the problem of the combinatorial explosion will occur. Considering the large threat of ballistic missiles, one missed interception will cause great damage to surface defense assets and require high threat assessment accuracy. Therefore, a variety of factors need to be considered. Fuzzy set theory combined with dynamic multi-attribute group decision theory can be used to solve this problem.
Fuzzy sets have been widely used to describe and process fuzzy, uncertain decision information since Zadeh [8] proposed them. Atanassov [9] extended the fuzzy set to the intuitionistic fuzzy set by removing the constraint that the sum of membership and non-membership equals one. Intuitionistic fuzzy sets can describe the uncertainty of decision-making information in more detail, so they have been widely used in intelligence reasoning, decision-making, and other fields. In recent years, a large number of multi-attribute group decision-making methods based on intuitionistic fuzzy sets have been proposed. Pamucar [10] provided a multi-criteria decision-making approach that combines interval grey numbers and the normalized weighted geometric Dombi-Bonferroni mean operator. Shen et al. [11] proposed an extended intuitionistic fuzzy TOPSIS method. Another study built an intuitionistic fuzzy multi-attribute group decision-making model based on the consistency method. Jin et al. [12] proposed an intuitionistic fuzzy preference relation group decision-making method based on multiplicative consistency. Wu et al. [13] used a multi-criteria decision-making model of triangular intuitionistic fuzzy number correlation. Based on this, Kong et al. [14] proposed a quantification method of multi-attribute threat indicators that addresses the issue of threat assessment indicators of ground combat targets being diverse and difficult to quantify, and unified the quantitative results of indicators in the form of intuitionistic fuzzy sets. Wang et al. [15] considered the preferences of decision makers and studied threat assessment methods with unknown target attribute weights.
In order to overcome the problem of threat assessment methods relying too much on expert knowledge, Xiao et al. [16] proposed an air target threat assessment method based on intuitionistic fuzzy hierarchical analytic processes. In [17], Zhang et al. used intuitionistic fuzzy entropy to calculate attribute weights and constructed a target threat assessment model. In the context of intuitionistic fuzzy multi-attribute decision-making, [18] proposed a target threat assessment method based on tripartite decision-making.
The above methods are all useful attempts to combine fuzzy set theory with multiattribute decision-making theory. However, for missile defense warfare with dynamic, time-sensitive, and strong antagonism, the existing threat assessment methods still have a number of shortcomings. First, due to the missile defense combat environment, the high complexity, and the limitations of sensor detection performance, the battlefield information present is incomplete and uncertain. The multi-attribute group decision-making method based on intuitionistic fuzzy sets uses a certain "point value" to represent the assessment data, which will lead to the excessive deviation between the assessment result and objective reality. Therefore, in the threat assessment of ballistic targets, it is necessary to expand the point value data into interval-valued intuitionistic fuzzy numbers, and to smooth the detection data parameters to control the error within a certain range in order to improve the accuracy and reliability of the threat assessment. Second, although a large number of interval-valued intuitionistic fuzzy operators [19] have been proposed in recent years, and most of them are defined in the traditional interval-valued intuitionistic fuzzy operation rules, they ignore the relationship between aggregated data when performing information fusion. Because of this, there will be results contrary to intuitive analysis and the ambiguous nature of the complex battlefield will not be meticulously and flexibly reflected. Third, most of the literature [1][2][3][4][5][6][7][14][15][16] only focuses on the information in the current moment when conducting the threat assessment, ignoring the timing of target in- formation, making it difficult to obtain objective and comprehensive threat assessment results. Fourth, although the existing interval-valued intuitionistic fuzzy entropy has various forms [20][21][22], when the deviation of the degree of membership and the degree of non-membership is equal there is inconsistency with the intuitive facts.
Based on the above analysis, the contributions of the work in this paper are as follows: (1) Based on a new interval-valued intuitionistic fuzzy operation rules that retain the properties of classical algebra, the new interval-valued intuitionistic fuzzy arithmetic weighted average (IVIFWAA) operator and the interval-valued intuitionistic fuzzy geometric weighted average (IVIFWGA) operator are proposed to perform nonlinear aggregation operation on interval intuitionistic fuzzy numbers. (2) The threat assessment index system for typical missile defense combat elements is constructed, and the interval-valued intuitionistic fuzzy entropy based on cosine function (IVIFECF) is proposed to solve the problem of inconsistency with intuitive facts when the deviation between membership and nonmembership are equal. (3) Considering the objective, subjective and time series weights, a comprehensive weight model of target attributes based on IVIFECF is constructed to process multi-time situation information. (4) An ordering method of interval-valued intuitionistic fuzzy numbers based on improved TOPSIS is proposed to improve the discrimination ability between decision-making results. (5) The evaluation model based on IVIFEC-IVIFWA-TOPSIS is constructed by aggregating the multi-target attributes and multi-expert decision-making information multiple times, which improves the reliability and accuracy of missile defense target threat evaluation.
We conclude this section by outlining the remainder of the article. Section 2 introduces the related concepts of interval-valued intuitionistic fuzzy sets(IVIFSs). Section 3 defines new interval-valued intuitionistic fuzzy operation rules. Section 4 defines new interval-valued intuitionistic fuzzy weighted average operators and proves their algebraic properties. In Section 5, we present a missile defense dynamic multi-time fusion target threat assessment method based on IVIFECF-IVIFWA-TOPSIS. Numerical examples and an analysis of the performance of the proposed algorithm are given in Section 6. We summarize the results and provide conclusions in the seventh part.
Interval-Valued Intuitionistic Fuzzy Sets
Definition 1 [23]. Let X = {x_1, x_2, ..., x_n} be a non-empty set. An interval-valued intuitionistic fuzzy set A on X can be represented as A = {⟨x, µ_A(x), v_A(x)⟩ | x ∈ X}, where µ_A(x) ⊆ [0, 1] and v_A(x) ⊆ [0, 1] are the membership and non-membership intervals of A associated with x in X. Further, for any x ∈ X, the condition 0 ≤ sup µ_A(x) + sup v_A(x) ≤ 1 is satisfied. For ease of explanation, the pair of intervals attached to a single element can be written as the ordered pair α = ([a, b], [c, d]), with [a, b] = µ_A(x) and [c, d] = v_A(x), which is called an interval-valued intuitionistic fuzzy number (IVIFN). Let Θ denote the set of all IVIFNs and let w = (w_1, ..., w_n) be a weight vector with w_i ≥ 0 and Σ w_i = 1. Then the classical interval-valued intuitionistic fuzzy weighted averaging operator IIFWA: Θ^n → Θ and the interval-valued intuitionistic fuzzy weighted geometric operator IIFWG: Θ^n → Θ are, respectively,

IIFWA(α_1, ..., α_n) = ([1 − Π_i (1 − a_i)^{w_i}, 1 − Π_i (1 − b_i)^{w_i}], [Π_i c_i^{w_i}, Π_i d_i^{w_i}]),
IIFWG(α_1, ..., α_n) = ([Π_i a_i^{w_i}, Π_i b_i^{w_i}], [1 − Π_i (1 − c_i)^{w_i}, 1 − Π_i (1 − d_i)^{w_i}]).
The Order of Interval-Valued Intuitionistic Fuzzy Numbers
The aggregated result of interval-valued intuitionistic fuzzy information is still an interval-valued intuitionistic fuzzy number, so the ordering of interval-valued intuitionistic fuzzy numbers is of great significance to fuzzy decision-making. The score function and the accuracy function are classical methods for ordering interval-valued intuitionistic fuzzy numbers.
New Operations of Interval-Valued Intuitionistic Fuzzy Numbers
Definition 5 gives the basic rules of interval-valued intuitionistic fuzzy operations. Based on these operating rules, a large number of interval-valued intuitionistic fuzzy aggregation operations can be defined. It should be noted, though, that the rules for aggregating interval-valued intuitionistic fuzzy information are not unique. At present, operation rules based on the Einstein t-norm and interactive operation rules based on the degrees of membership and non-membership have been successively proposed [25][26][27] and used to solve interval-valued intuitionistic fuzzy multi-attribute decision-making problems. However, the relationship between these new operations and the classical operations is not clear, and some interactive operations lack the algebraic properties of the classical operations, which affects decision analysis. Therefore, this paper proposes a new interval-valued intuitionistic fuzzy operation rule based on algebraic operations and analyzes its operational characteristics in depth.
Definition 6. Let α = ⟨[a_1, b_1], [c_1, d_1]⟩ and β = ⟨[a_2, b_2], [c_2, d_2]⟩ be any two interval-valued intuitionistic fuzzy numbers, and define the addition operation "⊕" and the multiplication operation "⊗" componentwise through the functions f and g below. Explanation 1. For the functions f(x, y) = (x + y − 2xy)/(1 − xy) and g(x, y) = xy/(x + y − xy), Definition 6 can be expressed as α ⊕ β = ⟨[f(a_1, a_2), f(b_1, b_2)], [g(c_1, c_2), g(d_1, d_2)]⟩ and α ⊗ β = ⟨[g(a_1, a_2), g(b_1, b_2)], [f(c_1, c_2), f(d_1, d_2)]⟩.
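To make the reconstructed rules above concrete, the following short Python sketch implements the componentwise reading of Definition 6 via the functions f and g from Explanation 1. The assignment of f to the membership bounds of ⊕ (and of g to its non-membership bounds), together with the example values, is an assumption inferred from Explanation 1 rather than a verbatim reproduction of the paper's equations.

# Illustrative sketch only: componentwise "+" / "x" on interval-valued intuitionistic
# fuzzy numbers (IVIFNs), assuming the f/g construction implied by Explanation 1.
def f(x, y):
    # behaves like an s-norm-type combiner; undefined when x * y == 1
    return (x + y - 2 * x * y) / (1 - x * y)

def g(x, y):
    # behaves like a t-norm-type combiner; undefined when x == y == 0
    return (x * y) / (x + y - x * y)

def ivifn_add(alpha, beta):
    (a1, b1), (c1, d1) = alpha
    (a2, b2), (c2, d2) = beta
    return ((f(a1, a2), f(b1, b2)), (g(c1, c2), g(d1, d2)))

def ivifn_mul(alpha, beta):
    (a1, b1), (c1, d1) = alpha
    (a2, b2), (c2, d2) = beta
    return ((g(a1, a2), g(b1, b2)), (f(c1, c2), f(d1, d2)))

alpha = ((0.4, 0.5), (0.2, 0.3))   # <[0.4, 0.5], [0.2, 0.3]>, hypothetical values
beta = ((0.3, 0.6), (0.1, 0.2))
print(ivifn_add(alpha, beta))
print(ivifn_mul(alpha, beta))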
Proof of Theorem 1:
By Definition 6 we obtain: Then, it follows that: By the same logic:
Proof of Theorem 2:
We employ mathematical induction to prove the result as follows: (1) When n = 2 Hence, when n = 2 the result holds.
(2) Assume the result holds for n = m. Letting n = m + 1, we obtain: This completes the proof. □
Proof of Theorem 3:
By Theorems 1 and 2 we obtain: This completes the proof. □ By extending Theorems 2 and 3 to any non-negative real number λ ≥ 0, a new interval-valued intuitionistic fuzzy operation rule can be obtained, as shown in Definition 7. Definition 7. For any interval-valued intuitionistic fuzzy number α = ⟨[a, b], [c, d]⟩ and real number λ ≥ 0, we can define the following operations:
Explanation 3.
When λ > 1, the denominators are not equal to 0, so the case 0 ≤ λ ≤ 1 needs particular attention. Define the two auxiliary functions p(x, y) and q(x, y) appearing in Definition 7; we note that p(x, y) is undefined at (0, 1) and q(x, y) is undefined at (0, 0).
Proof of Theorem 4:
Since By the same logic, λ ⊙ α = α C ⊙λ C This completes the proof. □
Proof of Theorem 5:
Theorem 5 follows trivially from Theorems 1-4. Based on Theorem 5 we obtain: It can be seen from Theorems 1-5 that the new interval-valued intuitionistic fuzzy operation satisfies idempotence, which is essential for the fusion of interval-valued intuitionistic fuzzy information. □
New Interval-Valued Intuitionistic Fuzzy Weighted Average Operator Model
Based on Theorems 1-5, new interval-valued intuitionistic fuzzy aggregation operations can be defined, and new models of interval-valued intuitionistic fuzzy weighted arithmetic averaging (IVIFWAA) operators and interval-valued intuitionistic fuzzy weighted geometric averaging (IVIFWGA) operators can be proposed.
Interval-Valued Intuitionistic Fuzzy Arithmetic Weighted Average Operator
Then, IVIFWAA is an interval-valued intuitionistic fuzzy arithmetic weighted average operator.
Proof of Theorem 6:
The result follows from Definition 6 and 7. □
Interval-Valued Intuitionistic Fuzzy Geometric Weighted Average Operator
Then, IVIFWGA is an interval-valued intuitionistic fuzzy geometric weighted average operator.
Proof of Theorem 7:
The result follows directly from Theorem 6. Based on the new interval-valued intuitionistic fuzzy operation properties defined in this paper, it can be deduced that the IVIFWAA and IVIFWGA operators have the following properties. (1) Idempotence: if the interval-valued intuitionistic fuzzy numbers being aggregated are all equal, the aggregated result equals that common value. (2) Monotonicity: for two sets of interval-valued intuitionistic fuzzy numbers α_1, α_2, ..., α_n and β_1, β_2, ..., β_n, if α_i ≤ β_i for all i ∈ {1, 2, ..., n}, then by the basic monotonicity of the operations the aggregated value of the α_i is no greater than that of the β_i. (3) Commutativity: since the new interval-valued intuitionistic fuzzy operations defined in this paper satisfy commutativity and associativity, IVIFWAA and IVIFWGA also satisfy commutativity. □
IVIFECF-IVIFWA-TOPSIS Assessment Model
The IVIFECF-IVIFWA-TOPSIS assessment model is an effective combination of interval-valued intuitionistic fuzzy set theory and dynamic multi-attribute group decision-making theory. First, the factors that affect the threat assessment of missile defense targets are broken down at multiple levels to establish a threat assessment index system. Then, considering the subjective and objective weights, a comprehensive weight model of target attributes based on IVIFECF is proposed, and the Poisson distribution method is used to solve the weights of the time series in order to process multi-time situation information. Furthermore, in order to reflect the ambiguity of complex decision-making problems, the decision information is described by interval-valued intuitionistic fuzzy numbers, and a weighted interval-valued intuitionistic fuzzy (WIVIF) decision matrix is constructed. Next, the IVIFWAA/IVIFWGA operator is used to aggregate the decision information of multiple times and multiple decision makers, and the time-series weights are combined to determine the dynamic multi-time fusion WIVIF decision matrix. Finally, the interval-valued intuitionistic fuzzy numbers are sorted based on the improved TOPSIS method and the threat assessment results are obtained. The flowchart of the IVIFECF-IVIFWA-TOPSIS assessment model is shown in Figure 1.
Description of the Problem
The missile defense dynamic fusion target threat assessment is based on multiple experts quantifying, evaluating, and sorting the attribute values of each incoming target at multiple moments to provide a basis for firepower allocation; that is, it is a typical dynamic multi-attribute group decision-making problem. Each threat target can be regarded as an alternative plan. Suppose m incoming targets form a solution set X = {x_i | i = 1, 2, ..., m}. Target information is collected at p time points, and we denote the time series by T = {t_k | k = 1, 2, ..., p}. Each target has n attributes, and the attribute set is denoted by C = {c_j | j = 1, 2, ..., n}. The set of decision makers is D = {D_s | s = 1, 2, ..., q}, and r_ij^s(t_k) = ⟨[μ_ij^L(t_k), μ_ij^U(t_k)], [ν_ij^L(t_k), ν_ij^U(t_k)]⟩ is the assessment information of the decision maker D_s on the target x_i at the moment t_k with respect to the attribute c_j. Here [μ_ij^L(t_k), μ_ij^U(t_k)] ⊆ [0, 1] is the interval to which the decision maker D_s judges that the target x_i satisfies the membership of the attribute c_j at the moment t_k, and [ν_ij^L(t_k), ν_ij^U(t_k)] ⊆ [0, 1] is the corresponding non-membership interval. Lastly, we note that μ_ij^U(t_k) + ν_ij^U(t_k) ≤ 1.
Construction of Threat Assessment Index System
The target characteristics of ballistic missiles can be described by a variety of indicators. Based on the detection information of the incoming ballistic target by the missile defense system sensors, this article starts with target status, target characteristics, and key characteristics, then considers the five aspects of target speed, distance, Radar Cross Section (RCS), interference intensity, and the defense capability of key areas, as secondary index threat factors to construct a missile defense combat target threat assessment index system, as shown in Figure 2.
Quantification of Threat Assessment Indicators
With the attribute characteristics of the target group of ballistic missiles and the types of assessment indicators in mind, this paper uses a semi-S-shaped distribution, a semi-Z-shaped distribution and the G. A. Miller 9-level theory to quantify the threat assessment indicators.
(1) Target distance and RCS attributes. The closer the target is to our defended point, the shorter the time it needs to reach the route-shortcut axis of our position and, therefore, the greater the threat of the target. RCS is used as an indicator of stealth performance: the smaller the target RCS, the less likely the radar is to detect the target, and the greater the threat of the target. Therefore, the target distance and RCS attributes obey the semi-S-shaped distribution. The corresponding membership and non-membership degrees are computed as shown in Equation (9) and Equation (10), respectively.
(2) Target speed attribute. Flight speed is an important attribute of the target: the faster the flight speed, the smaller the interception time window and the greater the threat to our strategic location. Thus, the target speed obeys the semi-Z-shaped distribution, and its membership and non-membership degrees are computed as shown in Equation (11) and Equation (12), respectively.
(3) Target interference intensity and key-area defense capability attributes. The stronger the target's interference intensity, the stronger its penetration capability, the more difficult it is for the missile defense system to intercept, and the greater the threat of the target. The key-area defense capability is closely related to our weapon system's combat capability and operational deployment: the stronger the key-area defense capability, the smaller the target threat. These two indicators are qualitative and can be described quantitatively using G. A. Miller's 9-level quantification theory [13]. The correspondence between the quantification results and interval-valued intuitionistic fuzzy numbers is shown in Table 1.
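Since Equations (9)-(12) and Table 1 are not reproduced in this excerpt, the following Python sketch only illustrates the general idea of this quantification step: a rising semi-S-shaped membership function, a falling semi-Z-shaped membership function, and a lookup table for the two qualitative indicators. All breakpoints, the hesitancy margin, and the 9-level lookup values are hypothetical placeholders, not the paper's actual parameters; which function applies to which indicator, and in which direction, is fixed by Equations (9)-(12).

# Illustrative sketch of indicator quantification; all parameters are hypothetical.
def semi_s(x, a, b):
    # rising semi-S-shaped membership on [a, b]
    if x <= a:
        return 0.0
    if x >= b:
        return 1.0
    m = (a + b) / 2.0
    if x <= m:
        return 2.0 * ((x - a) / (b - a)) ** 2
    return 1.0 - 2.0 * ((b - x) / (b - a)) ** 2

def semi_z(x, a, b):
    # falling semi-Z-shaped membership: mirror image of the semi-S shape
    return 1.0 - semi_s(x, a, b)

def to_ivifn(mu, hesitancy=0.1):
    # wrap a crisp membership value into an IVIFN with a small, assumed hesitancy
    mu_l, mu_u = max(mu - 0.05, 0.0), min(mu + 0.05, 1.0)
    nu_u = 1.0 - mu_u
    nu_l = max(nu_u - hesitancy, 0.0)
    return ((mu_l, mu_u), (nu_l, nu_u))

# hypothetical 9-level lookup for the qualitative indicators (interference, defense capability)
MILLER_9 = {k: to_ivifn(k / 9.0) for k in range(1, 10)}

x = 0.62  # a normalized indicator reading, hypothetical
print(to_ivifn(semi_s(x, 0.0, 1.0)), to_ivifn(semi_z(x, 0.0, 1.0)), MILLER_9[7])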
Integrated Weight Model of Target Attributes Based on IVIFECF
In the multi-attribute group decision-making process of dynamic multi-time fusion target threat assessment involving multiple decision makers, it is necessary to comprehensively consider the objective weights caused by the differences in the attributes of the targets and the subjective weights caused by the decision maker's subjective experience and knowledge structure to determine the comprehensive weight of target attributes.
The decision entropy method is a typical objective weight determination method, which can indicate the relative importance of target attributes. The smaller the interval-valued intuitionistic fuzzy entropy of an attribute, the less uncertain the corresponding information is, and the larger the weight assigned to that attribute should be. Existing measures of interval-valued intuitionistic fuzzy entropy are inconsistent with intuition when the deviations of the membership degree and the non-membership degree are equal [20,21]. To overcome this problem, this paper proposes an integrated weight model of target attributes based on IVIFECF. We define IVIFECF and verify its effectiveness below.
Definition 10. Let A ∈ IVIFS(X). Then the interval-valued intuitionistic fuzzy entropy E_A based on the cosine function can be defined as in Equation (13). Theorem 8. The interval-valued intuitionistic fuzzy entropy E_A based on the cosine function has the following properties. Proof of Theorem 8: Therefore, E_A = 0. Let A and B be two IVIFSs in the universe X. If the degree of ambiguity of A is greater than that of B, then it follows from Equation (13) that E_A > E_B, which is consistent with intuition. Table 2 shows the calculation results of Example 1 obtained with the entropy of IVIFSs in reference [20] and the entropy measures of IVIFSs in reference [21]. It can be seen that the interval-valued intuitionistic fuzzy entropy based on the cosine function proposed in this paper can effectively describe the uncertainty of the fuzzy set, and overcomes the problem that the existing entropy methods are inconsistent with intuition when the deviations of the membership degree and non-membership degree are equal.
On this basis, a nonlinear programming model based on minimizing IVIFECF is established to solve for the objective weights of the target attributes. The steps are as follows. Step 1: Determine the IVIF decision matrix at the moment t_k. Step 2: Using Equation (13), calculate the target attribute interval-valued intuitionistic fuzzy entropy E_j(t_k) at the moment t_k. Step 3: Establish a nonlinear programming model based on minimizing IVIFECF, as in Equation (14), where ω_j^(1)(t_k) is the objective weight of attribute j of the target at the moment t_k.
Step 4: Solve for the objective attribute weights of the target ω^(1)(t_k) = (ω_1^(1)(t_k), ω_2^(1)(t_k), ..., ω_n^(1)(t_k)). To do this, establish a Lagrange function for Equation (14), take the derivatives with respect to ω_j^(1)(t_k) and λ, and set them equal to 0 to obtain Equation (15). Solving Equation (15) gives the objective attribute weights of the target at the moment t_k, as in Equation (16). Step 5: Suppose the subjective attribute weight vector provided by the decision maker D_s is given. We use the product rule to obtain the comprehensive weight of the target attributes for the decision maker D_s at the moment t_k, as shown in Equation (17).
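Because Equations (14)-(17) are not reproduced in this excerpt, the short Python sketch below uses the standard entropy-weight normalization ω_j ∝ (1 − E_j) as a stand-in for the closed-form solution of the Lagrange system, and the product rule for combining objective and subjective weights. Both choices, and the numerical values, are assumptions for illustration only.

# Illustrative sketch: objective weights from entropies and product-rule combination.
# The normalization w_j ∝ (1 - E_j) is a common entropy-weight form, used here only
# as a stand-in for Equation (16); it is not taken from the paper.
def objective_weights(entropies):
    scores = [1.0 - e for e in entropies]
    total = sum(scores)
    return [s / total for s in scores]

def comprehensive_weights(objective, subjective):
    # product rule: combine the two weight vectors and renormalize
    prod = [o * s for o, s in zip(objective, subjective)]
    total = sum(prod)
    return [p / total for p in prod]

E = [0.42, 0.55, 0.61, 0.38, 0.70]         # hypothetical entropies E_j(t_k)
w_subj = [0.25, 0.20, 0.15, 0.25, 0.15]    # hypothetical subjective weights
w_obj = objective_weights(E)
print(w_obj)
print(comprehensive_weights(w_obj, w_subj))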
Time Series Weight Model Based on Poisson Distribution Method
In missile defense operations, the threat level of the target changes dynamically with time and the battlefield situation. In order to improve the accuracy of target threat assessment, it is necessary to consider not only the target information at the current moment but also the information at earlier points of the time series. In actual missile defense combat, the closer a moment is to the current moment, the greater the impact of the target information acquired at that moment on the result of the threat assessment. Therefore, this paper uses the target information collected at the current time t_p and the preceding times, and uses the Poisson distribution method in inverse form to solve for the time-series weights η = (η_1, η_2, ..., η_p), as shown in Equation (18), where η_k ≥ 0 satisfies ∑_{k=1}^{p} η_k = 1 and 0 < φ < 2.
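Equation (18) is not shown in this excerpt; the sketch below therefore uses one common "inverse Poisson" weighting, η_k ∝ e^(−φ) φ^(p−k)/(p−k)!, normalized so the weights sum to one, purely as an assumed illustration of how moments closer to the present receive larger weights. The value of φ is a placeholder.

import math

# Illustrative inverse-Poisson time-series weights; the exact form of Equation (18)
# is an assumption here. With phi < 1 the weights increase monotonically towards
# the current moment (larger k).
def poisson_time_weights(p, phi):
    raw = [math.exp(-phi) * phi ** (p - k) / math.factorial(p - k) for k in range(1, p + 1)]
    total = sum(raw)
    return [r / total for r in raw]

print(poisson_time_weights(p=3, phi=0.8))  # e.g. three observation moments t1, t2, t3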
Multi-Source Information Aggregation Based on IVIFWAA/IVIFWGA Operators
The missile defense dynamic target threat assessment in an uncertain environment needs to be based on the comprehensive weight calculation results for the target attributes. Starting with the WIVIF decision matrix R_s(t_k) of the decision maker D_s at the moment t_k, we perform interval-valued intuitionistic fuzzy information aggregation on the decision information of multiple target attributes, multiple moments, and multiple experts. The IVIFWAA and IVIFWGA operators proposed in this paper have algebraic properties such as idempotence, boundedness, monotonicity, and commutativity, and they have important advantages in information fusion. Therefore, this article uses the IVIFWAA/IVIFWGA operators to aggregate multi-source information.
Suppose the WIVIF decision matrix of the decision maker D_s at time t_k is recorded as R_s(t_k) = (r_ij^s(t_k))_{m×n}. After determining the WIVIF decision matrix, the IVIFWAA/IVIFWGA operator is used for each decision maker to aggregate the assessment results of each solution over all attributes, so that the assessment results of each decision maker for all solutions can be obtained. If we use the IVIFWAA operator for aggregation, we obtain: If we use the IVIFWGA operator for aggregation, we obtain: Furthermore, the IVIFWAA/IVIFWGA operator is used to aggregate the assessment results of all decision makers for the same solution, and Z_i (i = 1, 2, ..., m) is used to represent the final assessment result of the solution X_i.
If we use the IVIFWAA operator for aggregation, we obtain: If we use the IVIFWGA operator for aggregation, we obtain: Using Equations (20)-(35), the interval-valued intuitionistic fuzzy information of multiple target attributes, multiple decision makers, and multiple times can be aggregated, laying the foundation for dynamic fusion target threat assessment.
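The aggregation formulas themselves are given by Equations (20)-(35), which are not reproduced here. As a simple stand-in, the following sketch aggregates IVIFNs with the classical interval-valued intuitionistic fuzzy weighted averaging operator; it illustrates the data flow (attributes aggregated per expert, then experts aggregated per target) but does not use the new operation rules of this paper. All numerical values are hypothetical.

# Illustrative sketch using the classical IVIFWA operator (not the paper's new rules):
# IVIFWA(a_1..a_n; w) = <[1 - prod(1-a_i)^w_i, 1 - prod(1-b_i)^w_i],
#                        [prod(c_i)^w_i, prod(d_i)^w_i]>
def ivifwa(ivifns, weights):
    prod_a = prod_b = prod_c = prod_d = 1.0
    for ((a, b), (c, d)), w in zip(ivifns, weights):
        prod_a *= (1.0 - a) ** w
        prod_b *= (1.0 - b) ** w
        prod_c *= c ** w
        prod_d *= d ** w
    return ((1.0 - prod_a, 1.0 - prod_b), (prod_c, prod_d))

# one target, three attributes, hypothetical weighted decision values r_ij^s(t_k)
row = [((0.55, 0.65), (0.20, 0.30)),
       ((0.40, 0.50), (0.30, 0.40)),
       ((0.60, 0.70), (0.10, 0.20))]
attr_weights = [0.4, 0.35, 0.25]
z_single_expert = ivifwa(row, attr_weights)
print(z_single_expert)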
Ordering Method of Interval-Valued Intuitionistic Fuzzy Numbers Based on Improved TOPSIS
The data obtained from aggregation using the IVIFWAA/IVIFWGA operator are still interval-valued intuitionistic fuzzy numbers. Thus, to obtain the threat ordering of the incoming targets, it is necessary to compare the magnitudes of the interval-valued intuitionistic fuzzy numbers. Tan et al. [28] verified that the interval-valued intuitionistic fuzzy number ordering method based on TOPSIS is highly useful for classification. In order to improve the ability to distinguish and differentiate between decision-making results, this paper builds on the results in [28], considers the influence of hesitation on the distance measurement, and proposes an improved interval-valued intuitionistic fuzzy number distance measurement model.
Definition 11. Let α = ⟨[a_1, b_1], [c_1, d_1]⟩ and β = ⟨[a_2, b_2], [c_2, d_2]⟩ be two interval-valued intuitionistic fuzzy numbers. The improved interval-valued intuitionistic fuzzy number distance measurement model can then be defined as:
In the dynamic multi-time fusion WIVIF decision matrix H = (⟨[μ_ik^L, μ_ik^U], [ν_ik^L, ν_ik^U]⟩)_{m×p}, the positive ideal solution is the solution with the greatest threat degree among all targets, and the negative ideal solution is the solution with the least threat degree.
The positive ideal solution of H and the negative ideal solution of H are defined componentwise from the most and the least threatening assessments, respectively. According to Definition 11, the respective distances d_i^+ and d_i^- between each target x_i and the positive and negative ideal solutions of the dynamic multi-time fusion WIVIF decision matrix are computed as shown in Equations (36) and (37). Based on the TOPSIS principle, the relative closeness ζ_i of the target x_i, which is taken as the threat degree of the target, is given by Equation (38).
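Definition 11 and Equations (36)-(38) are not reproduced in this excerpt, so the distance used in the sketch below is an assumed stand-in: a normalized Hamming-type distance that also includes the hesitancy bounds, in the spirit of a hesitation-aware measure. The ideal solutions are formed componentwise from fused per-target values, and the relative closeness follows the usual TOPSIS ratio; all numbers are hypothetical.

# Illustrative TOPSIS-style ordering of per-target IVIFNs; the distance formula is an
# assumption, not the paper's Definition 11.
def hesitancy(ivifn):
    (a, b), (c, d) = ivifn
    return (1.0 - b - d, 1.0 - a - c)  # lower and upper hesitancy bounds

def dist(x, y):
    (a1, b1), (c1, d1) = x
    (a2, b2), (c2, d2) = y
    (p1l, p1u), (p2l, p2u) = hesitancy(x), hesitancy(y)
    terms = [abs(a1 - a2), abs(b1 - b2), abs(c1 - c2), abs(d1 - d2),
             abs(p1l - p2l), abs(p1u - p2u)]
    return sum(terms) / 6.0

# fused value h_i for each of four targets (hypothetical numbers)
H = [((0.62, 0.72), (0.15, 0.25)),
     ((0.35, 0.45), (0.40, 0.50)),
     ((0.58, 0.66), (0.20, 0.30)),
     ((0.50, 0.60), (0.25, 0.35))]

pos_ideal = ((max(h[0][0] for h in H), max(h[0][1] for h in H)),
             (min(h[1][0] for h in H), min(h[1][1] for h in H)))
neg_ideal = ((min(h[0][0] for h in H), min(h[0][1] for h in H)),
             (max(h[1][0] for h in H), max(h[1][1] for h in H)))

closeness = []
for h in H:
    d_plus, d_minus = dist(h, pos_ideal), dist(h, neg_ideal)
    closeness.append(d_minus / (d_plus + d_minus))
ranking = sorted(range(len(H)), key=lambda i: closeness[i], reverse=True)
print(closeness, [i + 1 for i in ranking])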
Algorithm Flow
The specific steps of the missile defense dynamic fusion target threat assessment method based on IVIFECF-IVIFWA-TOPSIS are as follows: Step 1: Construct a threat assessment index system and use Equations (9)-(12) to quantify each threat assessment index.
Step 2: Calculate the comprehensive weight ω s (t k ) = (ω s 1 (t k ), ω s 2 (t k ), · · · , ω s n (t k )) of the target attribute of the decision maker D s at the moment t k according to Equations (13)- (17).
Step 3: Calculate the WIVIF decision matrix R s (t k ) = (r s ij (t k )) m×n of the decision maker D s at the moment t k according to Equation (19).
Step 4: Use the IVIFWAA/IVIFWGA operator to aggregate the assessment results of each solution for each attribute and calculate the WIVIF decision matrix R(t k ) = (r s i (t k )) m×q of all decision makers at the moment t k according to Equations (20)- (27).
Step 5: Use the IVIFWAA/IVIFWGA operator to aggregate the decision information of q decision makers and obtain the intuitionistic fuzzy value Z i (t k ) of the target interval at a single moment according to Equations (28)-(35).
Step 6: Determine the weight η = (η 1 , η 2 , · · · , η p ) of the time series according to Equation (18) and construct a dynamic multi-time fusion WIVIF matrix H = [h ik ] m×p from time t 1 to t p , where h ik = η k ⊙ Z i (t k ).
Step 7: Obtain H = [h ik ] m×p positive and negative ideal solutions, and calculate the distances d + i and d − i between the target x i and the H positive and negative ideal solutions, as well as the degree of threat ζ i according to Equations (36)-(38), to obtain the final threat assessment result of the target.
Simulation and Result Analysis
Suppose that in a certain missile defense exercise, the sensors of the missile defense system observe four batches of incoming targets x_i (i = 1, 2, 3, 4). After obtaining the attribute data of the targets in interval form at three consecutive times t_k (k = 1, 2, 3), three experts D_s (s = 1, 2, 3) are assigned to assess the five attributes c_j (j = 1, 2, 3, 4, 5), namely speed, distance, RCS, interference intensity, and key-area defense capability, so that the target threat degree can be evaluated.
Algorithm Feasibility Test and Analysis
Step 1: Use Equations (9)-(12) to quantify each threat assessment index, and apply the method proposed in [29] to convert the interval values into interval-valued intuitionistic fuzzy values, as shown in Table 3. Step 2: Solve for the comprehensive weights of the target attributes based on IVIFECF. First, the interval-valued intuitionistic fuzzy entropy of the speed attribute of each target at t_1 is calculated according to Equation (13).
The interval-valued intuitionistic fuzzy entropy of each attribute of each target at t_1-t_3 is then as shown in Table 4. By Equation (16), the objective weights of the target attributes at t_1-t_3 are obtained. After obtaining the objective and subjective weights of the target attributes for t_1-t_3, the comprehensive weights of the target attributes of the experts D_1, D_2, D_3 at t_1-t_3 can be obtained by Equation (17). Step 3: Using Equation (19), we obtain the WIVIF decision matrices R_s(t_k) of the experts at t_1-t_3. Step 4: Use R_s(t_k) and the IVIFWAA operator to aggregate the assessment results of each solution for each attribute, and calculate the WIVIF decision matrices R(t_k) of the three experts at t_1-t_3 using Equations (20)-(23). Step 5: Use the IVIFWAA and IVIFWGA operators to aggregate the assessment results of the three experts to obtain the target interval-valued intuitionistic fuzzy values at each single time from t_1 to t_3, as shown in Tables 5 and 6. Specifically, according to Equation (36), the positive ideal distance between target x_1 and the positive ideal solution of H_IVIFWAA is calculated. It can be seen from Table 8 that the target multi-time fusion threat ordering obtained by the IVIFWAA operator and the IVIFWGA operator is Target 1 > Target 3 > Target 4 > Target 2, which verifies the feasibility of this algorithm.
Algorithm Superiority Test and Analysis
In the missile defense dynamic fusion target threat assessment, time is an important factor that affects the decision result. Table 9 and Figure 3 show the target threat degree and ordering results of the single times t_1-t_3 and dynamic multi-time fusion. We see that the target threat ordering results at different moments are roughly the same, and the targets with the largest and smallest threat levels are consistent. However, even if the target threat degree order is the same, the threat degree of each target is different at different times, and Target 4 and Target 3 are more sensitive to changes in time. At t_1 and t_2, the threat of Target 4 is higher than that of Target 3 and the opposite is true at t_3. It can be seen that the static threat assessment method at a single moment cannot reflect the timing of the target and the dynamic changes of the battlefield. The method proposed in this paper not only considers the tendency of Target 4 to decrease in speed, distance, and RCS threat degree during the entire assessment process, but also considers the threat of a sudden increase in speed, RCS, interference intensity, and defense capability of Target 3 at t_3. Therefore, we can obtain more reliable threat assessment results.
Additionally, in the ever-changing missile defense combat environment, the accuracy of the dynamic fusion target threat assessment method mainly depends on the differences between target threat degrees within the same assessment method. The more obvious the difference, the easier it is to select the optimal solution, that is, the stronger the superiority of the method. Therefore, the superiority degree (SD) of target i over target j can be defined in terms of the threat degrees ζ_i and ζ_j (i, j = 1, 2, ..., m; i ≠ j) of the different targets. In order to verify the effectiveness of the method proposed in this paper, it is compared with the UDIFWA operator, DINFWAA operator, CIIFA operator, IVIFPWA operator, and D-S-P operator proposed in [30][31][32][33][34]. The target threat degree ordering results and superiority degrees of the different methods are shown in Figure 4 and Table 10. The target multi-time fusion threat ordering obtained by using the IVIFWAA operator and the IVIFWGA operator in this paper is the same as in references [30][31][32][33], which shows the effectiveness of the proposed method. The ordering results are slightly different from those in reference [34], but the final decision on the optimal solution is the same.
Note that in the method proposed in this article, the superiority gap between the targets is the largest. When comparing the superiority of Target 1 and Target 3, the superiority of the algorithm in this paper (IVIFWGA) is 1.67, 1.68, 1.92, and 1.69 times that of the methods proposed in [30][31][32][33], respectively. In the superiority comparison of Target 3 and Target 4, the superiority of the algorithm in this paper (IVIFWGA) is 1.53, 1.61, 1.92, and 2.41 times that of the methods proposed in [30][31][32][33], respectively. In the superiority comparison of Target 4 and Target 2, the superiority of the algorithm in this paper (IVIFWGA) is 1.41, 1.74, 1.53, and 1.71 times that of the methods proposed in [30][31][32][33], respectively. The greater the superiority gap, the easier it is for the commander to make decisions. This demonstrates that the method in this paper, by integrating the subjective and objective weights of each attribute of the target and the weight of the time series, considers the degree of change in the relative difference of each attribute and uses the IVIFWAA/IVIFWGA operator to combine decision-making information of multiple target attributes, multiple moments, and multiple experts. This effectively avoids the problem of decision-making errors due to unclear superiority under the influence of subjective factors.
Conclusions
This paper combines interval-valued intuitionistic fuzzy set theory with dynamic multi-attribute group decision-making theory, and a dynamic fusion target threat assessment method for missile defense is proposed. By comparison with static threat assessment and existing dynamic threat assessment methods, the feasibility and superiority of the method in this paper are verified. The main contributions of the proposed model are as follows: (1) New interval-valued intuitionistic fuzzy weighted average operators based on the definition of new interval-valued intuitionistic fuzzy operation rules are proposed. (2) An integrated weight model of target attributes based on IVIFECF is proposed to resolve the inconsistency with intuition that arises when the deviations between membership and non-membership are equal. (3) An ordering method for interval-valued intuitionistic fuzzy numbers based on improved TOPSIS is proposed to improve the ability to distinguish and differentiate between decision-making results. (4) In order to improve the reliability and accuracy of missile defense target threat assessment, an assessment model based on IVIFECF-IVIFWA-TOPSIS is constructed, and the result of dynamic fusion target threat assessment fusing multi-target-attribute, multi-time, and multi-expert decision information is obtained.
In this paper, the influence of the battlefield situation on target threat assessment has not been considered, and the intelligence level of the target threat assessment method needs to be improved. Therefore, future research will focus on the following aspects: (1) Carefully considering the impact of the battlefield situation on target threat in complex battlefield environments.
(2) Applying intelligent simulation technologies, such as reinforcement learning and deep learning, to threat assessment.
| 9,164.4 | 2022-12-01T00:00:00.000 | [
"Computer Science"
] |
Controllable deposition of organic metal halide perovskite films with wafer-scale uniformity by single source flash evaporation
Conventional solution-processing techniques such as the spin-coating method have been used successfully to reveal excellent properties of organic–inorganic halide perovskites (OHPs) for optoelectronic devices such as solar cells and light-emitting diodes, but it is essential to explore other deposition techniques compatible with large-scale production. The single-source flash evaporation technique, in which a single source of the materials of interest is rapidly heated and deposited in a few seconds, is one of the candidate techniques for large-scale thin film deposition of OHPs. In this work, we investigated the reliability and controllability of the single-source flash evaporation technique for methylammonium lead iodide (MAPbI3) perovskite. In-depth statistical analysis was employed to demonstrate that the MAPbI3 films prepared via flash evaporation have an ultrasmooth surface and uniform thickness over the 4-inch wafer scale. We also show that the thickness and grain size of the MAPbI3 film can be controlled by adjusting the amount of source material and the number of deposition steps. Finally, the excellent large-area uniformity of the physical properties of the deposited thin films can be transferred to uniformity in the device performance of MAPbI3 photodetectors prepared by flash evaporation, which exhibited a responsivity of 51 mA/W and a detectivity of 9.55 × 10^10 Jones.
The flash evaporation method has gained attention as a candidate for evaporating two or more precursors from a single thermal source by rapidly raising the temperature in a very short time 20,30,31,[38][39][40][41] . In principle, the rapid vaporization induces complete and uniform evaporation of the precursors while maintaining the same ratio between the different components of the OHP. Solar cells with flash evaporated OHP films have exhibited power conversion efficiencies of over 10% 39,41 , which is comparable to early-stage spin-coated OHP films 17,42 . Furthermore, the flash evaporation method has been expanded to deposit OHP films with mixed cation and halide species 30 , which is challenging for the aforementioned other evaporation methods 28 . Although this aspect of flash evaporation presents a prospect of exploring a diverse compositional range of OHPs, there have been relatively few reports that have systematically studied the controllability of the flash evaporation method and the uniformity of OHP films produced by this method. In particular, flash evaporated OHP films have only been reported to be uniform over small areas, and wafer-scale uniformity has rarely been investigated to assess their applicability for mass-producing devices with uniform performance. In this paper, we demonstrate that OHP films with wafer-scale uniformity can be formed by flash evaporation. In addition, it is difficult to monitor the deposition rate and control the resulting film thickness with flash evaporation due to the rapid nature of the evaporation process, unlike other methods. For optoelectronic devices, the thickness of the active layer is critical in determining the device performance 43,44 . Therefore, a reliable deposition of OHP films with controllability over a wide range of target thicknesses is desired in order to meet the different film-characteristic requirements of various device applications. Our study directly shows that the thickness of flash evaporated OHP films can be controlled by simply adjusting the mass of the source material. Similarly, we discovered that the grain size of the flash evaporated OHP films varied with the mass of the source material loaded, and that the grain size could even be controlled by introducing multi-step depositions.
Results and discussion
In this study, we focused on the deposition of MAPbI 3 films (see Fig. 1a for the crystal structure) by flash evaporation. Figure 1b shows a schematic image of the flash evaporation process adopted in this work. Pre-synthesized MAPbI 3 single crystal powder was used as the source instead of PbI 2 and MAI precursors (see the inset of Fig. 1b) in order to obtain better quality films owing to the exact stoichiometric ratio between the components of the single crystal 30,45 . The exact amount of single crystal powder was loaded onto a tungsten boat located inside the vacuum chamber. The source-to-substrate distance was designed to be 30 cm, which is the longest among the source-to-substrate distances of flash evaporation reported so far 20,31,38,40,41 , so that a uniform deposition of MAPbI 3 could be achieved over a large area at the substrate end. The MAPbI 3 single crystal powder was heated by rapidly ramping up the heater current to 100 A in 3 s at a constant voltage of 0.31 V. The powder was then evaporated within 60 s and deposited on substrates located at specific positions of the holder. Throughout this paper, we refer to the different sample locations in the 4-inch wafer size substrate holder as labeled in Fig. 1c (substrate locations A to F) to assess the uniformity of the deposited MAPbI 3 film. We checked the quality of the flash evaporated MAPbI 3 films by probing their structural and optical properties, as shown in Figs. 2 and 3. An optical micrograph of the flash evaporated MAPbI 3 film patterned by a shadow mask showed a smooth and clean film with a clearly distinguishable boundary at the edge (see Fig. 2a). The top-surface images of the films measured by field emission scanning electron microscopy (FE-SEM) and atomic force microscopy (AFM) are presented in Fig. 2b,c, respectively. A typical grain size determined from the FE-SEM image is 40 nm, which we discuss further later in the paper. A smooth and pinhole-free surface was observed, with a roughness of approximately 5 nm (2.86 nm locally, Fig. 2c). Figure 3a shows the X-ray diffraction (XRD) results. The green line shows the XRD result of the single crystal powder of MAPbI 3 used as the source, which closely resembles the calculated XRD pattern, signifying that high-purity MAPbI 3 single crystal powder was successfully synthesized. The blue and red lines show the XRD results of the flash evaporated and spin-coated MAPbI 3 films, respectively. The positions of the (110) and (220) peaks were the same for all the XRD results (14.1° and 28.5°, respectively), confirming that the crystal structure of the flash evaporated MAPbI 3 film is identical to that of films prepared by other methods. As no peaks other than the (110) and (220) peaks appeared, the deposited MAPbI 3 films exhibit a strong preferred orientation along the (110) surface 30,32,46,47 . In addition, the high purity of the flash evaporated film is indicated by the absence of the diffraction peak that corresponds to PbI 2 (asterisk mark, 12.6°). Note that this is an interesting observation, because many previous studies 31,38,40,41 have demonstrated that the addition of excess MAI was necessary to deposit pure MAPbI 3 films without PbI 2 impurities (a detailed discussion can be found in Sect. 1 of the Supplementary Information).
UV-visible absorbance and photoluminescence (PL) spectra were taken to investigate the optical properties of the flash evaporated MAPbI 3 film (see Fig. 3b). The optical bandgap was estimated from the absorbance spectrum by using the Tauc plot 48 . When compared with the spin-coated MAPbI 3 film produced as a reference sample, the flash evaporated film showed similar absorbance and PL spectra (see Fig. S1 in the Supplementary Information). From the structural and optical characterizations, we could safely confirm that our flash evaporated MAPbI 3 films had a high film quality without a significant amount of impurities. We then checked that the evaporated perovskite films had a uniform thickness and the same optical properties over the whole wafer. Before testing wafer-scale film uniformity, we compared the film uniformity of the flash evaporated perovskite film with that of a spin-coated perovskite film (reference) on a 1.5 × 1.5 cm 2 substrate. The thickness values of both films were measured by randomly selecting 20 points on cross-sectional FE-SEM images (see Fig. S2 in the Supplementary Information). The average thickness values of the flash evaporated and spin-coated films were similar (207.1 nm and 225.0 nm, respectively), while the standard deviation for the spin-coated film was about 10 times larger (30.2 nm compared to 3.0 nm for the flash evaporated film). Given that the standard deviation value of 3.0 nm for the flash evaporated film is similar to the surface roughness value measured by AFM, the variation in the sampled thickness values can be assumed to be due to the morphology, not to variation in the actual thickness within the film. It can be seen that the film made by flash evaporation has a much more uniform thickness and a smooth surface.
In order to investigate whether there was a change in the thickness depending on the location over the 4-inch wafer, cross-sectional FE-SEM images were taken of the evaporated films at each substrate location labeled according to Fig. 1c (Fig. 4a). The thickness values were measured at 20 points of the film for each substrate in order to carry out a statistical analysis. Figure 4b is a graph summarizing the thickness values extracted from each substrate location, drawn as a box and whisker diagram. The dots within the boxes represent the average values and the boxes show the first and third quartile range of each distribution. The lines inside the boxes represent median values and the whiskers show the minimum and maximum values. The box and whisker diagrams show the similarity in the distribution of the thickness values at the different locations. Figure 4c shows the distribution of all 120 measured thickness values from the different locations shown in Fig. 4b plotted together in one histogram. The thickness values did not significantly deviate from the average value of 115.6 nm (the standard deviation was 3.1 nm) at any substrate location. More importantly, there were no multiple peaks in the normal distribution fit, which suggests that all the thickness values belong to a single distribution. The Tukey-Kramer honest significant difference test (Tukey test) 50 was performed to quantitatively determine whether the distributions of the thickness values at the six different substrate locations (shown in Fig. 4b) can be judged as the same distribution. The Tukey test is a statistical test that compares multiple distributions simultaneously and shows how different they are from each other, which can be used to categorize similar distributions into separate groups. The detailed descriptions and raw data are presented in Sect. 4 of the Supplementary Information. Figure 4d is a graphical visualization of the Tukey test results. The comparison circles shown in Fig. 4d have their centers aligned with the average thickness values and their radii proportional to the standard deviation values of each distribution. The more the comparison circles overlap, the more similar the distributions are. Here, the comparison circles all overlap and therefore all the distributions can be judged as the same distribution sampled from the same population. An analysis of variance (ANOVA) test 51 was also run to test whether the average values of two or more distributions are statistically identical (see Sect. 4 in the Supplementary Information). Thus, all the average thickness values at each substrate location can be considered statistically identical. To visualize the uniformity in the film thickness over the whole 4-inch wafer, we used a color map to plot the average values of the film thickness at each substrate location from A to F (Fig. 4e). The average thickness values at the different substrate locations differed by less than 2 nm, which is smaller than the standard deviation value of 3.1 nm (Fig. 4c). Figure 4f shows simulation results obtained by Gaussian process regression with the whole set of 120 thickness data. The variation of the predicted thickness across the wafer was as small as approximately 2 Å. In addition to the thickness measurement, UV-visible absorbance and PL spectra were measured for the films deposited at each substrate location to confirm that they all have the same absorbance and PL responses regardless of location (see Fig. 4g and Fig. S3 in the Supplementary Information).
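A minimal sketch of this kind of statistical check is shown below, using SciPy's one-way ANOVA and Tukey HSD routines on synthetic thickness samples. The generated numbers are placeholders drawn around the reported mean and standard deviation, and the exact analysis settings used by the authors (who worked in JMP) are not reproduced here.

import numpy as np
from scipy import stats

# Synthetic thickness samples (nm) at six substrate locations; placeholder data only,
# drawn around the reported mean (115.6 nm) and standard deviation (3.1 nm).
rng = np.random.default_rng(0)
groups = [rng.normal(115.6, 3.1, size=20) for _ in "ABCDEF"]

f_stat, p_anova = stats.f_oneway(*groups)   # one-way ANOVA across locations
tukey = stats.tukey_hsd(*groups)            # pairwise Tukey HSD (requires a recent SciPy)

print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.3f}")
print("smallest pairwise Tukey p-value:", tukey.pvalue[~np.eye(6, dtype=bool)].min())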
S3 in the Supplementary Information). All these results consistently support the wafer-scale uniformity of the flash evaporated perovskite film over the 4-inch wafer. The controllability of the flash evaporation method was demonstrated by depositing various thicknesses of perovskite films by varying the weight of the source materials. The thicknesses of the films were measured by using a cross-sectional FE-SEM as in the uniformity measurement. The thickness increased linearly with increasing the weight of the source from 50 to 750 mg (see the red triangle points in Fig. 5a). However, as the weight of the source exceeded 750 mg, the increase in the thickness became sub-linear. In order to mitigate the nonlinear relationship above the threshold weight of the source of 750 mg, we introduced a multi-step deposition (i.e. the perovskite films were successively deposited multiple times). For example, to deposit a target thickness of 250 nm, 500 mg of the source perovskite powders were deposited twice (a total of 1000 mg), which could then be described by a linear relationship again (see the blue diamond points in Fig. 5a). Figure 5b shows the representative cross-sectional SEM images of MAPbI 3 films deposited with different weights of the source. Flash evaporation with 1500 mg of the source powders does not yield twice the thickness of the MAPbI 3 film with 750 mg of the source powders. However, successively evaporating 750 mg of the source twice gives a MAPbI 3 film twice the thickness (See Fig. 5b).
We discovered that the grain size could also be controlled by varying the weight of the source powders. The grain size tended to increase as the source mass increased (Fig. 5c,d). We also discovered that the grain size did not vary significantly with the number of deposition steps, while the thickness increased linearly for double-step (390 nm) and triple-step (620 nm) evaporated films for a source mass of 750 mg (see Fig. S4 in the Supplementary Information for more details), which potentially provides a way of controlling the grain size independently of the thickness (see the inset of Fig. 4d for the predicted range of grain size for each thickness). The grain size of the crystals in perovskite films, along with the thickness, is an important parameter that determines the device performance of optoelectronic devices. In the case of solar cells, the carriers should be able to move freely from the active layer (the point of generation within it) to the electrodes (where they are extracted), so the larger the grain, the better the collection efficiency 40 . In the case of LEDs, a higher rate of recombination is desired, and therefore a smaller grain size would be required to fabricate LEDs with higher emission efficiencies 52 . Therefore, our findings are highly relevant for investigating the relationship between grain size and device performance of optoelectronic devices based on flash evaporated perovskite films.
In order to demonstrate how the wafer-scale film uniformity discussed so far can be transferred to uniformity in the optoelectronic device performance, we fabricated photodetectors, which are among the most suitable devices due to their simple structure requiring only the deposition of two top contact electrodes on the evaporated perovskite films (see the inset of Fig. 6a for the device structure). For performance comparison, a photodetector using a spin-coated MAPbI 3 film was also fabricated. Figure 6a shows typical current-voltage curves of the photodetector with the evaporated film under light illumination at a wavelength of 532 nm and various laser intensities. The photocurrent gradually increased with increasing laser intensity due to increased photogenerated carrier concentrations (see Fig. S6(a) in the Supplementary Information). The responsivity (R), which is the ratio of the excess current generated by light illumination to the incident light power, was studied. The responsivity decreased as the light power increased (see Fig. S6(b) in the Supplementary Information). This can be attributed to the increase of carrier-carrier scattering or the filling of deep trap states with a longer lifetime, which tends to provide a higher photocurrent at a lower light power [53][54][55]. The estimated responsivity is 51 mA/W for the photodetector with the flash evaporated film and 137 mA/W for the photodetector with the spin-coated film at a bias of 20 V and a light power of 0.84 μW. The detectivity (D*), which is another parameter characterizing the sensitivity of photodetection, was calculated according to D* = R(2eI_dark/A)^(-1/2), where I_dark is the dark current, A is the area of the photosensitive region and e is the elementary charge (see Fig. S6(c) in the Supplementary Information). The highest value of detectivity within the measured range was 9.55 × 10^10 Jones for the photodetector with the flash evaporated film. This is comparable to the detectivity of 1.53 × 10^11 Jones for the device with the spin-coated film. These device performance parameters are comparable to previously reported MAPbI 3 -based photodetectors 31,56-58 and commercial Si photodetectors (< 0.2 A/W) 47,59. Figure 6b displays the repeated on/off operation of the photodetector with the flash evaporated MAPbI 3 film. The device showed relatively fast photoresponses (< 1 s) and stable and reproducible operation during the measurement cycles. Finally, in order to demonstrate how the wafer-scale film uniformity discussed above transfers to uniformity in the photodetector device performance, we fabricated photodetectors with flash evaporated films at different locations (see Fig. 6c). The measured device characteristics were nearly identical regardless of the sample substrate locations (B, C, and F), which shows that wafer-scale uniformity in the device performance can be achieved by our flash evaporation method.
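The two figures of merit quoted above follow directly from the measured currents; the short sketch below evaluates them from the definitions used in the text (R = photocurrent/optical power, D* = R(2eI_dark/A)^(-1/2)). The numerical inputs are placeholders for illustration, not the measured values of this work.

import math

E_CHARGE = 1.602176634e-19  # elementary charge in C

def responsivity(i_light, i_dark, optical_power):
    # R in A/W: excess current generated by illumination divided by incident power
    return (i_light - i_dark) / optical_power

def detectivity(resp, i_dark, area_cm2):
    # D* in Jones (cm Hz^1/2 W^-1), assuming dark-current-limited noise
    return resp * math.sqrt(area_cm2) / math.sqrt(2.0 * E_CHARGE * i_dark)

# placeholder numbers for illustration only
R = responsivity(i_light=6.0e-8, i_dark=9.0e-9, optical_power=0.84e-6)
print(f"R = {R * 1e3:.1f} mA/W, D* = {detectivity(R, 9.0e-9, 2.5e-5):.2e} Jones")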
Conclusions
We designed a single-source flash evaporation setup with a long source-to-substrate distance to deposit MAPbI 3 films directly over a 4-inch wafer. The thicknesses of the films were measured at various locations of the 4-inch wafer and statistically analyzed to demonstrate that the thicknesses of the films were constant throughout the whole 4-inch wafer. The optical properties of the flash evaporated films were also identical throughout the wafer. The correlation between the amount of single crystal perovskite powder loaded into the source and the thickness of the deposited film was studied to demonstrate the controllability of the evaporation. We observed that the deposited MAPbI 3 film thickness was proportional to the source mass up to a critical point, above which the film thickness started to saturate. The proportionality was recovered by introducing multiple deposition steps, which additionally provided a way of controlling the grain size by varying the source mass and the number of deposition steps. The wafer-scale uniformity was preserved for photodetector devices fabricated with flash evaporated MAPbI 3 films. The fabricated devices showed a responsivity of 51 mA/W and a detectivity of 9.55 × 10^10 Jones, which are comparable to previously reported MAPbI 3 -based photodetectors. Our results demonstrate that single-source flash evaporation can be a promising route towards controllably and reliably depositing large-area perovskite films, and therefore producing perovskite-based optoelectronic devices on a large scale. The substrates were sequentially cleaned with acetone, 2-propanol, and deionized water in a sonicator for 10 min at each step. SiO 2 and glass substrates were then exposed to O 2 plasma at 50 W and 30 sccm for 120 s.
Deposition of MAPbI 3 film by flash evaporation. Prepared MAPbI 3 powder was placed into a tungsten boat.
After the pressure in the chamber was pumped down to below 1 × 10^-6 Torr, the substrate holder was rotated at 24 rpm for film uniformity, and the current through the tungsten boat was rapidly increased to 100 A in 3 s. The temperature of the tungsten boat then rose rapidly and the MAPbI 3 powder sublimed. The nominal deposition rate read by the sensor was approximately 50-80 Å/s. When the deposition rate decreased to 0.1 Å/s, the process was terminated; the total deposition time was within 60 s.
Deposition of MAPbI 3 film by spin-coating. Spin-coating was conducted according to the known hot-casting method 2 . A 0.5 M perovskite precursor solution was prepared by dissolving the prepared MAPbI 3 powder in DMF. The cleaned substrate was heated at 120 °C on a hot plate. The heated substrate was then quickly moved to the spin-coater, and the precursor solution was spin-coated on the substrate for 40 s at 5000 rpm.
Fabrication of photodetector. Au top electrode lines with 50 μm width and 50 nm thickness were deposited through a patterned shadow mask on the prepared perovskite film. The electron-beam evaporator pressure was 1 × 10^-6 Torr and the Au deposition rate read on the sensor was approximately 1 Å/s. Film characterization. SEM measurements. The thickness and surface morphology of the perovskite film were analyzed by FE-SEM (JSM-7800F Prime) using an electron beam accelerated at 5 kV for surface morphology studies and 10 kV for thickness studies.
Steady-state PL measurements. Steady-state PL spectra of the thin film samples (glass/MAPbI 3 film) were measured using a spectrofluorometer (JASCO FP-8500). The excitation wavelength was 520 nm, using a Xenon arc lamp (150 W).
Absorbance measurements. The absorbance of the thin film samples (glass/MAPbI 3 film) was measured using a UV/Vis spectrophotometer (PerkinElmer LAMBDA 45).
AFM measurements. Characterization of the perovskite layer surface was performed by an atomic force microscope system (NX 10 AFM, Park Systems).
Device measurement. The photodetector characteristics of the devices were measured using a semiconductor parameter analyzer (Keithley 4200 SCS) and a probe station system (JANIS Model ST-500). All the measurements were performed in a vacuum environment.
Data analysis. All data were analyzed with a statistical analysis program (JMP software).
| 5,020.4 | 2020-11-02T00:00:00.000 | [
"Materials Science",
"Engineering"
] |
Plant Extracts in Probiotic Encapsulation: Evaluation of their Effects on Strains Survivability in Juice and Drinkable Yogurt During Storage and an in-vitro Gastrointestinal Model
The present study evaluated the added value of incorporating plant extracts, including those from moringa, fennel, sage and green tea, during alginate encapsulation on the viability of probiotic bacteria (L. plantarum DSM 20205 and P. acidilactici DSM 20238) in fruit juices (i.e., kiwi, prickly pear and carrot juice) and drinkable yoghurt throughout storage at 4°C. The results revealed that the survival rates of L. plantarum DSM 20205 and P. acidilactici DSM 20238 cells encapsulated with 0.05% (w/v) moringa extract were significantly higher than those of cells encapsulated with fennel and sage after storage for 30 days. The in vitro digestion behaviour and survival of the novel capsules were studied in terms of the survival of L. plantarum DSM 20205 and P. acidilactici DSM 20238 after sequential exposure to simulated salivary, gastric and intestinal fluids. This novel encapsulation additive significantly increased the survival of L. plantarum DSM 20205 and P. acidilactici DSM 20238 compared with the control capsule cells in simulated digestive fluids. Therefore, the appropriate amount of moringa extract for use in culture encapsulation was determined after addition to fruit juices and drinkable yoghurt, and the effect of this extract was compared with the effect of adding green tea extract (a standard plant extract). Green tea and moringa extracts enhanced the stability of probiotic beads in all products compared to the controls after storage. Encapsulated L. plantarum DSM 20205 and P. acidilactici DSM 20238 showed better survivabilities than the control capsules. The studied strains showed better survival in prickly pear juice and drinkable yoghurt throughout storage.
iNtRODUCtiON
The development of innovative foods that promote consumer health has become a priority in the food industry over the last decades. Functional food products have gained very wide acceptance owing to their health benefits 1 . According to FAO/WHO, 2001 2 , "probiotics are live microorganisms that, when administered in sufficient amounts, confer a health benefit on the host". In the early 1900s, Elie Metchnikoff connected the longevity of Bulgarian peasants with their high consumption of fermented milk, and the term probiotics has been used since that time. This effect is attributed to the bacteria present in yogurt, which protect the gastrointestinal tract against the damaging effects of harmful bacteria 3 . Studies on several microorganisms have revealed the benefits available to humans through the use of probiotics, such as decreasing cholesterol levels, reducing lactose intolerance, stimulating the immune system, increasing mineral absorption, and relieving constipation, as well as anti-hypertensive, anti-mutagenic and anti-carcinogenic effects 4,5,6 . The impact of probiotics on human health has been a great advantage for the food industry because these microorganisms represent a significant division within the functional food industry 7 .
The demand for functional foods has grown considerably in recent years, now accounting for 5% of the international food market 3. This increase is correlated with consumer attention, as these products are not only a source of nutrients but also promoters of wellness and health 8,9. Fermented milks and yogurts are the most popular food vehicles for the delivery of probiotics due to their high acceptance by consumers and superior nutritional value 10,11.
Encapsulation has been shown to be an alternative method for safeguarding probiotics from detrimental environmental factors 12. Sodium alginate is commonly used for this purpose because of its simplicity, low cost and biocompatibility 13. Many researchers have reported that alginate combined with other materials, for example, Hi-maize starch 14, inulin, galactooligosaccharides and fructooligosaccharides 15,16, gelatine 12, chitosan, pectin, and glucomannan 17,18, can be used to improve the survival of different probiotic strains under gastrointestinal conditions and in food products during storage.
Currently, the influence of alginate blended with plant extracts on the survival of probiotics in different beverages and foods is not well understood. There is, however, considerable evidence in the literature for the antioxidant potential of extracts from natural plants containing high contents of phenolic and flavonoid compounds (Maisuthisakul et al. 19, Siriwatanametanon et al. 20, Abdel-Razek et al. (2017) 21, Badr et al. 22 and Shehata et al. 23). Consequently, the objective of this investigation was to determine the impact of alginate encapsulation with certain plant extracts on the stability of probiotic bacteria, including L. plantarum DSM 20205 and P. acidilactici DSM 20238, in fruit juices and drinkable yogurt during storage at 4°C for 28 days.
Probiotic cultures
Probiotic bacteria, including Lactobacillus plantarum DSM 20205 and Pediococcus acidilactici DSM 20238, were purchased from the Egypt Microbial Culture Collection (EMCC), Ain Shams, Egypt. Lyophilized cells of L. plantarum DSM 20205 and P. acidilactici DSM 20238 were subcultured in autoclaved MRS broth at 37°C for 20 h. Activated cells were centrifuged at 3000 ×g for 20 min and washed twice with 0.1% (w/v) autoclaved peptone water. Cell counts were adjusted to 10^10 CFU/ml prior to encapsulation.
Encapsulation of probiotics
Probiotic capsules were produced, with some modifications, according to the procedure described by Chaikham et al. 12. Cells were mixed with sterile sodium alginate solution (Sigma-Aldrich, UK), and plant extracts were added at different concentrations (0.05-0.2%, w/v). Capsules were formed by injecting the mixture through a 0.5 mm sterile needle into 0.5 M sterilized calcium chloride solution and were then left to gel for 30 min. After gelation, the beads were washed with sterile 0.85% (w/v) saline and kept at 4°C.
Enumeration of immobilized probiotics
Briefly, one gram of probiotic capsules was diluted in 99 ml of 0.1 M autoclaved phosphate buffer (pH 7) (Merck, Germany) and ground for 10 min; decimal dilutions were then prepared and plated on MRS agar. Plates were incubated anaerobically at 37°C for 24-72 h.
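The paper does not spell out how colony counts were converted to viable numbers; the standard plate-count relationship, assuming plates in the countable range are used, is

$$\text{CFU/g} = \frac{\text{number of colonies on the plate}}{V_{\text{plated}} \times D}$$

where $V_{\text{plated}}$ is the volume plated (ml) and $D$ is the decimal dilution of that plate (for the initial step of 1 g capsules in 99 ml buffer, $D = 10^{-2}$).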
Viability of probiotics encapsulated with herbal extracts during storage
Ten grams of L. plantarum DSM 20205 and P. acidilactici DSM 20238 capsules prepared with plant extracts were kept in glass bottles and stored at 4°C for 30 days. Survival rates were monitored weekly.
Preparation of simulated digestive fluids and in vitro digestion of encapsulated probiotics
Simulated salivary fluid (SSF), simulated gastric fluid (SGF) and simulated intestinal fluid (SIF) were prepared according to the method proposed by Minekus et al. 25 , and the details of the solutions including the stock solutions are presented in Table 1.
Survival of encapsulated probiotics in fruit juices and yogurt during storage
To assess the viability of encapsulated probiotics in fruit juices and drinkable yogurt, 10 g of encapsulated probiotics were inoculated aseptically into 90 ml of pasteurized kiwi, prickly pear or carrot juice, or 90 g of milk, before storage at 4°C. Post inoculation, samples were taken at 0, 7, 14, 21 and 28 days to quantify viable numbers as CFU/ml on MRS agar 12. Product pH changes were monitored.
Data analysis
Results are presented as means ± standard deviations. Analysis of variance (ANOVA) was performed using SPSS 16 (SPSS Inc., USA). Duncan's multiple range test (P<0.05) was used to determine significant differences between treatment means.
Viability of L. plantarum DSM 20205 and P. acidilactici DSM 20238 encapsulated with plant extracts during storage
This study investigated the potential of plant extracts to increase the stability of probiotic strains during storage at 4°C. Plant extracts, including fennel, moringa, sage and green tea, typically contain high amounts of antioxidant components 1,26,27. In this study, encapsulated probiotic cells and free cells were assessed (Table 2). The results showed that 0.05% moringa extract noticeably increased the survival rates of L. plantarum DSM 20205 and P. acidilactici DSM 20238. This is in line with earlier work on plant extracts 27, which concluded that the addition of such extracts during encapsulation had a positive effect on the viability of probiotic cells. This positive effect can be attributed to the antioxidant activity of these plant extracts, which is important to the viability and stability of probiotic bacteria.
In the present investigation, the addition of 0.05% moringa extract to calcium alginate as the encapsulating material had a positive effect on probiotic cells during refrigerated storage. To date, encapsulation with moringa extract has not been reported as a means of improving probiotic stability. Referring to previous studies in this area, Coz-Bolaños et al. 28 and Wang et al. 29 revealed that Moringa oleifera has a high content of antioxidant compounds. Hence, this extract can create an anaerobic environment that promotes probiotic survival owing to its oxygen-scavenging properties 30. This is supported by Shah et al. 31, who found that fruit juices containing antioxidant compounds showed superior probiotic stability during 6 weeks of storage compared with the control sample, which is consistent with the positive effect of moringa extract on the survival and stability of probiotic strains. In summary, L. plantarum DSM 20205 and P. acidilactici DSM 20238 encapsulated with 0.05% moringa extract showed the highest survivability throughout storage at 4°C. Consequently, this amount of extract was selected for evaluating the effects of refrigerated storage on the novel encapsulated probiotic cells in fruit juices and drinkable yogurt, compared with a negative control treatment and with green tea extract as the positive control.
Survival of herbal extract-encapsulated probiotics and control capsule cells in simulated digestive fluids
Probiotics can survive under acidic stomach conditions and throughout the intestines in suitable numbers, between 10^6 and 10^8 CFU/g; however, their viability must be confirmed early in any study 18,32,33. These strains showed a steady loss of viability in simulated digestive fluids, but their sensitivity to SSF, SGF and SIF differed considerably. Dramatic reductions in the numbers of probiotic cells after exposure to simulated mouth fluid (SSF) were observed for both control capsule (CC) and novel capsule (NC) cells of L. plantarum DSM 20205 and P. acidilactici DSM 20238 (Fig. 1 and 2), and the significant loss in CC viability corresponded to ~0.5 log CFU/ml. During the second step, exposure to SGF, which is a two-hour process reflecting the time normally spent by ingested foods in the stomach 34, there was a remarkable decrease in both CC and NC numbers compared with the initial state after the first hour of incubation, but the cell numbers increased in the second hour. Novel capsules showed significantly better survivability (p < 0.05) than CC; the reductions in NC counts were 1.3 and 1 log CFU/ml for moringa extract- and green tea extract-encapsulated cells, respectively, compared with 1.76 log CFU/ml for CC. The last step, exposure to SIF, was the most important part and the target of this work, as this step determines whether these beneficial bacteria can reach the intestine in sufficient live numbers to exert their effects. Our novel capsules successfully improved survival relative to the ordinary CC. Similar results were obtained by Mandal et al. (2006), whose study on free Lactobacillus casei NCDC-29 cells exposed to different concentrations of bile salts showed a decrease in cell counts from 9.34 to 5.60 log CFU/ml. These results show that the presence of plant extracts protects bacterial cells throughout the simulated gastrointestinal system. This effect may be due to their strong antioxidant activity, which has been confirmed in many previous studies 27. As mentioned above, these results strongly support improving the survival of probiotic bacteria in the human digestive system by encapsulating them with plant extracts.
Survival of probiotics encapsulated with plant extracts in fruit juices and drinkable yogurt during storage at 4°C
The survival of L. plantarum DSM 20205 and P. acidilactici DSM 20238 encapsulated with 0.05% moringa extract or green tea extract during storage at 4°C for 30 days was evaluated after aseptically transferring the encapsulated cultures into various fruit juices, namely kiwi, prickly pear and carrot juices. The results in Table 3 show that the cell numbers of all encapsulated cultures in each fruit juice decreased continuously (P<0.05) with increasing storage time. Moreover, all cultures survived better in prickly pear juice than in kiwi and carrot juices.
Our results show that encapsulation with 0.05% moringa extract or green tea extract significantly increased the stability of L. plantarum DSM 20205 and P. acidilactici DSM 20238 in fruit juices during storage compared with the control (Table 3), in line with earlier observations on the survival of probiotics in fruit juices (Nualkaekul et al. 35). Similar to the fruit juices, the surviving populations of probiotics suspended in drinkable yogurt, with and without plant extracts, tended to decrease during storage (Fig. 3). This investigation established that probiotics entrapped with 0.05% moringa extract or green tea extract survived better than probiotics encapsulated without plant extracts. The survival of L. plantarum DSM 20205 appeared to be higher than that of P. acidilactici DSM 20238 after 30 days of storage (Fig. 4). Our findings were consistent with those of Krasaekoopt and Watcharapoka 15, who reported on the survivability of microencapsulated probiotics in a simulated digestive system, fruit juice and drinkable yogurt. Brinques and Ayub 37 studied the effects of immobilization techniques on the survival of lactobacilli in yogurt during refrigerated storage. The addition of green tea extracts had a positive impact on the survival of B. animalis spp. lactis LAFTI-B94, L. acidophilus LAFTI-L10 and L. paracasei LAFTI-L26 during incubation for 72 h at 37°C 27. The authors found that the addition of green tea extract could create a favourable anaerobic environment for probiotic bacteria owing to its oxygen-scavenging and antioxidant characteristics.
CONCLUSIONS
This study evaluated the effect of a novel encapsulation technique using calcium alginate and plant extracts on the stability of probiotic bacteria, including L. plantarum DSM 20205 and P. acidilactici DSM 20238, in fruit juices and drinkable yogurt during storage at 4°C for 28 days. After 4 weeks of storage, the survivability of cells encapsulated with 0.05% moringa extract was significantly higher than that of probiotics encapsulated with fennel and sage extracts. Upon refrigerated storage, both green tea and moringa extracts improved the stability of probiotic capsules in fruit juices and drinkable yogurt compared to the control capsules. Overall, the novel capsules improved the survival of L. plantarum DSM 20205 and P. acidilactici DSM 20238 in prickly pear juice and drinkable yogurt throughout storage. The novel capsules were sequentially subjected to simulated digestive fluids (SSF, SGF, and SIF) in vitro, and the results showed that the extracts enhanced survival and intestinal adhesion capacity and helped maintain a higher level of probiotics in the human digestive system.
Online Learning during Pandemic: Students' Motivation, Challenges, and Alternatives
Abstract: The pandemic prompted a change in the EFL learning mode in higher education. Students had to adjust to a situation that created learning challenges which might obstruct their learning progress, while their motivation and creativity led them to find alternatives. Hence, the present study aimed to investigate the students' motivations, challenges, and alternatives. It was a descriptive qualitative case study that gathered responses through a questionnaire and interviews with 10 students who joined the KSP (short course) English Syntax delivered via a WhatsApp group at the English Education Study Program of UIN Mataram in July 2020. The results showed that students' motivations were increasing their scores in the remedial class (90%) and gaining more knowledge (10%). The challenges faced by students during the online course were poor internet connection (50%), simultaneous agendas (30%), limited WhatsApp features (10%), and anxiety (10%). In response to these challenges, students adopted alternatives such as preparing phone credit (30%), praying for the end of the pandemic (10%), learning extensively (10%), searching for the best place to get an internet signal (20%), setting alarms (10%), and creating on-going motivation (20%). The research showed that online learning requires adequate mental, physical, and financial preparation to compensate for learning deficiencies.
INTRODUCTION
In 2020, education all over the world experienced massive changes because of the emergence and spread of the Covid-19 virus, which was first identified in Wuhan, China in late 2019 (Onyema, 2020). People across the world criticized China for its lack of transparency and for delays in containing the disease. Covid-19 has wreaked havoc on people's lives and on the systems of numerous nations across the world. In the case of Covid-19 outbreaks, lockdown is one way to reduce contact with others and thereby reduce the possibility of Covid-19 spreading.
Emergency remote learning (ERL) is intended as a temporary shift from normal learning modes (Rahiem, 2020). When learning becomes remote (or distant), what is supposed to be face-to-face teaching is transformed into digital education. When a crisis occurs that requires schools to close, emergency remote learning may take the form of online lessons, radio, or mixed learning solutions. In current times, most of the world finds itself dealing with emergency remote learning because of the Covid-19 pandemic. The world faces an unprecedented international health and socio-economic crisis sparked by the Covid-19 epidemic (Chriscaden, 2020). Indonesia has recently witnessed the impacts of the virus on the education sector. Ministries in various countries have required schools and universities to conduct learning through the internet. This online learning aims to increase awareness and to stop the spread of the virus that occurs through direct interaction among people. The transition from face-to-face learning to online classes has forced various parties to work online so that the learning system continues to run well. However, this system did not work as effectively as imagined; in fact, all parties experienced difficulties, not only students but also parents, teachers, and the government.
In a time of crisis, distance education is a method of remote training without frequent face-to-face tutor communication (Burns, 2011). Emergency remote learning (ERL) is a temporary shift of instructional delivery to an alternate delivery mode due to crisis circumstances. In some cases, courses are presented completely online: students never meet the teacher or fellow students face-to-face and may instead take online tutorials, interact with the teacher through social media or a learning management system (LMS), and learn from texts as well as online resources.
Successful distance education students are autonomous learners who take the initiative in their training (Fidyati, 2016). They seek assistance from different resources, other students, and their teachers. They read beyond the core materials to understand issues and actively attempt to improve their learning skills. Teachers, in turn, must improvise, even though conditions are far from perfect (Meiers, 2007). Online education activities that are planned and provided in response to the new situation are not the same as those of a well-planned online education system. KSP (Short Course) English Syntax, one of the courses offered by the English Education Study Program, Faculty of Education and Teachers Training, State Islamic University (UIN) Mataram, also adopted the ERL system. The course was set to run for 14 meetings in two weeks. Teacher and student interactions were directed to avoid face-to-face meetings and applied online learning using WhatsApp. This application was chosen by agreement between the teacher and the students who had enrolled to participate in the course. During the course, many stories emerged regarding students' obstacles to joining the course, their motivation to maintain their attendance, and their creativity in seeking alternatives to overcome the obstacles.
Based on the rationales above, documenting the students' motivations, challenges, and the alternatives they put forward, especially with respect to the teaching and learning process in KSP English Syntax, is important to provide significant information for schools, teachers, and other English students on how to deal with such a critical learning situation. Therefore, this study aims to explore the students' motivations to join KSP English Syntax, the challenges they faced during Emergency Remote Learning (ERL), and the alternatives with which they dealt with such learning barriers amid the Covid-19 outbreak.
METHOD
The present research used a qualitative approach and employed a descriptive case study design. Yin (2009) defines a case study as a scientific inquiry that thoroughly investigates current contextual issues, particularly when the boundary between the context and a certain phenomenon is not clearly evident. For that reason, this study treats the case of COVID-19, which resulted in the emergence of ERL, as a phenomenon worthy of investigation in order to establish an understanding of the challenges and possible alternatives that can be learnt from it. The context of this research was the Syntax KSP (Short Course Program) held by the English Education (TBI) Study Program, Faculty of Education and Teachers Training, UIN Mataram, in July 2020. This course was chosen, first, because of its accessibility to the target participants, the TBI students who registered to join the course, as the researcher was the lecturer; and second, because the course could represent the case of ERL in UIN Mataram, since the program was held by all departments in the university. The sources of the data were the 10 students' answers to the open-ended questionnaires administered at the end of the Syntax KSP (Short Course Program).
Because open-ended questionnaires and interviews were used to collect the data, the data needed to be categorized into emerging themes (Presser, 2010). To identify the themes in the data, Braun and Clarke's thematic analysis procedures were used. Thematic analysis is a method for identifying, analyzing, and reporting patterns (themes) within data. The phases of thematic analysis include familiarization with the data, coding the data, searching for themes, reviewing themes, defining and naming themes, and reporting (Braun & Clarke, 2006).
RESULT AND DISCUSSION
The present research aimed to investigate the students' motivations, challenges, and alternatives in the KSP (short course) English Syntax online learning during the Covid-19 pandemic. According to the data acquired from the questionnaire, the motivations underlying students' decision to join KSP English Syntax were both integrative and instrumental. The proportions of each are depicted in the corresponding figure. The dominant motivation was that students needed to improve their English Syntax score, as they had failed to pass the regular English Syntax class. In the small portion of the chart, one student said that she wanted to gain more knowledge by retaking English Syntax.
During the implementation of the ERL English Syntax course, students faced obstacles or challenges that might have obstructed their learning progress. Students said that the most frequently occurring challenge was the stability of internet connections (50%). Most of the students said that their houses are located in remote or hilly areas where signals cannot be transmitted normally to their mobile phones. About 30% of responses indicated that simultaneous agendas during the course influenced the frequency of their attendance. At the time they joined the KSP class, they were also joining the KKP (Participatory Internship) program organized by the Community Service Bureau of UIN Mataram.
Sometimes they had difficulty keeping up contact and interactions during the WhatsApp class, as they also had meetings in the village office.
Another challenge was that WhatsApp did not provide flexible features for interacting responsively and quickly by typing (10%). Students had to wait for each other's responses to avoid jumbled chat between teacher and students and among students. Furthermore, students found that this also became a factor triggering laziness in giving responses during the lesson. The last challenge faced by students was anxiety during the course (10%). Students thought that their chat messages should be in perfect grammar, with zero errors, which made them unconfident because they felt they were still learning. The detailed proportions of the challenges are pictured in the corresponding figure.

Regarding the challenges faced by the students during KSP English Syntax, as mentioned in the background, ERL would be successful if the students were autonomous in finding solutions. The students therefore searched for alternatives to avoid failing the course. They mentioned that they adopted several alternatives.
Most of the students believed that internet connection was the main challenge during ERL in the KSP English Syntax course. Hence, the most frequently applied alternatives were preparing phone credit to maintain an internet connection (30%) and finding the best place to get good internet reception (20%). Another important alternative was creating on-going motivation to strengthen their initial motivations, which were only about improving their score and acquiring more knowledge (20%). The details of the on-going motivations are described in Figure 4. Moreover, 10% of responses indicated that WhatsApp did not help students grasp as much English Syntax knowledge as they expected, so they preferred to fill this deficiency by learning extensively; they learned more from e-books and links shared by the teacher. The same proportion of responses (10%) applied to setting an alarm on the phone to avoid being late for, or forgetting, the KSP English Syntax course schedule. The last alternative implemented by students was praying sincerely to God for the end of the Covid-19 pandemic; they believed that when God blesses their prayer, they will again enjoy the face-to-face meetings they used to experience. The detailed percentages are visualised in the corresponding figure.

The final alternative created by the students who joined the KSP course was on-going motivation. They reminded themselves of items that might sustain their participation in the course: they remembered that they had to improve their score (30%); they remembered that there were KSP English Syntax course rules they had to obey (25%); they had to finish the tasks during the course if they wanted to get a participation score (25%); and they believed in some benefits or practicalities of attending the KSP course, such as not needing to go to the university and saving energy and time for learning (20%). The data from the questionnaire are tabulated in the corresponding figure.

According to the results of the present research and the research objectives, the discussion elaborates the relations between the motivations the students had before joining KSP English Syntax, the challenges they faced during the course, and the alternatives as proof of students' autonomy in ERL during the Covid-19 pandemic. These are discussed in light of theories and previous studies.
Motivation is a factor that influences learners' failure or success (Chalak & Kassaian, 2010). In this regard, all students who joined KSP English Syntax already had integrative motivation, namely increasing their English Syntax comprehension, and instrumental motivation, namely the intention to get a higher English Syntax score. The findings showed that instrumental motivation exceeded integrative motivation by about 80 percentage points. This mirrors results in other L2 learning contexts, such as the research by Liu (2015), Muftah & Rafic-Galea (2013), and Hussain & Masum (2016), and emphasizes that in most countries where English is taught as a second language, learners are more instrumentally than integratively motivated. Moreover, regardless of having one or both of these motivations, the students made themselves readier to follow the class despite the sudden, unexpected situation and its challenges. Motivation is the most fundamental issue for students' learning success in the L2 context, as it shapes the preconditions students prepare before making efforts toward effective learning (Dornyei, 2007). Students' motivation is also highly correlated with students' autonomy (Almusharraf, 2020). The implementation of ERL in the form of KSP English Syntax involved several challenges that the students had to overcome. The most influential challenge faced during the online class was the lack of internet connection (Octaberlina & Muslimin, 2020). The internet, as the main ingredient of internet-based learning in a synchronous ERL context, is the most pivotal factor in making the learning process run as intended (Algahtani, 2011). Fortunately, the students searched for alternatives by preparing sufficient phone credit and finding the best places for a good internet signal. Terrell & Brown (2006) state that students' motivation to find alternatives to solve problems reflects a characteristic of the autonomous learner, namely self-efficacy.
Another challenge was that the KSP English Syntax schedule coincided with the students' KKP (Participatory Internship) agenda in the village office. Some students told the teacher that they were asked by the head of the village to organize meetings with the community and could not refuse. Others mentioned before the online class started that they would respond slowly because they had to divide their attention between two simultaneous agendas. Unfortunately, the rest said that they had forgotten they had a synchronous online class. Consistent with their integrative and instrumental motivations, they set alarms to remind themselves that they had the course; students may take advantage of technology to help them learn (Naranjo, 2014). Due to their limited understanding of the day's online class material, students read extensively the e-books and links that the teacher had distributed in the WhatsApp group (Lin, 2014). One of the students, MAA, thought that learning extensively would improve his comprehension of incompletely covered materials as well as satisfy his curiosity. Dornyei, in Muslimin (2018), stated that, normally and ideally, good language learners are driven to learn by an inner curiosity to know more and explore everything.
WhatsApp was mainly developed as a texting and document-sharing application (Baidowi et al., 2014). Although the application has since been extended to enable video calls, the students who joined KSP English Syntax agreed to run the class only through texting and document sharing in order to save their internet credit. Hence, the teacher designed the course outline to suit that agreement. Unfortunately, some difficulties hindered interaction during the synchronous course: 1) students found it hard to compose concise WhatsApp messages when they had many ideas in response to the explanation; 2) students had too little time to encode their ideas into written text, as their idea transfer and typing speed was not very fast; 3) students found the teacher's explanations too limited, suspecting that the teacher also wanted to keep the explanations simple. DNJ, one of the students, said, "the teacher's explanation is too limited and I need more". To respond to these challenges, students decided to learn English Syntax extensively and prayed for the Covid-19 pandemic to end soon.
The last challenge mentioned by a few students was anxiety. AA and DNJ, students who joined KSP English Syntax, said that they were sometimes shy to ask questions or comment because their grammar was not good enough and their responses were observed by everyone in the WhatsApp group. This shows that anxiety was a challenge for 20% of the students in the course. A similar finding was reported by Saadé et al. (2017), who found that 30% of students experienced anxiety during online learning. Another cause of anxiety was that a student was cautious all the time during class, worrying that the internet connection might suddenly drop and she would lose the chance to learn (M, one of the students). To deal with this challenge, students created on-going motivation that increased their engagement during the course. These motivations included: 1) remembering their initial instrumental motivation, which was to get a better English Syntax score; they had read the course outline containing the assessment criteria, which meant that students should fulfil all the criteria to get a better score; 2) regarding the course rules as something good for managing the course and worth following; N, one of the students, said that the course rules were good for keeping students disciplined and respectful of the learning schedule; 3) viewing tasks as aids to deepen comprehension of the material rather than as a burden; M, N, and DNJ, three of the students, said that the tasks helped them in learning and receiving the lessons; and 4) believing that online learning had advantages over face-to-face learning, since they did not need to travel to the university, saved time and money, and consumed less energy. Online learning provides flexibility for students to find the most convenient place to learn (Smedley, 2010).
Considering the three variables in the present research, Turturean (2013) mentions that students' learning motivation is related to their ability to cope with problems. He further explains that, in higher education, success involves the achievement of pre-established goals and adaptation to the changes imposed by the know-how society. Hossain (2018) stated that successful solvers on an innovation intermediary platform had a certain motivation underlying their efforts to find opportunities to face challenges. These previous studies show ideas similar to the present research findings and discussion: all students successfully passed KSP English Syntax because they were motivated and found alternatives to solve the challenges. They changed their negative views of the online learning challenges into something motivating or positive.
CONCLUSION
The present research presents preliminary results on the students' motivation, challenges, and alternatives when joining the KSP English Syntax course. The research found that the students successfully passed the course because they had instrumental motivation, which was dominant, as well as integrative motivation. These motivations helped students to overcome the challenges during learning, such as poor internet connection, simultaneous agendas, the limitations of WhatsApp features, and online learning anxiety. Moreover, the students took anticipatory actions before the challenges appeared, such as preparing phone credit, finding the best place to get an internet connection, setting alarms, creating on-going motivation, and praying for God's blessings. To reduce their comprehension deficiencies and to satisfy their curiosity, they learned extensively. This conclusion strengthens the results of other research showing that there is a relationship among motivation, learning challenges, and the ability to find alternatives to solve challenges in an unconducive learning situation. Hence, teachers, students, and stakeholders may regard motivation as capital that triggers students' creativity to overcome learning challenges in various contexts. Furthermore, future research is expected to carry out statistical analyses of the correlations among these variables, which were not examined in the present research.
KnetMaps: a BioJS component to visualize biological knowledge networks
KnetMaps is a BioJS component for the interactive visualization of biological knowledge networks. It is well suited for applications that need to visualise complementary, connected and content-rich data in a single view in order to help users to traverse pathways linking entities of interest, for example to go from genotype to phenotype. KnetMaps loads data in JSON format, visualizes the structure and content of knowledge networks using lightweight JavaScript libraries, and supports interactive touch gestures. KnetMaps uses effective visualization techniques to prevent information overload and to allow researchers to progressively build their knowledge.
Introduction
Networks have been widely used to visually represent complex information in many disciplines, ranging from social sciences (Szell et al., 2010) to engineering, physics, biology, computer science, design and manufacturing (Wang & Alexander, 2015). They fulfil the need to present a system, not only as individual entities but as a whole, by capturing the myriad inter-linked components within the system (Pavlopoulos et al., 2011). Networks are represented as graphs comprising a set of nodes connected by edges. Networks can be homogeneous with all the nodes within the network being of the same type, or heterogeneous with nodes and edges of various types (Sun & Han, 2012). Recently, the term knowledge network or graph has been used frequently in research and business, usually in close association with Semantic Web technologies and linked data. Knowledge networks are increasingly used to model diverse knowledge domains by acquiring and integrating information into an ontology and applying a reasoner to derive new knowledge (Ehrlinger & Wöß, 2016).
A challenge when visualizing knowledge networks is to avoid information congestion and overload that could hinder user experience. The potential richness of data captured in the attributes and density of connections makes it a greater challenge to use standard network visualization tools which often focus on simply visualizing the structure of the network itself (Becker et al., 1995). In molecular biology, there is a wealth of available information, and visualizing all of it at once reduces the value of a visualization or even makes it unusable for analytical purposes; visualizing such data therefore requires the development of special approaches (Vehlow et al., 2015).
Previously, our group developed a web-based tool, Ondex Web (Taubert et al., 2014), for visualising knowledge networks generated with the Ondex data integration platform (Kohler et al., 2006). It supported the Ondex exchange language (OXL) and was predominantly used to visualise a biological knowledge domain. However, being a Java-applet and using legacy web technologies, it constantly led to compliance concerns on different web browsers, which hindered its reusability. The advance of modern JavaScript-based data visualisation libraries such as cytoscape.js (Franz et al., 2016) and jQuery (Benedetti & Cranley, 2011) has made it possible for us to learn from our experience with Ondex Web and to develop a new lightweight and reusable component optimised for the visualisation of content-rich knowledge networks.
In this paper we describe KnetMaps (Singh & Hassani-Pak, 2018), an interactive BioJS component to visualise integrated knowledge networks. It is well-suited for applications that require scientists to visualise complementary types of evidence in a single interactive view. KnetMaps is an important visualisation component of KnetMiner where it is used for visualising knowledge networks of crop genomes (Hassani-Pak et al., 2016) and supporting scientists to make informed decisions in gene and trait discovery research. It uses a generic design and hence can be readily embedded in other knowledge discovery applications.
Methods
The KnetMaps component has been developed as part of the KnetMiner software suite and follows the standards set by the BioJS registry. KnetMaps employs a variety of network visualization techniques such as interactive controls, information juxtaposition and data filters. Using effective visualization techniques it prevents information overload and allows researchers to progressively explore and reveal the inter-connected entities within the larger knowledge network (Figure 1).
Visualising knowledge networks
Visualisation of knowledge networks needs to consider two key criteria: i) the heterogeneous and interconnected nature of the network and ii) the content-rich attributes of nodes and edges that cannot be easily displayed as part of the actual network.
Nodes in a biological knowledge network represent entities such as genes, proteins, phenotypes, pathways, publications and ontology terms; connected by edges of various types such as "encodes", "published_in" and "ortholog" ( Figure 1C). KnetMaps visualises each node type using a customized combination of shape and colour. Edge types are rendered using a combination of distinct size and colour attributes. The position of nodes and the length of edges is calculated using a force-directed layout that enables connectedness, separation and pattern-based clustering of closely inter-linked entities (Dogrusoz et al., 2009). Labels can be added to nodes and/or edges to enable easier understanding of the underlying data.
To view the potentially rich set of key-value attributes on nodes and edges, we have developed the Item Information panel ( Figure 1G). It displays all textual (e.g. abstract, title of a publication) and numeric properties (e.g. accessions, scores and weights) of a selected node or edge in table format, including annotations, detailed descriptions, secondary labels and links to external websites and databases about the selected entity. Users can also use the information displayed in this panel to customize the rendered visualization of node and edge labels to their needs.
Intuitive user interaction
On relevant devices, KnetMaps supports basic touch gestures such as tap, tap-hold, tap-drag and pinch and zoom for user interaction. Users can interact with individual nodes and edges in the rendered network by using standard mouse or touch gestures such as click or tap gesture on a specific node or edge to get a summary of its properties or use the mouse wheel or pinch gesture to manipulate the zoom settings on the network.
Users can right-click or tap-hold on a node or edge to activate a radial context menu ( Figure 1F) that provides a range of easy-to-use mechanisms for exploring or manipulating the selected entity. Users can click or tap on a node or edge and view further information such as type, description and annotations, summarised in a dialog box. Users can also tap-drag individual nodes or edges to re-align them within the visible network or tap-hold and reposition the rendered network as a whole. The visualized network can be further explored and exported ( Figure 1A1) using a variety of menu functions. For example, networks can be exported from KnetMaps as images (in png format) or as cytoscape-compatible JSON which can then be opened in the Cytoscape desktop application (Kohl et al., 2011) for further downstream analysis.
Incremental approach to exploratory analysis
KnetMaps controls information overload in the visible network by providing means to overlay data and extend it in incremental steps, thereby adopting a progressive approach in which a subnetwork of interest from the underlying knowledge network is initially visualized and end-users are given the means to add more related information to the visible network. The subnetwork of interest is determined by the application using KnetMaps and passed to it through a "display" attribute in the API/JSON. KnetMaps generates a summary of the number of visible/hidden entities in the knowledge network so that end-users have an overview of what information might be present in the knowledge network but is currently hidden in the visible network. This information is automatically updated each time the user reveals or hides entities from the visible network ( Figure 1E).
The first way of adding additional information to the network is by using the interactive legend that gives a summary of all node types present in the network, along with a numerical count of the total number of nodes of each type ( Figure 1D). For example, clicking on a "Publication" symbol in the legend will add publications linked to visible gene and protein nodes, thereby enabling users to expand the visible knowledge network in real-time.
The second way to add or hide information is by using the context menu ( Figure 1F). It allows users to hide individual nodes and edges, or hide all nodes and edges of a particular type. This can be useful for removing irrelevant or noisy information from the visible network. Additionally, it allows in-and out-going relations to be added to a selected node when these were initially hidden. This can be useful when a node acts as a knowledge hub, but only a small subset is initially visualized to intentionally prevent information overload, or if a node is part of a larger, more intricate knowledge pathway. In such cases, users can rapidly overlay connected entities within the selected node's neighbourhood onto the visible network to effectively connect the dots and explore the myriad relationships between the network entities.
Implementation
KnetMaps leverages CytoscapeJS v.2.4.7 and jQuery v1.11.2 to visualise knowledge networks. It has been designed in a modular fashion and made available in NPM and BioJS, making it a reusable plug-and-play component within dynamic web applications.
Input data model
KnetMaps loads JSON input data (streamed or locally stored) and renders it as a knowledge network. It uses the cytoscapeJS JSON format specification in which the network is modeled as nested "nodes" and "edges" array objects. Each node or edge entity has a set of required properties such as colour, shape, size, identifier, label, border and visibility. We have extended the cytoscapeJS schema with an additional JSON object to store optional node/edge properties, e.g. abstract and title of a "Publication" node. The separation of required visual properties and optional data specific information, provided a more efficient way of rendering the general network while displaying node/edge specific information on demand, e.g. when the user clicks on a node.
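A rough sketch of this nested nodes/edges structure is given below. Only the property names conceptShape and conceptColor are taken from the stylesheet examples in the next section; all other identifiers, values and the optional annotation field are illustrative placeholders and not the documented KnetMaps schema, which should be taken from the sample files shipped with the component.

var graphJSON = {
  nodes: [
    // each node carries required visual properties plus optional, entity-specific data
    { data: { id: "n1",                        // identifier (required)
              conceptShape: "triangle",        // mapped to the node shape by the stylesheet
              conceptColor: "lightblue",       // mapped to the node background colour
              annotation: "free text shown in the Item Information panel" } },  // illustrative optional field
    { data: { id: "n2", conceptShape: "rectangle", conceptColor: "orange" } }
  ],
  edges: [
    // edges reference node ids via source/target, as in standard cytoscapeJS JSON
    { data: { id: "e1", source: "n1", target: "n2" } }
  ]
};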
Network rendering
Networks are rendered using a cytoscapeJS-based network stylesheet that maps the set of required JSON properties to the network object. The KnetMaps generator stylesheet sets the shape and colour of a node based on parameters provided for it in the JSON input dataset, e.g., 'shape: data(conceptShape)' and 'background-color: data(conceptColor)' where 'conceptShape' and 'conceptColor' are properties with set values in the input dataset. Developers can customize the stylesheet to replace the supported static cytoscapeJS shapes (such as triangle, roundrectangle, ellipse, pentagon and star) with images. CytoscapeJS selectors have been incorporated in the network stylesheet to filter nodes and edges based on these interactions and add functions that toggle their visual attributes such as highlighting a node or edge when selected and toggling visibility of labels accordingly.
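As a concrete illustration, a stylesheet entry of the kind described above can be written with the plain cytoscapeJS style API roughly as follows; the selectors and any properties beyond conceptShape and conceptColor are illustrative rather than the actual KnetMaps generator stylesheet.

var networkStylesheet = [
  { selector: 'node',
    style: {
      'shape': 'data(conceptShape)',             // read the shape from the input JSON
      'background-color': 'data(conceptColor)',  // read the colour from the input JSON
      'label': 'data(id)'                        // illustrative label mapping
    } },
  { selector: 'node:selected',                   // cytoscapeJS selector used to highlight a selection
    style: { 'border-width': 3, 'border-color': 'red' } }
];
// the stylesheet is passed to cytoscape({ container: ..., elements: graphJSON, style: networkStylesheet, ... })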
Interactive knowledge display
KnetMaps provides various features for interactive and incremental data exploration by incorporating useful Javascript libraries, such as the cytoscape.js-cxtmenu widget and various force-directed layout libraries to render the knowledge network, including the CoSE layout (Dogrusoz et al., 2009), which is the default network layout used in KnetMaps. Other layouts that can be used by end-users include the physics-based force layout, the CoSE-Bilkent layout that provides additional network topology and geometrical constraints, or static in-built cytoscapeJS layouts, such as the pattern-based circular layout or the concentric layout. KnetMaps packages cytoscapeJS-compatible extensions to these layouts within the application distribution and incorporates optimised settings for each layout within the application itself.
Scalability and performance
Networks of up to 1000 nodes and up to 3000 edges can be visualized in KnetMaps without significant performance degradation or visual delay in layout animations. Visualizing much larger networks (i.e. networks with over 10,000 nodes) increases the initialisation time and can cause jerky or delayed layout animation effects. Some of the rich visual styles used by KnetMaps can be somewhat expensive to render by cytoscapeJS, for example, rendering bezier curved edges.
The KnetMaps code addresses this by providing developers with flexible options to reduce the rendering complexity of the networks. All visual display settings have been made fully customizable to allow developers to tweak element styles such as node shape, edge curve and node border. Network container settings such as pixel ratio and motion blur can also be similarly easily altered, as can layout parameters such as reducing animation time, decreasing the number of layout iterations to run and disabling animation when rendering very large networks. The default parameters and settings work well in KnetMiner, based on the average sizes of the biological networks (between 300-1000 nodes) that it visualizes. However, customizing these parameters to employ simpler visual settings for larger networks can mitigate performance degradation during rendering.
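Using the plain cytoscapeJS API, this kind of tuning could look like the sketch below; the specific values are illustrative and are not the KnetMaps defaults.

var cy = cytoscape({
  container: document.getElementById('cy'),
  elements: graphJSON,
  style: networkStylesheet,
  pixelRatio: 1,        // a lower pixel ratio reduces rendering cost on large networks
  motionBlur: false,    // disable motion blur for smoother interaction
  layout: {
    name: 'cose',       // the CoSE force-directed layout used as the default in KnetMaps
    animate: false,     // skip layout animation for very large networks
    numIter: 500        // fewer iterations trades layout quality for speed
  }
});

For edges, replacing bezier curves with a cheaper style (for example 'curve-style': 'haystack' in the edge style) is another common way to cut rendering cost in cytoscapeJS.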
Operation
KnetMaps has been published to NPM and the BioJS (Gómez et al., 2013) registry, which provides a centralized portal of JavaScript tools and widgets used to analyse and visualize biological data, making it easy for research software users to install KnetMaps and embed within the HTML of their own web pages. The minimum system requirement is a PC with npm (part of Node.js) installed, a modern web browser with JavaScript enabled and a JSON sample file (see knetmaps/sampleFiles).
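As an illustration of the intended workflow (the package name comes from the NPM/BioJS registries mentioned above, while everything else in this sketch, including the script path, container element and drawing call, is a hypothetical assumption rather than the documented KnetMaps API; the sample files and README in the package describe the exact usage):

// 1. install the published package into a web project:
//      npm install knetmaps
// 2. load the bundled script in an HTML page and provide a container element, e.g.:
//      <div id="knet-container"></div>
//      <script src="knetmaps/dist/knetmaps.js"></script>   (path is an assumption)
// 3. call the component with a JSON dataset, along the lines of:
//      KnetMaps.draw('#knet-container', graphJSON);        (function name is hypothetical)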
Use cases
KnetMaps is used as a network visualization component within tools and platforms that visualize biological knowledge as an interactive network, such as the KnetMiner (Hassani-Pak, 2017) and Daisychain. In KnetMiner, there is a need to visualise query related subsets of a genome-scale knowledge network (Hassani-Pak et al., 2016) in the web-browser. KnetMaps is one of the key components in KnetMiner to visualize and explore integrated information of inter-linked biological entities and processes to help in hypothesis generation and validation, and to accelerate candidate gene discovery. KnetMaps is also part of the KnetMiner Web API and therefore enables collaborators to view knowledge networks for specific genes and keywords from their own applications.
Daisychain is a web application that links genome annotations, aiming to enable researchers working on the genes of particular species to investigate homologs in other published assemblies via a web interface called Daisychain-Web. The application can be queried using keywords or FASTA sequences with statistical cut-offs, and the search results, i.e., links between genes and annotations across similar or identical species or cultivars, are visualized as a network using KnetMaps. Daisychain uses KnetMaps out-of-the-box for rendering and visualization and adds further annotation and filtering options to the Item Information panel.
Conclusion
Visualizations are a useful mechanism employed in many disciplines to present information in an intuitive representation that enhances user cognition and helps identify unique patterns and important trends in data. Network formalisms are becoming an increasingly popular means to combine data from inter-connected sources into a concise representation for easier and intuitive exploratory analysis. KnetMaps has been implemented as a fast and lightweight touch-friendly tool for visualizing content-rich, heterogeneous knowledge networks. The implementation uses cytoscapeJS, jQuery and JavaScript extensions for interactive functionality to ensure that low-memory, touchcompatible networks can be rendered in web browsers without the need to write extensive and unwieldy server-side code. Usage of JavaScript ensures rendering compatibility with most web browsers without the need to install any additional software, e.g., Java Applet or Adobe Flash. KnetMaps provides an interactive means to display, filter and overlay networked knowledge, and visually traverse the relationships connecting information within the rendered network. It incorporates a host of visualization techniques such as juxtaposition and superposition to encourage a step-by-step exploration of larger volumes of disparate data, thereby enabling end-users to investigate and analyse inter-linked knowledge in an incremental and intuitive manner.
Data availability
All data underlying the results are available as part of the article and no additional source data are required.
Grant information
This work was funded by the Biotechnology and Biological Sciences Research Council (BBSRC) grants Designing Future Wheat (DFW) (BB/P016855/1) and DiseaseNetMiner (BB/N022874/1).
Open Peer Review
The functionality provided by KnetMaps is very useful for molecular geneticists (and the bioinformaticians who support them), whose aim is to identify the best candidate genes to use in trait improvement (I am coming from the bioinformatician/geneticist side). Using heterogeneous evidences (omics experiment results, literature, function by association inference from sequence identity, GWAS results) is a common exercise in order to get a more informed list of candidate genes to work with. This is the strength of the visualization of KnetMaps: the ability to display the evidences incrementally, and to visualize the connection of evidences into an intuitive network display, which is very helpful for biologists in order to make decisions on which genes to select for further (expensive) experimental validation. Network rendering and response is fast, thus providing an enjoyable end user experience. I do share one reviewer's feedback on how to revert the network view back to the original state prior to clicking on an icon in the interactive legend; I also seem to miss the step on how to do this.
A suggestion on the implementation section: it would be nice to have a graphical block diagram that shows the steps and inputs required for KnetMaps to be installed in an end-user's system. A statement reporting on installation and functionality of the system under the three dominant OSs (Linux, Unix/MacOS, MS Windows?) would also be appreciated, as upfront knowledge for users who wish to install this in their own systems.
Is sufficient information provided to allow interpretation of the expected output datasets and any results generated using the tool? Yes
Are the conclusions about the tool and its performance adequately supported by the findings presented in the article? Partly
Chia-Yi Cheng
Center for Genomics and Systems Biology, New York University, New York, NY, USA
This article describes a package, KnetMaps, that provides a JavaScript-based tool to visualize content-rich biological data. KnetMaps is suited for a website with heterogeneous biological information by integrating results in a network format. The examples listed on the demo page (http://knetminer.rothamsted.ac.uk/KnetMaps/) allow users to taste the flavor of KnetMaps. The interface is intuitive and straightforward. Below are suggestions to further enhance the clarity and completeness of the user experience. It would be beneficial for both the developers and end users if the author could provide a step-by-step guide to reproduce one example dataset as on the demo page. The 'interactive legend' feature allows users to add an additional layer to the default network; it is not completely clear to me i) how the visible/invisible edges/nodes were set in the first place, and ii) whether it is possible to remove the information once added. For example, clicking on a 'Domain' symbol in the legend will add 'Domains' linked to visible nodes; a user may review the information, find it not needed, and want to hide those edges. I may have missed it, but I did not find a way to cherry-pick the symbols once the associated edges are displayed. The PNG export works on the demo page (http://knetminer.rothamsted.ac.uk/KnetMaps/) but not on the use case pages (http://knetminer.rothamsted.ac.uk/Zea_mays/, http://daisychain.appliedbioinformatics.com.au/); a blank window popped up when I hit the PNG icon.
Is the description of the software tool technically sound? Yes
Are sufficient details of the code, methods and analysis (if applicable) provided to allow replication of the software development and its use by others? Partly
Is sufficient information provided to allow interpretation of the expected output datasets and any results generated using the tool? Partly
Are the conclusions about the tool and its performance adequately supported by the findings presented in the article? Yes
Competing Interests: No competing interests were disclosed.
Referee Expertise: bioinformatics software user and tester
I have read this submission. I believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard.
Ian Dunham, Gareth Peat
Open Targets, EMBL-EBI, Cambridge, UK
This is a useful article and software package to help apply cytoscape.js to heterogeneous knowledge graphs. KnetMaps.js adds several features surrounding the cytoscape.js graph visualization, such as the legend, information panel and download functionality. Following the setup guide to run the sample application locally was fairly straightforward.
The software is available online to test at both http://knetminer.rothamsted.ac.uk/KnetMaps/ and http://daisychain.appliedbioinformatics.com.au/. Developers can obtain the source through npm and biojs, where the package is named knetmaps. KnetMaps.js requires that input data be in a specific format and there are examples of this in the setup guide. The format is easy to understand, but combines data and styling information per node or edge. Separation of these concerns would add clarity and conciseness.
A user wishing to display heterogeneous data, i.e. data containing different node and edge types, might find that KnetMaps.js could save them development time. Several features that one might want to add on top of cytoscape.js are provided out of the box, such as the interactive legend, node/edge information panel, PNG/JSON export functionality, filtering and a variety of layout algorithms. However, for any substantial deviation from the UI design provided, a developer might be tempted to start with cytoscape.js directly, which has many online examples and is well documented.
| 4,835.6 | 2018-10-17T00:00:00.000 | ["Computer Science", "Biology"] |
Checkerboard patterns in E3SMv2 and E3SM-MMFv2
An unphysical checkerboard pattern is identified in E3SMv2 and E3SM-MMF that is detectable across a wide range of timescales, from instantaneous snapshots to multi-year averages. A detection method is developed to quantify characteristics of the checkerboard signal by cataloguing all possible configurations of the eight adjacent neighbors for each cell on the model’s cubed sphere grid using daily mean data. The checkerboard pattern is only found in cloud-related quantities, such as precipitation and liquid water path. Instances of pure and partial checkerboard are found to occur more often in E3SMv2 and E3SM-MMF when compared to satellite data regridded to the model grid. Continuous periods of partial checkerboard state are found to be more persistent in both models compared to satellite data, with E3SM-MMF exhibiting more persistence than E3SMv2. The checkerboard signal in E3SMv2 is found to be a direct consequence of the recently added deep convective trigger condition based on dynamically generated CAPE (DCAPE). In
Introduction
The representation of moist convection is a critically important feature of an atmospheric general circulation model (GCM), but these processes are often parameterized with simplified models because explicitly simulating all scales of moist convection is too computationally expensive. The multi-scale modeling framework (MMF), or super-parameterization, was conceived as an economical way to include an explicit representation of some scales of moist convection in a GCM by embedding a cloud-resolving model (CRM) in each column of the parent GCM (Grabowski and Smolarkiewicz, 1999; Grabowski, 2001; Randall et al., 2003; Khairoutdinov et al., 2005). The embedded CRM significantly increases the model's overall computational cost, but the overall cost is still orders of magnitude lower than a global convection-resolving model. The way in which the two models of an MMF are coupled allows unique algorithmic and hardware acceleration methods that bring the cost in line with traditional GCMs (Hannah et al., 2020).
A notable trade-off of the MMF method is that there is a "scale gap" between the resolved scales of the GCM and CRM where neither model represents the relevant processes (see Fig. 1). The MMF method couples the two models through forcing and feedback tendencies formulated such that the domain mean thermodynamic state of the CRM and its parent GCM columns cannot drift apart. A related consequence of the scale gap is that the internal spatial variability of the CRM cannot be advected by the GCM flow and remains "trapped" in the CRM. Thus, the propagation of signals organized within the CRM can only happen indirectly through the coupling of the CRM domain mean (Pritchard et al., 2011).
Most MMF results in the literature use a global host model with a finite-volume grid that produces a smooth solution (Khairoutdinov et al., 2005; Benedict and Randall, 2009). However, results from the MMF configuration of the DOE Energy Exascale Earth System Model (E3SM-MMF) revealed a strong grid-imprinting signal related to the use of the spectral element grid (Hannah et al., 2020). This problem is hypothesized to be related to "cusps" in the solution that can form due to discontinuous derivatives at the shared spectral element edges. These occasional cusps lead to noise in the vertical velocity field (Herrington et al., 2019b), which the embedded CRMs in E3SM-MMF are notably more sensitive to compared to traditional convective parameterizations. The analysis of Hannah et al. (2020) did not relate the grid-imprinting signal to the trapping of CRM fluctuations, but a connection could not be ruled out.
The grid-imprinting issue in E3SM-MMFv1 is associated with the heterogeneous nature of the spectral element grid, in which element edge nodes exhibit slightly different behavior compared to interior nodes. The effects of this heterogeneity can be alleviated by putting the physics calculations on a quasi-regular finite-volume grid and mapping tendencies back to the dynamics grid similar to Herrington et al. (2019a), colloquially known as "physgrid". The physgrid method can also make the model more efficient by using a physics grid that is coarser than the underlying dynamics grid (i.e., a 2 × 2 finite-volume mesh in each element), which does not qualitatively alter the model solution. A version of the physgrid that allows for regional mesh refinement was recently implemented in E3SM as described by Hannah et al. (2021), although their analysis does not include results from E3SM-MMF.
Despite the fact that E3SM-MMF running with the physgrid produces a smoother solution when compared to the previous physics grid configuration, further analysis revealed that a new type of noise pattern emerges on the physics grid. This pattern resembles a "checkerboard" with alternating positive and negative differences relative to a localized area mean in fields related to convection on the physics grid. The checkerboard pattern in E3SM-MMF precipitation can be seen alongside satellite data in the 1-month mean maps from an arbitrarily chosen January in Fig. 2a, b. Visual inspection of many fields and averaging windows reveals that the pattern is most apparent in subtropical regions and is detectable on many timescales, including, alarmingly, averages of 5-10 years. The checkerboard signal also depends on the vertical level, with the strongest signals occurring at the levels where shallow clouds are present. Note that the checkerboard pattern is often obscured in data that have been regridded to a traditional equiangular grid for analysis, and thus it is important to consider data on the native cubed sphere grid.
The robustness of the checkerboard in E3SM-MMF suggests that it is not related to a realistic physical process. The MMF is unlike a typical convective parameterization, in that the CRM exhibits stochastic behavior since it does not rely on an equilibrium assumption (Jones et al., 2019). Therefore, it may not come as a surprise if the MMF solution is noisier than a traditionally parameterized model, but it is unclear how long it might take to average out a noisier solution from this type of model. The MMF scale gap described above may also be playing a role in effectively trapping CRM fluctuations and causing the checkerboard, since these fluctuations cannot be advected on the global grid. However, this explanation must account for how the global model dynamics drive the processes required to sustain the pattern.
Numerous sensitivity tests have been conducted to rule out early hypotheses such as erroneous code in the physgrid mapping and unstable parameter values for hyperviscosity in the spectral element dynamical core. The results of these tests are difficult to explore thoroughly because they all yield a null result in which the checkerboard signal does not appear to diminish significantly. Therefore, in order to probe the nature of this issue more deeply we will focus here on developing a method to objectively quantify various aspects of the checkerboard pattern rather than relying on visual inspection to detect and compare the prevalence of the checkerboard between different model configurations.
Interestingly, the recently released version 2 of E3SM (E3SMv2) has also been found to produce a similar checkerboard pattern as E3SM-MMF, albeit one that is much less severe (see Fig. 2c). The previous version of E3SM does not exhibit any noticeable systematic unphysical patterns in long-term means. Sensitivity experiments revealed that the E3SMv2 checkerboard is a direct consequence of a new convective trigger that relies on CAPE generated by the large-scale dynamics (Xie et al., 2019), known as the "DCAPE trigger", and thus we will include additional analysis of E3SMv2 with this option disabled to quantify its impact.
The goal of this paper is to quantitatively document the nature of the checkerboard pattern in E3SM-MMF and E3SMv2. To do this we devise a method to objectively detect and catalog patterns of adjacent neighbors on a grid and compare the occurrence of these patterns to satellite observations. The pattern detection method and model data are detailed in Sect. 2, followed by the results of the detection analysis in Sect. 3. Conclusions are presented in Sect. 4.
Methods
An especially difficult aspect of examining the checkerboard problem is that the signal is generally weak compared to realistic weather variations. This makes it impossible to cleanly separate the checkerboard signal from synoptic-scale features, which are often superimposed. So rather than trying to isolate the occurrence of a clear checkerboard pattern, we choose to take a broader approach and catalog all possible patterns of relative values in a local neighborhood of adjacent points for every point on the model grid. This allows us to objectively determine if any number of patterns are occurring more frequently than what we find for observed data remapped to the same grid.
Adjacent neighbor identification
The first step to cataloguing patterns in the data is to identify the adjacent neighbors of each cell to define each local "neighborhood". This is done on the quadrilateral cells of the finite-volume physics grid using connection information to identify cells that share a cell edge or corner. Initially, a distance-based nearest-neighbor method was employed, but this was problematic in regions where the cube-sphere grid is distorted by the projection onto the sphere. The cell connection information can be generated through a brute force comparison of cell corner locations to identify cells that share a corner. A shared edge can then be easily defined when two cells share two corners.
After identifying the adjacent neighbors of a given cell, the neighbors are sorted by the great circle bearing between the central point and each neighbor, putting the northernmost edge point first (see Fig. 3a). This ordering ensures consistency when comparing local neighborhoods across different areas of the global grid that experience different amounts of distortion from the spherical projection (Fig. 3b). Ordering the neighbors by bearing in this manner is also useful for defining the neighbor states as a sequence (see Sect. 2.2). The same method works trivially for equiangular grids. Note that for cubed sphere data we ignore points located at the cube corners because these only have seven adjacent neighbors and cannot be directly compared to the rest of the grid using the methods described below.
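As a concrete illustration of this step, a minimal Python sketch of the corner-sharing neighbor identification and the bearing-based ordering is given below. The function and variable names are hypothetical, latitudes and longitudes are assumed to be in radians, and the "northernmost edge neighbor first" convention is approximated simply by sorting on bearing measured clockwise from north.

```python
import numpy as np

def initial_bearing(lat1, lon1, lat2, lon2):
    """Great-circle bearing from point 1 to point 2, in radians clockwise from north."""
    dlon = lon2 - lon1
    x = np.sin(dlon) * np.cos(lat2)
    y = np.cos(lat1) * np.sin(lat2) - np.sin(lat1) * np.cos(lat2) * np.cos(dlon)
    return np.mod(np.arctan2(x, y), 2.0 * np.pi)

def adjacent_neighbors(cell_corners):
    """cell_corners: dict mapping cell id -> set of corner-node ids (4 per quadrilateral cell).
    Returns dict mapping cell id -> set of cells sharing at least one corner (edge or corner neighbors)."""
    corner_to_cells = {}
    for cell, corners in cell_corners.items():
        for c in corners:
            corner_to_cells.setdefault(c, set()).add(cell)
    neighbors = {cell: set() for cell in cell_corners}
    for cells in corner_to_cells.values():
        for cell in cells:
            neighbors[cell] |= cells - {cell}
    return neighbors

def order_clockwise(center_latlon, neighbor_ids, latlon):
    """Sort neighbor cell ids clockwise by bearing from the central cell."""
    lat0, lon0 = center_latlon
    return sorted(neighbor_ids, key=lambda i: initial_bearing(lat0, lon0, *latlon[i]))
```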
Pattern detection
Once we have identified the local adjacent neighborhood of a given center point, we need a way to catalog the various neighborhood state patterns. In order to make the pattern detection tractable we simplify the neighborhood state by calculating differences from the center cell and then encode the adjacent neighbor differences as binary values, with 0 for values less than or equal to the center value and 1 otherwise. Note that the center point is excluded from the binary sequence for convenience, as including it would complicate the pattern interpretation and partial checkerboard identification (see below). Our experience suggests that these methodology choices are arbitrary and do not affect our conclusions.
For a given neighborhood state we are left with a sequence of eight binary values corresponding to the adjacent neighbors ordered in a clockwise fashion. A pure checkerboard pattern can now be easily identified as an alternating binary sequence. The smoothness of a pattern can also be inferred from the variations of this sequence. Examples of different neighborhood patterns are shown in Table 1, with corresponding daily snapshots of liquid water path from the E3SM-MMF simulation described below shown in Fig. 4.
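The encoding and the pure checkerboard test described above can be captured in a few lines. The following Python sketch (hypothetical helper names) also includes a rotation-canonicalization helper that anticipates the lumping of rotationally equivalent patterns discussed below.

```python
def encode_neighborhood(center_value, neighbor_values):
    """Encode the 8 clockwise-ordered neighbors as 0 (<= center) or 1 (> center)."""
    return tuple(int(v > center_value) for v in neighbor_values)

def is_pure_checkerboard(pattern):
    """True if the binary sequence alternates all the way around the (circular) neighborhood."""
    n = len(pattern)
    return all(pattern[i] != pattern[(i + 1) % n] for i in range(n))

def rotation_canonical(pattern):
    """Representative pattern among all rotations, used to treat rotated patterns as equivalent."""
    rotations = [pattern[i:] + pattern[:i] for i in range(len(pattern))]
    return min(rotations)
```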
Our pattern detection method gives us a simple way to catalog patterns in a local neighborhood, including a pure checkerboard. However, there are several sets of unique patterns that are functionally equivalent. For example, the sequences [00001111] and [00011110] both describe a smooth gradient across the neighborhood, but in most cases we do not need to distinguish these as distinct patterns because they are equivalent if we allow the pattern to be rotated.
Table 1. Examples of binary sequences that describe the relative states of the eight adjacent neighbors relative to a given center point on a rectilinear grid (see text).
One case where we want to ignore rotational symmetry is when exploring the pure checkerboard pattern. The patterns [01010101] and [10101010] represent different "phases" of the pure checkerboard pattern, which should occur with roughly the same frequency at all points if the model solution is translationally invariant. Compositing all points with either phase may also be useful for exploring the mechanisms that drive the signal (not shown).
As we will see later in Fig. 7, despite the seemingly widespread checkerboard in long-term means, the occurrence of a pure checkerboard pattern is surprisingly infrequent in daily mean data when compared to other possible neighborhood patterns. This makes sense given that the checkerboard pattern will often coexist with synoptic weather features that mask the signal over short timescales.
To overcome this complication it is insightful to focus on patterns that contain only part of the full checkerboard pattern.
To do this we identify neighbor state patterns that contain an alternating binary sequence of length four or more and consider these to be "partial checkerboard" cases (see Table 1). A stricter definition of partial checkerboard that requires a longer alternating sequence does not qualitatively change our results (not shown).
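A minimal Python sketch of this partial checkerboard test is shown below; treating the eight-value sequence as circular when searching for the alternating run is an assumption of this sketch, as is the default minimum run length of four.

```python
def has_partial_checkerboard(pattern, min_run=4):
    """True if the binary neighbor sequence contains an alternating run of at least min_run values."""
    doubled = pattern + pattern  # unroll the circular sequence so runs can wrap around
    run = 1
    for i in range(1, len(doubled)):
        run = run + 1 if doubled[i] != doubled[i - 1] else 1
        if run >= min_run:
            return True
    return False
```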
The occurrence of any pattern will change depending on the timescale of the data. This may seem obvious if we were to compare monthly and daily means, but differences are also noticeable when comparing the results of sub-daily and daily data. In order to facilitate comparison with satellite observations we will only use daily data for the pattern detection.
Model description
E3SM was originally forked from the NCAR CESM (Hurrell et al., 2013), but all model components have undergone significant development since then (Golaz et al., 2019; Xie et al., 2018). The dynamical core uses a spectral element method on a cubed-sphere geometry (Ronchi et al., 1996; Taylor et al., 2007). Physics calculations, including the embedded CRMs in E3SM-MMF, are performed on a finite-volume grid that is slightly coarser than the dynamics grid but more closely matches the effective resolution of the dynamics (Hannah et al., 2021).
In a similar fashion to E3SM, the MMF configuration of E3SM (E3SM-MMF) was originally adapted from the superparameterized CAM (SP-CAM; Khairoutdinov et al., 2005). E3SM-MMF has also undergone significant development, but the model qualitatively reproduces the general results of previously published studies (Hannah et al., 2020). The embedded CRM in E3SM-MMF is adapted from the System for Atmospheric Modeling (SAM) (Khairoutdinov and Randall, 2003). Microphysical processes are parameterized with a single-moment scheme, and sub-grid scale turbulent fluxes are parameterized using a diagnostic Smagorinsky-type closure. Aerosol concentrations are prescribed with present day values. The embedded CRM in E3SM-MMF uses a two-dimensional domain with 64 CRM columns in a north-south orientation and 1 km horizontal grid spacing. Note that various sensitivity tests have shown that the details of the CRM domain configuration do not qualitatively affect our results (not shown).
Aside from the difference in how convection is treated, the configurations of E3SM-MMF and E3SMv2 differ in several ways. The stability of E3SM-MMF is noticeably improved by reducing the global model physics time step from 30 to 20 min. The 72-layer vertical grid of E3SMv2 was also found to be problematic for the performance of E3SM-MMF because thin layers near the surface necessitate a 5 s CRM time step for numerical stability. Therefore, the E3SM-MMF simulation shown here uses an alternative 50-layer vertical grid that allows a longer 10 s CRM time step. A final stability concern has to do with high-frequency oscillations of various atmospheric quantities near the surface, such as wind and temperature. Both models exhibit these oscillations, but they render E3SM-MMF much more susceptible to crashing. A temporal smoothing of surface fluxes with a 2 h timescale is used to address this problem, which does not have any notable impact on the model climate. These configuration choices and others, such as the CRM grid parameters, have been explored in numerous sensitivity tests, but in all cases they were found to have a negligible impact on the checkerboard signal in E3SM-MMF (not shown).
Model simulations
All simulations are run for 5 years using 85 nodes of the NERSC Cori-KNL computer (5400 MPI ranks). While hardware threading can be utilized outside of the CRM calculations, we did not employ threading in the simulations presented here. The use of 5-year simulations is common practice in model evaluation and is a trade-off between computational cost and signal-to-noise ratio. The global cubed sphere grid was set at ne30pg2 (30 × 30 spectral elements per cube face and 2 × 2 finite-volume physics cells per element), which roughly corresponds to an effective grid spacing of 150 km. The model input data for quantities such as solar forcing, aerosol concentrations, and land surface types are derived from a 10-year climatology over 1995-2005 to be representative of climatological conditions around 2000. Sea surface temperatures were similarly prescribed using monthly climatological values that are temporally interpolated to give a smooth evolution (Taylor et al., 2000).
Satellite data
We are interested in characterizing the checkerboard patterns in satellite data as a way to determine the degree of realism in the model data, and thus the specific time period of satellite data used for analysis is arbitrary. We choose to use daily mean data over 2005-2009. Since the checkerboard pattern is most visible in cloud liquid water path and precipitation fields, we use comparable satellite estimates of these fields to provide a baseline of the spatial distribution of these quantities. Satellite estimates of cloud liquid water path are provided by the Multisensor Advanced Climatology of Liquid Water Path (MAC-LWP) data product (Elsaesser et al., 2017). We use a daily resolution version of the product (McCoy et al., 2020), with LWP estimates provided on a 1.0° × 1.0° equiangular grid that is then regridded to the ne30pg2 grid used by the model. MAC-LWP additionally provides total (cloud plus precipitating) liquid water path estimates (TLWP), and we use TLWP to create a gridded quality control mask that hashes regions for which the ratio of LWP to TLWP is less than 0.6, broadly following the recommendation in Elsaesser et al. (2017). Hashed regions envelop grid boxes for which LWP estimates exhibit substantial uncertainty (and potential systematic bias) due to errors in isolating and quantifying the cloud liquid water radiometric signature from that of the total liquid water radiometric signature in microwave retrievals.
The Global Precipitation Measurement (GPM) mission, the successor to the Tropical Rainfall Measurement Mission (TRMM), was launched in 2014 with the goal of producing accurate and reliable estimates of global precipitation using all available data from the TRMM and GPM eras (Hou et al., 2014). The Integrated Multi-satellite Retrievals for GPM (IMERG) combines several satellite data sets to produce an integrated rainfall data product that has proven to perform well in various regions (Anjum et al., 2018; Kim et al., 2017). Daily mean IMERG data are available on a 0.1° × 0.1° grid, which is much finer than the grid used for the model simulations used here. To facilitate direct comparison, we regrid the IMERG data to the ne30pg2 model grid, as well as a 1.0° × 1.0° equiangular grid to match the MAC-LWP data.
Results
In this section we will present the results of the pattern detection algorithm described in Sect. 2.2. We will focus on providing a broad comparison of how various patterns occur in each data set, as well as an assessment of the persistence of partial checkerboard patterns.
Checkerboard climatology
Figures 5 and 6 show 5-year average maps of precipitation and cloud liquid water path centered over the tropical Pacific using all data sets on the ne30pg2 grid. The Pacific region was intentionally used because it is often where the most obvious checkerboard signal can be seen in E3SM-MMF. The satellite data from IMERG and MAC-LWP do not indicate any systematic noise on the ne30pg2 grid, as we expect (Figs. 5a, 6a). Hashed regions in Fig. 6a indicate where time-averaged MAC-LWP data are more uncertain due to the prevalence of deep convection and increased precipitation water that makes it difficult to determine accurate estimates of cloud-only liquid water paths. The checkerboard pattern is immediately evident in E3SM-MMF data, along with all the standard climatological features we expect, such as the tropical convergence zones (Figs. 5b, 6b).
It is not immediately obvious that either E3SMv2 case exhibits any checkerboard signal in the long-term means, but there are slight indications that the case with the DCAPE trigger disabled produces a smoother climatology. Part of what hides the checkerboard signal in Figs. 5c and 6c is the choice of color bar, along with the fact that the checkerboard signal in E3SMv2 is weak compared to E3SM-MMF. The checkerboard in E3SMv2 can be made more visually apparent in both of these fields when using a color bar with a logarithmic scale (not shown).
The results of the pattern detection algorithm contain a wealth of information that is challenging to condense. Visualizing the fractional occurrence of each separate pattern is very difficult to parse and understand, even after accounting for rotational symmetry. Alternatively, we can combine patterns based on the number of local extrema in the binary neighborhood pattern sequence. This approach simply counts the number of ones surrounded by zeros and vice versa. A pure checkerboard pattern has eight local extrema, and a lower number of extrema indicates a less noisy state in the local neighborhood. Note that local extrema counts of six and seven are not possible in a binary sequence of length eight.
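A minimal Python sketch of this local extrema count (hypothetical function name, circular neighborhood assumed) is:

```python
def count_local_extrema(pattern):
    """Count neighbors that differ from both of their circular neighbors,
    i.e. a '1' surrounded by '0's or a '0' surrounded by '1's."""
    n = len(pattern)
    return sum(
        pattern[i] != pattern[i - 1] and pattern[i] != pattern[(i + 1) % n]
        for i in range(n)
    )

# A pure checkerboard such as (0, 1, 0, 1, 0, 1, 0, 1) yields 8; a uniform neighborhood yields 0.
```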
Figure 7a, c shows the result of combining patterns by the number of local extrema using liquid water path and precipitation data from the northwest tropical and subtropical Pacific. For each satellite data set, we have included results from both the ne30pg2 and 0.1° × 0.1° grids to reveal any influence of the remapping. The difference in each fractional occurrence value relative to the satellite data on the ne30pg2 grid is shown in Fig. 7b, d.
Figure 7 makes it clear that the occurrence of the pure checkerboard (eight local extrema) is quite rare relative to the other patterns, and E3SM-MMF produces the most frequent occurrence of this pattern. However, E3SM-MMF has an even larger prevalence of patterns with three, four, and five local extrema relative to all other data sets. Conversely, smooth patterns with no local extrema are produced much less often in E3SM-MMF than in any other data set. This indicates that E3SM-MMF has a less smooth solution in general, and also illustrates the importance of considering partial checkerboard patterns rather than only looking for a pure checkerboard.
Figure 7 shows an interesting distinction between our two E3SMv2 simulations. The E3SMv2 case with the DCAPE trigger shows a higher occurrence of noisier patterns with more local extrema and a lower occurrence of patterns with no local extrema. Thus, the results are similar to E3SM-MMF but with smaller differences relative to the satellite data. Conversely, E3SMv2 has a much smoother solution without the DCAPE trigger, as it has a relatively low occurrence of noisier patterns and a relatively high occurrence of the smoother patterns compared to satellite data.
Figure 8 shows a similar analysis to Fig. 7 using model data for various other quantities. These variables were chosen because they do not appear to exhibit any checkerboard signal from visual inspection of map plots over various averaging timescales (not shown), and Fig. 8 shows that the pattern detection algorithm can quantitatively confirm this observation. Ice water path is a slight exception because E3SM-MMF does exhibit a weak checkerboard signal in this field. However, the occurrence of the noisier patterns is much smaller than that in Fig. 7. Although this analysis cannot tell us anything about the checkerboard pattern, it is interesting to note that it supports our previous observation that both E3SMv2 and E3SM-MMF are noisier than E3SMv2 without the DCAPE trigger, although we cannot say which result is more realistic.
Figure 9 shows maps of fractional occurrence for partial checkerboard patterns in liquid water path data. A more prevalent occurrence of partial checkerboard is seen in the subtropical regions of E3SMv2 and E3SM-MMF. The regions that stand out are in line with what we expect from how the checkerboard pattern is revealed in long-term averages, such as Fig. 6. Interestingly, the E3SMv2 case without DCAPE also shows that the subtropics are very slightly noisier than other regions, but the significance of these regional differences is difficult to assess since the occurrence of partial checkerboard patterns is so low.
Translational invariance of the pure checkerboard
A curious property of the checkerboard pattern in E3SM-MMF is that it seems to be spatially "locked", allowing it to be clearly seen in multi-year averages. This suggests that the localized statistics of the model state are not translationally invariant, such that certain columns exhibit fundamentally different behavior from their immediate neighbors. Such a discontinuity in statistics should be especially alarming in regions with roughly homogeneous large-scale dynamics and surface boundary conditions, such as the subtropical regions of the central Pacific. To illustrate this more clearly, Fig. 10 shows the fractional occurrence of each unique phase of the pure checkerboard pattern for E3SM-MMF and E3SMv2. A similar plot that combines both checkerboard phases (not shown) reveals subtropical regions of elevated occurrence with a smooth spatial texture. However, when the phases are plotted separately for E3SM-MMF, we see that the pattern of occurrence itself reveals a checkerboard pattern (Fig. 10a, c). Furthermore, comparing the inset maps of Fig. 10 reveals that the checkerboard patterns of the pure checkerboard phase occurrence are out of phase with each other. This shows that the model solution is indeed not translationally invariant in the regions where checkerboard is detected. Figure 10b, d illustrates how the checkerboard signal is less persistent in E3SMv2 and exhibits less of a departure from a translationally invariant solution. The checkerboard phase occurrence still exhibits a degree of checkerboard pattern itself, but this signal is less robust than in E3SM-MMF. This suggests that the processes associated with the DCAPE trigger that conspire to produce the checkerboard pattern are less prone to becoming spatially locked.
Checkerboard pattern persistence
The mere existence of a partial checkerboard pattern does not necessarily mean that the signal is unphysical, but an unnaturally persistent pattern should not be considered realistic for a moist, convecting atmosphere. To investigate how persistent the partial checkerboard patterns are, we consider all valid oceanic data points (points with oceanic neighbors) between 60° S and 60° N and identify periods where a local neighborhood stays in a state of partial checkerboard. Figure 11 shows a histogram of the length of all these events for liquid water path and precipitation. In both variables we see that E3SM-MMF and E3SMv2 show a larger number of events of any length when compared to satellite data and E3SMv2 without the DCAPE trigger. E3SM-MMF exhibits events that last nearly 100 d, which is not seen in any other data set. E3SMv2 without DCAPE behaves similarly to satellite observations in this respect, further supporting the conclusion that the DCAPE trigger is the sole cause of the checkerboard signal in E3SMv2. The tendency to produce relatively long-lived partial checkerboard events in E3SMv2 and E3SM-MMF illustrates how the checkerboard becomes imprinted onto the climatology through persistent checkerboard signals superimposed on the typical fluctuations from weather.
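The event lengths entering such a histogram can be computed per cell from a daily boolean time series of the partial checkerboard state; a minimal Python sketch (hypothetical function name) is:

```python
import numpy as np

def event_lengths(is_partial_checkerboard):
    """Lengths (in days) of contiguous runs of True in a daily boolean series for one grid cell."""
    lengths, run = [], 0
    for flag in is_partial_checkerboard:
        if flag:
            run += 1
        elif run:
            lengths.append(run)
            run = 0
    if run:
        lengths.append(run)
    return np.array(lengths)

# Pooling these lengths over all oceanic cells and calling np.histogram would give the event-length distribution.
```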
Variance trapping in E3SM-MMF
The analysis thus far is sufficient to confirm that the checkerboard pattern in E3SMv2 is a direct result of the DCAPE trigger. We believe this is due to a feedback mechanism in which convectively active cells of the checkerboard pattern experience resolved upward motion and further CAPE generation from dynamics, while the adjacent neighbors experience a stabilizing effect from the subsiding portion of the local circulation that causes the DCAPE trigger to suppress convection. Without the DCAPE trigger the deep convection scheme is notorious for launching convection too often, which prevents this feedback from becoming established. This hypothesis is loosely supported by experiments with alternate calculations of CAPE generation for the trigger condition, such as including radiation (not shown). Finding an explanation for the checkerboard signal in E3SM-MMF is less straightforward. Figure 12 shows a representative snapshot of liquid water path on the global grid and a localized group of CRM water vapor fields, arbitrarily selected from a region exhibiting a checkerboard pattern, shown as anomalies from the horizontal mean at each level of the CRM domain. There is a clear correspondence between the liquid water path on the global grid and the amplitude of the CRM-scale fluctuations. The relatively dry cells of the checkerboard pattern exhibit very little variation in the water vapor field and similarly exhibit very little variation in CRM wind anomalies. This contrast of CRM fluctuations between adjacent neighbors is evident across checkerboard regions even when a synoptic weather system is moving through.
The persistence of fluctuations in one CRM and suppression of fluctuations in a neighboring CRM suggest that these fluctuations have become "trapped" in such a way that they are not easily dissipated. It is reasonable to speculate that these trapped fluctuations can have a perpetual influence on neighboring cells. In general, moist convection often produces heating and drying to balance cooling and moistening tendencies produced by other processes, such as large-scale dynamics, radiation, and surface fluxes. Thus, a relatively "active" CRM might produce a sufficiently dry, stable state that could suppress convective activity in a neighboring CRM when advected by the dynamics. Similarly, a relatively "inactive" CRM might allow convective instability to increase through cooling and moistening by other processes as this air mass was being advected.
Conclusions
In this study we have presented a novel pattern detection method to investigate a checkerboard pattern in cloud-related variables over subtropical ocean regions in E3SM-MMF and E3SMv2. Using satellite data of liquid water path and precipitation as a baseline, our analysis shows that certain patterns associated with a noisier state occur too often in localized regions and are too persistent. These results support the conclusion that the checkerboard is clearly unphysical. The signal in E3SMv2 is caused by the recently added convective trigger based on dynamically generated CAPE (DCAPE), whereas the source of the checkerboard in E3SM-MMF is seemingly related to "trapping" of cloud-scale fluctuations within the embedded cloud resolving model (CRM).
We have stopped short of providing a detailed analysis of the feedback mechanisms that perpetuate the pattern for several reasons. An examination of the vertically resolved moisture budget would likely help us to understand why the checkerboard persists, but this is quite difficult to do on the native grid, especially given the fact that dynamics calculations are done on a different grid (i.e., the np4 spectral element grid). A simple composite of CRM forcing and feedback terms in E3SM-MMF also seems like it would be illuminating, but given the contamination of weather variations it is very difficult to isolate the moment that the checkerboard comes into existence. Thus, a composite of synoptic-scale processes for checkerboard regions can only show the balance of processes between relatively cloudy and non-cloudy cells that make up the checkerboard, without being able to clearly isolate how one cell influences its adjacent neighbors.
Despite not being able to fully understand the fundamental mechanism behind the checkerboard signal, there are several outstanding questions to which we can provide a speculative answer. The CRM instances of E3SM-MMF are completely independent, and thus the dynamics of the global model are clearly important for setting up the pattern via advection, and the physics calculations must be responsible for making it persist locally. The prevalence of checkerboard signals in subtropical regions suggests that the unrelenting intensity of the trade winds might be providing the ideal environment for these feedbacks to persist. The subtropical regions might also be ideal because there is less influence from synoptic systems relative to other weather regimes.
Interestingly, the checkerboard signal is not detected over land regions, and the reason for this is unclear. The use of prescribed sea surface temperature might seem like a potential complication because the surface temperature cannot respond to the local convection like it does over land, but tests with a fully coupled ocean still exhibit checkerboard patterns (not shown). Presumably, the land vs. ocean contrast has something to do with the smaller heat capacity of the land surface and the stronger diurnal cycle of surface fluxes, but more work is needed to clarify this hypothesis.
An obvious question to ask is whether the checkerboard problem is isolated to E3SM-MMF or if all MMF models exhibit a version of the same problem. Additional experiments were done with the NCAR super-parameterized CAM (SP-CAM) to investigate this question (not shown). While SP-CAM and E3SM-MMF share a lot of features, the dynamical cores have diverged significantly over recent years, including the one used for the spectral element grid. Experiments with SP-CAM used both the finite-volume and spectral element dynamical core options, and the checkerboard pattern was detected, but with a much lower frequency of occurrence. This result is very puzzling, but we suspect it has something to do with the difference in the dynamical cores, perhaps related to the choice of whether to use "dry" or "full" pressure.
The final outstanding question to pose is how this problem should be addressed. For E3SMv2, the DCAPE trigger needs to be revisited, and a simple solution of adjusting the trigger threshold might provide a way to address the issue, but additional sensitivity experiments are needed. Preliminary experiments that modify the DCAPE trigger to include CAPE generation by radiation show a notable reduction in the checkerboard signal (not shown). Presumably, this is due to radiative cooling being able to more efficiently generate CAPE in the less cloudy cells of the checkerboard.
Our current hypothesis is that the checkerboard pattern in E3SM-MMF is due to the "trapping" of CRM fluctuations, which is essentially a "design flaw" of the MMF concept associated with the scale gap illustrated in Fig. 1. In the real atmosphere, these relatively small-scale fluctuations on the scale of individual clouds would be advected by the larger-scale flow in which they are embedded, but this process is missing from the MMF. We cannot fully include this process without discarding the scale gap and producing a global CRM, which would eliminate the computational advantages of the MMF. Alternatively, we can transport CRM fluctuations by encoding this information into a bulk variance tracer that can be advected on the global grid. A method for this "CRM variance transport" is presented in another publication (Hannah and Pressel, 2022), which demonstrates that it is effective at eliminating the checkerboard patterns in the E3SM-MMF climatology.
Figure 1. Schematic illustration of the scale gap created by the MMF paradigm, in which two models are coupled across a range of scales that neither can represent.
Figure 2. Single-month mean maps for an arbitrarily chosen January of precipitation data from IMERG, E3SM-MMF, and E3SMv2. Data are plotted on the native ne30pg2 physics grid using shaded polygons in order to see signals at the grid scale.
Figure 3. Examples of the nearest-neighbor detection algorithm (see text). Numbers indicate the ordering of adjacent neighbors such that the northernmost edge neighbor is first in the sequence with a clockwise order.
Figure 4. Examples of the patterns identified in daily snapshots of liquid water path from E3SM-MMF corresponding to the pattern examples in Table 1. Cells are labeled with a "0" for values less than or equal to the center value and "1" otherwise. Note that a logarithmic spacing is used for the color levels.
Figure 6. Maps of 5-year mean liquid water path for MAC-LWP, E3SM-MMF, E3SMv2, and E3SMv2 with the DCAPE trigger disabled. Hashing indicates regions for which MAC cloud liquid water path is more uncertain, as described in Sect. 2.2.
Figure 7. (a, c) Fractional occurrence of neighborhood patterns combined according to the number of local extrema (see text) over the region 0-30° N, 140-220° E (see inset map) for 5 years of satellite and model data. Results for IMERG precipitation and MAC liquid water path are shown on a 1° × 1° grid and the ne30pg2 grid used by the model for direct comparison. (b, d) Difference in fractional occurrence relative to satellite data on the ne30pg2 grid.
Figure 8. Similar to Fig. 7, showing fractional occurrence of neighborhood patterns over the region 0-30° N, 140-220° E for various model variables that do not exhibit a checkerboard pattern, specifically 850 mb temperature (a), 850 mb zonal wind (b), ice water path (c), and surface latent heat flux (d).
Figure 11. Histogram of the event length, with events defined as continuous periods with partial checkerboard neighborhood state. Data were restricted to oceanic points equatorward of 60° latitude in both hemispheres for all 5 years that were available.
Figure 12. Instantaneous snapshot of tropical western Pacific liquid water path and CRM water vapor field for a select region exhibiting a checkerboard pattern for an arbitrarily selected day in boreal summer.
| 8,402.8 | 2022-08-12T00:00:00.000 | ["Environmental Science", "Physics"] |
Nonparametric Regression via StatLSSVM
We present a new MATLAB toolbox under Windows and Linux for nonparametric regression estimation based on the statistical library for least squares support vector machines (StatLSSVM). The StatLSSVM toolbox is written so that only a few lines of code are necessary in order to perform standard nonparametric regression, regression with correlated errors and robust regression. In addition, construction of additive models and pointwise or uniform confidence intervals are also supported. A number of tuning criteria such as classical cross-validation, robust cross-validation and cross-validation for correlated errors are available. Also, minimization of the previous criteria is available without any user interaction.
Introduction
Nonparametric regression is a very popular tool for data analysis because it imposes few assumptions about the shape of the mean function. Therefore, nonparametric regression is quite a flexible tool for modeling nonlinear relationships between a dependent variable and regressors. Nonparametric and semiparametric regression techniques continue to be an area of active research. In recent decades, methods have been developed for robust regression (Jurečková and Picek 2006; Maronna, Martin, and Yohai 2006; De Brabanter et al. 2009), regression with correlated errors (time series errors) (Chu and Marron 1991; Hart 1991; Hall, Lahiri, and Polzehl 1995; Opsomer, Wang, and Yang 2001; De Brabanter, De Brabanter, Suykens, and De Moor 2011b), regression in which the predictor or response variables are curves (Ferraty and Vieu 2006), images, graphs, or other complex data objects, regression methods accommodating various types of missing data (Hastie, Tibshirani, and Friedman 2009; Marley and Wand 2010), nonparametric regression (Györfi, Kohler, Krzyzak, and Walk 2002) and Bayesian methods.

It is not necessary to know the feature space in explicit form, i.e., one only needs to replace the inner product in the feature space ϕ(X_k)^⊤ϕ(X_l), for all k, l = 1, . . . , n, with the corresponding kernel K(X_k, X_l). This result is known as Mercer's condition (Mercer 1909). As a consequence, to fulfill Mercer's condition one requires a positive (semi-)definite kernel function K. LS-SVMs for regression (Suykens, Van Gestel, De Brabanter, De Moor, and Vandewalle 2002b) are related to SVMs (Vapnik 1999), where the inequality constraints have been replaced by equality constraints and a squared loss is employed. Let D_n = {(X_1, Y_1), . . . , (X_n, Y_n)}, where X ∈ R^d and Y ∈ R, be a given training data set, consider the model class F_{n,Ψ} defined in (1) and let γ > 0 be a regularization parameter. Then, LS-SVM for regression is formulated as the constrained optimization problem (2). The squared loss in (2) can be replaced by any other empirical loss. By using an L_2 loss function (and equality constraints) in LS-SVM, the solution is obtained from a linear system instead of the quadratic programming problem used for SVM, which speeds up computations. The problem is that LS-SVM lacks sparseness and robustness. For specialized literature on other loss functions and their properties, consistency and robustness, we refer the reader to Christmann and Steinwart (2007), Steinwart and Christmann (2008) and Steinwart and Christmann (2011). Suykens et al. (2002b) provide a benchmarking study on LS-SVM.
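For reference, LS-SVM regression with a squared loss and equality constraints, as described above, is commonly written as the following primal problem; this restates the standard formulation of Suykens et al. (2002b) rather than reproducing the article's own display, with w, b and e_k as in the surrounding text.

```latex
\min_{w,\,b,\,e}\ \mathcal{J}(w,e) \;=\; \tfrac{1}{2}\, w^{\top} w \;+\; \tfrac{\gamma}{2} \sum_{k=1}^{n} e_k^{2}
\qquad \text{subject to} \qquad
Y_k = w^{\top} \varphi(X_k) + b + e_k, \quad k = 1,\dots,n .
```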
From Equation 2, it is clear that this model is linear in the feature space. This principle is illustrated in Figure 1. Consider a nonlinear relationship in the input space (Figure 1, left panel). The inputs X are mapped into a high dimensional space by means of ϕ (Figure 1, right panel), and in this space a linear model is fitted to the transformed data. Since ϕ is in general unknown, problem (2) is solved by using Lagrange multipliers. The Lagrangian is formed with Lagrange multipliers α_i ∈ R, and the conditions for optimality follow by setting its partial derivatives to zero. After elimination of w and e, the parameters b and α are estimated from the linear system (3), with Y = (Y_1, . . . , Y_n)^⊤, 1_n = (1, . . . , 1)^⊤ and α = (α_1, . . . , α_n)^⊤. By using Mercer's condition, Ω is a positive (semi-)definite matrix whose kl-th element is Ω_kl = K(X_k, X_l). Hence, the kernel function K is a symmetric, continuous positive definite function. Popular choices are the linear, polynomial and radial basis function (RBF) kernel. In this paper we take K(X_i, X_j) = (2π)^{-d/2} exp(-‖X_i − X_j‖_2^2 / (2h^2)). The resulting LS-SVM model m̂ follows from the solution of this linear system.
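For reference, in the standard LS-SVM literature the linear system referred to as (3) and the resulting model take the following form; again, this restates the well-known expressions rather than the article's own displays.

```latex
\begin{bmatrix} 0 & 1_n^{\top} \\ 1_n & \Omega + \gamma^{-1} I_n \end{bmatrix}
\begin{bmatrix} b \\ \alpha \end{bmatrix}
=
\begin{bmatrix} 0 \\ Y \end{bmatrix},
\qquad \Omega_{kl} = K(X_k, X_l),
\qquad
\hat m(x) = \sum_{k=1}^{n} \alpha_k K(x, X_k) + b .
```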
Model selection
In practical situations it is often preferable to have a data-driven method to estimate learning parameters. For this selection process, many data-driven procedures have been discussed in the literature. Commonly used are those based on the cross-validation criterion (Burman 1989) (leave-one-out and v-fold), the generalized cross-validation criterion (Craven and Wahba 1979), the Akaike information criterion (Akaike 1973), etc. Several of these criteria are implemented in the toolbox (see the user's manual and the next sections).
Although these model selection criteria assist the user to find suitable tuning parameters or smoothing parameters (bandwidth h of the kernel and the regularization parameter γ), finding the minimum of these cost functions tends to be tedious. This is due to the fact that the cost functions are often non-smooth and may contain multiple local minima. The latter is theoretically confirmed by Hall and Marron (1991).
A typical method to estimate the smoothing parameters would define a grid over these parameters of interest and apply any type of model selection method for each of these grid values. However, three disadvantages come up with this approach (Bennett, Hu, Xiaoyun, Kunapuli, and Pang 2006;Kunapuli, Bennett, Hu, and Pang 2008). A first disadvantage of such a grid-search model selection approach is the limitation of the desirable number of tuning parameters in a model, due to the combinatorial explosion of grid points. A second disadvantage is their practical inefficiency, namely, they are incapable of assuring the overall quality of the produced solution. A third disadvantage in grid-search is that the discretization fails to take into account the fact that the tuning parameters are continuous.
In order to overcome these drawbacks, we have equipped the toolbox with a powerful global optimizer, called coupled simulated annealing (CSA) (de Souza, Suykens, Vandewalle, and Bollé 2010) and a derivative-free simplex search (Nelder and Mead 1965;Lagarias, Reeds, Wright, and Wright 1998). The optimization process is twofold: First, determine good initial starting values by means of CSA and second, perform a fine-tuning derivative-free search using the previous end results as starting values. In contrast with other global optimization techniques CSA is not slow and can easily escape from local minima. The CSA algorithm based on coupled multiple starters is more effective than multi-start gradient descent optimization algorithms. Another advantage of CSA is that it uses the acceptance temperature to control the variance of the acceptance probabilities with a control scheme that can be applied to an ensemble of optimizers. This leads to an improved optimization efficiency because it reduces the sensitivity of the algorithm to the initialization parameters while guiding the optimization process to quasi-optimal runs. Because of the effectiveness of the combined methods only a small number of iterations are needed to reach an optimal set of smoothing parameters (bandwidth h of the kernel and the regularization parameter γ).
Standard nonparametric regression
In this section we illustrate how to perform a nonparametric regression analysis on the LIDAR data (Holst, Hössjer, Björklund, Ragnarson, and Edner 1996) and a two dimensional toy example with StatLSSVM in MATLAB.
Step-by-step instructions will be given on how to obtain the results. All the data sets used in this paper are included in StatLSSVM.
Univariate smoothing
First, load the LIDAR data into the workspace of MATLAB using load('lidar.mat'). After loading the data, one should always start by making a model structure using the initlssvm command. This model structure contains all the necessary information of the given data (xtrain and ytrain), data size (nb_data), dimensionality of the data (x_dim and y_dim) and the chosen kernel function (kernel_type). StatLSSVM currently supports five positive (semi-)definite kernels, i.e., the Gaussian kernel ('gauss_kernel'), the RBF kernel ('RBF_kernel'), the Gaussian additive kernel ('gaussadd_kernel'), a fourth order kernel based on the Gaussian kernel ('gauss4_kernel') (Jones and Foster 1993) and the linear kernel ('lin_kernel'). Note that we did not specify any value yet for the smoothing parameters, i.e., the bandwidth of the kernel (bandwidth) and the regularization parameter (gam) in the initlssvm command. We initialized these two parameters to the empty field in MATLAB by [] in initlssvm. The status element of this structure contains information whether the model has been trained with the current set of smoothing parameters (Equation 3 is solved or not). If the model is trained (Equation 3 is solved) then the field 'changed' will become 'trained'. The last element weights specifies the weights used with robust regression (see Section 5).
Any field in the structure can be accessed by using model.field_name . For example, if one wants to access the regularization parameter in the structure model, one simply uses model.gam. The next step is to tune the smoothing parameters. This is done by invoking tunelssvm and StatLSSVM supports several model selection criteria for standard nonparametric regression such as leave-one-out cross-validation ('leaveoneout'), generalized cross-validation ('gcrossval') and v-fold cross-validation ('crossval'). We illustrate the code for the v-fold cross-validation. By default, 'crossval' uses 10-fold cross-validation and the L 2 residual loss function. We will not show the complete output of the optimization process but only show the model structure output. The fields gam and bandwidth are no longer empty but contain their tuned value according the 10-fold cross-validation criterion. Note that the field status has been altered from 'changed' to 'trained'. Also the Lagrange multipliers alpha and bias term b have been added to the model structure. The last line in the structure denotes the time needed to solve the system of equations (3)
Bivariate smoothing
In this example of bivariate smoothing, the NBA data set (Simonoff 1996) is used (available in StatLSSVM as nba.mat). Since the workflow is exactly the same as in the previous example we only give the input script and visualize the results. In this example the vector x ∈ R^2 and y ∈ R. The fitted regression surface is an estimate of the mean points scored per minute conditional on the number of minutes played per game and height in centimeters for 96 NBA players who played the guard position during the 1992-1993 season. As a model selection method we choose leave-one-out cross-validation. The relevant MATLAB commands are
Robust nonparametric regression
Regression analysis is an important statistical tool routinely applied in most sciences. However, when using least squares techniques, one must be aware of the dangers posed by the occurrence of outliers in the data. Not only the response variable can be outlying, but also the explanatory part, leading to leverage points. Both types of outliers may totally spoil an ordinary least squares analysis. We refer to the books of Hampel, Ronchetti, Rousseeuw, and Stahel (1986), Rousseeuw and Leroy (2003), Jurečková and Picek (2006) and Maronna et al. (2006) for a thorough survey regarding robustness aspects.
A possible way to robustify (2) is to use an L_1 loss function. However, this would lead to a quadratic programming problem and is more difficult to solve than a linear system. Therefore, we opt for a simple but effective method, i.e., iterative reweighting (De Brabanter et al. 2009; Debruyne, Christmann, Hubert, and Suykens 2010). This approach solves a weighted least squares problem in each iteration until a certain stopping criterion is satisfied. StatLSSVM supports four weight functions: Huber, Hampel, Logistic and Myriad weights. Table 1 illustrates these four weight functions.
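Table 1 is not reproduced here; as a rough sketch, the weight functions v(r) of a residual r typically take the following forms in the robust LS-SVM literature (De Brabanter et al. 2009), where β, b_1, b_2 and δ are tuning constants and the exact parameterizations used by the toolbox may differ.

```latex
v_{\mathrm{Huber}}(r) = \begin{cases} 1, & |r| < \beta \\ \beta/|r|, & |r| \ge \beta \end{cases}
\qquad
v_{\mathrm{Hampel}}(r) = \begin{cases} 1, & |r| < b_1 \\ \dfrac{b_2 - |r|}{b_2 - b_1}, & b_1 \le |r| \le b_2 \\ 0, & |r| > b_2 \end{cases}
\qquad
v_{\mathrm{Logistic}}(r) = \frac{\tanh(r)}{r}
\qquad
v_{\mathrm{Myriad}}(r) = \frac{\delta^{2}}{\delta^{2} + r^{2}}
```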
A robust version of (2) is then formulated as follows (Suykens, De Brabanter, Lukas, and Vandewalle 2002a): each squared residual e_i^2 is weighted by a factor v_i, where v_i denotes the weight of the i-th residual. The weights are assigned according to the chosen weight function in Table 1. Again, by using Lagrange multipliers, the solution is given by

\begin{bmatrix} 0 & 1_n^{\top} \\ 1_n & \Omega + D_{\gamma} \end{bmatrix} \begin{bmatrix} b \\ \alpha \end{bmatrix} = \begin{bmatrix} 0 \\ Y \end{bmatrix},

with Y = (Y_1, . . . , Y_n)^⊤, 1_n = (1, . . . , 1)^⊤, α = (α_1, . . . , α_n)^⊤ and D_γ = diag{1/(γv_1), . . . , 1/(γv_n)}. Suppose we observe the data D_n = {(X_1, Y_1), . . . , (X_n, Y_n)}, but the Y_i are subject to occasional outlying values. An appropriate model is Y_i = m(X_i) + ε_i for a smooth function m, where the ε_i come from the gross-error model (Huber 1964) with symmetric contamination. The gross-error model or ε-contamination model is defined as F = (1 − ε) F_0 + ε G, where F_0 is some given distribution (the ideal nominal model), G is an arbitrary continuous symmetric distribution and ε is the contamination parameter. This contamination model describes the case where, with large probability (1 − ε), the data occur with distribution F_0 and, with small probability ε, outliers occur according to distribution G. In our toy example we generate a data set containing 35% outliers, where the distribution F_0 is taken to be the Normal distribution with variance 0.01 and G is the standard Cauchy distribution. In order to obtain a fully robust solution, one must also use a robust model selection method (Leung 2005). Therefore, StatLSSVM supports a robust v-fold cross-validation procedure ('rcrossval') based on a robust LS-SVM smoother and a robust loss function, i.e., the L_1 loss ('mae') or Huber's loss ('huber') instead of L_2 ('mse'). Figure 4 provides an illustration of StatLSSVM fitting via the next script for simulated data with n = 250, ε = 0.35, m(X) = sinc(X) and X ∼ U[−5, 5]. Note that this example requires the MATLAB Statistics Toolbox (generation of t distributed random numbers). The relevant MATLAB commands are

>> X = -5 + 10 * rand(250, 1);
>> epsilon = 0.35;
>> sel = rand(length(X), 1) > epsilon;
>> Y = sinc(X) + sel .* normrnd(0, .1, length(X), 1) + ...
       (1 - sel) .* trnd(1, length(X), 1);

The weight functions (Table 1) can be called as 'whuber', 'whampel' or 'wlogistic' as the last argument of the command tunelssvm. More information about all functions can be found in the supplements of this paper or via the MATLAB command window via the help function, for example help robustlssvm. Table 1 and the complete robust tuning procedure can be found in De Brabanter (2011).
Confidence intervals
In this section we consider the nonparametric regression model Y = m(X) + σ(X)ε, where m is a smooth function, E[ε|X] = 0, VAR[ε|X] = 1 and X and ε are independent. Two possible situations can occur: (i) σ^2(X) = σ^2 (homoscedastic regression model) and (ii) the variance is a function of the explanatory variable X (heteroscedastic regression model). We do not discuss the case when the variance function is a function of the regression function. Our goal is to determine confidence intervals for m.
Pointwise confidence intervals
Under certain regularity conditions, it can be shown that m̂(x) is asymptotically normal (De Brabanter 2011), where b(x) and V(x) are respectively the bias and variance of m̂(x). With the estimated bias and variance given in De Brabanter et al. (2011a), an approximate 100(1 − α)% pointwise confidence interval for m(x) can be constructed, where z_{1−α/2} denotes the (1 − α/2)-th quantile of the standard Gaussian distribution. This approximate confidence interval is valid under conditions that in turn require a different bandwidth to be used in assessing the bias and variance (Fan and Gijbels 1996), which is automatically done in the StatLSSVM toolbox.
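Denoting the estimated bias and variance by b̂(x) and V̂(x), such a bias-corrected pointwise interval is commonly written as follows; this is a sketch of the usual form, and the article's exact expression may differ.

```latex
\hat m(x) - \hat b(x) \;\pm\; z_{1-\alpha/2}\, \sqrt{\widehat V(x)}
```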
Uniform confidence intervals
In order to make simultaneous (or uniform) statements we have to modify the width of the interval to obtain simultaneous confidence intervals (see also multiple comparison theory). Mathematically speaking, we are searching for the width of the bands c, given a confidence level α ∈ (0, 1), such that the bands cover the true function simultaneously over the design region with probability at least 1 − α, for some suitable class of smooth functions F_n. In StatLSSVM the width of the bands c is determined by the volume-of-tube formula (Sun and Loader 1994). We illustrate the concept of simultaneous confidence intervals with two examples. First, consider a heteroscedastic regression example. As a second example, the LIDAR data set is used. Uniform and simultaneous confidence intervals are shown in Figure 6.
Additive LS-SVM models
Suppose a sample of observations (X_i, Y_i) (X_i ∈ R^d and Y_i ∈ R) is generated from an additive model

\[
Y_i = b + \sum_{j=1}^d m_j\big(X_i^{(j)}\big) + \varepsilon_i,
\]

where the error term ε_i is independent of the X_i^{(j)}, E[ε_i|X_i] = 0, VAR[ε_i|X_i] = σ² < ∞ and m_j is a smooth function of the regressor X_i^{(j)}. We consider the model class of functions of the form m(X) = Σ_{j=1}^d w_j^T φ_j(X^{(j)}) + b. The optimization problem (2) can be rewritten w.r.t. the new model class as follows (Pelckmans, Goethals, De Brabanter, Suykens, and De Moor 2005):

\[
\min_{w_j, b, e} \ \frac{1}{2}\sum_{j=1}^d w_j^\top w_j + \frac{\gamma}{2}\sum_{i=1}^n e_i^2
\quad \text{s.t.} \quad Y_i = \sum_{j=1}^d w_j^\top \varphi_j\big(X_i^{(j)}\big) + b + e_i, \quad i = 1, \ldots, n.
\]

As before, by using Lagrange multipliers, the solution is given by a linear system of the same form as above, where now Ω = Σ_{j=1}^d Ω^(j) and Ω^(j)_{kl} = K_j(X_k^{(j)}, X_l^{(j)}) for all k, l = 1, ..., n (a sum of univariate kernels). The resulting additive LS-SVM model is given by

\[
\hat m(x) = \sum_{i=1}^n \hat\alpha_i \sum_{j=1}^d K_j\big(X_i^{(j)}, x^{(j)}\big) + \hat b.
\]

We illustrate the additive LS-SVM models on two examples. First, we construct a classical example as in Hastie and Tibshirani (1990). The data are generated from the nonlinear regression model with 12 variables

\[
Y = 10\sin\big(\pi X^{(1)}\big) + 20\big(X^{(2)} - 0.5\big)^2 + 10 X^{(3)} - 5 X^{(4)} + \varepsilon,
\]

where the 12 variables are uniformly distributed on the interval [0, 1], ε ∼ N(0, 1) and n = 300 (the remaining eight variables do not enter the regression function). By using the option 'multiple' StatLSSVM tunes the bandwidth of the kernel for each estimated function. By setting this option to 'single' one bandwidth is found for all estimated functions. The relevant MATLAB commands are

>> X = rand(300, 12);
>> Y = 10 * sin(pi * X(:,1)) + 20 * (X(:, 2) - 0.5).^2 + ...
       10 * X(:, 3) - 5 * X(:, 4) + randn(300, 1);

Figure 7 shows the fitted functions for the additive LS-SVM model applied to our simulated data set. In general, the scales on the vertical axes are only meaningful in a relative sense; they have no absolute interpretation. Since we have the freedom to choose the vertical positionings, we should try to make them meaningful in the absolute sense. A reasonable solution is to plot, for each predictor, the profile of the response surface with each of the other predictors set at their average (see also Ruppert et al. 2003). This is automatically done by the plotlssvmadd command.
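A bare-bones version of the additive fit can be written directly from the linear system above by summing univariate RBF kernel matrices, as the following MATLAB sketch for the simulated data shows. The per-component bandwidths sig2 and the regularisation gam are assumed values rather than tuned ones, unlike what StatLSSVM does with the 'multiple' option.

% Additive LS-SVM via a sum of univariate kernels (illustrative sketch only).
[n, d] = size(X);
sig2 = 0.3 * ones(1, d); gam = 50;            % assumed per-component bandwidths and regularisation
Omega = zeros(n); Kj = cell(d, 1);
for j = 1:d
    Kj{j} = exp(-bsxfun(@minus, X(:, j), X(:, j)').^2 / (2 * sig2(j)));
    Omega = Omega + Kj{j};                    % Omega is the sum of univariate kernels
end
A = [0, ones(1, n); ones(n, 1), Omega + eye(n) / gam];
sol = A \ [0; Y];
b = sol(1); alpha = sol(2:end);
comp = zeros(n, d);
for j = 1:d
    comp(:, j) = Kj{j} * alpha;               % estimated j-th additive component at the data
    comp(:, j) = comp(:, j) - mean(comp(:, j));   % centre each component for plotting
end
mhat = Omega * alpha + b;                     % overall fit; comp(:, j) are the centred components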
As a last example we consider the diabetes data set also discussed in Hastie and Tibshirani (1990). The data come from a study (Sockett, Daneman, Clarson, and Ehrich 1987) of the factors affecting patterns of insulin-dependent diabetes mellitus in children. The objective is to investigate the dependence of the level of serum C-peptide on various other factors in order to understand the patterns of residual insulin secretion. The response measurement is the logarithm of C-peptide concentration (pmol/ml) at diagnosis, and the predictor measurements are age and base deficit (a measure of acidity). The result is shown in Figure 8 using the vertical alignment procedure discussed above. It can be seen that both effects appear to be nonlinear. The variable age has an increasing effect that levels off and the variable basedef appears quadratic.
Regression with correlated errors
In this section we consider the nonparametric regression model Y_i = m(X_i) + ε_i, where E[ε|X] = 0, VAR[ε|X] = σ² < ∞, the error term ε_i is a covariance stationary process with E[ε_i ε_{i+k}] = γ_k, γ_k ∼ k^{−a}, a > 2, and m is a smooth function. The presence of correlation between the errors, if ignored, causes a breakdown of commonly used automatic tuning parameter selection methods such as cross-validation or plug-in (Opsomer et al. 2001; De Brabanter et al. 2011b). Data-driven bandwidth selectors tend to be "fooled" by autocorrelation, interpreting it as part of the regression relationship and variance function. So, the cyclical pattern in positively correlated errors is viewed as a high-frequency regression relationship with small variance, and the bandwidth is set small enough to track the cycles, resulting in an undersmoothed fitted regression curve. The alternating pattern above and below the true underlying function for negatively correlated errors is interpreted as high variance, and the bandwidth is set large enough to smooth over the variability, producing an oversmoothed fitted regression curve.
The model selection method is based on leave-(2l + 1)-out cross-validation (Chu and Marron 1991). To tune the parameter l, a two-step procedure is used. First, a Nadaraya-Watson smoother with a bimodal kernel is used to fit the data. De Brabanter et al. (2011b) have shown that a bimodal kernel satisfying K(0) = 0 automatically removes correlation structure without requiring any prior knowledge about its structure. Hence, the obtained residuals are good estimates of the errors. Second, the k-th lag sample autocorrelation can be used to find a suitable value for l. More theoretical background about this method can be found in De Brabanter et al. (2011b).
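The following MATLAB sketch illustrates both ingredients for equally spaced data x with response y (column vectors): residuals from a Nadaraya-Watson fit with a kernel satisfying K(0) = 0, a choice of l from the sample autocorrelations of those residuals, and the resulting leave-(2l + 1)-out cross-validation score for one candidate bandwidth. The kernel forms, bandwidths and maximum lag are assumed values and this is not the StatLSSVM implementation.

% Step 1: residuals from a bimodal-kernel Nadaraya-Watson fit, then pick l.
n = length(y);
h = 0.05;                                              % assumed pilot bandwidth
U = bsxfun(@minus, x, x') / h;
Kbim = (U.^2) .* exp(-U.^2);                           % a bimodal kernel with K(0) = 0
W = bsxfun(@rdivide, Kbim, max(sum(Kbim, 2), eps));    % each point is excluded from its own fit
res = y - W * y;                                       % residuals, roughly free of correlation effects
maxlag = 10; r = zeros(maxlag, 1);
for k = 1:maxlag
    r(k) = sum(res(1:end-k) .* res(1+k:end)) / sum(res.^2);   % lag-k sample autocorrelation
end
l = find(abs(r) < 2 / sqrt(n), 1);                     % first lag with negligible autocorrelation
if isempty(l), l = maxlag; end

% Step 2: leave-(2l+1)-out cross-validation score for one candidate bandwidth hb.
hb = 0.1;                                              % assumed candidate bandwidth
K = exp(-(bsxfun(@minus, x, x') / hb).^2 / 2);
for i = 1:n
    K(i, max(1, i - l):min(n, i + l)) = 0;             % drop the 2l+1 observations centred at i
end
W = bsxfun(@rdivide, K, max(sum(K, 2), eps));
cv = mean((y - W * y).^2);

In practice the score cv is minimised over a grid of candidate bandwidths, exactly as ordinary cross-validation would be.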
Consider the beluga and US birth rate data sets (Simonoff 1996). We will compare the leave-(2l + 1)-out cross-validation method with classical leave-one-out cross-validation (see Figure 9). It is clear from both results that the existence of autocorrelation can seriously affect the regression fit. Ignoring the effects of correlation causes the nonparametric regression smoother to interpolate the data. This is especially visible in the US birth rate data set: without accounting for autocorrelation there is no clear trend visible in the regression fit. Using the above-described method for model selection, the regression fit shows a clear pattern: the birth rate dropped after the US joined the Second World War following the attack on Pearl Harbor (December 1941), remained low during the course of the war, and increased again after the war in Europe and the Pacific ended (mid-September 1945).
The relevant MATLAB commands for the beluga data set use 'crossval2lp1' to perform model selection accounting for correlation, followed by 'leaveoneout' for classical leave-one-out cross-validation. Figure 9 compares the resulting fits for the beluga data set (left panel) and the US birth rate data set (right panel); the green line represents the estimate with tuning parameters determined by classical leave-one-out cross-validation and the red line is the estimate based on the above-described procedure.
Conclusions
We have demonstrated that several nonparametric regression problems can be handled with StatLSSVM. This MATLAB-based toolbox can manage standard nonparametric regression, regression with autocorrelated errors, robust regression, pointwise/uniform confidence intervals and additive models with a few simple lines of code. Currently the toolbox is supported for MATLAB R2009b or higher. | 5,265.4 | 2013-10-22T00:00:00.000 | [
"Computer Science",
"Mathematics"
] |
A New Proof of the Lester’s Perimeter Theorem in Euclidean Space
Received: September 12, 2019 Revised: January 25, 2020 Accepted: February 03, 2020 Published Online: March 30, 2020 An injection defined from Euclidean n-space E^n (2 ≤ n < ∞) to itself which preserves triangles of perimeter 1 is a Euclidean motion. J. Lester presented two different proofs of this theorem, one in the Euclidean plane (Lester 1985) and one in Euclidean space (Lester 1986). In this study we present a general proof which works both in the Euclidean plane (n = 2) and in Euclidean space (2 < n < ∞).
Introduction
It is well known that some geometric transformations can be characterized by the properties they preserve. For instance, collinearity-preserving bijections of Euclidean n-space E^n (2 ≤ n < ∞) characterize the affine transformations, and this result is known as the fundamental theorem of affine geometry. The Möbius transformations of the extended complex plane can be characterized as the transformations preserving quadruples of concyclic points. In Minkowski space, Alexandrov's theorem describes the Lorentz transformations as the transformations of Minkowski space preserving the speed of light. In Euclidean space E^n (2 ≤ n < ∞), the Beckman-Quarles theorem identifies as motions those functions from E^n to itself preserving pairs of points a given fixed distance apart. More precisely, the Beckman-Quarles theorem (Beckman and Quarles 1953) states that a function from E^n to itself which preserves the relation |x − y| = Q for a fixed Q ∈ ℝ⁺ must be a Euclidean motion, where |x − y| denotes the distance between x, y ∈ E^n. This theorem plays a major role in our result. G. Martin (unpublished) characterized the equiaffine transformations (affine and area-preserving) of E² via the injections which preserve triangles with area 1 as follows, see (Lester 1985). Theorem 1.1: An injection from the Euclidean plane to itself which preserves triangles with area 1 must be equiaffine, see (Lester 1985).
J. Lester generalized this theorem to Euclidean space E^n as follows. Theorem 1.2: An injection f from Euclidean space E^n (2 < n < ∞) to itself which preserves triangles with area 1 must be a Euclidean motion, see (Lester 1986). J. Lester also obtained the following results using triangles of perimeter 1 instead of triangles of area 1. Theorem 1.3: An injection f from the Euclidean plane E² to itself which preserves triangles of perimeter 1 must be a Euclidean motion, see (Lester 1985). Theorem 1.4: An injection f from Euclidean space E^n (2 ≤ n < ∞) to itself which preserves triangles of perimeter 1 must be a Euclidean motion, see (Lester 1986).
A New Proof of the Lester's Perimeter Theorem in Euclidean Space
Lemma 2.1: Let F₁ and F₂ be two distinct points of E^n with |F₁F₂| < 1/2. If XF₁F₂ is a triangle of perimeter 1, then X must be a point on an n-dimensional rotated ellipsoid with focal points F₁ and F₂, defined by the equation |XF₁| + |XF₂| = 1 − |F₁F₂|.
Proof: If XF₁F₂ is a triangle of perimeter 1, then |XF₁| + |XF₂| + |F₁F₂| = 1, so the sum of the focal distances |XF₁| + |XF₂| = 1 − |F₁F₂| is constant.
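For concreteness, the ellipsoid of Lemma 2.1 can also be written in a standard coordinate form; this is a reconstruction from the perimeter condition alone (only the focal form above is used in what follows), writing d = |F₁F₂| and placing the midpoint of F₁F₂ at the origin with F₁F₂ along the first coordinate axis:

\[
\frac{x_1^2}{a^2} + \frac{x_2^2 + \cdots + x_n^2}{b^2} = 1,
\qquad a = \frac{1-d}{2},
\qquad b^2 = a^2 - \frac{d^2}{4} = \frac{1-2d}{4}.
\]

This is an ellipsoid of revolution about the F₁F₂ axis, and the requirement d < 1/2 guarantees b² > 0.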
Then the locus of all such points X ∈ E^n is an n-dimensional rotated ellipsoid drilled by two points, i.e., with two points removed. These two points are clearly the vertices of the ellipsoid on the line F₁F₂, where the triangle XF₁F₂ degenerates. Lemma 2.2: Let f be an injection from Euclidean space E^n (2 < n < ∞) to itself which preserves triangles of perimeter 1. Then f preserves right angles. Proof: Let l₁ and l₂ be two distinct lines in E^n which meet perpendicularly. Denote the common point of these lines by F₁. Now take a point on l₂, say F₂, such that |F₁F₂| < 1/2. Following the same reasoning as in the proof of Lemma 2.1, one can construct the n-dimensional rotated ellipsoid Ω with focal points F₁ and F₂. Clearly l₁ and Ω meet at two points, say A and B. Now draw the Euclidean line passing through F₂ and parallel to l₁. This line and Ω also meet at two points; denote them by C and D.
It is clear that either AC ∥ BD or AD ∥ BC. Without loss of generality we may assume AC ∥ BD. One can easily see that AF₁F₂C is a rectangle which consists of the four triangles AF₁F₂, F₁F₂C, F₂CA and CAF₁. The perimeter of each of these triangles is 1. Clearly, by hypothesis, the perimeters of the image triangles A′F₁′F₂′, F₁′F₂′C′, F₂′C′A′ and C′A′F₁′ are also 1, so that A′F₁′F₂′C′ is a rectangle, see (Lester 1985 | 1,087.8 | 2020-03-30T00:00:00.000 | [
"Mathematics"
] |
Exploring Technologies, Materials, and Methods for an Online Foundational Programming Course
Introductory computer programming courses are inherently challenging for a variety of reasons. With increased demands for online delivery, the use of effective technologies, materials, and methods that best support online learning is essential to maximize student success. This article describes a recent study conducted at our institution with an overall objective to improve the design and online delivery of a foundational course in Java programming. The online course included a variety of technologies and materials intended to improve student learning outcomes, including an online synchronous interaction component similar to teleconferencing. A comparison of students' backgrounds, perspectives, and outcomes in an online section of the course compared to a benchmark face-to-face section was conducted using various evaluation methods. The results indicated that online synchronous sessions and several other aspects of the course were beneficial toward improving online learning. Results of the study, conclusions, and other issues warranting further consideration are described.
Introduction
Introductory computer programming courses typically pose a challenge due to the variation in students' background and experience, the manner in which the material builds on itself, and the extensive time required to complete programming projects. They are usually important courses for students majoring in computing, information technology, or software engineering disciplines because they serve as the cornerstone of the curriculum. Despite best efforts by instructors, introductory programming courses typically evidence high attrition rates. Within this context of historic problems with student success in traditional programming classes, many instructors are being asked to offer these courses online. It is incumbent upon faculty to explore alternatives and adopt strategies that will maximize the likelihood of student success and retention under new course delivery modes.
This article describes a research study that explored a variety of technologies, materials, and methods for the design and delivery of an online introductory Java programming course. The overall goal was to determine how to improve online delivery of foundational programming courses, with the objective of improving student engagement and learning outcomes. It builds upon experience gained in a first online offering of the course and a prior study regarding that course. The first version of the course comprised self-study instructional units, numerous programming assignments, topic-specific discussion threads, online examinations, and other pedagogical aids. The current study added virtual synchronous learning activities to the online course via an academically-oriented collaboration software package named Elluminate, and yielded significant new data derived from students' responses to questionnaires, surveys administered at the beginning, mid-point, and end of the semester, and students' performance in the course. A face-to-face class taught in parallel by the same instructor provided a control group for this study.
The remainder of this work contains a brief description of literature that is relevant to teaching and learning programming, and online computing-related courses. Following this review, the design and organization of the online course are described in Section 3. This course was built upon two major software capabilities: a Learning Management System (LMS) and a Synchronous Groupwork System (SGS). These capabilities and how they were employed will be discussed. Section 4 identifies the research questions for this study, the methods, and the results. This article concludes with a summary and discussion of the results and lessons learned.
Literature Review
This literature review will address issues related to developing and delivering online programming courses including the range of technology and methods used, degree of support for interactivity or synchronous communications, and reported outcomes.Bowers (2007) describes the use of meeting groupware to introduce collaborative group work into an online Information Technology program.Recorded lectures were augmented with synchronous distance project work by students.In the first version of the course, all student activities were in a main virtual room.In a second version, groups worked in separate rooms before returning to report results in the main room.The authors found value in the group work, assessing it as leading to outcomes similar to face-to-face group work.Fisher and von Gudenberg (2006) discuss the presentation of a Java course online.Two major technologies formed the basis of the course: a hypermedia tutorial system that provided significant interactivity with examples and exercises, and a semi-automated assessment scheme that improved feedback time to the students.The course did not include synchronous activities, and, while, in the opinion of the authors, the completing students produced highly satisfactory work, attrition rates were approximately 50%.Reeves et al. (2002) describe an introductory course in C++ taught online with the We-bCT Learning Management System.The course was built around this LMS which contained the syllabus, daily assignments, longer-term programming projects, PowerPoint presentations, a textbook, and a programming environment.Students in the online class were encouraged to attend a face-to-face section of the same course (at least at the start of the term), but only one student took advantage of this opportunity.While the opportunity for synchronous communications was present, it was largely ignored.Zachery and Jensen (2003) describe the organization of a course in JavaScript programming that was offered online.Their course included custom materials that they called "example-based narratives" which illustrated the results of executing pieces of code, coupled with a hint facility for exercises.Programming assignments required completion of skeleton code provided by faculty and did not include hints.Students could get help either via email or telephone calls to faculty during office hours, maintaining a synchronous aspect to the course.Molstad (2001) describes uses of various types of distance educational technology in an online introductory programming course, including the use of two-way audio-video capabilities that were used to allow students to access recordings of lectures.Students could ask questions that arose in the process of viewing the recordings, but no synchronous capability was available.Thomas (2000) describes an online C++ programming course.She relates that even though the course was geared toward mature students with at least a year of programming experience, face-to-face meetings with a teaching assistant proved useful.Despite the fact that the course included utilization of standard distance communications, several students reported feeling isolated and wishing for better contact with the instructor.Suggestions included holding face-to-face exam reviews.Thomas concludes that online courses require mature and motivated students and some compensation for the lack of face-to-face interactions.
The literature contains conflicting findings regarding the quality of outcomes that are achieved in online versus face-to-face programming courses.Ury (2004) states that, in an absolute sense, the performance of online students was satisfactory, but that their aggregate final grade was significantly lower than that of students who took an equivalent face-to-face class.Kleinman and Entin (2002) arrive at a different conclusion, reporting that there were no significant differences in overall outcomes.El-Sheikh et al. (2007) report relatively little difference in the outcomes of matched face-to-face and online Java courses, but note that dropout rates were much higher in the online version of the course.Reeves et al. (2002) report that while completers of the online version of the course performed about as well as those from the face-to-face class, the online section had double the attrition rate, a result corroborated by El-Sheikh et al.This brief literature review reveals some of the major issues that are relevant to the effective design and delivery of online introductory computer programming courses.A variety of technologies have been employed, and attempts to provide capabilities that mimic those available in class are evident.In online courses, synchronous communications are supported through a range of technologies including telephone calls, instant messaging, and meeting software.The range of materials and methods is as broad as those for face-to-face classes.
The question of the level of achievement in online versus face-to-face classes does not seem completely resolved with regard to introductory programming.An ability to foster efficient, real-time communications would seem to be of value.The next sections of this paper will provide a description of the second offering of an online course in Java programming in which synchronous communication capabilities were added.Results of a study conducted to assess the course structure and outcomes are presented.
Course Design and Organization
This section contains a description of the overall design and motivation for the course. The course structure is described, including the common course elements, course web site and usage, and the synchronous learning sessions conducted.
Overview and Design Rationale
The study was conducted in a course named "Java Programming", an introductory-level programming course offered by the Department of Computer Science at the University of West Florida (UWF).It is the first of a three-course sequence of programming foundations for computing-related majors.The introductory programming course is also taken by students pursuing a minor in computer science, several other majors, and students with a general interest in programming.The combination of majors and non-majors enrolled in the course makes it more challenging to teach and more interesting to study.The work described here was conducted in two sections of the course offered in the fall 2007 semester.One section was delivered face-to-face with a 3-hour class meeting held once a week.The other section was offered online, via an online course management system named Desire2Learn, with a synchronous component delivered using Elluminate.The initial enrollment of the face-to-face section was 22 students, with a final enrollment of 19, while the online section started with 29, and had a final enrollment of 16.Although some components were unique to each section, several course elements were common to both sections, and are described next.
Elements Common to the Face-to-Face and Online Versions of the Course
Both sections of the course had the same instructor and the same goals, sometimes using different methods to achieve those goals as appropriate for online or face-to-face learning environments.A common set of learning outcomes applied to both sections.Students in both sections were assigned the same programming projects, had the same deadlines, were graded using the same criteria, and had access to the same teaching assistant (TA).The final exam, which was proctored, was also common for both sections.Students in both sections also took two exams during the semester, which covered the same material but were administered in different ways.The face-to-face section took their mid-semester exams in class, while the online mid-semester exams were administered using the course management system, Desire2Learn.
Course Website Structure and Usage
A dedicated website was developed for the course using Desire2Learn.Students in both sections had access to the course website.In order to enable students to find specific materials, the website was organized into categories including announcements, content, discussion boards, drop boxes, and a grade book.The content was divided into eight instructional units, each of which included a summary page with an overview of the material, and links to related assignments and resources.Having a single page for each unit that provided an overview of the content along with links to slides, examples, assignments, and other resources related to that content made it much easier for students to find everything relevant to each topic.This arrangement was motivated by student feedback from the previous semester's online course.Another design feature of the course was the phased delivery of the instructional units.Each unit was published just prior to its date on the schedule to avoid overwhelming students with too much information from the beginning of the semester or with resources that were not relevant to their current learning goals.
Another important component of the course website was the well-organized and numerous discussion forums.The website included five categories of discussion: general discussions, which were used for introductions as well as discussions regarding the course or Java programming in general; project discussions, which included a forum for each assigned project; lab discussions, which included a topic area for each assigned lab; "muddiest point" discussions, which included a forum for each instructional unit; and exams, which included a discussion topic area for each exam.The variety of discussion forums allowed students to identify the most appropriate area to post questions and answers related to course policies and schedule, assignments, content, or issues pertaining to an exam.The discussion boards became an instrumental tool for engaging students and facilitating interaction among them.
The course website also included an extensive set of resources for the students.These included course-specific resources, such as "getting started" guides, program and documentation templates, additional programming examples, content slides, links to selfreview exercises, and exam review guides.The resources also included links to Java programming resources, integrated development environments (IDEs), programming guides, and textbook-related resources.The course website also provided a number of other useful features, including a page for announcements and reminders for upcoming deadlines, drop boxes for students to submit their work, a grade book that displayed each student's grades in relation to the class average, a class list that allowed students to see which other students were currently logged into the course website, and a capability to email the entire class, individual students, TA, or instructor.
The Conduct of Synchronous Sessions
One of the novel components of the online course was the synchronous sessions conducted using Elluminate, a software product that enables synchronous communication and interaction.The original design objective was to include a weekly collaborative session in which the students would initially work in sub-groups to collaborate on solving a programming problem, then reconvene to discuss the problem and solution all together.However, the actual use of the synchronous sessions evolved into a different format for a variety of reasons.Although the instructor encouraged participation by scheduling the sessions at the time most preferred by the majority of students, the number of students who participated in the synchronous sessions was not sufficient for effective subgroup work.However, the students did work collectively on the planned activities.
The reason for low participation is thought to be the optional nature of these sessions.Online courses at our institution are not scheduled with any corresponding time-blocks that permit instructors to require participation in synchronous activities.This lack of required sessions clearly impacted student participation.The weekly sessions were used to discuss the programming projects and lab assignments, and to discuss activities, examples, and questions relevant to each assignment.The synchronous communication software allowed students to interact using text messages, voice, and video.Elluminate also provides an application sharing feature that allowed the instructor to demonstrate the development, debugging, and execution of Java programs to students.The instructor could also take control of an individual student's desktop, which was useful to resolve a specific question, or to assist in finding program faults.The sessions were recorded and made available online so that all students could later view them.
Initially, the synchronous sessions were only available for students enrolled in the online section, but were later opened up to students in the face-to-face section to encourage more collaboration.Since the course involved so many different activities and technologies, it was deemed important to try to determine those that contributed to successful outcomes.A study designed to help make those determinations is described in the next section.
An Empirical Study in the Efficacy of the Online Course
The goal of this study was to determine what was most effective in the design of an online foundational programming course and how to improve online delivery of the course. An additional objective was to improve student engagement and learning outcomes. In this section, we describe our research questions, the methods used in our study, and summarize the results.
Research Questions
The following two research questions guided the study:
Question 1: What mix of technologies, materials, and methods is most beneficial in the design and delivery of an online foundational computer programming course?
Question 2: Will this online course yield learning outcomes that are comparable to a well-established face-to-face benchmark course?
A variety of data were collected to answer the research questions.Background information was collected on all students to create context for both questions.Data regarding participation in synchronous online learning sessions, pre-and post-course surveys administered by the investigators, the standard university course evaluation forms, and a separate survey from the university's Academic Technology Center provided inputs for question 1.Comparison of final averages between the groups provided quantitative data to address question 2. Each category of data and the results are discussed in the following sections.
Methods
Several methods were used to collect student input on the course.All students were given background questionnaires at the beginning of the semester with open-ended questions to identify their prior programming experience.Brief surveys were administered at the start and at the end of the semester that the study spanned.The surveys included basic demographic questions, questions on students' prior experience in online courses, as well as questions that asked students to rate the usefulness of online course components, and their interest and confidence in their programming skills.Additionally, our institution's Academic Technology Center (ATC), a unit that provides faculty support related to instructional technology, posted their own surveys on the course website for the students to complete at the mid-term and end of the semester.
The course included 14 one-hour long synchronous sessions, approximately one per week.As described in Subsection 3.4, the sessions could not be required, which impacted the number of students who participated.Each session focused on an assignment relevant to the material covered during that week.For example, for the unit on an elementary data storage structure named an array, the session focused on the assignment and activities related to arrays.During the sessions, the instructor discussed the assignment or activities with the students, asked relevant questions, and also responded to students' questions.In some cases, students also informally interacted with each other during the session.
As part of the state-mandated university requirements, students completed course evaluations at the end of the semester, and those results were analyzed as part of the study.Learning outcomes for face-to-face and online students were compared on common course elements, namely the programming projects and final exam.The next sections summarize student background, participation in the synchronous sessions, survey results, and learning outcomes.
Student Background Information
Questionnaires were administered to all students at the beginning of the course.The questionnaires indicated that the students' backgrounds were extremely varied, ranging from minimal computer usage to extensive experience involving multiple environments and applications.Information related to background experience with operating systems and software applications submitted in questionnaires by 21 face-to-face students and 16 online students are summarized in Figs. 1 and 2. The results indicate that the computer experience of students in both the face-to-face and the online sections was similar.
Students' experience with programming was also quite varied.Some students had no prior programming experience while others already knew several languages.Results regarding prior experience with programming in Java and other programming languages are shown in Fig. 3.It is interesting to note that no students in the online section had any prior experience with Java programming.However, a third of the students in the face-to-face section had prior experience with Java.It is likely that this difference in background impacted learning outcomes and attrition rates between the face-to-face and online sections.Prior programming experiences in languages other than Java were comparable.
Participation in the Synchronous Sessions
The number of students who participated in each synchronous session was recorded.Data regarding the number of times each recording was accessed would have provided interesting information, but was unavailable.Fig. 4 shows the total number of students who participated in each session, grouped by section.Although student participation in the online synchronous sessions was lower than expected, the data helps to shed some light on the usefulness of such sessions.The number of online students who participated in the sessions remained consistent throughout the semester, with an average of 2.3 students in the first half of the semester, and 2.1 in the second half of the semester.Sessions 1-8 were only available for online students, but the remaining sessions (sessions 9-14) were available for all students.The sessions were opened up to the face-to-face students in the hope of increasing participation and enabling more collaborative activities during each session.Despite this policy change, few face-to-face students (an average of 1.2) participated in the sessions.
Despite the overall low number of students who participated, it seems that many more students derived benefit from the sessions by later accessing the recordings.Although actual usage data on the number of students who later accessed each recording was unavailable, informal feedback received from students during the semester suggested that many who didn't participate in the sessions routinely accessed the recordings later, and asked questions about the activities discussed in the recordings.
Results of the Pre-and Post-Course Surveys
Two surveys were administered to students in both sections, one at the beginning and one at the end of the semester. The first survey was taken by 13 students (6 female and 7 male), 8 from the online section and 5 from the face-to-face section. All but three students had prior programming experience and all but one had previously taken an online course. When asked about resources that helped most in prior programming courses, the most common student responses were textbook, programming assignments, and discussion threads. Table 1 summarizes demographics, and the results for students' interest in programming and confidence in their programming skills. The two groups were similar with regard to age, interest, and confidence. The last two elements in Table 1 are mean scores on a 5-point Likert scale, where a response of 1 indicates very high and a response of 5 indicates very low. These results indicate a fairly high interest in programming with somewhat less confidence in initial abilities to program, in both the face-to-face and online classes. The survey at the end of the semester was completed by 15 students. Table 2 provides a summary of the results of that survey. Interestingly, the students were younger and, as completers of the course, had survived the winnowing out process that occurs in introductory programming classes. A decrease in both interest in programming and in confidence in their aggregate ability to program had occurred, but the decrease in interest was mostly from the face-to-face class. Students were provided with a multi-part 5-point Likert scale question regarding the factors that most fostered learning in the course. In addition to the items in the first survey, a question was asked regarding the utility of the online synchronous learning sessions. Those sessions received the strongest rating for importance as a contributor to learning (a mean score of 1.46 on the 5-point scale, where 1 is the most favorable response). The factors rated next most important were linked resources and programming assignments.
While the number of survey respondents was too small for any meaningful generalization on these results, they point to some intriguing issues that warrant further study. The most interesting result was the very high rating for the online synchronous sessions as contributors to learning. Further study of this result is clearly indicated. Another important question is whether or not it is typical for introductory programming students, even ones who have had prior programming classes, to experience a decrease in interest and confidence in early classes. The opposite outcome would be desirable.
Another interesting result was that interest and confidence went hand-in-hand on both surveys. Only two of the 28 survey completers indicated a difference of more than one increment between interest and confidence. All students indicated either the same level or more interest than confidence in their abilities. A clear relationship appeared to exist between their interest and confidence measures, and final grades. It is quite likely that initial interest and confidence are strong predictors of outcome, especially in courses such as this one in which students had some prior programming experience. Additional exploration of strategies through which synchronous learning activities might bolster interest and confidence would be beneficial.
Results of the Surveys Administered by the Academic Technology Center
Surveys distributed to the online students midway and at the end of the course provide interesting data with regard to course design and delivery.Eleven students completed the mid-course survey and 3 students completed the end-of-course survey.The mid-term surveys consisted of demographic, background, 10 Likert-scale, and 3 open-ended questions regarding course experiences.The results showed that 90% of the online students agreed or strongly agreed that they had the technical competence necessary to succeed with online courses.Also, 90% of the students agreed or strongly agreed that (a) ongoing communication with the instructor was necessary for success in the course, (b) the level of interaction with the instructor of this course contributed to their understanding of course objectives, and (c) they were comfortable with the course management system used for the course.Additionally, 80% of the students agreed or strongly agreed that they (a) normally received responses to their emails within 24 hours, (b) normally received assignment feedback within one week of submitting assignments, and (c) had sufficient opportunity to interact with other students online.Also, 70% of the students agreed or strongly agreed that (a) the online orientation provided useful information, (b) they received constructive assignment feedback, and (c) the level of interaction between students contributed to their understanding of course objectives.
Seven students answered an open-ended question that asked if they were receiving the amount of support needed to be successful online learners.All of the responses indicated they had, although one student stated that support was more difficult for students who work "second shift".Six students answered an open-ended question addressing what they would change about the course so far.Three students stated that they would not change anything.Two students stated that they were dissatisfied with the textbook.One student indicated that it appeared as though less teaching effort was given to the online course compared to face-to-face courses.
The survey distributed at the end of the course consisted of demographic and background questions, 46 Likert-scale questions regarding the online learning experience, course content, course structure, course appeal, course technology and support, the instructor, the learning environment, and 3 open-ended questions.Only three students submitted the end-of-course survey, however the results are quite interesting.Responses to the question that asked students if they would have preferred to take the course in a faceto-face environment, one student agreed, one was neutral, and one disagreed.Yet, all three students agreed or strongly agreed that they liked to learn online, and agreed or strongly agreed that they would take another online course.All responses regarding the online learning experience were favorable or neutral.
In responses to 13 questions regarding course content, all three students agreed or strongly agreed that (a) the course content was clear, understandable, and aligned to the course objectives, (b) the assignments were clear, relevant, and challenging, and (c) they learned a great deal in the course.With regard to course structure, all three students agreed or strongly agreed that (a) the course learning outcomes/objectives were clear, (b) the course material was well-organized, (c) the directions for the course were clear, (d) the structure of the course was easy to understand and follow, and (e) the instructional guides were helpful in focusing on the important topics.
All three students were neutral, agreed, or strongly agreed that (a) the course was accessible when needed, (b) they did not have any technology-related problems, (c) the course management system was easy to use, (d) the online course discussions, drop box, quizzes, and grade book were easy to use, and (e) the help disk assisted them in solving technical problems.All three students agreed or strongly agreed that (a) they interacted often with their instructor, (b) they received constructive assignment feedback, and (c) the instructor responded to questions.
Responses to open-ended questions included one suggestion for more video within the course content, and one suggestion for more specific assignment instructions.All three students were neutral, agreed, or strongly agreed that (a) the course was interesting and that they were satisfied with the course.All three students agreed or strongly agreed that they would recommend the course to other students.The survey results reflect favorably on the online course design and implementation, and informally validate conclusions drawn from our comparison of data between the face-to-face and online sections of the foundational programming course.
Analysis of Student Learning Outcomes
An analysis of student learning outcomes for the course was performed in an effort to measure relative achievement of students in the face-to-face class (N = 19) compared to students in the online class (N = 16). Descriptive measures of the two groups are summarized in Table 3.
An independent means t-test was performed on the final grades achieved by the students in the two groups. Levene's test for equality of variance showed no difference in the dispersion of grades (F = .492 and p = .488). The results of the t-test are presented in Table 4, which indicate that no significant difference was found in the attainment of the two groups.
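For readers who wish to reproduce this kind of comparison, the following MATLAB sketch shows the two tests on placeholder grade vectors; the actual grades of the two sections are not reproduced in this article, so gradesF2F and gradesOnline below are synthetic stand-ins, and Levene's test is written out as a one-way ANOVA on absolute deviations from the group means.

% Comparison of two independent groups of final grades (illustrative sketch).
rng(1);
gradesF2F    = 75 + 10 * randn(19, 1);        % placeholder data, not the study's grades
gradesOnline = 75 + 10 * randn(16, 1);        % placeholder data, not the study's grades

% Levene's test for equality of variances (requires the Statistics Toolbox).
z1 = abs(gradesF2F - mean(gradesF2F));
z2 = abs(gradesOnline - mean(gradesOnline));
group = [ones(numel(z1), 1); 2 * ones(numel(z2), 1)];
pLevene = anova1([z1; z2], group, 'off');

% Independent-means t-test on the final grades.
[h, pT, ci, stats] = ttest2(gradesF2F, gradesOnline);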
Although the face-to-face class achieved a slightly higher final average grade than the online class, very little difference (slightly more than 0.1%) between the means existed in this study. As previously mentioned, both classes had to complete the same programming projects and exams. All graded items were graded by the same teaching assistant (programming projects) or instructor (exams), supporting a consistent evaluation of student learning in both groups. This result is mediated by the fact that, as noted previously (El-Sheikh et al., 2007), the attrition rate was higher in the online version of the course compared to the face-to-face version. Specifically, 3 students withdrew from the face-to-face course (a 14% attrition rate) compared to 13 students who withdrew from the online version (45% attrition).
Results of the End-of-Semester Course Evaluations
Students in both sections were asked to complete state-mandated course evaluation forms at the end of the semester.The forms asked students to answer 18 questions related to the course organization and instructor's teaching skills using a 5-point Likert scale indicating responses of Excellent, Very Good, Good, Fair, or Poor.Ten out of 19 students (53% response rate) completed the course evaluation forms from the face-to-face section and 6 out of 16 students (38% response rate) from the online section, which are typical response rates for such forms.A comparison between the face-to-face and online sections of the percentage of students answering Excellent or Very Good to various questions is given in Table 5.
Overall the results were encouraging, with a majority of students in both sections responding favorably (Excellent or Very Good) to all questions.Ratings provided by faceto-face students were somewhat higher than those provided by online students for the majority of the questions.
On the question regarding the facilitation of learning, 90% of face-to-face students rated the facilitation of learning as excellent or very good, with 60% of students rating it as excellent.In comparison, approximately 67% of online students indicated that the facilitation of learning was very good or excellent, with half of those students rating it as excellent.This result suggests that it is possible to engage students in an online programming course and to identify appropriate ways to facilitate learning in an online environment.In alignment with this result, approximately 67% of online students also indicated that stimulation of interest in the course, availability to assist students, and communication of information was very good or excellent, compared to 80% of face-toface students for the first two items, and 88% for the third item.This also suggests that online students felt that they were able to understand the ideas and information presented through online delivery, get assistance if they needed it in such an environment, and maintain an interest in the material.Another significant result from analysis of the course evaluation forms was that 100% of online students rated the class meetings favorably with respect to their usefulness, with 66.7% rating them as excellent, 16.7% rating them as very good (giving a total of 83.4% shown in the table), and 16.7% rating them as good.Class meetings for the online students were the weekly synchronous sessions that were conducted using Elluminate.This result suggests that students found the synchronous sessions useful towards the facilitation of their learning.It is interesting to note that 100% of the face-to-face students also indicated that their class meetings were useful.
All students who completed the evaluation forms from both sections rated the course requirements as excellent or very good.With respect to the usefulness of the course assignments, 83.4% of online students rated the assignments as excellent or very good, which is very comparable to the 89% rating provided by face-to-face students.In addition, 66.7% of online students indicated that the expression of performance expectations for the course was very clear, compared to 90% of face-to-face students.Collectively, these results are important because they suggest that both online and face-to-face students felt that they had similar learning experiences with respect to the class meetings, course requirements, and assignments.
Although the results for the overall assessment of instructor and overall course organization appear to indicate a difference between the face-to-face and online students, a closer investigation reveals a few subtle similarities.66.7% of online students rated the overall assessment of the instructor as excellent, compared to 60% of face-to-face students.In addition, 66.7% of online students rated the overall course organization as excellent, compared to 80% of face-to-face students.The parallel between these ratings further suggests that the online and face-to-face students had comparable learning experiences in the course.
Conclusions and Discussion
Adapting foundational programming courses for online delivery is a challenging task, but an essential one to solve effectively, given the increasing demand for online courses and programs. The purpose of the work reported here was to employ a variety of technologies, methods, and materials in an online version of an introductory computer programming course, and to assess the outcomes that resulted. The remainder of this section contains conclusions regarding these issues.
With regard to research question 1, the best mix of technologies, methods, and materials, several conclusions might be drawn. A wide mix of technologies and methods was utilized in the course, and with a few notable exceptions, provided support as expected. However, a few surprises occurred. For example, discussion forums are an integral part of any online course. This version of the course included a discussion forum for "muddiest points". It was hypothesized that this particular forum would generate useful synchronous and asynchronous discussions. However, it generated very little discussion, while other discussion areas that directly related to graded course elements, such as programming projects or exams, generated a considerable amount of ongoing discussion throughout the semester. In an online course, students have a variety of competing means through which to gain clarifications on difficult-to-understand points, possibly accounting for this result. Despite this surprising outcome regarding a potentially interesting course component, discussion forums generally will remain an important part of the course materials.
The incorporation of weekly online synchronous sessions was novel to the delivery of the online programming course. These sessions provided an opportunity for students and the instructor to interact directly and synchronously using text, audio, and shared documents and applications to discuss course-related concepts and assignments. Survey results and feedback strongly suggested that the synchronous sessions were helpful for both online and face-to-face students. However, a surprisingly small number of students actually participated in the sessions. It appears likely that, based upon informal comments from students, many more benefited from the sessions indirectly by viewing the session recordings. Collection and analysis of more detailed data regarding usage patterns of the recordings, data that was not available for the current study, would provide a better picture of the broader benefits of online synchronous sessions. Future work will also include the evaluation of course resources through correlation with learning outcomes and assessment measures.
The second research question pertains to outcomes in the course. An encouraging result of this study is that very little difference was noted in attainment between the face-to-face and online sections of students completing the course. However, the online students clearly had a higher attrition rate. It should also be noted that the attrition rate in the online course was the same rate reported for the first iteration of the course. The online synchronous sessions were anticipated to aid retention in the course. Given the relatively low participation in the synchronous sessions by online students, it is likely that the full potential of the synchronous sessions is yet to be realized. The higher attrition rates evidenced in the online course compared to the face-to-face version are certainly not unique to computing disciplines (Angelino et al., 2007; Carr, 2000). Angelino, Williams, and Natvig report that attrition rates for online courses are typically 10-20% higher than for face-to-face courses. Carr suggests that potential online students should be asked about their self-responsibility and computer literacy skills before the course begins. A major goal of future work must be to address the retention issue.
Comparison of results between the face-to-face and the online sections suggests that more required interactions and attendance may be necessary to improve student success in an online programming course. Online students had a clear tendency to skip events unless they were required. Decline in student participation in both face-to-face and online activities suggests the need to increase the number of required activities in both delivery formats. Institutional policy does not currently support the requirement for students to engage in virtual synchronous activities in online courses. This is due to the fact that online courses do not have a time slot associated with them. Despite efforts to identify a best time for everyone, the scheduling of synchronous events is challenging. Modification of institutional procedures and policies that better support and enforce such online synchronous learning activities would help.
The learning outcomes and attrition rates must be viewed in another light - the substantial differences in background among the participants in the course. Survey results revealed significant differences in prior programming experience between the online and face-to-face sections. No students in the online section had any prior Java programming experience. This fact certainly had an effect on successful completion rate in the online course. The survey results also pointed to an interesting relationship between interest and confidence measures, and final grades. Based on an informal analysis of the results, it appears that initial interest and confidence are strong indicators of learning outcomes in introductory programming courses. These initial results warrant further investigation. Longitudinal studies that track online and face-to-face students' interest level and confidence in programming skills as they progress through the program would provide useful data.
Although this study was designed to evaluate the effectiveness of various technologies, materials, and methods used to support online learning with the goal of informing best practices related to teaching foundational programming online, the infrastructure at our institution for online courses permits access by face-to-face students as well. Access to the online synchronous sessions was provided to the face-to-face students partway through the semester to provide additional opportunities for collaborative learning and student engagement. A small proportion of face-to-face students did choose to make use of this optional resource. Further consideration regarding the benefit of overlaying technologies originally designed for online students into face-to-face environments is needed. Course delivery strategies that integrate best practices for face-to-face and online learning clearly appear to be worthy of further study.
Fig. 1. Questionnaire results for students' prior experience with operating systems.
Fig. 2. Questionnaire results for students' prior experience with software applications.
Fig. 3. Questionnaire results for students' prior experience with programming languages.
Fig. 4. Number of participants in the synchronous sessions.
Table 1. Selected results from the initial survey.
Table 3
Descriptive statistics for the online and face-to-face classes
Table 5
Percentage of students responding Excellent or Very Good to selected course evaluation questions in the face-to-face and online classes
"Computer Science"
] |
Test-beam studies of a small-scale TORCH time-of-flight demonstrator
TORCH is a time-of-flight detector designed to perform particle identification over the momentum range 2-10 GeV/c for a 10 m flight path. The detector exploits prompt Cherenkov light produced by charged particles traversing a quartz plate of 10 mm thickness. Photons are then trapped by total internal reflection and directed onto a detector plane instrumented with customised position-sensitive Micro-Channel Plate Photo-Multiplier Tube (MCP-PMT) detectors. A single-photon timing resolution of 70 ps is targeted to achieve the desired separation of pions and kaons, with an expectation of around 30 detected photons per track. Studies of the performance of a small-scale TORCH demonstrator with a radiator of dimensions 120 × 350 × 10 mm³ have been performed in two test-beam campaigns during November 2017 and June 2018. Single-photon time resolutions ranging from 104.3 ps to 114.8 ps and 83.8 ps to 112.7 ps have been achieved for MCP-PMTs with granularity 4 × 64 and 8 × 64 pixels, respectively. Photon yields are measured to be within ∼10% and ∼30% of simulation, respectively. Finally, the outlook for future work with planned improvements is presented.
Introduction
TORCH is a time-of-flight (ToF) detector designed to perform Particle IDentification (PID) at low momentum (2-10 GeV/c) over a 10 m flight path [1,2]. The principle of operation is demonstrated in Fig. 1. TORCH exploits prompt Cherenkov photons produced by charged particles traversing a quartz plate of 10 mm thickness, combining timing measurements with DIRC-style reconstruction, a technique pioneered by the BaBar DIRC [3] and Belle II TOP [4,5] collaborations. A fraction of the radiated photons is trapped by total internal reflection; these photons then propagate to focusing optics at the periphery of the plate. Here a cylindrical mirrored surface maps the photon angle to a position on a photo-sensitive detector; custom-designed Micro-Channel Plate Photo-Multiplier Tube (MCP-PMT) detectors [6] are used to measure the times of arrival and positions of each photon. Combined with external tracking information, the spatial measurement allows the Cherenkov angle of the emitted photon to be determined.
The TORCH detector has been proposed for Upgrades Ib and II of the LHCb experiment in order to improve the pion, kaon, and proton identification at low momentum.
Fig. 1. Schematics of a TORCH module demonstrating the principle of operation. (a) Total internal reflection traps Cherenkov light generated by a particle traversing the radiator plate. (b) Upon reaching the focusing optics, the angle of the photon in the y–z plane is mapped to the y′-coordinate on the detector, allowing that angle to be determined. Note that the y′ axis is rotated by 36° from the vertical (y-axis).
Sections 5 and 6, respectively. Finally, a summary and outlook for the future are given in Section 7.
Mechanics and optics
The demonstrator consists of a 120 (width) × 350 (height) × 10 (thickness) mm³ radiator plate, optically coupled to a focusing block which has a cylindrically mirrored surface designed to focus 2 mm beyond the exit surface onto the MCP-PMT photocathode. The block has the same dimensions as it would have for a full-sized module in LHCb, except that its width is reduced to match the 120 mm width of the plate. The radiator plate and focusing block assembly was mounted in a rigid frame which allowed the angle of incidence of the beam to be varied by tilting the demonstrator about the x-axis, seen in Fig. 1. The complete structure was contained within a light-tight box and mounted upon a translation table, allowing the module to be positioned in the x and y directions with respect to the beam. Further details of the optical components and mounting mechanics can be found in Ref. [2].
MCP-PMTs and electronics
In each of the two test-beam campaigns, the demonstrator was instrumented with a different two-inch square MCP-PMT with a 64 × 64 anode pixelisation. The tubes were custom-designed for the TORCH project by Photek Ltd (UK) [6] and represent the final prototypes of a three-stage development process [8]. Charge from the MCP electron avalanche is collected on a resistive layer ("sea") inside the PMT vacuum, and capacitively coupled to the anode pads. This allows charge sharing to improve the spatial resolution beyond the anode-pad pitch of 0.828 mm. In November 2017, the implemented MCP-PMT had a 4 × 64 granularity in (x, y′), where the coarse granularity was achieved by electrically grouping pixels on an external Printed Circuit Board (PCB), connected to the anode pads using anisotropic conductive film. In June 2018 the granularity in the x-direction doubled to 8 × 64, which, with charge sharing, gives an effective pixelisation which exceeds that required for optimal TORCH performance [9]. For LHCb installation, a pixelisation of 8 × 128 is planned.
Both MCP-PMTs have an active area of 53 × 53 mm², corresponding to approximately half the width of the demonstrator. In both test-beam campaigns the MCP-PMT was mounted between one side edge and the centre of the focusing block, with the other half of the detector plane not being instrumented.
In the 4 × 64 MCP-PMT, the insulating layer which separates the resistive-sea from the anode readout pads has a thickness of 0.5 mm. This results in a point-spread function at the pads of 1.80 ± 0.15 mm (FWHM), which was determined from laboratory measurement and verified by simulation [6]. The 8 × 64 MCP-PMT has a 0.3 mm insulating layer, and results in a point-spread function at the anode pads of 1.30 ± 0.13 mm (FWHM). The quantum efficiencies (QEs) of both tubes were measured in the laboratory, and are shown in Fig. 2. It can be seen that the integrated QE of the 8 × 64 MCP-PMT is around a factor two less than for the 4 × 64 device. Although the QE of the 8 × 64 is not optimal for reaching the desired number of photons per track, the performance of future tubes is expected to improve with further iterations of development.
Readout electronics employing the NINO [10] and HPTDC [11] chipsets were custom-developed for the TORCH project [12]. Due to the increased granularity of the 8 × 64 MCP-PMT in the coarse-pixel direction, an entirely new readout system was developed to replace that used for the 4 × 64 device. Because of differences in the size and shape of the boards, new holding mechanics were fabricated for the 8 × 64 device, which introduced a 5 mm upwards offset of the MCP-PMT in the y′-direction relative to the 4 × 64 device.
Hit clustering
As previously discussed, the Photek MCP-PMT was designed so that a single incident photon will give hits on several neighbouring pixels [6]. This means that the 64 physical pixels in the y′-direction can provide an effective granularity of 128 pixels by exploiting charge sharing. In this way, to reconstruct single photons, hits are clustered according to the following criteria:
• they must have the same x-coordinate (coarse pixel direction);
• they must be adjacent neighbours in the y′-coordinate;
• the arrival of the hit must be timed within 1 ns of its neighbour.
All three criteria must be met for any pair of hits to be included in the same cluster. However, for the 8 × 64 dataset, the criteria were slightly modified to account for a small fraction of dead channels: namely, if two clusters fall on either side of a known dead channel and the hits neighbouring the dead channel fall within 2 ns of each other, then the clusters are merged. The cluster size is defined as the number of hits in the cluster, and the cluster position is taken to be the centroid (average position) of all the hits.
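A minimal sketch of this clustering logic is given below, assuming hits arrive as (column, y′-pixel, time) tuples; the function and variable names are illustrative only, and the dead-channel merging used for the 8 × 64 dataset is omitted.

```python
from typing import List, Tuple

Hit = Tuple[int, int, float]  # (column, y_prime_pixel, time_ns)

def cluster_hits(hits: List[Hit], dt_max: float = 1.0) -> List[dict]:
    """Group hits sharing a column, adjacent in y', and within dt_max ns."""
    clusters = []
    # Sort so that hits in the same column appear consecutively, ordered in y'
    for col, yp, t in sorted(hits, key=lambda h: (h[0], h[1])):
        placed = False
        for c in clusters:
            same_col = c["col"] == col
            adjacent = any(abs(yp - y) == 1 for y in c["pixels"])
            in_time = any(abs(t - tt) < dt_max for tt in c["times"])
            if same_col and adjacent and in_time:
                c["pixels"].append(yp)
                c["times"].append(t)
                placed = True
                break
        if not placed:
            clusters.append({"col": col, "pixels": [yp], "times": [t]})
    # Cluster size = number of hits; position = centroid of the hit pixels
    for c in clusters:
        c["size"] = len(c["pixels"])
        c["y_centroid"] = sum(c["pixels"]) / len(c["pixels"])
        c["time"] = sum(c["times"]) / len(c["times"])
    return clusters
```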
Detector simulation
A simulation of the TORCH demonstrator has been developed, using optical processes modelled by Geant4 [13]. Custom libraries were used to model the detector response and readout, which take input from laboratory measurements including the MCP-PMT quantum efficiency, gain, and point-spread function. Losses due to quartz surface scattering and Rayleigh scattering are modelled. The same simulation was used for both test-beam periods, but with differing input from laboratory-measured parameters for the respective MCP-PMT used.
Test-beam setup
In both test-beam campaigns, a 5 GeV/c beam was used, comprising approximately 70% pions and 30% protons. The TORCH demonstrator was positioned with the beam striking half way down the radiator plate, 5 mm from the edge (below the MCP-PMT), and tilted back from the vertical by 5°. This geometrical configuration ensured that the Cherenkov pattern was well contained on the MCP-PMT detector surface.
The same beam-line infrastructure was installed for both campaigns, displayed schematically in Fig. 3. A pair of identical timing stations, T1 and T2, spaced approximately 11 m apart, was used to provide a time reference for TORCH. Each station, oriented at 49° to the beam, consisted of a 100 mm long, 8 × 8 mm² borosilicate bar in which Cherenkov light was generated by traversing particles. A single-channel MCP-PMT detected the direct photons and provided a precise timing signal. The signals were injected into the TORCH electronics and read out simultaneously with the rest of the data. By combining signals from both stations, a time-of-flight measurement could be made independently of TORCH, providing a cross-check of PID for the particle traversing the TORCH prototype. Additionally, each station had a pair of scintillators providing an 8 × 8 mm² coincidence. Requiring a signal in both scintillators narrowed the beam definition accepted by the trigger and improved the resolution of the time reference. The timing power of the stations is demonstrated in Fig. 4, which shows clearly the separation of pions and protons in the beam. In addition, a pair of threshold Cherenkov counters filled with CO₂ at 2.5 bar were introduced for both campaigns, and provided an independent source of PID.
An EUDET/AIDA pixel beam telescope [14] was also installed in the beam-line, consisting of six 18.4 μm pitch sensors (Mimosa26). The telescope allowed an accurate measurement of the beam profile incident on TORCH, even though an event-by-event synchronisation was not possible. Fig. 5 shows the beam profile measured by the telescope when extrapolated to the TORCH radiator, giving an RMS spot size of 2.73 ± 0.02 mm in and 2.01 ± 0.02 mm in . The beam divergence was measured to be 5.8 ± 0.2 mrad and 2.6 ± 0.2 mrad in and , respectively.
Triggering of the TORCH readout and telescope was provided by an AIDA-2020 Trigger Logic Unit (TLU) [15]. The new beam-line infrastructure allowed a large increase in achievable data rate with respect to [2]. By providing the independent source of PID, the Cherenkov counters allowed T1 to be removed from the trigger, with T2 alone being used as a time reference. This led to a wider beam profile that could be triggered upon, significantly increasing the acceptance and trigger rate. Comparing the PID information from the Cherenkov counters with the PID from ToF, the purities of the pion and proton samples from the Cherenkov counters were approximately 94% and 82% in November 2017, and 98% and 96% in June 2018, respectively.
Calibrations
Two data-driven calibrations were applied to the data to correct the timing of the MCP-PMT output signals, the first to account for timewalk in the NINO chip, and the second to correct for integral non-linearity in the HPTDC.
The first correction accounts for timewalk of the NINO (i.e. differences in timing due to variations of pulse amplitude), and is adapted from the data-driven method employed in Ref. [2]. The first stage in the calibration process is to define an MCP-PMT photon cluster. Assuming each cluster corresponds to a single photon, the hit pixels within that cluster should have simultaneous recorded times, and any time difference between a pair of channels i, j would be a consequence of time slew. The NINO utilises a time-over-threshold technique, outputting a binary signal with a width defined by the rising and falling edges of the MCP-PMT input pulse when passing an adjustable threshold. The signal width for a channel is hence related to the input pulse amplitude.
This introduces a relationship between the time difference and the corresponding pulse widths, which can be parameterised as
t_i − t_j = f(w_i, w_j),
where t_i, t_j are the recorded times of the hits on channels i, j, w_i, w_j are the corresponding signal widths, and f is chosen to be a two-dimensional function of quadratic form, with its coefficients determined by a fit to pairs of hits from the same cluster. The constant term also corrects for the relative delay between individual pixel timing offsets (t_0's). In this way, the parameters of f can be determined for all pairs of channels during a data run. Thereafter a correction is made to the measured arrival time of each single pixel hit according to its measured pulse width. This method assumes the time walk of each individual pixel is uncorrelated with all the others, and improves the method employed in Ref. [2] by comparing all pairs of hits, rather than parameterising and correcting only next-to-nearest neighbours.
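The following is a schematic illustration of such a fit, assuming hit pairs from single-photon clusters are available as arrays of pulse widths and time differences; the explicit quadratic design matrix, the least-squares fit, and the per-channel correction helper are assumptions of this sketch, not the TORCH calibration code.

```python
import numpy as np

def fit_timewalk(w_i, w_j, dt):
    """Fit dt = f(w_i, w_j) with f a quadratic surface in the two pulse widths."""
    w_i, w_j, dt = map(np.asarray, (w_i, w_j, dt))
    # Design matrix for a 2-D quadratic: 1, wi, wj, wi^2, wj^2, wi*wj
    X = np.column_stack([np.ones_like(w_i), w_i, w_j, w_i**2, w_j**2, w_i * w_j])
    coeffs, *_ = np.linalg.lstsq(X, dt, rcond=None)
    return coeffs

def corrected_time(t, w, channel_coeffs):
    """Correct a single hit time given its pulse width and a per-channel
    (constant, linear, quadratic) set of coefficients derived from the fit above;
    how the pairwise fit is reduced to a per-channel correction is simplified here."""
    c0, c1, c2 = channel_coeffs
    return t - (c0 + c1 * w + c2 * w**2)
```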
The second calibration accounts for non-linearity in the HPTDC chip, where the bins used to digitise the data are not equally spaced in time [11], leading to integral non-linearity. Several large dedicated calibration datasets were taken to allow a code-density test [16] to be performed to correct for this effect.
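A code-density correction of this kind can be sketched as follows, assuming a calibration run provides raw TDC codes that are uniformly distributed in time; the names and the nominal bin width are placeholders.

```python
import numpy as np

def build_inl_correction(codes, n_codes, nominal_bin):
    """Return the corrected time centre of each TDC code from a code-density test.

    codes       : array of raw TDC codes from a uniform-in-time calibration run
    n_codes     : number of TDC codes (bins)
    nominal_bin : nominal bin width (e.g. in ps)
    """
    counts = np.bincount(codes, minlength=n_codes).astype(float)
    # The effective width of each bin is proportional to its occupancy
    widths = counts / counts.sum() * (n_codes * nominal_bin)
    edges = np.concatenate([[0.0], np.cumsum(widths)])
    centres = 0.5 * (edges[:-1] + edges[1:])   # corrected time for each code
    return centres

# Usage: t_corrected = centres[raw_code]
```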
A calibration step which is presently missing is the so-called charge-to-width calibration, which would allow a more accurate measurement of the amount of charge collected in any given pixel hit as a function of the width of the output pulse. This in turn would allow more accurate cluster centroiding to be performed. This calibration requires a dedicated laboratory-based charge-injection system, and this is currently under development.
Fig. 6 shows the uncorrected distributions (pixel maps) of hits on the MCP-PMTs from the two run periods for pions and protons combined, taken with the beam positioned at the vertical mid-point of the radiator plate, 5 mm from the edge below the MCP-PMT. Bands can be seen, corresponding to different photon paths within the radiator plate. The empty bins in Fig. 6b indicate dead channels. These are attributed to broken wire bonds of the NINO electronics board, an issue which has been resolved in subsequent iterations.
Single-photon time resolution
The single-photon time resolution of the demonstrator can be measured by comparing the time at which a photon is detected to that predicted from the TORCH reconstruction algorithm [2]. The algorithm determines the photon path in the radiator plate and the Cherenkov angle from the position of the track entry point, the track direction, and the position of the photon hit on the MCP-PMT. Combining this with knowledge of the primary particle species and its momentum, the time of propagation can then be calculated through the intermediate steps of determining the phase and group refractive indices. Note that the nominal values of beam position and incident angle are used in the reconstruction, leaving any finite beam width and divergence to be accounted for statistically, as described below.
For each column of pixels, the measured arrival time can be plotted against the y′ (finely-granulated) pixel number. Fig. 7a shows an example distribution for the 8 × 64 MCP-PMT, selected for protons only, with the predictions from the reconstruction algorithm overlaid. In calculating the predicted time in the reconstruction, each photon is treated individually and its energy calculated [1]. The distinct bands seen in the figure correspond to the different orders of reflection from the side faces of the demonstrator, illustrated schematically in Fig. 7b. This clearly demonstrates that photon paths in the radiator plate are well separated. Note that within a given order of reflection for a specific set of track parameters, the measured y′ pixel coordinate is correlated to the Cherenkov photon energy, with the finite pixel size contributing to the chromatic uncertainty.
For those photon hits which had either no reflections off a side edge or which only had a reflection off the edge below the MCP-PMT (corresponding to orders 0 and 1′ in Fig. 7b), a residual distribution (i.e. the measured minus predicted times of arrival) is constructed. The residual distributions of individual bins in y′ are first fitted to determine the resulting mean. These means are expected to be offset with respect to each other due to chromatic dispersion, and thus are corrected by offsetting the photon arrival times within each bin by the mean of the bin. Recombining the bins then gives the final fitted distribution. The sigmas are also dependent on photon energy, hence the measurements are averaged. Fig. 8 shows an example of a residual distribution fitted with a "Crystal Ball" function, consisting of a Gaussian core with a power-law tail [17]. The tail models back-scattering of primary photoelectrons in the MCP-PMT and possibly a small contribution from residual timewalk, whilst the standard deviation of the Gaussian component is interpreted as the measured time resolution.
Fig. 8. Example residual distribution. The data are fitted with a Crystal Ball function with parameters μ = 0 ± 1 ps, σ = 115 ± 1 ps, α = −1.01 ± 0.03, and n = 4.5 ± 0.4 [17]. The contribution from the timing reference has not yet been subtracted. The tail is attributed to backscattering of primary photoelectrons in the MCP-PMT and possibly a small contribution from residual timewalk.
The time spread of the residual distribution, σ_Total, is a combination of several factors which must be subtracted in quadrature to give the intrinsic timing resolution. This is given by
σ_Total² = σ_TORCH² + σ_Beam² + σ_TimeRef²,   (2)
where σ_TORCH is the intrinsic single-photon timing resolution which we wish to determine, σ_Beam is the spread in the residual distribution resulting from the finite width and divergence of the incident beam, and σ_TimeRef is the time resolution of the T2 station that provides the reference time for TORCH. The latter two contributions are discussed below.
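As a small numerical illustration of this quadrature subtraction (the σ_TimeRef value below is a placeholder, not the measured T2 resolution; σ_Beam = 14 ps is the value quoted in the text):

```python
import math

def intrinsic_resolution(sigma_total, sigma_beam=14.0, sigma_timeref=40.0):
    """Subtract the beam-spread and time-reference contributions in quadrature (ps).
    sigma_timeref here is an illustrative placeholder value."""
    return math.sqrt(sigma_total**2 - sigma_beam**2 - sigma_timeref**2)

print(intrinsic_resolution(115.0))  # example using the fitted width from Fig. 8
```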
The beam-spread contribution: The contribution to the time spread due to the beam profile, σ_Beam, is determined from simulation. The simulation is run with the spread in beam position and divergence as measured by the telescope, and then for a beam with no spread. In each case, a residual distribution is constructed the same way as for data, and the widths of the two are compared. The value of σ_Beam is found to be 14 ± 2 ps, where the quoted error accounts for the uncertainty on the beam profile and differences between the values measured for each MCP-PMT column.
The time reference contribution: The T2 station provides the time reference with respect to which the Cherenkov photons in TORCH are measured. The time resolution of the stations is demonstrated in Fig. 4, showing the (T2−T1) time difference in June 2018 data. The pion and proton beam contributions are clearly separated. To determine the resolution of the downstream station T2 independent of TORCH, it is assumed that the behaviour of the pair of time reference stations T1 and T2 is identical before their signals propagate to the TORCH readout; however, the timing of upstream station T1 is degraded by the ∼11 m of additional signal propagation to the readout.
Table 1. The single-photon time resolutions for pions measured for the 4 × 64 MCP-PMT in November 2017. The MCP column numbers match those shown in Fig. 6a.
Table 2. The single-photon time resolutions for pions and protons measured for the 8 × 64 MCP-PMT in June 2018. The MCP column numbers match those shown in Fig. 6b.
Using Eq. (2), the time resolution σ_TORCH is determined separately for each MCP-PMT column and for each incident particle species. Unfortunately a significant pollution of pions is observed in the proton sample for the November 2017 dataset due to the Cherenkov counters being non-optimally tuned, so only photons resulting from an identified pion are used in this case. This results in four measurements for the 4 × 64 MCP-PMT, presented in Table 1, and 16 for the 8 × 64, shown in Table 2.
The measurements from the two test-beam periods are generally similar, with resolutions σ_TORCH between 100 and 110 ps typically observed. The overall trend of enhanced performance in the 8 × 64 MCP-PMT with respect to the 4 × 64 is attributed to the improved resolution in the x-direction due to the doubling of the number of pixel columns. Columns 5 and 7 stand out in particular for the 8 × 64 MCP-PMT data, with measured resolutions of order 90 ps. This results from the application of better calibration corrections for these columns. It is noted that six of the eight columns for the 8 × 64 MCP-PMT give better resolutions for pions than protons, an effect which is attributed to a residual pion pollution in the proton sample in the June 2018 data. In this case a fraction of pions will be falsely reconstructed as protons, resulting in an incorrect predicted time.
As indicated in Fig. 7a, orders 0 and 1 cannot be distinguished in data. The difference in time of arrival between the two orders from the simulation ranges from 5 to 30 ps, varying with the position of the hit. This will widen the residual distribution, and leads to a slightly degraded time resolution being measured compared with only a single order of reflection. However, as the effect is photon-energy dependent, no attempt has been made to subtract this contribution.
Incorporating a charge-to-width calibration for the NINO in addition to the data-driven approach employed here would allow a charge-weighted average of the time and position of each cluster to be determined. This would improve the resolution further, bringing it closer to the desired 70 ps.
Photon counting
The photon counting efficiency of the demonstrator is determined by counting the number of detected clusters and comparing to the number expected in simulation. Monte Carlo samples corresponding to 10 000 incident pions at 5 GeV/c were used for each detector configuration. Fig. 9 shows the distributions of the number of photons seen and expected in data and simulation, respectively, for the two MCP-PMT arrangements. The arithmetic mean numbers of measured photons are compared in Table 3. A negligible difference is observed in counting efficiency between pions and protons, hence no selection is made in the data based on the species. The reduced number of clusters observed for the 8 × 64 MCP-PMT with respect to the 4 × 64 device in both data and simulation is expected, given that the 8 × 64 device has a significantly lower quantum efficiency (as seen in Fig. 2). The photon yield in the 4 × 64 MCP-PMT agrees within 10% of the simulation; however, for the 8 × 64 device, a ∼30% loss in data is observed. The yields depend strongly on the MCP-PMT gains and the NINO thresholds, the best estimates of which were used in the simulation. The observed discrepancies are attributed to uncertainties due to small signals arising from charge sharing, for which the systematics from the NINO threshold values are significant. Future laboratory work will therefore focus on improving the efficiency and calibration of the MCP-PMTs and the electronics.
Table 3. The arithmetic means of the photon-yield distributions shown in Fig. 9 and the ratio of data compared to simulation. The quoted uncertainties are purely statistical.
Summary and future plans
Studies of a small-scale TORCH demonstrator with customised MCP-PMTs and readout electronics have been performed during two test-beam periods in November 2017 and June 2018. Single-photon time resolutions ranging from 104.3 ps to 114.8 ps and 83.8 ps to 112.7 ps have been measured for MCP-PMTs of granularity 4 × 64 and 8 × 64, respectively. The improvement for the 8 × 64 is attributed to its factor two increase in granularity. The measurements are within 30–40% of the 70 ps targeted. The photon yields show a strong dependence on the MCP-PMT quantum efficiency, and also highlight future work that is required to better understand the factors associated with the operational parameters of the MCP-PMT, the properties of charge sharing, and the calibration of the readout electronics.
A half-sized LHCb demonstrator module with a 660 × 1250 × 10 mm³ radiator plate has been constructed and is currently being evaluated [18]. The demonstrator has been instrumented with the same 8 × 64 MCP-PMT and readout electronics as for the June 2018 beam test, alongside a second identically-configured 8 × 64 MCP-PMT with an improved quantum efficiency. This will allow timing resolution studies and photon-yield measurements with improved photon statistics. Analysis is underway and will be the subject of a future paper.
Declaration of competing interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. | 5,751.2 | 2020-02-18T00:00:00.000 | [
"Physics"
] |
Bio-Imaging-Based Machine Learning Algorithm for Breast Cancer Detection
Breast cancer is one of the most widespread diseases in women worldwide. It leads to the second-largest mortality rate in women, especially in European countries. It occurs when malignant, cancerous lumps start to grow in the breast cells. Accurate and early diagnosis can help in increasing survival rates against this disease. A computer-aided detection (CAD) system is necessary for radiologists to differentiate between normal and abnormal cell growth. This research consists of two parts; the first part involves a brief overview of the different image modalities, using a wide range of research databases and publications to source information on modalities such as ultrasound, histopathology, and mammography. The second part evaluates different machine learning techniques used to estimate breast cancer recurrence rates. The first step is to perform preprocessing, including eliminating missing values, data noise, and transformation. The dataset is divided as follows: 60% of the dataset is used for training, and the rest, 40%, is used for testing. We focus on minimizing type I errors (the false-positive rate, FPR) and type II errors (the false-negative rate, FNR) to improve accuracy and sensitivity. Our proposed model uses machine learning techniques such as support vector machine (SVM), logistic regression (LR), and K-nearest neighbor (KNN) to achieve better accuracy in breast cancer classification. Furthermore, we attain the highest accuracy of 97.7% with 0.01 FPR, 0.03 FNR, and an area under the ROC curve (AUC) score of 0.99. The results show that our proposed model successfully classifies breast tumors while overcoming previous research limitations. Finally, we summarize the paper with the future trends and challenges of classification and segmentation in breast cancer detection.
Introduction
Cells are the building blocks of human tissues, and tissues eventually form organs. Every cell has some functions to perform; once its work is done, it dies. However, sometimes cells do not die after performing their function, due to internal and external factors, and new tissue is formed without need. This abnormal division of cells, or production of extra cells, causes tumors. Different factors such as alcohol consumption, obesity, birth control pills or injections, estrogen, progesterone, diethylstilbestrol during pregnancy, radiation treatment, and inherited mutations can cause breast cancer. In the same manner, some factors can reduce the chances of breast cancer, such as breastfeeding, early-age pregnancy, and hormonal balance [1]. The uncontrolled division of cells can occur in any body part, but here we discuss the cells in the glands that produce milk (called lobules); their abnormal growth causes breast cancer [2]. Recent research shows that breast cancer accounts for about 23% of all cancers in females, a far higher proportion than in males. In Europe, roughly every eighth or ninth female develops breast cancer at some stage of her life [3]. According to the World Health Organization (WHO), early cancer detection considerably increases the probability of making suitable decisions for a successful treatment plan [4]. Different types of cancers worldwide cause a considerable number of annual deaths, as illustrated in Figure 1 [5]. Breast cancer has a high mortality rate; early detection is required to avoid this. Early diagnosis of a breast mass can improve the survival rate in women [6]. Therefore, automatic systems to improve the detection of breast cancer masses are becoming better day by day to help radiologists [7]. Our research aims to facilitate physicians in diagnosing breast cancer at its early stages. In the past, many AI techniques have been applied to classify tumors. Our contribution improves the detection accuracy rate using the SVM, which helps the healthcare system detect tumors in the initial stages to avoid further complications [8]. Below are the key contributions of this research.
• We apply preprocessing techniques and segmentation to patient data collected from the mammography-based Breast Cancer Wisconsin Diagnostic Dataset (BCWD).
• We bring forth the classification of patient data (cancerous or non-cancerous) by using the SVM classifier.
• We contribute to precisely detecting the breast cancer stage (benign or malignant) by using SVM, KNN, and LR.
• We reduce the false-negative rate (FNR) and false-positive rate (FPR) without reducing the degree of precision and accuracy.
• We compare our proposed results with state-of-the-art models to assess performance.
• We practically implement the simulations for data classification through SVM, KNN, and LR, helping to increase the accuracy rate to approximately 97.7% with an error rate of 2.3%.
The rest of the paper is organized as follows. Section 3 presents related work done in this field by researchers. Section 4 explains the proposed methodology, and Section 5 explains SVM, KNN, and LR in detail with simulation results and discussion. Finally, the paper is concluded in Section 6.
Background
Breast tumors are either benign (not harmful) or malignant (cancerous, harmful). Benign tumors are usually not harmful. They do not spread to other parts or organs of the body and only exceptionally invade the neighboring cells and tissues. They usually do not grow back and can be removed by proper chemotherapy or surgery. Malignant tumors are hazardous to life, as they can penetrate the neighboring cells and tissues. They can also move to other parts of the body, which can lead to death [9].
Medical Images
Working on digital images is a challenging task [10][11][12][13]. Automated digital image processing is used extensively in medical technology, motivated by the high mortality associated with cancer. To improve the early diagnosis of tumors, a dataset of medical images is required to train the system for cancer detection. The suspected tissue images are segmented by dividing the image-based data into different attributes such as texture, color, and intensity [14]. Medical images are used to obtain helpful information such as the location and size of any disease in the human body. They help to find the exact location of the pectoral muscles, tumors, and damaged tissues [15].
Types of Medical Images
Researchers use different medical images (e.g., thermography, magnetic resonance imaging (MRI), X-ray mammograms, ultrasound images, and histopathological images) to train the algorithms to diagnose the tumor.
Thermography is an advanced and cost-effective method for screening breast cells that does not expose the body cells to ionizing radiation. Cancer symptoms include angiogenesis, swelling, nitric oxide vasodilatory phenomena, and estrogen activity. Thermography plays a vital role in improving breast cancer detection and classification [16]. Patients who have a high risk of tumor are given magnetic resonance imaging (MRI) where the other imaging techniques fail to detect any abnormality; it is not very frequently used due to its high cost. Mammography is a very commonly used technique for tissue screening to diagnose a tumor. Mammography has long been the gold standard for breast screening, but its interpretation is challenging because malignancies can present as tiny, subtle features [17]. This screening technique is not effective on dense breasts. Young females have a higher risk of radiation-induced breast cancer because their undifferentiated cells are more prone to be influenced by ionizing radiation compared to older females [18]. For the detection and diagnosis of tumors in dense breasts, ultrasound is used as a supplement to mammographic screening; its results depend on tumor size, breast density, equipment, and the experience of physicians [19]. Different techniques such as MRI, ultrasound, mammography, and thermography are used in clinical analysis. Moreover, in histopathology, suspected patients undergo a needle tissue biopsy. Pathologists take hematoxylin and eosin (H&E) stained tissue samples of patients and investigate those tissues under the microscope. This analysis is laborious and time consuming. That is why, in the last decade, computer-aided diagnosis (CAD) systems have been automated with advanced techniques to diagnose tumors [20].
Machine Learning Techniques
Machine learning is a branch of artificial intelligence (AI) in which different algorithms are used to differentiate normal and tumor cells. Some techniques are SVM, KNN, LR, Naïve Bayesian network, artificial neural network, decision tree, and random forest. Machine learning has been used in many healthcare applications such as physical activity recognition and cognitive health assessment [21][22][23][24].
Deep Learning Techniques
Deep learning is a sub-branch of machine learning that eventually relates to artificial intelligence (AI). Deep learning has been used in many healthcare applications such as dementia detection and cognitive health assessment [25][26][27][28][29][30][31]. Some techniques distinguish tumor cells from normal cells, such as convolutional neural networks (CNN), RNN, and DNN. These techniques can be used in the segmentation and classification of normal and abnormal breast cells. This paper initially reviews the methods for the effective segmentation and classification of tumors used by researchers and proposes a model for classification using SVM, KNN, and LR. To enhance accuracy in cancer detection, different AI techniques are experimented with to obtain accurate decisions about disease stages that can be minor or acute. Different AI techniques have been developed [32,33] for precise automated diagnosis. Some of the most effective techniques are CNN, SVM, and genetic machine learning algorithms. Researchers are working hard to merge two or more artificial techniques from the last decade to produce new hybrid techniques for better accuracy.
Different AI techniques are applied these days to improve the flaws of tumor detection. SVM, KNN, and LR are effective combinations to classify diseases. These techniques are applied to multidimensional datasets to predict precisely whether the tissues or cells are healthy or infected. The collected raw data are processed and stored in a database. Then, we apply different classifier algorithms to that dataset to obtain better results. The main concern of this research is to differentiate between benign and malignant tumors. Accordingly, we propose KNN, SVM, and LR to differentiate benign from malignant tumors. This research will help radiologists, physicians, and health consultants to diagnose the initial stage, Benign, or acute stage, Malignant. The whole experimentation is done on the Breast Cancer Wisconsin (Diagnostic) Dataset collected from the Kaggle Website.
Related Works
Breast cancer is a deadly disease in the present era. Different researchers are working hard to help diagnose it at the initial stage to avoid an acute phase. In this classification field, CNN and SVM are essential to help the researchers to classify patients' data. Here, we overview different machine learning and deep learning techniques on different bio-images. However, our primary focus is on mammographic images.
The authors in [34] have proposed a cloud and decision-based fusion AI system using a hierarchical DL (CF-BCP) model to predict breast cancer. This simulation uses MATLAB (2019a) and deep learning techniques, i.e., CNN and DELM, on 7909 and 569 fused samples. Their model attains 97.975% accuracy in the detection of breast cancer. The research in [35] analyzed SVM, KNN, LR, random forest, naïve Bayes, and decision tree techniques on the breast cancer dataset of Dr. William H. Wolberg of Wisconsin Hospital to detect breast cancer in its early stages. The LR model gave the best result with 98.1% accuracy. The study in [36] compares different classification methods such as KNN, decision tree, SVM, Bayesian network, and naïve Bayes under the WEKA environment to check the best accuracy. The overall experiment shows that the Bayesian network gave the highest accuracy with fewer features, while the highest accuracy for the more featured dataset was given by SVM. The study in [37] reviews several segmentation techniques on ultrasound and mammographic images. For this, preprocessing is necessary to remove the redundant data. High-quality data will help achieve the best possible accuracy in classifying whether the cancer is benign or malignant.
The authors in [38] proposed a model based on local pixel information and a neural network for segmentation and extraction of the region of interest (ROI) on a dataset of 250 ultrasound images, using machine learning with ANN and BPNN to differentiate benign and malignant tumors. The authors of [39] performed breast cancer classification on two datasets, the first having 380 and the second having 163 ultrasound images from University Hospital, Amman, Jordan. They used CNN and SVM classifiers for the feature extraction and classification of breast cancer and successfully achieved a performance of 94.2% [39]. The proposed work in [40] classifies breast cancer as benign or malignant. The authors used 151 images, of which 79 are benign tumors (BIRADS 2-3) and 72 are malignant tumors (BIRADS 4-5), for the experiment. They used CAD systems, specifically random forest (RF), SVM, and CNN, and conducted segmentation, feature extraction, and classification, attaining accuracies of 80.00%, 77.78%, and 85.42%, respectively. Ultrasound-based existing research is mentioned in Table 1. The authors in [41] proposed a parallel model including CNN and RNN to classify hematoxylin-eosin-stained breast biopsy images. They experiment on three datasets, BACH2018 with 400 images, Bioimaging2015 with 249 histology images, and Extended Bioimaging2015 with 1319 images, to classify normal tissues, benign lesions, carcinomas, and invasive carcinomas. The authors in [42] have proposed a new hybrid convolutional and recurrent deep neural network for the classification of breast cancer. They used recurrent neural network (RNN), CNN, SVM, and NVIDIA GPUs on an ImageNet dataset, ICIAR, ISBI, ICPR, and MICCAI, having 3771 images, 249 images from Bioimaging2015, and 400 histopathological images in 2019. The highest accuracy achieved was 91.3%. The authors in [43] have introduced a novel transfer learning-based approach to automate the classification of normal tissues, benign lesions, and malignant lesions. They applied the deep neural network ResNet-18 and enhanced its adoption by using global contrast normalization (GCN) with data augmentation. They used a DNN and softmax classifier on 7909 histopathological images from the Anatomy and Cytopathology (P&D) Lab, Brazil, and conducted binary classification. The authors in [44] used Breast Cancer Computer-Aided Diagnosis (BC-CAD) with deep neural network (DNN) and RNN binary classification techniques on 92 histopathological images from Wisconsin UCI to differentiate normal and tumor cells. The proposed methodology in [45] focused on CNN, ML, DL, IHC-Net, and a combination of naïve Bayes, SVM, and RFD as segmentation, feature extraction, and classification techniques. They used a dataset of 400 histopathological images and finally obtained the best accuracy (98.24%). The classifier with hand-engineered features performed better, with a 98.41% F-score and 97.66% accuracy. Histopathological image dataset-based research and its results are given in Table 2. SVM is used to obtain better results in classification in [46]. CAD systems follow two segmentation methods: first, a region of interest (ROI) is detected, and second, a threshold is used. The authors used a DCNN architecture named AlexNet to classify two classes on the DDSM and curated DDSM (CBIS-DDSM) datasets. Using the CBIS-DDSM dataset, the AUC reached about 88%; the accuracy of the DCNN also improved to 73.6%, and the overall AUC with the involvement of SVM reached 94%.
The work in [47] applied the CNN technique to train two datasets: the Full-Field Digital Mammography Dataset (FFDM) and the Digital Dataset of Screening Mammography (DDSM), the latter having 14,860 Mammographic images. CNN, AlexNet, and ImageNet are used to classify benign and malignant.
The authors in [48] worked on the segmentation and classification of breast cancer using DL, SVM, the Soft-Max function, and the Sigmoid function on a dataset of 400 mammographic images. They found that SVM showed better results than the DL techniques. The authors in [49] proposed different segmentation techniques such as HDF K-means clustering, the OKFCA and OKFC algorithms, fuzzy and region-growing techniques, and the AOKFCA algorithm on a dataset of 322 mammographic images from the Mammographic Image Analysis Society (MIAS) database. The whole experiment shows that MFKFCS produces the highest accuracy of 80.42%. Mammographic dataset-based research and its results are given in Table 3. Thermograms are also used in breast cancer classification. The authors of [51] used a public dataset containing 146 breast thermograms (117 benign and 29 malignant) and achieved a sensitivity of around 79.86%. The authors in [50] proposed a method to detect breast cancer using mammograms. This study employs preprocessing, segmentation, feature extraction, and classification. Breast cancer is classified using SVM, LR, AdaBoost, decision tree, KNN, and random forest classifiers. The obtained accuracies were 90%, 85%, 57%, 54%, 76%, and 61% for the SVM, LR, AdaBoost, decision tree, KNN, and random forest classifiers, respectively. Overall, SVM achieved the highest accuracy among them.
From the above literature review, mammographic bio-imaging shows low response accuracy compared to histopathological bio-imaging. We propose a model by applying machine learning techniques such as SVM, KNN, and LR on mammographic bio-imaging to enhance the accuracy of breast cancer detection. This research will help the radiologists and physicians diagnose this disease, and accordingly, they will prescribe precautions and medication to the patients.
Proposed Methodology
This study detects masses in mammograms and identifies benign and malignant tissues. This paper proposes a new CAD system. It involves preprocessing of the dataset, feature extraction, and classification. The confusion matrix, the receiver-operating curve (ROC), and the AUC evaluate a classifier for precise accuracy. The whole process of segmentation and classification is mentioned in Figure 2.
Dataset Description
The Breast Cancer Wisconsin Diagnostic Dataset (BCWD) is collected from the Kaggle Website (https://www.kaggle.com/uciml/breast-cancer-wisconsin-data accessed on 1 January 2022). This breast cancer database was initially obtained from Madison University of Wisconsin Hospitals. It is mammographic data that contain attributes such as clump thickness, cell size uniformity, cell shape, marginal adhesion, single epithelial cell size, bare nuclei, bland chromatin, normal nucleoli, and mitoses. The dataset contains 699 instances from different patients. It combines eight different data groups containing two classes with 458 benign and 241 malignant instances. We divide the data into two parts, 60% as training data and the remaining 40% as test data, and conduct simulation accordingly.
Preprocessing
As the collected data need refinement, different techniques are implemented to improve the raw data to obtain better results. There are two main steps, extraction and classification, to convert the raw data into useful data. Preprocessing consists of the following steps. Data transformation involves converting the data files into a form that is understandable to human beings; file format conversion, data magnification, and data mapping help to enhance accuracy. In our scenario, we used normalization to remove noise and data redundancy and to map the dataset. Data noise is removed by using a Gaussian filter. Data redundancy and inconsistency are also removed manually, since these factors affect the overall accuracy of any model. Ambiguous and missing values cause inaccuracy; we correct these flaws manually by inserting mean or median values and eliminating records in which more than 60% of the values are missing.
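A hedged sketch of these preprocessing steps is shown below. The paper's simulations were run in MATLAB, so this pandas/scikit-learn version is purely illustrative, and the target column name and label encoding are assumptions.

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler

def preprocess(df: pd.DataFrame, target: str = "diagnosis"):
    # Drop records in which more than 60% of the attribute values are missing
    df = df.dropna(thresh=int(0.4 * df.shape[1]))
    # Impute remaining gaps with the column median
    features = df.drop(columns=[target])
    features = features.fillna(features.median(numeric_only=True))
    # Unity-based normalization to the 0-1 range
    X = MinMaxScaler().fit_transform(features)
    y = (df[target] == "M").astype(int)  # assumed encoding: 1 = malignant, 0 = benign
    # 60% training / 40% test split, as described in the paper
    return train_test_split(X, y, train_size=0.6, random_state=0)

# Usage: X_train, X_test, y_train, y_test = preprocess(pd.read_csv("bcwd.csv"))
```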
Classification
Classification is used to differentiate benign from malignant tumors so that patients can be treated accordingly. Data mining is required to analyze the data and make estimations [52]. Many issues are resolved during run time. Extensive data mining is used effectively in pattern recognition, and text mining is used in feature selection. For breast cancer detection, the following parameters are used: uniformity of cell shape, uniformity of cell size, bare nuclei, bland chromatin, clump thickness, and normal nucleoli. We use 5-fold cross-validation in all models on the training data using MATLAB to train the models and obtain better accuracy on the test data; we then run the simulation on the test data. The above attributes help to attain high accuracy on the test data. Different classification techniques in machine learning can obtain the highest accuracy. All three techniques used in this simulation are given below.
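An illustrative version of this training and evaluation loop, reusing the split produced by the preprocessing sketch above; again, the paper used MATLAB, so the scikit-learn estimators and their settings here are stand-ins rather than the exact configurations of the study.

```python
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

models = {
    "weighted KNN (k=7)": KNeighborsClassifier(n_neighbors=7, weights="distance"),
    "logistic regression": LogisticRegression(max_iter=1000),
    "quadratic SVM": SVC(kernel="poly", degree=2, C=1.0),
}

# X_train, y_train, X_test, y_test come from the preprocessing sketch above
for name, model in models.items():
    scores = cross_val_score(model, X_train, y_train, cv=5)  # 5-fold cross-validation
    print(f"{name}: CV accuracy {scores.mean():.3f} +/- {scores.std():.3f}")
    model.fit(X_train, y_train)
    print("  test accuracy:", model.score(X_test, y_test))
```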
K-Nearest Neighbor Model
KNN is a classification algorithm in machine learning that predicts the accuracy of disease detection. All KNN models, such as Fine, Medium, Coarse, Cosine, Cubic, and Weighted KNN, are used in the simulation.
• Find the k training instances (x_i, t_i) nearest to the test instance x.
• The classification output is the majority class among these neighbours, as shown in Equation (1); a standard form is sketched below.
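Equation (1) can be expected to take the conventional k-NN majority-vote form sketched here (the exact notation of the paper may differ):

```latex
\hat{y}(x) \;=\; \arg\max_{c}\; \sum_{(x_i,\,t_i)\,\in\, N_k(x)} \mathbf{1}\!\left(t_i = c\right),
```

where N_k(x) denotes the k training instances nearest to x.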
The implementation of KNN on medical data goes through the series of steps given in Algorithm 1.

Algorithm 1 (similarity-based KNN labelling of a test instance X):
for each test instance X do
    if X has an unknown system call then
        X is abnormal
    else
        for each instance D_j in the training data do
            calculate sim(X, D_j)
            if sim(X, D_j) equals 1.0 then
                X is normal; exit
            end if
        end for
        find the k biggest scores of sim(X, D)
        calculate sim-avg over the k nearest neighbors
    end if
end for
if sim-avg is greater than the threshold then
    X is normal
else
    X is abnormal
end if
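A runnable sketch of the similarity-threshold decision described in Algorithm 1 is given below; the cosine similarity, the threshold value, and the omission of the domain-specific "unknown system call" check are assumptions of this sketch.

```python
import numpy as np

def knn_label(x, train_X, k=7, threshold=0.9):
    """Label a test instance as 'normal' or 'abnormal' following Algorithm 1."""
    x = np.asarray(x, dtype=float)
    sims = []
    for d in train_X:
        d = np.asarray(d, dtype=float)
        sim = float(np.dot(x, d) / (np.linalg.norm(x) * np.linalg.norm(d)))
        if sim >= 1.0:            # exact match with a training instance
            return "normal"
        sims.append(sim)
    top_k = sorted(sims, reverse=True)[:k]      # k largest similarity scores
    sim_avg = sum(top_k) / len(top_k)
    return "normal" if sim_avg > threshold else "abnormal"
```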
Logistic Regression Model
The logistic regression classifier consists of a single model used to check the accuracy rate of disease detection. Implementation of the LR model on medical data goes through the steps given in Algorithm 2.
Here, we outline the overall working of this algorithm, as expressed in Equations (2)–(6).
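A generic sketch of the standard logistic-regression relations that Equations (2)–(6) describe (the notation here is generic and may differ from the paper's):

```latex
z = \mathbf{w}^{\top}\mathbf{x} + b, \qquad
\hat{p} = \sigma(z) = \frac{1}{1 + e^{-z}}, \qquad
\hat{y} =
\begin{cases}
\text{malignant}, & \hat{p} \ge 0.5,\\
\text{benign},    & \hat{p} < 0.5,
\end{cases}
\qquad
J(\mathbf{w}, b) = -\frac{1}{m}\sum_{i=1}^{m}\Big[y_i\log\hat{p}_i + (1-y_i)\log(1-\hat{p}_i)\Big],
\qquad
\mathbf{w} \leftarrow \mathbf{w} - \eta\,\nabla_{\mathbf{w}} J .
```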
Support Vector Machine Model
The segmentation step is used to eliminate various abnormalities from the data. In this step, data are classified as either benign or malignant based on their features. SVM takes instances and assigns each of them a specific class for proper evaluation. Data ambiguity is eliminated, and cases are evaluated to predict accurate results. Image masking enhances the resolution and removes unwanted pixels. Gray-scale conversion then sets the image size, which is checked against a threshold; this normalization step is completed with the threshold calculated using Otsu's method [53]. SVM implementation on medical data goes through the different steps mentioned in Algorithm 3.
There are numerous classifiers, and SVM is one of them. All SVM models, such as Linear, Quadratic, Cubic, Fine Gaussian, Medium Gaussian, and Coarse Gaussian SVM, are used in the simulation. We train on the dataset and evaluate the results accordingly in MATLAB. Here, we explain the SVM algorithm; its working is given in Equations (7)–(14).

Algorithm 3 (SVM training, fragment):
repeat
    for each pair of Lagrange multipliers do
        optimize α_i and α_j
        evaluate input values
        evaluate accuracy
        evaluate confusion matrix
    end for
until no change in α or other resource-constraint criteria are met
Ensure: retain only the support vectors (α_i > 0)
return Output
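For reference, a compact sketch of the standard soft-margin SVM formulation that Equations (7)–(14) elaborate; the notation and the quadratic-kernel example are generic assumptions rather than the paper's exact expressions.

```latex
\min_{\mathbf{w},\,b,\,\boldsymbol{\xi}}\;
\tfrac{1}{2}\lVert\mathbf{w}\rVert^{2} + C\sum_{i=1}^{m}\xi_i
\quad\text{s.t.}\quad
y_i\big(\mathbf{w}^{\top}\phi(\mathbf{x}_i)+b\big) \ge 1-\xi_i,\;\; \xi_i \ge 0,
\qquad
\max_{\boldsymbol{\alpha}}\;
\sum_{i}\alpha_i - \tfrac{1}{2}\sum_{i,j}\alpha_i\alpha_j y_i y_j K(\mathbf{x}_i,\mathbf{x}_j)
\quad\text{s.t.}\quad
0 \le \alpha_i \le C,\;\; \sum_i \alpha_i y_i = 0,
```

with decision function f(x) = sign(Σ_i α_i y_i K(x_i, x) + b); a quadratic SVM corresponds to the kernel K(x_i, x_j) = (x_i·x_j + 1)².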
Evaluation and Results
According to the literature review of existing work, histopathological bio-images show better overall accuracy results than other modalities, as mentioned in Table 4. We use accuracy as an evaluation measure: accuracy is derived by dividing the number of correctly predicted samples by the total number of samples evaluated, as shown in Equation (15).
Sensitivity or recall is used to calculate the fraction of positive patterns that are correctly classified, as shown in Equation (16). The accuracy is directly related to the true-negative and false-positive classes. Here, true positive (TP) indicates that cancer exists and is predicted positive. True negative (TN) indicates that cancer does not exist and is predicted negative. False positive (FP) indicates that cancer does not exist but is predicted positive. False negative (FN) indicates that cancer exists but is predicted negative.
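The standard definitions behind Equations (15)–(17) (accuracy, sensitivity, and the precision introduced in the next paragraph) can be summarised as follows; these are the conventional forms and may differ slightly in notation from the paper's equations.

```latex
\text{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN}, \qquad
\text{Sensitivity (recall)} = \frac{TP}{TP + FN}, \qquad
\text{Precision} = \frac{TP}{TP + FP}.
```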
Precision is used to compute the percentage of "positive patterns correctly predicted by all predicted patterns in a positive class", as shown in Equation (17).

KNN relies on distances between neighbors, measured here by the Euclidean metric, and data normalization helps to enhance classification accuracy. In the KNN model, a k-value is required to predict the class of unknown points and thereby differentiate the classes. The k-value decides the number of nearest neighbors used to label unlabeled data and is always a positive integer. We used odd numbers of neighbors (3, 5, 7), and k = 7 gave the best result in the simulation. The KNN employed in the proposed approach achieves the highest accuracy of 100% on the training dataset and 97.0% on the test data with the weighted model. This model has a prediction speed of 2500 observations per second and a training time of 6.1157 s. The fine model achieved 94% accuracy with a prediction speed of 2500 observations per second and a training time of 2.9811 s. The medium model of KNN achieved 96% accuracy with a prediction speed of 1500 observations per second and a training time of 3.9217 s. The coarse model gave the lowest accuracy of all the KNN models. When no other classifier is available, the results achieved by employing KNN are satisfactory; nevertheless, because the value of k is chosen at random, its performance is lower than that of the SVM classifier.

The receiver operating characteristic (ROC) curve describes the diagnostic capability of a binary classifier. The ROC graph plots the FPR on the x-axis and the TPR on the y-axis, both ranging between 0 and 1, over all possible threshold values of the classifier. The ROC curve therefore gives a tradeoff between cost and benefit; the closer the values are to 1, the higher the accuracy of the model. The confusion matrix and ROC curve of the KNN classifier are given in Figure 3a,b. We achieve the accuracies from the KNN models given in Table 5.

The logistic regression model's parameters are estimated using LR classification. The LR classifier achieves 94.0% accuracy with a prediction speed of 2400 observations per second and a training time of 52.778 s. The confusion matrix and ROC curve of the LR classifier are given in Figure 4a,b. We achieve the accuracy given in Table 6 by using this model.

We tuned our SVM model using its parameters. We have two classes, malignant and benign, graded by colors: blue for malignant and red for benign. Tuning the area-mean and concave points-mean proves an efficient classifier. Our data lie in different magnitudes, so we use unity-based normalization and scale all data records to the 0–1 range. SVM creates a hyperplane that divides the data into the two classes, malignant and benign. To avoid underfitting and overfitting problems, we optimized the parameters by tuning the C parameter and the Gamma value. SVM achieves the highest accuracy of 97.7% with the quadratic and cubic models. The quadratic model takes 2.4081 s to train with a prediction speed of 3700 observations per second, while the cubic model takes 4.7405 s to train with a prediction speed of 2300 observations per second. The quadratic model is therefore the best fit regarding prediction speed and training time. With a prediction speed of 2000 observations per second, the linear model achieved 97.5% accuracy in 3.509 s. The fine Gaussian model gave SVM its lowest accuracy. Overall, the number of correct identifications in both classes is much larger than the number of incorrect ones.
These findings show that SVM can forecast breast cancer and distinguish between benign and malignant tumors. After the overall simulation, we obtain the confusion matrix and receiver operating characteristic (ROC) curve, together with the parallel coordinates and scatter plots of the SVM models, given in Figures 5a,b and 6a,b, respectively. Finally, we obtain the accuracy percentages of the different SVM models given in Table 7.
Conclusions
Different bio-images are used in the existing work to evaluate which bio-imaging can help differentiate benign and malignant tumors with high accuracy. Based on previous work, we conclude that mammograms and histopathological datasets play a vital role in classifying and effectively diagnosing breast cancer. The actual goal of this research work is to evaluate the accuracy of the machine learning techniques, i.e., SVM, LR, and KNN. We select these techniques as these techniques are the best-proven approaches to diagnosing diseases in the healthcare sector. The MATLAB environment enhances the accuracy of the state-of-the-art models in the simulation. The proposed approach effectively improves the cancer detection rate using instances from the dataset. The simulation results show that quadratic and cubic models of SVM achieved an accuracy of 97.7% based on rules. Still, the overall average accuracy of KNN is higher than SVM. With our contribution, cancer detection accuracy goes up. The positive prediction rate for benign is 97% and 99% for malignant, whereas the false prediction rate for benign is 3% and 1% for malignant. Overall, the proposed model accuracy increases by decreasing false positives and false negatives. This model is designed precisely to diagnose whether a patient is suffering from benign or malignant tumors. Future research can be done toward the microscopic classification of anomalies. Multilayered neural network architecture can be used in the future for complex features. | 6,475 | 2022-05-01T00:00:00.000 | [
"Computer Science"
] |
Distribution of absorbed photons in the tunneling ionization process
We describe a procedure that allows us to solve the three-dimensional time-dependent Schrödinger equation for an atom interacting with a quantized one-mode electromagnetic field. Atom-field interaction is treated in an ab initio way prescribed by quantum electrodynamics. We use the procedure to calculate probability distributions of absorbed photons in the regime of tunneling ionization. We analyze evolution of the reduced photon density matrix describing the state of the field. We show that non-diagonal density matrix elements decay quickly, as a result of the decoherence process. A stochastic model, viewing ionization as a Markovian birth-death process, reproduces the main features of the calculated photon distributions.
Results
Theoretical model. We will first outline briefly the procedure we employ to describe an atom interacting with a quantized electromagnetic field. The quantized vector potential can be written as [9,35]:
Â(r, t) = Σ_{k,λ} √(2πc²/(ω_k V)) ε_{k,λ} ( â_{k,λ} e^{i(k·r − ω_k t)} + â†_{k,λ} e^{−i(k·r − ω_k t)} ),   (1)
where it is assumed that the electromagnetic field is quantized in a finite volume V, and â_{k,λ} are the photon annihilation operators. The combined Hilbert space of the system atom+field is the tensor product H_atom ⊗ H_field. Here H_atom and H_field are the electron and photon sectors of the Hilbert space, respectively. We will employ the well-known fact that the photon Hilbert space is spanned by the Fock states |N⟩ — the eigenstates of the operator N̂_{k,λ} = â†_{k,λ} â_{k,λ} of the number of photons in the mode k, λ. We will consider below only one mode of the field, corresponding to linear polarization in the z-direction and a particular photon frequency ω = 0.057 a.u. (wavelength of 800 nm). We will omit, therefore, the subscripts k, λ in all the formulae below. Using the basis of the Fock states, the matrix elements of the photon operators in Eq. (1) assume the well-known form [36]:
⟨N − 1| â |N⟩ = √N,   ⟨N + 1| â† |N⟩ = √(N + 1),   (2)
while all other matrix elements have zero values. The state of the combined system atom+field at the initial moment of time, t₀ = 0 a.u., is assumed to be a tensor product φ₀ ⊗ |N₀⟩ of the ground atomic state φ₀ and the Fock state |N₀⟩. Subsequent evolution of the system is governed by the time-dependent Schrödinger equation (TDSE) [9] (we use the velocity gauge to describe the atom-field interaction):
i ∂Ψ(t)/∂t = ( Ĥ_atom + Â·p̂/c + Â²/(2c²) ) Ψ(t),   (3)
where Ĥ_atom is the atomic Hamiltonian. We do not have to include the field Hamiltonian in Eq. (3) because the vector potential in Eq. (1) is time-dependent, i.e., the representation we use in Eq. (3) is the Schrödinger representation for the atomic operators and the interaction representation for the field operators. This representation can be obtained from the Schrödinger picture, in which neither atomic nor field operators depend on time, by means of the unitary transformation exp(−iĤ_field t) generated by the field Hamiltonian Ĥ_field.
We use a non-relativistic form, Ĥ_atom = p̂²/2 + V(r), of the atomic Hamiltonian, with the short-range atomic potential V(r) = −1.903 e^{−r}/r. This potential supports only one bound state of s-symmetry with ionization potential |ε₀| = 0.5 a.u. Though our computational procedure can be applied equally well for the case of the Coulomb potential, we choose a short-range atomic potential with only one bound state to concentrate fully on ionization by excluding all effects due to excitation processes. Another assumption we make is the dipole approximation, which consists in neglecting the spatial dependence of the vector potential in Eq. (1). Both these assumptions are easily justified for the moderate field intensities (of the order of several units of 10¹⁴ W/cm²) that we consider below. A short explanation of what we mean by the light intensity may be appropriate here. As is well-known, for the Fock state of the field, the expectation values of the field operators (e.g., the vector potential (1)) are zero. To relate the photon number N to the observable effects, we can use instead the cycle-averaged expectation value of the Poynting operator, which in the Fock state |N⟩ is ωcN/V [36]. The cycle-average of the Poynting vector computed for the classical monochromatic linearly polarized wave E₀ cos ωt is, on the other hand, cE₀²/(8π). From the point of view of the time-averaged flux of energy, the Fock state |N₀⟩ is, therefore, equivalent to a monochromatic wave with E₀ = √(8πωN₀/V). We will call E₀ defined in this way the 'equivalent field strength'.
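A small numerical illustration of the equivalent-field-strength relation; the photon number and quantization volume below are arbitrary placeholders, and the conversion from atomic units of field strength to intensity uses the standard factor of 3.51 × 10¹⁶ W/cm² at E₀ = 1 a.u.

```python
import math

OMEGA = 0.057                      # photon energy, a.u. (800 nm)
AU_INTENSITY = 3.51e16             # W/cm^2 corresponding to E0 = 1 a.u.

def equivalent_field(n_photons: float, volume_au: float) -> float:
    """E0 = sqrt(8*pi*omega*N0/V), all quantities in atomic units."""
    return math.sqrt(8.0 * math.pi * OMEGA * n_photons / volume_au)

# Placeholder numbers, chosen only to exercise the formula
E0 = equivalent_field(n_photons=1.0e6, volume_au=1.0e12)
print(f"E0 = {E0:.3e} a.u., I ~ {AU_INTENSITY * E0**2:.3e} W/cm^2")
```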
To solve the evolution equation (3) numerically, we use the completeness of the Fock states in H_field and expand the time-dependent wave-function of the atom+field system as (Eq. (4)) |Ψ(t)⟩ = Σ_N |f_N(t)⟩ ⊗ |N⟩, where the |f_N(t)⟩ are vectors from the atomic Hilbert space H_atom, and the parameters n_1, n_2 define the range N_0 − n_1 ≤ N ≤ N_0 + n_2 of the Fock states we need to keep to ensure convergence of the expansion (4). Details of the procedure we use to solve the TDSE (3) using the expansion (4) are given in the Section "Methods" below. We solve the TDSE for the interval of time (0, MT), where T = 2π/ω is an optical cycle corresponding to the driving frequency ω = 0.057 a.u. For the majority of the calculations we report below, we used M = 12. That this choice of M is adequate for the purposes of the present work is shown in the Section "Methods" below. We are primarily interested in the evolution of the state of the electromagnetic field over the pulse duration. The state of the field can be described by the reduced density matrix ρ_F(t) 36, which can be computed from the density matrix ρ(t) = |Ψ(t)⟩⟨Ψ(t)| of the complete atom+field system, obtained from the solution |Ψ(t)⟩ of the TDSE, by taking a partial trace with respect to the atomic variables. For |Ψ(t)⟩ represented as the expansion (4), the partial trace is easily computed, giving the following expression (Eq. (5)) for the reduced density matrix describing the state of the field: ρ_F^{N_1N_2}(t) = ⟨f_{N_2}(t)|f_{N_1}(t)⟩, where ⟨f_{N_2}(t)|f_{N_1}(t)⟩ is the scalar product of the vectors f_N(t) from the atomic Hilbert space H_atom occurring in the expansion (4). Matrix elements of ρ_F(t) in the basis of the Fock states can, therefore, be easily computed once the TDSE (3) is solved. The diagonal matrix elements ρ_F^{NN}(t) then give us the probabilities P_N(t) of observing the field in a state with N photons.
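Because ρ_F is just the matrix of mutual overlaps of the atomic vectors |f_N(t)⟩, the partial trace is a one-line operation once those vectors are available. A schematic illustration follows, with random vectors standing in for an actual TDSE solution.

```python
import numpy as np

def reduced_field_density_matrix(f: np.ndarray) -> np.ndarray:
    """Partial trace over the atomic variables.

    f[:, N] holds the atomic-space vector |f_N(t)> attached to the Fock state |N>
    in the expansion |Psi(t)> = sum_N |f_N(t)> (x) |N>.  The reduced field density
    matrix has elements rho_F[N1, N2] = <f_N2(t) | f_N1(t)>, as in Eq. (5).
    """
    overlaps = f.conj().T @ f          # overlaps[i, j] = <f_i | f_j>
    return overlaps.T                  # transpose to match the (N1, N2) convention

# Toy example: 3 Fock states kept, atomic space of dimension 4, random state.
rng = np.random.default_rng(0)
f = rng.normal(size=(4, 3)) + 1j * rng.normal(size=(4, 3))
f /= np.linalg.norm(f)                 # normalize the full atom+field state

rho_F = reduced_field_density_matrix(f)
P = np.diag(rho_F).real                # probabilities P_N of finding N photons
assert np.isclose(P.sum(), 1.0)
```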
We will be interested below in the process of absorption of photons. For the field initially in the Fock state |N_0⟩, the number of photons is fixed, and the projection of the wave-function of the system at time t on the Fock state |N_1⟩ directly gives the probability of absorption or emission of N_1 − N_0 photons. For the initial Fock state of the field that we consider in the present work, the matrix element ρ_F^{N_0−n, N_0−n} (where n is a positive integer) therefore gives us the probability P_n of absorption of n photons. From the point of view of experimental studies of photon absorption dynamics, a more natural choice would be a coherent state of the field. This choice entails a difficulty, however. For a coherent state the number of photons is not fixed, and we can specify only the expectation value N_0 of the number of photons. The dispersion of the number of photons is proportional to N_0^{1/2} 36 and can be much larger than the number of photons absorbed in the process of ionization. This makes the definition of the number of absorbed photons less straightforward for a coherent state. It is the possibility of directly interpreting the ρ_F^{N_0−n, N_0−n} as absorption probabilities which motivated our choice of the Fock state of the field as the initial state. We can expect, however, that the general features of the ionization dynamics which we describe below will remain valid in the case of ionization driven by a field in a coherent state. The phase of the field in a Fock state is completely undefined. It is known 37 that the effect of the undefined field phase on the density matrices can be described as a suitable average of density matrices obtained for coherent states with different phases of the field. Therefore, we can expect this effect to vanish for long enough pulses. That this is indeed the case was shown, e.g., in 38 by comparing electron spectra obtained for ionization driven by a field in a coherent state and in a Fock state of the same effective field strength.
More convenient for the study of the absorption process is the normalized probability distribution Q_n. This distribution is the conditional probability of absorbing n photons from the field, given that at least one photon has been absorbed; it coincides with P_n for n ≥ 1 up to a normalization factor which ensures that Σ_{n=1}^{∞} Q_n = 1.
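In practice Q_n is obtained from P_n by discarding the n = 0 entry and renormalizing, for example (illustrative numbers only):

```python
import numpy as np

def absorbed_photon_distribution(P: np.ndarray) -> np.ndarray:
    """Conditional distribution Q_n = P_n / sum_{m>=1} P_m for n >= 1.

    P[n] is the probability that n photons have been absorbed
    (P[0] is the probability that none were absorbed)."""
    Q = P.astype(float).copy()
    Q[0] = 0.0                       # Q_n is defined for n >= 1 only
    return Q / Q.sum()

P = np.array([0.90, 0.02, 0.03, 0.03, 0.015, 0.005])   # illustrative values
Q = absorbed_photon_distribution(P)
assert np.isclose(Q[1:].sum(), 1.0)
```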
The results for the probability distribution of absorbed photons, Q n , that we obtain by solving the TDSE (3) and using Eq. (5) are shown in Fig. 1. For comparison, we also show in the Figure the results we obtain for Q n using the SFA (the details of this calculation are given in the Section "Methods"), and the results we obtain using a stochastic model of the ionization process we present below.
Stochastic model of strong field ionization. To understand the main features of the distributions Q_n in Fig. 1, and to get better insight into photon absorption in the strong field ionization process, we report below a study of the dynamics of the process. This study is based on an analysis of the reduced density matrix ρ_F(t) as a function of time over the pulse duration. The exact equation governing the evolution of the reduced density matrix can be obtained by taking a partial trace, with respect to the atomic variables, of the von Neumann equation describing the quantum evolution of the atom+field system (Eq. (6)), where Ĥ(t) and ρ(t) are the Hamiltonian and the density matrix of the system atom+field, respectively. Eq. (6) and the initial condition ρ(0) = |N_0⟩⟨N_0| ⊗ |φ_0⟩⟨φ_0| (here |N_0⟩ and |φ_0⟩ are the initial states of field and atom) determine, in principle, the subsequent evolution of the reduced photon density matrix. This equation is exact but can hardly be solved in practice for the system we are presently studying. Various simplifications of the general equation for the reduced density matrix, the so-called 'master equations', describing the evolution of a subsystem interacting with the environment (often called the 'reservoir' in the literature), are known 39. These approximations are usually based on the assumption that the evolution of the reduced density matrix is Markovian 39, that is, the process has no memory and its evolution for t > t_0 is defined by the state at t_0. In this case the general form of the master equation in the so-called Lindblad form 40 can be derived.

Atom-field interaction as an example of decoherence phenomenon. In the problem we are presently considering, the subsystem we are interested in is the electromagnetic field; the atom plays the role of the environment. That this separation may be meaningful and useful can be seen from Fig. 2, which shows absolute values of the reduced density matrix elements ρ_F^{N_0−n_1, N_0−n_2} as functions of time (measured in units of the optical cycle).
One observes from Fig. 2 that the density matrix elements ρ_F^{N_0−n_1, N_0−n_2} with negative n_1, n_2 (corresponding to the emission of photons) have negligible values. This is, of course, expected for the long pulses we consider presently. More interestingly, we observe a progressive decay of the non-diagonal elements of the density matrix for positive n_1, n_2. By the time t = 12 o.c., for both field strength values shown in Fig. 2, the matrix elements are predominantly concentrated along the main diagonal with positive n_1, n_2. Such behavior is, in fact, typical of the density matrix of a subsystem interacting with an environment, and is a manifestation of the decoherence process 41. The decoherence process has been invoked [41][42][43][44] to clarify the measurement problem and to understand the transition from the quantum to the classical description in quantum mechanics.
Decay of the non-diagonal elements of the reduced density matrix can be understood as follows. According to Eq. (5), matrix elements of the reduced density matrix are the overlaps ⟨f_{N_2}(t)|f_{N_1}(t)⟩ of the corresponding states of the environment (states of the atom in our case). With increasing time, the environment states |f_N(t)⟩ with different N become progressively more and more separated in energy, and thus their overlaps tend to zero. The state (4) of the atom+field system evolving from the initial product state |φ_0⟩ ⊗ |N_0⟩ is an entangled state. With the atomic vectors |f_N(t)⟩ becoming approximately orthogonal with time, a measurement performed on the atom which finds the atom in the state |f_N(t)⟩ allows us, therefore, to state with certainty that the field is in the corresponding state |N⟩. For large enough times, therefore, the environment (i.e. the atom) carries complete information about the system (the field). This is, indeed, the decoherence process at work, as described, e.g., in 43. Thus, we see that by picturing the field as a system, and the atom as the environment, we encounter an interesting example of the decoherence process, which manifests itself through the decay of the non-diagonal matrix elements, computed in the basis of Fock states, of the reduced photon density matrix.
Evolution of diagonal elements of the reduced density matrix. In the previous section we saw that, due to the decoherence process, the non-diagonal matrix elements of the reduced photon density matrix become small for large enough times. We now turn our attention to the diagonal elements of the reduced density matrix. As mentioned above, under the assumption of a Markovian character of the process, the master equation describing the evolution of the reduced density matrix can be written using the Lindblad operators, which in our case can be represented as matrices with dimension equal to the number of photon states we consider. For the problem of a one-mode electromagnetic field in a cavity interacting with a bath (which may be, e.g., the phonons in the material of the walls of the cavity), the master equation can be further simplified to give a relation of the form 39 (Eq. (7)) dP_n(t)/dt = λ_{n−1}P_{n−1}(t) + µ_{n+1}P_{n+1}(t) − (λ_n + µ_n)P_n(t), where, using the notation we employed above, P_n(t) = ρ_F^{N_0−n, N_0−n} are the diagonal elements of the reduced photon density matrix. For the problem considered in 39 the explicit expressions for the coefficients λ_n and µ_n in this equation can be given. We cannot use those particular expressions, however, since our problem differs somewhat from the one considered in 39, where the environment was assumed to remain in thermal equilibrium during the process. This would certainly not be the case in our problem, where the environment (the atom) is not in an equilibrium state over the whole pulse duration. However, we can preserve the general structure of Eq. (7) as the master equation describing the evolution of the density matrix in our problem. Indeed, Eq. (7) is a Kolmogorov equation 36,45 describing the so-called 'birth-death processes' [45][46][47], and is therefore sufficiently general for our purposes. A more detailed justification of the possibility to use Eq. (7) for the description of the evolution of the diagonal photon density matrix elements is given in the section "Methods" below.
The 'birth-death processes' are continuous-time Markov chains 45 in which, over a small interval of time, only jumps to neighboring states are allowed. The meaning of the parameters λ_k and µ_k in Eq. (7) in our case is the rate of absorption (λ_k) or emission (µ_k) of a photon in a state in which k photons have already been absorbed. Using Eq. (7) as the master equation for our problem, we thus assume a Markovian character for the process. We can use the Kolmogorov equation (7) not only for the probability distribution P_n, but also for the normalized distribution Q_n of the number of absorbed photons. One can see that if P_n(t) obeys Eq. (7), the evolution of Q_n(t) for large enough times is also described by an equation of the type (7). Indeed, by definition, for n ≥ 1: Q_n(t) = P_n(t)/C, where C is the normalization factor equal to the total probability of absorbing at least one photon, which for long enough pulses should equal the total ionization probability. In the field regime we consider, the total ionization probability is, in turn, proportional to time, so the constant C ∝ t. We have, therefore, Q_n(t) ∝ P_n/t. Substituting this expression into Eq. (7) and neglecting terms of the order of 1/t², we obtain a Kolmogorov-type equation (7) for Q_n(t).
What will interest us now is the steady state solution of Eq. (7), i.e., the solution to which the solutions Q_n(t) of the Kolmogorov equation (7) tend in the limit t → ∞. Such a solution represents the equilibrium state; in our case it is the equilibrium between the atom and the field reached at the end of the ionization process. A sufficient condition for the steady state distribution to exist is |λ_{k−1}/µ_k| < 1 for all k greater than some k_0. The steady state solution, if it exists, can be obtained from Eq. (7) by setting the time derivatives on the left-hand side to zero and solving the resulting recurrence relation. The result reads 48 (Eq. (8)) Q_n ∝ Π_{k=1}^{n} λ_{k−1}/µ_k. As for the rate coefficients λ_n and µ_n in Eq. (7), we will try to find a set of them having as simple a form, and as few free parameters, as possible. As one can see from Eq. (8), it is the ratio λ_{k−1}/µ_k which determines Q_n. We represent this ratio as λ_{k−1}/µ_k = 1 + f(k), and we use a trial form (Eq. (9)) for the function f(k) which equals a constant α for small k and a constant β for large k, the two plateaus being joined near a point x_0 by q(x), a third-order polynomial fixed uniquely by the requirement that f(x) and its first derivative be continuous functions. A typical form of the function defined by Eq. (9) is shown in Fig. 3. Our choice of f(x) is suggested by Eq. (8). As one can see from this expression, if for large n the Q_n form a geometric sequence such that Q_{n+1}/Q_n = b < 1, then the choice β = b − 1 in Eq. (9) reproduces this behavior and (provided |β + 1| < 1) guarantees the existence of a steady state solution of the Kolmogorov equation. Similarly, if for small n the Q_n behave so that Q_{n+1}/Q_n = a, we might use α = a − 1 in Eq. (9). Finally, the choice of the parameter x_0 fixes the position of the maximum of the distribution. We thus have three parameters, α, β, and x_0, in Eq. (9), which we treat as fitting parameters for the trial form of f(k). Results of the three-parameter fits based on Eq. (9) are shown in Fig. 1.
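The following sketch evaluates the steady-state distribution implied by Eq. (8) for a given ratio function. Since the exact piecewise form of Eq. (9) is not reproduced above, a smooth tanh join between the plateaus α and β is used here as a stand-in for the cubic polynomial q(x); the parameter values are illustrative, not fitted values from this work.

```python
import numpy as np

def steady_state_Q(f, n_max: int) -> np.ndarray:
    """Steady-state solution of the birth-death equation (Eq. (8)):
    Q_n proportional to prod_{k=1..n} lambda_{k-1}/mu_k, with each ratio 1 + f(k)."""
    ratios = 1.0 + np.array([f(k) for k in range(1, n_max + 1)])
    Q = np.cumprod(ratios)
    return Q / Q.sum()

def f_trial(k, alpha=0.8, beta=-0.25, x0=15.0, width=3.0):
    """Stand-in for the trial function of Eq. (9): equals alpha well below x0 and
    beta well above it; a tanh join replaces the cubic q(x) of the paper."""
    s = 0.5 * (1.0 + np.tanh((k - x0) / width))
    return (1.0 - s) * alpha + s * beta

Q = steady_state_Q(f_trial, n_max=60)
n_peak = int(np.argmax(Q)) + 1           # Q is indexed from n = 1
print(f"maximum at n = {n_peak}; large-n decay factor ~ {Q[-1] / Q[-2]:.3f}")
```

With β + 1 = 0.75 < 1 the tail decays geometrically, so the steady state exists, while α > 0 makes Q_n grow at small n; x_0 controls where the maximum sits, mirroring the roles of the three fitting parameters described above.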
Discussion
Our results, presented in Fig. 1, show that the SFA results obtained using the procedure described in the section "Methods" below and the ab initio results obtained by solving the TDSE (3) with the QED Hamiltonian agree reasonably well. The three-parameter fitting procedure based on the 'birth-death' model described above gives considerably better agreement with the calculated distributions Q_n. The relatively small deviation between the TDSE and the SFA results can be attributed to the fact that the SFA neglects the effect of the atomic potential on the ionized electron. For systems with a Coulomb potential, for instance, the SFA can give predictions for ionization probabilities which are a few orders of magnitude smaller than the results of TDSE calculations, unless special care is taken to account for the Coulomb interaction 8. For systems with short-range interactions TDSE and SFA usually agree better, but we can still expect some differences for a potential of small but finite radius of the order of one atomic unit, like the potential V(r) we use. This can be seen if we note that physical results do not depend on the gauge used for the description of the electromagnetic field. SFA results obtained using the commonly used length and velocity gauges, on the other hand, may differ 8, thereby implying the presence of an intrinsic, albeit possibly relatively small, error. The SFA becomes a gauge-independent theory only for the zero-range potential 8.
For the reader's convenience we summarize briefly the basic premises on which this model was based. We first demonstrated that the non-diagonal elements of the reduced density matrix describing the field vanish with time. We believe this phenomenon presents an interesting example of the decoherence process which may occur when we follow the evolution of a subsystem interacting with its environment. The subsystem in question and the environment are, in our case, the field and the atom, respectively. Looking at the atom as the environment is certainly not a conventional way to view the ionization process. We saw, however, that the mechanism leading to the decay of the non-diagonal matrix elements of the photon density matrix is the same as in more conventional examples of the decoherence phenomenon, e.g., a subsystem interacting with a collection of harmonic oscillators 41,49,50. Different environment states in the entangled subsystem+environment state become approximately orthogonal with time. This leads to the suppression of the non-diagonal elements of the reduced density matrix describing the subsystem 41,42. The source of this approximate orthogonality in the present case is the fact that, for large enough times, different atomic vectors in the expansion Eq. (4) become well separated in energy. As for the diagonal matrix elements of the reduced photon density matrix (5), we proposed a simple model which pictures absorption and emission of photons in the process of strong-field ionization as a 'birth-death' process. An assumption we made was the Markovian character of the process. For the related problem of the one-mode electromagnetic field in a cavity, the master equation governing the evolution of the field can be cast in a form 39 reminiscent of the Kolmogorov equation describing birth-death stochastic models. We adopted this equation as the equation governing the evolution of the reduced photon density matrix for the strong field ionization process. We were interested in the steady state solution of the Kolmogorov equation, to which Q_n(t) tends in the limit t → ∞. As we noted, the steady state solution does not always exist. For instance, we would not have such a solution for a pure birth process (all µ_k = 0 in Eq. (7)). If, for example, we set all coefficients µ_k to zero in Eq. (7) and assume that all coefficients λ_k have equal values, λ_k = λ, then the solution of Eq. (7) would be the Poisson distribution 48 Q_n(t) = e^{−λt}(λt)^n/n!, which does not have a non-trivial steady state limit for t → ∞. That photon absorption distributions are distinctly non-Poissonian has been noted in the literature 7,38,51. In the framework of our model, based on the Kolmogorov equation, the non-Poissonian character of the distribution of absorbed photons is just a consequence of the fact that the Poisson distribution is not a steady state distribution; the Q_n in this case depend explicitly on time.
Conclusions
We studied the evolution of a quantum system consisting of an atom interacting with a quantized electromagnetic field. The study was based on the numerical solution of the time-dependent Schrödinger equation driven by the QED Hamiltonian. Use of the QED picture allowed us to study the probability distribution Q_n of photons absorbed in the tunneling regime of strong field ionization.
We proposed a statistical model based on the view of ionization as a stochastic birth-death process. In the framework of this model the distribution Q_n can be interpreted as the steady state solution of the Kolmogorov equation, to which the distribution of absorbed photons tends in the limit of large times. Making an assumption about the behavior of the ratio of the birth and death rates λ_{k−1}/µ_k of the stochastic process, encapsulated by Eq. (9) with three parameters treated as fitting parameters, we obtained the results presented in Fig. 1.
Methods
Solution of the TDSE. We use a coordinate representation for the vectors |f_N(t)⟩ in the expansion Eq. (4). We will omit, therefore, the Dirac notation for |f_N(t)⟩ and simply write f_N(r, t), understanding them as functions of the spatial coordinates and time. To solve the TDSE (3) we represent f_N(r, t) as a partial-wave expansion (Eq. (10)) over spherical harmonics Y_{l0}(n̂) with radial functions f_{Nl}(r, t), adapted to the geometry imposed by the field, with the polarization vector along the z-direction. The radial variable r is treated by discretizing the TDSE on a grid with a step-size δr = 0.1 a.u. in a box of size R_max. Upon substituting the expansions (4) and (10) into Eq. (3), projecting the result on the vectors Y_{l_i 0}(n̂) ⊗ |N_i⟩ with different l_i, N_i, and computing the arising matrix elements, we obtain a system of coupled evolution equations for the radial functions f_{Nl}(r, t). This system of coupled equations is solved using a matrix iteration method 52. This procedure is, in fact, quite similar to the procedure employed for the solution of the ordinary atomic TDSE reported previously [53][54][55].
The computational cost of the strategy based on the expansions (4) and (10) depends on the values of the parameters n_1, n_2 in Eq. (4), the parameter l_max in Eq. (10), and the parameter R_max defining the size of the box. The parameters n_1 and n_2 should be roughly of the order of the maximum number of photons which can be absorbed (parameter n_1) or emitted (parameter n_2) during the evolution. To choose the parameters l_max and R_max properly, we relied on the experience gained in solving the ordinary atomic TDSE for a classical field with intensity related to the photon number N_0 according to the relation given above. For instance, for the solution of Eq. (3) for the equivalent field strength E_0 = 0.1 a.u. (corresponding intensity of 3.51 × 10^14 W/cm²), we used R_max = 1500 a.u., l_max = 50, n_1 = n_2 = 50. The choice of R_max also depends, of course, on the duration of the time interval over which we solve the TDSE (3). As mentioned above, for the majority of the calculations we report, this interval was (0, 12T), where T = 2π/ω is the optical cycle corresponding to the driving frequency ω = 0.057 a.u. In solving the TDSE, we introduce a cutoff envelope function g(t), so that the expression for the vector potential used in the solution of the TDSE (3) is g(t)Â(r, t), where Â(r, t) is the operator given by Eq. (1); the envelope function was chosen so that the interaction is switched on and off smoothly at the ends of the interval (0, MT) on which the TDSE is solved. Introduction of an adiabatic turning on and off of the interaction is a commonly employed procedure in quantum field theory, used e.g. for the construction of the S-matrix 56. In our case, the introduction of the envelope function is also necessary for computational reasons. We cannot propagate the TDSE numerically on very long time intervals, and we wish to avoid the effects of a sudden turning on and off of the interaction. We must ensure, of course, that the envelope function we use is indeed adiabatic, i.e., that the results we obtain are not affected by the choice of the duration of the time interval (0, MT) on which we propagate the TDSE. We presented an extensive study of this question in the work 38, where we solved the TDSE for the system atom+photon field using a semi-classical description of the quantized electromagnetic field, based on an approach proposed in 37. We showed in that work that the results for the photon number distribution are not affected by the particular choice of envelope function, as long as the interval on which the TDSE is propagated is sufficiently long. To show that this conclusion remains valid in the present case, when we solve the fully quantum TDSE using the expansion (4), we show in Fig. 4 the absorbed-photon probability distributions Q_n obtained for different pulse durations at an effective field strength of 0.07 a.u.
Estimate of the photon density matrix based on the SFA. To estimate the elements of the reduced density matrix (5) describing the photon field, we can use the semi-classical method for the description of the quantized electromagnetic field proposed in 37. In the framework of this procedure, the atomic Hilbert space vector |f_N(t)⟩ in the expansion (4) can be found as a Fourier transform with respect to the field phase (Eq. (12)), |f_N(t)⟩ = ∫_0^{2π} e^{−imθ}|Φ(t, θ)⟩ dθ/(2π), where m = N − N_0, and the vector |Φ(t, θ)⟩ belonging to the atomic Hilbert space is a solution of the time-dependent Schrödinger equation (13), with an initial condition such that Φ(r, t, θ) is the ground atomic state at t = 0, and A(t, θ) is a classical field (Eq. (14)) with the same effective amplitude A_0 = √(8πN_0c²/(ωV)) as the quantum electromagnetic field (1), and the same envelope function g(t) we used above in the fully quantum calculation. The effect of the quantum nature of the field in the semi-classical approach 37 reveals itself through the presence of the classical phase θ, uniformly distributed in the interval (0, 2π). The appearance of the uniformly-distributed classical phase can be traced back 37 to the fact that the phase of the field is completely undetermined in a Fock state of the field 36. By computing the Fourier transform of the solution of the classical TDSE (13), as prescribed by Eq. (12), we can find the components |f_N(t)⟩ in Eq. (4). To use this recipe, we need an analytic estimate for the solution of the classical TDSE (13). This estimate can be obtained using the SFA. In the framework of this approximation, the solution to Eq. (13) satisfying the condition that the atom is in the ground atomic state φ_0(r) at t = 0 can be written as 7,8 (Eq. (15)) Φ(r, t, θ) = φ_0(r)e^{−iε_0 t} + ∫ a(k, t, θ) e^{ik·r}/(2π)^{3/2} dk, where ε_0 is the ground state energy and a(k, t, θ) are the SFA ionization amplitudes, given in the velocity gauge which we use by the expression (16) 7,8, in which the classical vector potential A(τ, θ) is given by Eq. (14) and φ_0(k) is the Fourier transform of the initial state wave-function. Using Eq. (16) we can compute the quantity ã(k, t, m) = ∫_0^{2π} a(k, t, θ)e^{−imθ} dθ/(2π). We will need only the elements of the photon density matrix with N_1 < N_0, N_2 < N_0 (corresponding to states of the field with at least one photon absorbed by the atom), for which the term φ_0(r)e^{−iε_0 t} in Eq. (15) does not contribute. Using Eq. (12) and Eq. (15), we obtain Eq. (17) for these elements of the photon density matrix in Eq. (5), expressed as momentum-space overlap integrals of the amplitudes ã(k, t, m). All the integrals in Eqs. (17), (12), and (15) were computed numerically. The calculation is quite straightforward and we will not dwell upon its details.
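The phase integral that converts the phase-dependent SFA amplitude into photon-resolved amplitudes is simple to carry out on a uniform θ grid. The sketch below performs only that step; the θ-dependence used is an arbitrary placeholder with a few harmonics, standing in for the full Volkov-phase amplitude of Eq. (16), which is not implemented here.

```python
import numpy as np

def photon_resolved_amplitudes(a_theta: np.ndarray, m_values: np.ndarray) -> np.ndarray:
    """a~(m) = (1/2pi) * integral_0^{2pi} a(theta) exp(-i*m*theta) d(theta),
    evaluated as a uniform-grid sum (exact for low-order trigonometric polynomials)."""
    n_theta = a_theta.size
    theta = 2.0 * np.pi * np.arange(n_theta) / n_theta
    phases = np.exp(-1j * np.outer(m_values, theta))       # shape (n_m, n_theta)
    return phases @ a_theta / n_theta

# Placeholder phase dependence for one fixed (k, t): a few harmonics only.
n_theta = 256
theta = 2.0 * np.pi * np.arange(n_theta) / n_theta
a_theta = 0.3 * np.exp(-2j * theta) + 0.1 * np.exp(-5j * theta)

m = np.arange(-8, 1)                     # m = N - N_0; negative m means absorption
a_m = photon_resolved_amplitudes(a_theta, m)
print(np.round(np.abs(a_m) ** 2, 3))     # weight appears at m = -2 and m = -5
```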
Justification of the use of the Kolmogorov equation (7) for the diagonal elements of the reduced photon density matrix. In the present section we describe in more detail the reasoning which led us to the assumption that the evolution of the diagonal elements of the reduced photon density matrix can indeed be described by the Kolmogorov equation (7). We cannot provide a mathematically rigorous proof of this statement. Indeed, the task we have at hand is the description of the irreversible behavior of a system interacting with a reservoir, which is a notoriously difficult problem. It can be solved in some instances when simplifying assumptions about the reservoir can be made, e.g., the assumption that the reservoir is affected very little by the system and that it remains in a state of thermal equilibrium during the process. Such an assumption was made in 39; it allows one to obtain a Kolmogorov-type equation describing the evolution of the reduced photon density matrix for a field interacting with a cavity. This assumption amounts to postulating that the reservoir is sufficiently large (i.e. contains many degrees of freedom) and that typical relaxation times of the reservoir are much shorter than the typical time interval over which the photon density matrix changes appreciably. We can hardly use these assumptions in the present case, where the role of the reservoir is played by the atomic system. We cannot, therefore, use this line of argument. We will, instead, present some arguments of a heuristic character, based primarily on numerical evidence. First, we note that the three-term structure of Eq. (7) appears quite naturally in the quantum mechanical equation for the evolution of the diagonal elements of the photon density matrix. The latter can be obtained from the expression (5) for the density matrix elements. We will employ, as above, the notation P_n(t) = ρ_F^{N_0−n, N_0−n} for the diagonal elements of the reduced photon density matrix. We can use the above-mentioned fact that, in the semi-classical method proposed in 37 (which is an excellent approximation for the field parameters we consider), the atomic Hilbert space vector |f_N(t)⟩ in the expansion (4) is a Fourier transform (12) of the solution Φ(r, t, θ) of the time-dependent Schrödinger equation (13) with the classical vector potential A(t, θ) given by Eq. (14). Using Eq. (12), Eq. (13), and expressing Φ(r, t, θ) in terms of the |f_N(t)⟩ by means of the Fourier transform inverse to Eq. (12), we obtain Eq. (18), where m_1 = N_1 − N_0 and Ĥ(t, θ) is the Hamiltonian operator on the right-hand side of Eq. (13). Projecting Eq. (18) on f_N(r, t) and using the explicit expressions for the Hamiltonian (13) and the vector potential (14), we obtain Eq. (19) for the time derivative of P_n(t) = ⟨f_{N_0−n}(t)|f_{N_0−n}(t)⟩, where A_0 and g(t) are the amplitude and the envelope function in Eq. (14), and I(z) stands for the imaginary part of a complex number z. In deriving Eq. (19) we neglected terms proportional to the overlaps ⟨f_N(t)|f_{N_1}(t)⟩ with N ≠ N_1, which give the non-diagonal elements of the photon density matrix; as we have seen, these vanish with time due to the decoherence process. Eq. (19) is not yet of the form of the Kolmogorov equation (7), since it involves matrix elements of the momentum operator calculated with the amplitudes f_N(t), and not the overlaps of the f_N(t) themselves. These momentum matrix elements can, however, be represented (Eq. (20)) as combinations of the norms ⟨f_{N−1}(t)|f_{N−1}(t)⟩ and ⟨f_N(t)|f_N(t)⟩ weighted by functions a(t) and b(t), and similarly for the second matrix element in Eq. (19).
In Eq. (20), a(t) and b(t) are functions of time satisfying a(t) + b(t) = 1, but they are otherwise arbitrary. In order for Eq. (20) to lead to the Kolmogorov form (7), these functions must be chosen so that the coefficients of P_n(t), P_{n−1}(t) and P_{n+1}(t), resulting upon substituting Eq. (20) (and an analogous expression for the matrix element ⟨f_N(t)|p_z|f_{N+1}(t)⟩) into Eq. (19), are approximately time-independent. We can provide only numerical evidence indicating that such a choice is indeed possible, and that the diagonal photon density matrix elements P_n(t) calculated in the present work do approximately satisfy a Kolmogorov-type equation. To show this, let us introduce the functions P̃_n(t) = P_n(t)/P̄_n, where the P̄_n are the constant values which the P_n(t) assume at the end of the pulse. If we assume that the P̄_n are indeed stationary limiting values of the random birth-death process described by Eq. (7), then, from Eq. (8), we must have P̄_{n+1}/P̄_n = λ_n/µ_{n+1}. Using this relation and the Kolmogorov equation (7), one obtains the following equation (21), which the P̃_n(t) must satisfy if our basic assumptions about the statistical character of the process are correct: dP̃_n(t)/dt = µ_n P̃_{n−1}(t) + λ_n P̃_{n+1}(t) − (λ_n + µ_n)P̃_n(t). This equation is somewhat easier to handle numerically than Eq. (7), since only µ_n and λ_n with the same index n appear on the right-hand side. We can now check whether the P̃_n(t) obtained from our numerical calculations indeed satisfy Eq. (21) with some coefficients µ_n and λ_n. A straightforward way to proceed is to use a least-squares fit, taking as input the computed values of P̃_n(t) and their derivatives, and using Eq. (21) as the fitting expression. More specifically, we form the functional (22) as the squared residual of Eq. (21) accumulated over a fitting interval (t_1, t_2), on which the values of P̃_n and its derivative are computed, and we seek the minimum of the functional (22) with respect to variations of the parameters µ_n, λ_n. The results of this procedure for a particular field strength of 0.08 a.u. and particular values of t_1, t_2 defining the interval on which the fitting procedure is applied are shown in Fig. 5 for n = 17 and n = 18 (the values of n for which the distribution of absorbed photons has its maximum at this intensity, as Fig. 1 shows). One can see that the fit based on Eq. (22) reproduces the correct behavior of the derivative dP̃_n(t)/dt fairly well. That this fact is not entirely trivial can be seen from Fig. 6, where we show the functions P̃_n(t) appearing on the right-hand side of Eq. (21) for n = 18. If the functions P̃_n(t) were simple (e.g., monotonic) functions of time, the success of the fitting procedure (22) could be regarded as a mere coincidence. This is not the case, however: as Fig. 6 shows, the functions P̃_n(t) are rather complicated functions of time. Even more important, perhaps, is the fact that, to obtain the results shown in Fig. 5, we used an interval (t_1, t_2) at the end of the pulse. As one can see from the figure, the fitting expression (22) with the coefficients µ_n, λ_n obtained for this interval proves relatively accurate even for t-values lying well outside the interval (t_1, t_2). This, in our opinion, shows that the fitting formula (22) indeed captures essential features of the behavior of the diagonal matrix elements of the reduced photon density matrix.
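Since Eq. (21) is linear in the unknown rates, the minimization of the squared residual reduces to an ordinary linear least-squares problem. The sketch below illustrates the idea on synthetic trajectories generated from known rates; it is not the authors' code, and the input curves are arbitrary smooth functions rather than computed density matrix elements.

```python
import numpy as np

def fit_birth_death_rates(t, Pm1, Pn, Pp1):
    """Least-squares estimate of (mu_n, lambda_n) in Eq. (21):
        dP~_n/dt = mu_n*P~_{n-1} + lambda_n*P~_{n+1} - (lambda_n + mu_n)*P~_n."""
    dPn_dt = np.gradient(Pn, t)
    # The residual is linear in the unknowns: dP/dt = mu*(Pm1 - Pn) + lam*(Pp1 - Pn)
    design = np.column_stack([Pm1 - Pn, Pp1 - Pn])
    (mu, lam), *_ = np.linalg.lstsq(design, dPn_dt, rcond=None)
    return mu, lam

# Synthetic check: integrate Eq. (21) with known rates, then recover them.
mu_true, lam_true = 0.8, 0.5
t = np.linspace(0.0, 5.0, 400)
Pm1 = 1.0 - np.exp(-t)                   # arbitrary smooth "neighbor" trajectories
Pp1 = 0.5 * (1.0 - np.exp(-0.5 * t))
Pn = np.zeros_like(t)
for i in range(1, t.size):               # explicit Euler step of Eq. (21)
    dt = t[i] - t[i - 1]
    rhs = mu_true * Pm1[i-1] + lam_true * Pp1[i-1] - (mu_true + lam_true) * Pn[i-1]
    Pn[i] = Pn[i-1] + dt * rhs

print(fit_birth_death_rates(t, Pm1, Pn, Pp1))   # approximately (0.8, 0.5)
```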
We can also see that Eq. (21) approximately describes the ionization dynamics by converting the difference equation into a differential one. Let us consider the function P̃(n, t), which for integer n coincides with P̃_n(t). Then, from Eq. (21), one can obtain a partial differential equation that P̃(n, t) should approximately satisfy (Eq. (23)): ∂P̃(n, t)/∂t = (λ_n − µ_n) ∂P̃(n, t)/∂n. A consequence of this equation is that the ratio (∂P̃(n, t)/∂t)/(∂P̃(n, t)/∂n) is a function of n only. Fig. 7 shows lines of constant elevation of the function h(t, n) = arctan[(∂P̃(n, t)/∂t)/(∂P̃(n, t)/∂n)] in the (t, n)-plane (we use the arctangent of the ratio to make h(t, n) vary within finite limits). If Eq. (23) is approximately valid, the lines of constant elevation of h(t, n) should be lines of constant n. This is indeed approximately the case, as can be seen from Fig. 7: the lines of constant elevation of h(t, n) are lines n ≈ const everywhere in the (n, t)-plane apart from some regions, which are in fact the neighborhoods of points where ∂P̃(n, t)/∂n has zeros. When deriving Eq. (23), we approximated finite differences by first-order partial derivatives. This approximation fails in the vicinity of zeros of ∂P̃(n, t)/∂n; the deviation of the lines of constant elevation from n ≈ const near these zeros is therefore to be expected. We believe, therefore, that Fig. 7 provides good evidence in favor of the approximate validity of Eq. (23). Assuming that this equation is valid, we can retrace the steps used to derive it and obtain the Kolmogorov equation (7) for the diagonal elements of the reduced photon density matrix.
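A quick way to see what the constant-elevation test above amounts to is to apply it to a surface that satisfies an advection equation of the form (23) by construction. The sketch below does this for a synthetic P̃(n, t); for real data the same two np.gradient calls would be applied to the computed diagonal density matrix elements. The surface chosen here is an arbitrary smooth example, not a result of this work.

```python
import numpy as np

def elevation_angle(P_nt: np.ndarray, n: np.ndarray, t: np.ndarray) -> np.ndarray:
    """h(t, n) = arctan[(dP/dt) / (dP/dn)] on a sampled surface P_nt[i, j] = P(n_i, t_j)."""
    dP_dn, dP_dt = np.gradient(P_nt, n, t)      # derivatives along axis 0 (n) and 1 (t)
    return np.arctan2(dP_dt, dP_dn)

# Synthetic surface P(n, t) = F(psi(n) + t): then (dP/dt)/(dP/dn) = 1/psi'(n),
# a function of n only, so level sets of h are lines of constant n.
n = np.linspace(1.0, 40.0, 80)
t = np.linspace(0.0, 12.0, 120)
NN, TT = np.meshgrid(n, t, indexing="ij")
P_nt = np.tanh(0.3 * (0.5 * NN + 0.01 * NN**2 + TT) - 5.0)

h = elevation_angle(P_nt, n, t)
print(np.median(h.std(axis=1)))                 # small: h hardly varies along t
```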
Influence of Embedded Gap and Overlap Fiber Placement Defects on Interlaminar Properties of High Performance Composites
Automated fiber placement (AFP), once limited to aerospace, is gaining acceptance and offers great potential for marine structures. This paper describes the influence of manufacturing defects, gaps, and overlaps, on the out-of-plane properties of carbon/epoxy composites manufactured by AFP. Apparent interlaminar shear strength measured by short beam shear tests was not affected by the presence of defects. However, the defects do affect delamination propagation. Under Mode I (tension) loading a small crack arrest effect is noted, resulting in higher apparent fracture energies, particularly for specimens manufactured using a caul plate. Under Mode II (in-plane shear) loading there is a more significant effect with increased fracture resistance, as stable propagation for specimens with small gaps changes to arrest with unstable propagation for larger gaps.
Introduction
The use of automated fiber placement (AFP) is increasing, as it offers the possibility to produce very complex shapes with tight process control [1]. Initially developed for high performance aerospace applications, the capability for efficient manufacture of complex structures can also be applied to marine components such as hydrofoils [2], propellers [3], and tidal turbine blades [4]. These structures tend to be thicker than aerospace composites, so through-thickness properties are more critical. Figure 1 shows an example of a foil manufactured by AFP on an ocean racing yacht.
AFP enables the trajectory of unidirectional composite tape to be optimized, but laying down complex shapes with this technology can result in defect introduction. Two particular types of defects are possible: gaps between tapes and overlaps where they are superposed. Several authors have investigated the influence of these defects on in-plane properties, with particular emphasis on the more critical compression properties. These have included studies to quantify how 90° defects can result in ply waviness, leading to reduced compression performance [5], and tow drops have also been shown to affect compression behaviour [6]. The number and offset of gaps are additional parameters which have been examined [7][8][9][10]. Experimental studies have also underlined the importance of defect orientation [11,12]. A recent study highlighted the importance of staggering, offsetting successive ply defects, to reduce their influence [13]. There has also been some testing and modelling work on the influence of gaps on behaviour under dynamic loading [14,15] and fatigue [16]. In two previous papers, for the same carbon-epoxy material as studied here, the authors also examined the influence of different gap and overlap singularities, first under in-plane tensile [17], then under in-plane shear and compression loads [18]. Such defects may occur due to variations in tape width or movements on complex mould shapes.

In spite of the significant amount of data now available for in-plane properties, very few studies have focused on the influence of manufacturing defects on out-of-plane behaviour. Comer and colleagues [19,20] did study the interlaminar fracture behaviour of thermoplastic composites manufactured by tape placement, but without defects. They found that the laminates produced by Laser Assisted Automated Tape Placement (LATP) performed better than the autoclaved laminates in terms of interlaminar fracture toughness, probably due to the presence of butt joints, but ILSS (interlaminar shear strength) and other mechanical properties were lower. Void contents were higher in the LATP materials. Stokes-Griffin and Compston [21] also used an out-of-plane shear test (ILSS) to study processing parameters for tape placement of carbon/PEEK and found a significant influence of placement rate. Grouve et al. [22] used a peel test to examine adhesion between layers in tape-laid carbon/PPS laminates for different manufacturing conditions, but again without studying singularities. Other work on the influence of manufacturing singularities on out-of-plane properties includes a recent study by Zhou et al., who used a through-thickness tensile test [23]. Their experimental and numerical results indicated that gaps of up to 3 mm could result in a drop in out-of-plane tensile strength from around 37 to 30 MPa. Ghayour et al. used short and long flexural specimens to examine the effect of tow gaps. They found a reduction in apparent shear strength of 13% and a drop in flexural stiffness of 35% (due to thickness reduction), compared with a hand lay-up reference, when gaps were present [24]. However, to date there has been very little work to characterize the presence of defects in AFP composites using a fracture mechanics approach. This is a powerful method, as it enables the interaction of local singularities with a propagating crack to be characterized, and fracture mechanics values are being increasingly integrated in design in order to evaluate damage tolerance. A recent paper describes these tests [25]. This paper will examine the influence of singularities deliberately introduced at the mid-plane of [90/0₇/90]s laminates on the out-of-plane fracture properties of the same carbon/epoxy composites as those previously studied under in-plane loading [17,18]. Samples from plates manufactured with and without caul plates were tested. This is an original application of fracture mechanics testing, which introduces some difficulties in specimen definition and analysis but provides the basis to allow the presence of defects to be accounted for in design.
Material
The results reported in this paper were obtained by testing laminates made from AS4/8552 prepreg supplied by Hexcel Composites in Dagneux (France), reference (8552/AS4/RC34/AW194). Various plates were manufactured using a Coriolis 8 tow robotic fibre placement machine installed at Quéven (France) [26]. In order to be processable by the machine, the prepreg is slit to a width of 6.35 ± 0.125 mm. A single batch of prepreg material was employed to manufacture all the panels, with a compaction force of 600 N applied during lay-up. In order to introduce gap and overlap defects in specific regions of the panels, the machine was programmed using off-line software to stagger sectors. The same defect sizes as references [17,18] were introduced as they can be commonly found in double curvature parts produced using the AFP technology.
A 15 µm thick PTFE film was inserted at the mid-plane of the laminate to act as a delamination initiator. The edge of the starter film was located 10 mm from the centre of the gap/overlap defects.
In order to determine if the caul plate has an effect on the delamination behaviour of laminates having manufacturing singularities, a first series of panels was cured using a caul plate (2 mm thick aluminium sheet) while a second series of panels was cured without. To promote flow of material during cure and prevent the laminate from sticking to the caul plate, a Wrightlon™ 5200 PTFE release film supplied by AirTech ® (Springfield, TN, USA) was placed between the plate and the laminate.
The panels were cured in an autoclave at 180 °C under 7 bar pressure for 2 h, after a dwell at 110 °C for 1 h, following the prepreg supplier's recommendations. After cure, the panels were C-scanned to check the quality of the laminates. The test specimens were then cut using a diamond-coated wet circular saw and the edges were polished to prevent premature crack initiation.
Configuration of Samples
Two types of tests were employed in this study. First, short beam shear tests were performed on specimens from panels manufactured with a caul plate. This is a widely used quality control test described by ASTM D2344. Then Mode I and Mode II interlaminar fracture tests were performed. These are usually performed on unidirectional laminates, but in order to investigate how crack propagation is affected by gap or overlap defects, these singularities must be placed in a layer at 90° to the crack front. This requires careful consideration; Mode III stresses may occur during the loading phase if the specimen and each specimen arm are not symmetrical with respect to their mid-planes. Anticlastic deformation will be generated by the non-symmetrical half laminates.
This has been addressed in previous studies on delamination of cross-ply laminates [27][28][29]. The solution adopted here was to balance each arm by placing an additional 90° layer on the outer surface of the specimen, so the specimen lay-up was the following: [90°/0°₇/90°₂/0°₇/90°]. The samples must also be stiff enough to avoid large displacements, and this was achieved by keeping the majority of unidirectional plies in each arm of the laminate. Figure 2 shows schematically the defect positions in the two central 90° plies.
Material Quality Control
All test panels were inspected using ultrasonic C-Scan. Sofratest™ 49,944 equipment was used for the inspections with a flat aluminium panel acting as a reflector. The control was performed by a focused transducer, with a frequency of 10 MHz and a focalization length of 76 mm. The acquisition step was 0.5 mm. The quality of all the panels was satisfactory, with low attenuation and no evidence of delamination.
To check for voids and to verify the position and the morphology of the embedded defects, cross sections were observed with a Jeol JSM 6460 LV scanning electron microscope. To produce flat sections, the samples were polished with diamond paste down to 1 micron before examination. Several images were taken and then assembled, to produce the figures of defect regions shown in this paper. Figure 3 shows a section through the thickness of a specimen, illustrating the film insert and a 6.35 mm gap. The pre-crack created by the PTFE film is clearly visible on the left-hand side of the micro-graph. It can be noted that the defect initially created by a 6.35 mm gap has healed during cure. The resin and the 90 • fibres have flowed into the gap reducing the defect size to about 2 mm.
Short Beam Shear Tests
Short beam shear tests (NF EN ISO 14130) were performed in three-point flexure under displacement control at 2 mm/minute on specimens from all the panels produced with a caul plate. The distance between supports was 5 times the specimen thickness. Apparent interlaminar shear strength is calculated as τ13 = 0.75P/(Bh), with P the critical load at delamination, B the specimen width, and h the thickness.
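For reference, the short-beam data reduction quoted above is a one-line calculation; the specimen dimensions and load used below are arbitrary example values, not measured results.

```python
def apparent_ilss(load_n: float, width_mm: float, thickness_mm: float) -> float:
    """Apparent interlaminar shear strength tau13 = 0.75*P/(B*h).
    Load in N, width B and thickness h in mm, result in MPa (N/mm^2)."""
    return 0.75 * load_n / (width_mm * thickness_mm)

# Example: 4 kN failure load on a 10 mm wide, 4 mm thick short-beam specimen.
print(f"tau13 = {apparent_ilss(4000.0, 10.0, 4.0):.1f} MPa")   # 75.0 MPa
```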
Interlaminar Fracture Toughness Tests
The delamination specimens were tested on an Instron™ test machine with a load cell of 10 kN at loading rates of 2 mm/min and 1 mm/min respectively for Mode I and Mode II tests. For each configuration, an average of six specimens was tested.
The configuration of the delamination tests in Mode I is the Double Cantilever Beam (DCB), which follows the standard ISO 15024 [30]. This test applies a through-the-thickness tension to the two arms of the samples. The loading is introduced through aluminium blocks bonded to the ends of the samples on the upper and lower surfaces. Specimens are loaded by pins that leave the blocks free to rotate. Cyanoacrylate adhesive was used to bond the blocks.
For Mode II delamination tests a 4ENF (Four Point End Notched Flexure) geometry [31] was used to propagate mid-plane cracks under the effect of a shear stress introduced in 4-point bending. This is one of a number of in-plane shear delamination tests available [32] and has the advantage of encouraging the stable crack propagation required here. Figure 4 illustrates the 4ENF specimen configuration. For this test, the distance between the upper loading points was 50 mm and the distance between the lower supports was 100 mm. A roller bearing was positioned so that the upper load points rotate about the specimen mid-length.
Data Analysis
Standard data analysis to determine strain energy release rates was not applicable here because of the cross-ply nature of the sample and the unstable behaviour of the crack growth. The method used is based on the calculation of the crack length using beam theory and requires only the force and displacement data. This data analysis was developed by the author in a previous study of delamination tests performed under pressure [33] when visual crack length measurement was not possible inside a pressure vessel. The derivation of the data analysis can be found in [33].
For Mode I, the crack length a_Calc is calculated from a beam-theory relation (Equation (1)) involving the flexural modulus E, the second moment of inertia I, the opening displacement δ, and the applied load P. The apparent fracture toughness in Mode I, GIapp, is then calculated from Equation (2). For Mode II, the crack length is calculated from Equation (3); here δ is the deflection of the beam due to crosshead displacement, P is the applied load, E is the flexural modulus, I is the second moment of inertia, S the span between the outer loading rollers, and L the distance between the inner and the outer loading rollers. The apparent fracture toughness in Mode II is then calculated from Equation (4). It should be noted that the crack lengths and toughness values given in this paper are apparent values. They can only be used in a comparative way, but they enable the effect of the gap/overlap defects on the delamination behaviour of CF/epoxy laminates to be examined.
For calculations under both Mode I and Mode II loading, δ and P were recorded by the test machine data acquisition system.
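The exact crack-length and toughness expressions used here are those derived in [33] and are not reproduced above, but the spirit of the approach can be illustrated with a generic simple-beam-theory sketch for the Mode I (DCB) case: EI is fixed from the initial compliance at the known starter crack length (as described in the following paragraph), after which only the recorded load and displacement are needed. All relations and numbers below are textbook beam-theory assumptions used for illustration, not the equations or data of this study.

```python
def dcb_flexural_rigidity(a0_mm: float, c0_mm_per_n: float) -> float:
    """EI of one DCB arm from the initial compliance C0 = delta/P measured during
    the linear loading phase, assuming simple beam theory delta = 2*P*a^3/(3*EI)."""
    return 2.0 * a0_mm**3 / (3.0 * c0_mm_per_n)          # N*mm^2

def dcb_reduce(load_n: float, delta_mm: float, ei: float, width_mm: float):
    """Apparent crack length and Mode I strain energy release rate from load and
    displacement only (beam theory: a = (3*EI*delta/(2*P))**(1/3), G = P^2*a^2/(B*EI))."""
    a_calc = (3.0 * ei * delta_mm / (2.0 * load_n)) ** (1.0 / 3.0)   # mm
    g_i = load_n**2 * a_calc**2 / (width_mm * ei)                    # N/mm = kJ/m^2
    return a_calc, g_i

# Made-up example: 50 mm insert, measured initial compliance 0.05 mm/N,
# then one propagation point at P = 60 N, delta = 5 mm, specimen width 20 mm.
ei = dcb_flexural_rigidity(50.0, 0.05)
a, g = dcb_reduce(60.0, 5.0, ei, 20.0)
print(f"a_calc = {a:.1f} mm, G_Iapp = {g:.3f} kJ/m^2")
```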
In order to determine the term EI, Equations (1) and (3) were inverted to express EI in terms of the measured compliance. During the linear loading phase of the test, prior to any crack propagation, the initial crack length a0 is known (see Figure 4). The compliance C = δ/P is determined from the slope of the force vs. displacement curve during this linear loading, allowing EI to be determined for each specimen. GIIcApp can then be determined from Equation (4).

Figure 5 shows the results from the short beam shear tests. These results indicate that the short beam shear test, among the most widely used tests to check composite quality, is not very sensitive to the presence of these singularities. This is an interesting result, but the test has some limitations, as noted by Whitney and Browning [34], who showed the complex stress state in ILSS specimens, with compression stresses tending to suppress the interlaminar shear failure mode. It was therefore decided to perform interlaminar fracture tests, as these can quantify the behaviour when a propagating delamination meets a zone containing singularities.
Mode I Testing of Unidirectional Composites without Singularities
Figure 6 shows the results from Mode I tests on samples laid up from prepreg layers of the same material as that employed for tape laying, and manufactured in the autoclave with the same cure cycle, i.e., the same composite but without any singularities.
These curves show the typical behaviour of this material in DCB tests without singularities. The initiation from the starter film is unstable for all specimens; a load drop is noted on the force-displacement plot as the crack rapidly advances a few millimetres beyond the insert film, but then stable propagation is recorded throughout the test. This results in propagation values in the range 0.25-0.30 kJ/m². Figure 7 shows the load vs. displacement traces (left) and the calculated R-curves (right) of [90/0₇/90]s laminates containing 0.5 mm gaps. The crack propagation is typical of stick/slip behaviour. The crack jumps immediately to a 90/0 interface. With further loading, the crack jumps from one 90/0 interface to the other 90/0 interface. This phenomenon has also been observed by Brunner and Blackman for tests on specimens with 90° central layers without singularities [35]. Despite the differences in the laminate configurations, the apparent toughness levels reached in this study are comparable with the calculated GIC values reported by Brunner and Blackman.
Influence of Manufacturing Defects in Mode I
Figure 7 shows the load vs. displacement traces (left) and the calculated R-curves (right) of [90/0₇/90]s laminates containing 0.5 mm gaps. The crack propagation is typical of stick/slip behaviour. The crack jumps immediately to a 90/0 interface. With further loading, the crack jumps from one 90/0 interface to the other 90/0 interface. This phenomenon has also been observed by Brunner and Blackman for tests on specimens with 90° central layers without singularities [35]. Despite the differences in the laminate configurations, the apparent toughness levels reached in this study are comparable with the calculated GIC values reported by Brunner and Blackman.
Figure 8 shows an example of the load/displacement plots from six DCB tests on specimens with the largest gap defects. There is an initial stable propagation as the damage zone in front of the insert develops, then the crack meets the singularity region and the load increases until an unstable crack jump. Further crack propagation is then stable. The plots from the six specimens all show the same form, with similar load levels once the crack has passed the singularity.
The corresponding R-curves (Figure 9) show that GIApp increases as the crack approaches the zone containing the manufacturing defect (red shaded area), then an unstable crack propagation of around 15 mm is noted, before the fracture energy stabilizes.
Figure 9. Mode I fracture R-curve, strain energy release rate versus the calculated crack length, for 6 DCB specimens containing a gap of 6.35 mm, manufactured with caul plate (calculated from plots in Figure 8).
Table 1 summarizes the results from Mode I tests on specimens from panels with the eight singularity conditions. The values shown are mean values. For each type of defect, a peak value in the defect zone and a value at a calculated (arbitrary) crack length of 80 mm, i.e., beyond the singularity, are given.
Figure 10 shows the results from the Mode II tests on the same material in unidirectional form without singularities. As for the Mode I tests on unidirectional specimens (Figure 6), the initiation of the delamination is unstable. The subsequent crack propagation is stable, reaching GIIapp values of the order of 800 J/m². This level of GIIapp is comparable to results found in the literature for the same material (values around 0.9 kJ/m²) [36]. Figure 11 shows examples of load-displacement plots for Mode II tests of the cross-ply laminates containing a gap of 0.5 mm, in which no defects could be found after cure.
Influence of Manufacturing Defects in Mode II
Here there is an initial load drop, followed by a period of stick-slip crack propagation, then a steep increase in load as the crack reaches the region of the specimen affected by the loading-point compression zone. The plots for the six specimens are quite similar, both in terms of behaviour and values. Figure 12 shows the corresponding plots of GIIapp versus calculated crack length for these specimens. The shaded area represents the location and size of the embedded defect.
The shape of the R-curves suggests that the stiffness of the sample increases after an unstable crack propagation, as observed for the samples cut out of unidirectional laminates. Observation of the edges of the test coupons indicates that the crack front has jumped to the upper 0°/90° interface, as illustrated in Figure 13. In this case, corresponding to the smallest defect, there is an initial crack jump from the insert to the defect zone. Then, from the defect zone onwards, the propagation values appear to be quite stable, around 0.65 kJ/m².
Figures 14 and 15 show the R-curves for six Mode II specimens with the two larger gaps (3.175 mm and 6.35 mm). A clear effect of the defect zone on the crack propagation behaviour is observed, with crack arrest: there is an increase in apparent fracture energy followed by an increasingly unstable jump as the defect zone size is increased. For the 6.35 mm gap the crack jumps to below the loading point and there is no longer any stable propagation. The results for the 3.175 mm gap show an intermediate behaviour.
Figure 16 shows the R-curves of a laminate containing a 3.175 mm overlap defect (Figure 2iv). The crack propagation in these samples differs from that of specimens containing gap-type defects in that the crack advances rapidly through the defect area to a calculated crack length of approximately 65 mm, with a corresponding GIIapp of approximately 0.4 kJ/m². Once the crack front has passed the defect area, the crack propagates in a more stable manner at a GIIapp of the order of 0.6 kJ/m², similar to the values for specimens without defects, until it reaches the loading roller. The behaviour is similar to that of the specimens without defects but with more unstable crack propagation through the defect area. This can be explained by the fact that in the case of overlap defects the local fibre content is increased; the lack of resin results in lower resistance to crack propagation.
Table 2 summarizes the Mode II results. Once again, to compare the effects of the different defect types, the values are taken as the peak values and those corresponding to an arbitrary calculated crack length of 75 mm.
Figure 15. Mode II fracture energy as a function of the crack length for six 4ENF specimens containing a gap of 6.35 mm from plates manufactured with caul plate.
Figure 16. Mode II fracture energy as a function of the crack length for six 4ENF specimens containing an overlap of 3.175 mm from plates manufactured with caul plate.
Discussion
First, it should be noted that this type of interlaminar fracture data for AFP composites with singularities does not exist in the literature, so it is not possible to compare results with published values. However, the tests provide a large amount of information, which is discussed below in three sections: first, the validity of the experimental approach is analysed; then, the influence of the type of loading on the crack propagation resistance is discussed; finally, the influence of the caul plate is examined.
The Mode I delamination test on unidirectional laminates is standardized, and has been extensively studied, but its application to AFP materials poses three main difficulties. The first is the need to balance the stacking sequence of the specimen arms, so that they are symmetric both with respect to the overall specimen and also with respect to each arm mid-plane. As the defects need to be placed in 90 • layers, this requires adding an additional external 90 • ply on each face resulting in a specific stacking sequence for the test. The second choice to be made is the position of the singularity as the aim here is to examine how a propagating crack is affected when it meets a singularity; this differs from the standard test which is primarily focused on obtaining initiation values of G Ic from implanted thin films. Here a distance of 10 mm was maintained between the end of the insert film and the centre of the defect in order that the crack will start to propagate. This choice of distance is open to discussion; a longer distance may allow additional damage mechanisms to develop, while a shorter distance may be affected by the insert film. However, the distance was kept constant for all tests and should therefore allow direct comparisons between defect types to be made. Finally, the third difficulty is determining crack length. This is intrinsic to all fracture mechanics tests, but the choice here was to calculate the crack length from the measured force and displacement. Again, this is open to discussion but the same approach was used for all tests and so again it allows comparisons to be made for different defects on the same basis.
Mode II testing is more controversial than Mode I, even without adding manufacturing defects [32]. The choice here of a four-point ENF specimen was based on the need for a stable crack propagation for which this geometry is the simplest available option. Once again, the main difficulty is the determination of the crack length and the choice was made here to use the calculated apparent crack length value.
It is noticeable that in some cases the calculated crack length appears to decrease after the unstable crack propagation. This crack length is derived from the measured stiffness of the specimen, and the latter will depend on both the open crack and the damage zone in front of the crack. Aksoy and Carlsson [37] showed that this damage zone involves microcracks which extend well beyond the open crack and coalesce to form a new crack surface. The relationship between measured stiffness and crack length is further complicated by the friction between fracture surfaces, which has been shown to have a significant influence on Mode II fracture energy [38].
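For orientation, a simple beam-theory sketch of how crack length and fracture energy can be back-calculated from load and displacement is given below. This is an illustration only and not necessarily the exact data-reduction scheme used in this study, since it neglects corrections for root rotation, shear and fibre bridging.

For the DCB (Mode I) specimen, with $P$ the load, $\delta$ the opening displacement, $B$ the width, $a$ the crack length and $EI$ the flexural rigidity of one arm,
\[
C = \frac{\delta}{P} = \frac{2a^{3}}{3EI} \;\Rightarrow\; a = \left(\frac{3EIC}{2}\right)^{1/3},
\qquad
G_{I} = \frac{P^{2}}{2B}\,\frac{\mathrm{d}C}{\mathrm{d}a} = \frac{3P\delta}{2Ba}.
\]
For the 4ENF (Mode II) specimen the same Irwin-Kies relation applies, $G_{II} = \frac{P^{2}}{2B}\,\frac{\mathrm{d}C}{\mathrm{d}a}$; because the compliance of the 4ENF geometry varies almost linearly with crack length within the inner span, $\mathrm{d}C/\mathrm{d}a$ is nearly constant, which is why this geometry favours stable propagation.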
A comparison of the Mode I results with those for the unidirectional material indicates higher peak values in the initiation region. The apparent crack resistance tends to increase as the damage zone in front of the crack tip interacts with the singularities. Various authors have shown that a plastic or process zone precedes the main crack tip during Mode I propagation [39,40]. As a result of the development of this zone, and its interaction with through-thickness defects, damage including resin cracking and debonding can occur above and below the crack plane, initiating secondary cracks. Even during the first millimetres of propagation, there is a tendency for the crack to jump between the 0/90° interfaces (see Figure 17). This has been seen in previous work on crack propagation at 0/90° interfaces [41]. These appear to encourage crack bifurcation, deviating cracks to the 0/90° interface, and this raises the apparent fracture energy (Figure 17b). Once the crack has passed the singularity it propagates more easily, and stable propagation fracture energy values return to those measured at the start of the test, around 0.3-0.4 kJ/m². These values are similar to those published elsewhere for this 0/90° material [41]. The type and dimensions of the singularity, and the presence of a caul plate, all affect the peak fracture energy (Table 1).
Concerning the Mode II results, these clearly indicate that the presence of the larger gaps has a significant effect on the crack propagation behaviour. There is a large arresting effect, increasing with increasing defect size (within the range of defects tested in this study), followed by unstable crack growth. The specimens with overlaps are much less affected by the defect. Examination of micrographs suggests that the crack arrest may be due to large resin pockets resisting Mode II crack propagation, though the local curvature of the plies visible in Figure 18 will also hinder crack advance. The scatter in Mode II values is quite low; the crack tends to stay at the same 0/90° interface during propagation.
Finally, concerning the influence of the caul plate, the data are provided in Tables 1 and 2, but Figure 19 shows plots of mean values to illustrate the effects more clearly. Under Mode I loading the caul plate tends to increase the fracture energy slightly, perhaps due to a more constrained interlaminar microstructure resulting in less planar interfaces.
Under Mode II loading the difference is small for small defects, but for larger gaps (3.175 mm) the crack resistance with a caul plate is lower than without. For the largest 6.35 mm gaps the results are similar with and without the caul plate. This suggests there may be a critical ratio between the dimensions of the zone affected by the defect and the homogenizing effect of the caul plate, which controls the damage zone and the energy required to propagate the crack. More work is needed to quantify this effect.
Conclusions
This study investigates the effect of AFP manufacturing gap/overlap singularities on the crack propagation behaviour in carbon fibre epoxy laminates. Such local defects may occur due to variation in tape width or when laying tapes over complex shapes; their influence under in-plane loads was characterized in two previous studies [17,18]. Here, original results for out-of-plane loads are presented for the same materials and defects.
It was first necessary to develop a specific test procedure and laminate stacking sequence. Mode I and Mode II interlaminar fracture tests were performed because results from simpler tests (ILSS) were shown not to be sensitive to these defects. The latter test involves a complex stress field, with compression tending to close the defects, and should not be used to investigate the influence of AFP manufacturing singularities.
The interlaminar fracture results show clear effects of the gap defects on the crack propagation behaviour under out of plane loading. The effects measured increase with increasing defect size. The influence of these defects under both Mode I and Mode II loading conditions is always to slow down crack propagation, promoting unstable crack growth. Use of a caul plate during manufacturing influences the measured values of fracture energy but not the overall trends observed.
In further work, it would be interesting to examine whether cyclic and dynamic loading have a similar influence on the fracture behaviour.
Funding: The authors are grateful to the region of Brittany (France) and the Regional Council of Morbihan for their financial support.
Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Not applicable.
Data Availability Statement: Raw data confidential at present, study ongoing. | 9,921.8 | 2021-09-01T00:00:00.000 | [
"Physics"
] |
High Throughput Traction Force Microscopy for Multicellular Islands on Combinatorial Microarrays.
The composition and mechanical properties of the cellular microenvironment, along with the resulting distribution of cell-generated forces, can affect cellular function and behavior. Traction Force Microscopy (TFM) provides a method to measure the forces applied to a surface by adherent cells. Numerous TFM systems have been described in the literature. Broadly, these involve culturing cells on a flexible substrate with embedded fluorescent markers which are imaged before and after relaxation of cell forces. From these images, a displacement field is calculated, and from the displacement field, a traction field. Here we describe a TFM system using polyacrylamide substrates and a microarray spotter to fabricate arrays of multicellular islands on various combinations of extracellular matrix (ECM) proteins or other biomolecules. A microscope with an automated stage is used to image each of the cellular islands before and after lysing cells with a detergent. These images are analyzed in a semi-automated fashion using a series of MATLAB scripts which produce the displacement and traction fields, and summary data. By combining microarrays with a semi-automated implementation of TFM analysis, this protocol enables evaluation of the impact of substrate stiffness, matrix composition, and tissue geometry on cellular mechanical behavior in high throughput.
The traction stress field is then calculated using the known mechanical characteristics of the substrate.
In the MATLAB code provided with this protocol, the displacement field is calculated using a publicly available digital image correlation (DIC) algorithm (Landauer et al., 2018). Traction force calculation is performed using an available Fourier transform traction cytometry (FFTC) algorithm (Sabass et al., 2008; Han et al., 2015). These methods were chosen for their relatively low computational cost and run time and the relatively few user inputs they require. This supports the high-throughput nature of the analysis and the goal of allowing "off the shelf" use for labs who do not focus on TFM as a core competency. However, the code is intended to enable users to substitute alternative displacement or traction field algorithms based on their needs, computational resources, and application.
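To illustrate the idea behind the Fourier transform traction cytometry step, the sketch below shows a generic, unregularized implementation for a linear elastic, semi-infinite substrate (using the standard Fourier-space form of the Boussinesq Green's tensor, as in Butler et al., 2002). It is not the code distributed with this protocol; the function name, variable names, and the omission of regularization and windowing are assumptions made only for illustration.

% Generic, unregularized FTTC sketch (illustration only, not the protocol's code).
% ux, uy : displacement components (m) on a regular grid; E (Pa), nu : gel properties;
% d      : grid spacing (m). Outputs tx, ty are traction components (Pa).
function [tx, ty] = fttc_sketch(ux, uy, E, nu, d)
    [nr, nc] = size(ux);
    kx = 2*pi/(nc*d) * [0:floor(nc/2), -ceil(nc/2)+1:-1];   % wave vectors, fft2 ordering
    ky = 2*pi/(nr*d) * [0:floor(nr/2), -ceil(nr/2)+1:-1];
    [KX, KY] = meshgrid(kx, ky);
    K = hypot(KX, KY);  K(1,1) = 1;                          % avoid divide-by-zero at the DC term
    Ux = fft2(ux);  Uy = fft2(uy);
    pref = 2*(1 + nu) ./ (E * K.^3);                          % Boussinesq Green's tensor in Fourier space
    Gxx = pref .* ((1 - nu)*K.^2 + nu*KY.^2);
    Gyy = pref .* ((1 - nu)*K.^2 + nu*KX.^2);
    Gxy = -pref .* (nu * KX .* KY);
    detG = Gxx.*Gyy - Gxy.^2;                                 % invert the 2x2 tensor: T = G^{-1} U
    Tx = ( Gyy.*Ux - Gxy.*Uy) ./ detG;
    Ty = (-Gxy.*Ux + Gxx.*Uy) ./ detG;
    Tx(1,1) = 0;  Ty(1,1) = 0;                                % enforce zero net traction
    tx = real(ifft2(Tx));  ty = real(ifft2(Ty));
end

In practice, some form of regularization (as in Sabass et al., 2008, and Han et al., 2015) is needed to suppress noise amplification at high spatial frequencies, which is one reason the protocol relies on the published FFTC implementation rather than a bare inversion like the one above.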
Performing TFM often involves bulk preparation of adhesive substrates, which requires relatively large amounts of adhesive proteins, decreasing the practicality of performing TFM on multiple ECM proteins or surface-bound ligands. Here we use contact-printed microarrays, which allow multiple ECM/ligand combinations, with replicates, to be included on a single substrate with low material usage (Flaim et al., 2005). TFM is often implemented either on single cells or on randomly distributed small colonies. In the microarray system, we can assess the mechanical behavior of multicellular islands where forces are transmitted to the substrate as well as to neighboring cells, resulting in collective behavior (Mertz et al., 2013). Further, these multicellular islands are of consistent size and shape. This permits analysis of the average mechanical behavior of these islands as a function of the environmental conditions. Island diameter can be tuned by using differently sized microarray pins, adding geometry as an additional parameter to investigate. The MATLAB code provided includes techniques to identify island boundaries and align data from replicate islands to enable analysis of these average behaviors. These cellular microarrays can be assayed in parallel using immunocytochemistry, which has been described in more detail elsewhere (Kaylan et al., 2017), to allow correlation of mechanical and phenotypic behavior.
Our application has focused on the relationship between spatial patterns of mechanical behavior and differentiation of liver progenitor cells. This method can be applied to other stem cell systems, especially where spatial patterning and collective mechanical behavior are of interest. It can also be applied to investigations of how the mechanical behavior of cancer relates to its microenvironment and further impacts proliferation or response to drugs. Although here we used contact microarray printing, this protocol can easily be applied to other micropatterned systems such as those using non-contact printing (Romanov et al., 2014) or polydimethylsiloxane (PDMS)-stamped substrates (Kane et al., 1999). In this application, we utilize the XY position within the array to identify the condition of each island. This protocol could be extended to other systems where the spatial position of cells on their substrate is linked to an experimental variable, such as substrates with gradients of stiffness (Hadden et al., 2017) or biomolecules (Dertinger et al., 2002).
j. Invert the dishes, as shown in Figure 1D, and let stand for 15 min to allow beads to migrate towards the surface.
Materials and Reagents
k. Expose dishes to 365 nm UVA for 15 min in the UV crosslinker.
l. Fill dishes with deionized water as shown in Figure 1E.
n. Dehydrate gels at 50 °C on a hot plate until all water has evaporated from the gel. A dehydrated gel is shown in Figure 1A. Gel dishes can be stored for up to one month in a dark, dry location.
B. Microarray printing
This process prints an array of circular spots of biomolecules onto the polyacrylamide substrates.
These array spots are where cells will adhere, forming circular islands.
Note: ECM protein printing (ECMP) buffer is appropriate for most ECM molecules. Growth factor (GF) printing buffer is suitable for other classes of molecules such as growth factors or ligands,
where a low pH could cause issues. For many ECM proteins, 250 µg/ml is a suitable concentration. Optimal concentrations will vary depending on the molecule, its retention, and its function. The total volume in each well should be between 5 and 15 µl.
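As a worked example (the stock concentration here is hypothetical, for illustration only): to prepare 10 µl of a 250 µg/ml spotting solution from a 1 mg/ml protein stock, combine 2.5 µl of stock with 7.5 µl of printing buffer (250 µg/ml × 10 µl ÷ 1,000 µg/ml = 2.5 µl of stock).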
b. In your source plate, you should also include a solution with a fluorescent marker, which will be used to convey the orientation of the array so that the locations of each condition can be determined.
We recommend rhodamine-conjugated dextran at a final concentration of 2.5 mg/ml. The source plate configuration will differ based on the arrayer, pin configuration, and desired array layout.
c. Mix each well thoroughly by pipetting. Take care to avoid generating bubbles. Centrifuge the source microplate for 1 min at 1,000 x g. Source plates can be used immediately or
e. Prepare the microarrayer and arraying program using the manufacturer's software. The setup and programming will differ based on the arrayer and desired array layout. The program should be devised such that the array orientation is unambiguous, and the locations of the arrayed conditions are known and can be determined from their position relative to the fluorescent marker in any orientation. An example of this is provided in Figure 2.
i. Place dishes into an appropriate adaptor. If the arrayer can fit a standard multiwell plate, the 6-well plate is suitable to hold the dishes.
Note: See the note under Materials and Reagents #5.
j. Begin array fabrication. Check frequently that the humidity has not dropped below 65% RH (non-condensing).
k. When the program is complete, store fabricated arrays covered with aluminum foil at room temperature and 65% RH (non-condensing) overnight. While the array spots are visible, it is helpful to visibly mark the top or bottom of the array so the orientation is known when placing on the microscope. For some hydrogel and pin combinations, it may be necessary to store arrays at ambient temperature and humidity for an additional two days to ensure arrays have dried completely. Arrays can be stored for up to 7 days before use.
C. Seeding cells on microarrays
Here cells are transferred from their normal culture condition onto the microarrayed hydrogel substrates for TFM.
1. To sterilize the gels, add 3 ml of PBS with 1% v/v penicillin/streptomycin. Expose to UV C for 30 min. Exchange penicillin/streptomycin solution for cell culture media.
2. Collect and count cells following the cell-appropriate procedure. Resuspend cells in culture media at an appropriate concentration for seeding. This will differ based on cell type but will likely range between 170 × 10³ and 7 × 10⁵ cells/ml. Add 3 ml of cell suspension to each dish.
Incubate dishes at 37 °C and 5% CO2 for 2-24 h, or until confluent cell islands have formed.
Seeding density and time may need to be optimized for your cells and application. Agitation of the dishes every 15-60 min may also aid in forming consistent, confluent islands.
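For example (illustrative numbers only): seeding 3 ml of suspension at 5 × 10⁵ cells/ml delivers 1.5 × 10⁶ cells per dish, whereas 3 ml at 2 × 10⁵ cells/ml delivers 6 × 10⁵ cells per dish; adjust within the range above to reach confluent islands in the chosen incubation time.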
3. Once islands have formed, rinse arrays twice with 3 ml prewarmed media. At this stage, add any experimental treatments such as growth factors or inhibitors. Change media every 1-2 days until time to perform TFM, or as your cell culture protocol suggests, maintaining any treatment concentrations at each exchange. Figure 3 shows an example of an array with cellular islands.
5. Begin the automated imaging of the phase contrast images of the cell islands. Save this file with a suitable name that notes the experiment details as well as that it is the phase contrast image.
6. Switch to the appropriate fluorescent channel for the beads. Individually find and save the Z-plane focus of the top surface of the gel under each island. Take care to avoid changing the XY positions. Save this file with a suitable name that notes the experiment details as well as that it is the pre-dissociation image. An example image is shown in Figure 5B.
8. Carefully add 150 µl of the SDS solution to the dish, taking care to not bump or move the dish.
Monitor dissociation of the cell islands using the phase contrast channel. Wait until the islands have completely dissociated from the substrate, at which point the island locations, when viewed in the phase contrast channel, should appear mostly blank.
Note: Some cells may require addition of more SDS solution, or higher concentration.
Additionally, Triton-X may be used instead.
1. Ensure the provided MATLAB code has been saved to an appropriate location. Navigate to the folder where this directory has been saved.
2. Make sure all image files have been saved or transferred to a folder available from the computer to be used for analysis.
3. From the command window, run the function run_island_tfm with no inputs.
4. You will be prompted to select the file with the phase contrast image; navigate to and select it. Pixel size is pulled from the image metadata but can be set manually. All other fields are set by the user.
7. Once these fields are completed, click "Set Info." This will populate the "Data file name" field using the entered information, which will be the file name of the output file. This field can be changed manually. Two additional fields will appear. Enter a name for the first condition in the "Condition 1" field. Enter the numbers of all the islands assigned this condition in the "Islands with Condition 1" field as a list of numbers. Click "Set Condition." If there are additional conditions, repeat these steps for each of the remaining conditions. Once all conditions have been set, the "Done" button will appear. Confirm that all experiment information is correct, then click "Done." All islands assigned a condition in this step will be analyzed. To analyze only some islands, set the number of conditions to the number of conditions represented by the islands to be analyzed, and only list those islands in the "Islands with Condition" field.
8. The script will now cycle through the conditions and islands. It will first run a script to correct frame shifts between the before- and after-dissociation images. A GUI will then appear to aid in drawing a boundary around the cell island. This GUI is shown in Figure 7. The software will attempt to draw a boundary around the cell island, which is plotted in red. The sliders can be used to adjust the parameters of the trace, which will cause the trace to rerun in real time. You can also change the values of these parameters by entering numbers into the text boxes and then hitting the "Rerun" button. Alternatively, you can draw the boundary manually by clicking the "draw manual" button. Left-click around the island until the boundary is closed. Double-click inside the boundary to create the shape. To avoid sharp corners, after the polygon is drawn, a blurring and rounding is applied using the current value in the "Blur" field.
To avoid this rounding, set this field to 1. The manual draw function can be used to trace multiple areas. To reset the manual boundaries, click "Clear boundaries," or click "Rerun" to repeat the automated tracing. When satisfied with the boundary, click "Done."
9. This will bring up the next island. Repeat this process for all islands in the file selected for analysis.
10. When the boundary trace has been completed for each island, the data will be saved. The program will then move on to calculate the displacement and traction fields for each island. This process can take up to several hours depending on the number of images, image size, and processors on the computer. Data is saved after each island to limit data loss in case of issues during analysis. Data is saved in the folder "data out."
11. Most of the computation time is the displacement field estimation. To rerun the traction field calculation with different settings on a file which already has the boundaries identified and displacement field calculated, run the command run_island_tfm('rerun'). You will be prompted to choose a data file which will be reanalyzed.
B. View data and generate summaries
1. In the code provided, the output is saved in the folder "data out" with the file name established during analysis. To explore data from a single file, data can be loaded into the workspace by double clicking it in the Current Folder explorer or using the load function.
2. This loads the cell array "all_cell_data" into the workspace. Each cell corresponds to a single island from the analysis. The output is intended to provide easy access to all relevant data for the user to explore and analyze in MATLAB or export to other programs for analysis as appropriate for their application. Table 1 provides the organization of the data stored within the output file. Data structure elements can be accessed using dot notation of the form structName.fieldName.
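A minimal usage sketch follows (the file name and structure field names below are placeholders; the actual field names are listed in Table 1):

% Load one output file and access one island's data (placeholder names).
load(fullfile('data out', 'my_experiment.mat'));   % brings all_cell_data into the workspace
nIslands = numel(all_cell_data);                    % number of analyzed islands
island3  = all_cell_data{3};                        % structure for island 3
% Dot-notation access of the form structName.fieldName, e.g.:
% u = island3.displacement;   t = island3.traction;   % hypothetical field names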
Note: See https://www.mathworks.com/help/matlab/matlab_prog/access-data-in-a-structurearray.html for more information on accessing data in structures.
The island boundary is plotted in black. The phase contrast image is also displayed, with the boundary plotted in yellow. The best-fit ellipse is shown in red, with the major and minor axes.
4. Using the boundary traced for each island, the script finds a best-fit ellipse. This ellipse can be used to align and compare many replicates of islands with the same geometry. Use view_one_island_rotandcen to view the data centered and rotated according to the best-fit ellipse.
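The alignment idea can be sketched as follows (illustration only, not the provided view_one_island_rotandcen or analysis code; it uses a PCA-based estimate of the boundary's centroid and axes rather than whatever ellipse fit the scripts implement):

% bx, by : island boundary coordinates; x, y : data point coordinates (placeholders).
c = [mean(bx), mean(by)];                        % centroid of the boundary
V = pca([bx(:) - c(1), by(:) - c(2)]);           % columns = major/minor axis directions
xyAligned = [x(:) - c(1), y(:) - c(2)] * V;      % data centered and rotated into that frame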
5. The function collect_island_data is provided to collect data from multiple islands across multiple TFM runs. The output of this function is a table with data and information on each of the islands loaded from the files (see Table 1), and a structure holding the displacement and traction data, indexed according to the "summ_ind" field of the summary table. This table can be exported to Excel or similar. When the function is run without an input, you will be prompted to select data files to load and consolidate.
6. The output of the collect_island_data function can further be used to create summary data. The function summarize_islands plots and outputs averaged displacement and traction fields; see Figure 9 for an example. This function uses the data from each island aligned according to the best-fit ellipse. The islands to include in the averaging can be selected by choosing a subset of islands from the summary_table by indexing on one or more variables. See data_analysis_examples for an example of using this function. The function summarize_islands_1D is provided to perform a one-dimensional radial analysis (Figure 10). Here, the XY position of each data point is converted to a radial coordinate, and the radial position is normalized by the measured radius of the island. The traction data is binned by radial coordinate, and a mean is taken. The data used for this analysis can also be selected using the summary table as discussed previously. The function also outputs the data table with the peak traction of each island amended to the relevant lines. See data_analysis_examples for an example of using this function. | 3,822 | 2019-11-05T00:00:00.000 | [
"Engineering",
"Materials Science",
"Biology"
] |
Complete Mitochondrial DNA Analysis of Eastern Eurasian Haplogroups Rarely Found in Populations of Northern Asia and Eastern Europe
With the aim of uncovering all of the most basal variation in the northern Asian mitochondrial DNA (mtDNA) haplogroups, we have analyzed mtDNA control region and coding region sequence variation in 98 Altaian Kazakhs from southern Siberia and 149 Barghuts from Inner Mongolia, China. Both populations exhibit the prevalence of eastern Eurasian lineages, accounting for 91.9% in Barghuts and 60.2% in Altaian Kazakhs. A strong affinity of Altaian Kazakhs and populations of northern and central Asia has been revealed, reflecting both influences of central Asian inhabitants and essential genetic interaction with the indigenous populations of the Altai region. Statistical analyses demonstrate a close positioning of all Mongolic-speaking populations (Mongolians, Buryats, Khamnigans, Kalmyks as well as the Barghuts studied here) and Turkic-speaking Sojots, thus suggesting their origin from a common maternal ancestral gene pool. In order to achieve a thorough coverage of DNA lineages revealed in the northern Asian matrilineal gene pool, we have completely sequenced the mtDNA of 55 samples representing haplogroups R11b, B4, B5, F2, M9, M10, M11, M13, N9a and R9c1, which were pinpointed from a massive collection (over 5000 individuals) of northern and eastern Asian, as well as European, control region mtDNA sequences. Applying the newly updated mtDNA tree to the previously reported northern Asian and eastern Asian mtDNA data sets has resolved the status of the poorly classified mtDNA types and allowed us to obtain coalescence age estimates for the nodes of interest using different calibrated rates. Our findings confirm our previous conclusion that the northern Asian maternal gene pool consists predominantly of post-LGM components of eastern Asian ancestry, though some genetic lineages may have a pre-LGM/LGM origin.
Introduction
The territories of northern Asia are of crucial importance for the study of early human dispersal and the peopling of the Americas. Recent findings about the peopling of northern Asia reconstructed by archaeologists suggest that anatomically modern humans colonized the southern part of Siberia around 40 thousand years ago (kya) and the far northern parts of Siberia and ancient Beringia, a prerequisite for colonization of the Americas, by approximately 30 kya [1,2]. Current molecular genetic evidence suggests that the initial founders of the Americas emerged from an ancestral population of less than 5,000 individuals that evolved in isolation, likely in Beringia, from where they dispersed southward after approximately 17 kya [3][4][5][6][7][8][9][10][11][12][13].
Although northern Asian populations are still underrepresented in the published complete mtDNA genome data sets, our knowledge of the fine-detailed mitochondrial DNA tree of northern Asians has improved considerably in recent years, mainly due to elaborate analyses of the mtDNA haplogroups that are most common in populations of northern Asia and America [4,5,7,8,10,12]. Recently we analyzed a large set of complete mtDNAs belonging to the most frequent haplogroups A, C and D, as well as to some western Eurasian haplogroups found in northern Asian populations [8,12]. As a result, it has been shown that the majority of haplogroup C and D subclusters demonstrate a pre-LGM origin and expansion in eastern Asia, whereas most of the southern and northeastern Siberian variants started to expand after the LGM. The Late Glacial re-expansion of microblade-making populations from the refugial zones in the southern Yenisei and Transbaikal regions of southern Siberia, which started approximately 18 kya, has been suggested as a major demographic process signaled in the current distribution of northern Asian-specific subclades of mtDNA haplogroups C and D. It has also been shown that both of these haplogroups were involved in migrations from eastern Asia and southern Siberia to eastern and northeastern Europe, likely during the middle Holocene [12].
Since uncovering all of the most basal variation in the northern Asian mtDNA haplogroups requires major sampling and sequencing efforts focused on as diverse a set of Siberian aboriginal populations as possible, we have further sampled two aboriginal populations from two different geographic regions of northern and eastern Asia (Altaian Kazakhs from southern Siberia and Barghuts from Inner Mongolia, China) and have completely sequenced and analyzed a substantial number of mtDNAs representing the rare and poorly characterized eastern Eurasian haplogroups revealed so far in northern Asia. We have paid special attention to the 55 samples representing haplogroups B (n = 23), F2 (n = 1), M9 (n = 9), M10 (n = 5), M11 (n = 3), M13 (n = 2), N9a (n = 10), R9c1 (n = 1) and R11 (n = 1). Applying the newly updated mtDNA tree to the previously reported northern Asian and eastern Asian mtDNA data sets has resolved the status of the poorly classified mtDNA haplotypes and allowed us to obtain coalescence age estimates for the nodes of interest using different calibrated rates.
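For context, coalescence ages in mtDNA studies of this kind are commonly obtained with the rho statistic; the sketch below is a generic illustration and does not reproduce the authors' exact procedure or calibrations.

\[
\hat{T} = \rho\,\tau, \qquad \rho = \frac{1}{n}\sum_{i=1}^{n} d_i,
\]

where \(d_i\) is the number of substitutions separating sequence \(i\) from the inferred root haplotype of the clade, \(n\) is the number of sampled sequences, and \(\tau\) is the calibrated time per substitution for the sequence region considered. For example, with \(\rho = 3.5\) and a purely hypothetical calibration of \(\tau = 5{,}000\) years per substitution, \(\hat{T} \approx 17.5\) kya. Using different calibrated rates (coding-region, synonymous, or whole-genome) yields the ranges quoted in the text.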
MtDNA haplogroup profiles
Detailed sequence variations and haplogroup assignments of 149 Barghut and 98 Altaian Kazakh mtDNAs are presented in Table S1. A total of 36 haplogroups were observed in our samples, all within the three principal non-African macrohaplogroups: M, N and R. Table 1 presents the haplogroup frequencies of the two populations studied. The eastern Eurasian component is represented by haplogroups A, N9a, and Y1, which belong to the major haplogroup N; by haplogroups B, F and R9c, which belong to macrohaplogroup R; and by different branches of macrohaplogroup M, such as the C, D, G, M7, M9a, M13, and Z haplogroups. Both populations exhibit the prevalence of eastern Eurasian lineages, accounting for 91.9% in Barghuts and 60.2% in Altaian Kazakhs. As in other populations of northern and eastern Asia [8,12], haplogroups C and D are the most common in the Barghuts and Altaian Kazakhs studied, accounting together for 55.7% and 34.7% of lineages, respectively. As can be expected, haplogroup G2 lineages occur with the highest frequencies in Mongolic-speaking populations [8]. Haplogroup frequencies [8] were used as input vectors to perform a PC analysis. Figure 1 shows the PC plots for the first three PCs, which account for 54.3%, 13.6% and 8.2% of the total variance, respectively. The first two PCs reveal two major groups of populations. The first one comprises the populations of Buryats, Barghuts, Khamnigans, Kalmyks and Sojots, forming a distinct subcluster, as well as the populations of Altaian Kazakhs, Teleuts, Telenghits and Koreans, whereas the second cluster is constituted by the populations of Tofalars, Todjins, Tuvinians, eastern and western Evenks, Altaians-Kizhi and Yakuts. PC3 essentially displays the close genetic proximity of the Indo-European-speaking populations: Persians, Kurds and Tadjiks (Figure 1), who are clearly separated from the other populations studied. The strong affinity of Altaian Kazakhs and populations of northern (Khakassians, Altaians, Altaians-Kizhi, Teleuts and Telenghits) and central (Tadjiks, Turkmens, Uzbeks, Uighurs, Kirghizs and Kazakhs) Asia is also evident from the MDS analysis results (Figure 2), reflecting both strong influences of central Asian inhabitants on the maternal diversity of Altaian Kazakhs, as previously reported [14], and essential genetic interaction between Altaian Kazakhs and the indigenous populations of the Altai region. Meanwhile, the MDS plot, like the PC analysis above, reveals a close positioning of all Mongolic-speaking populations and the Turkic-speaking Sojots related to them, thus suggesting their origin from a common maternal ancestral gene pool. The same trend is also evident for some paternal lineages: a relatively high frequency of subhaplogroup C3d, widespread in Mongolic-speaking populations, was found in Sojots (53.6%), thus placing them closer to their Mongolic-speaking neighbors than to other Turkic-speaking groups [15]. However, the Sojots are characterized by a relatively high frequency of the Y-chromosome haplogroup R1a1 (about 25%), which is typical for Turkic-speaking populations such as Altaians, Teleuts and Shors, all characterized by the highest frequencies of R1a1 (about 50%) in Siberia [16]. Therefore, it seems that Turkic males might have contributed genetically to the formation of the Sojots, imposing a language of the Turkic group. In this scenario, most likely an elite dominance process should be assumed [17].
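As an illustration only (not the authors' scripts, and using a simple Euclidean distance on the frequency vectors rather than the Fst-type distances typically used for MDS in such studies), haplogroup frequency vectors could be ordinated as follows:

% F : populations-by-haplogroups frequency matrix (rows = populations);
% popNames : cell array of population labels. Both are placeholders.
[coeff, score, ~, ~, explained] = pca(F);        % principal component analysis
scatter(score(:,1), score(:,2), 25, 'filled');   % PC1 vs PC2
text(score(:,1), score(:,2), popNames);          % label the populations
xlabel(sprintf('PC1 (%.1f%%)', explained(1)));
ylabel(sprintf('PC2 (%.1f%%)', explained(2)));
D = squareform(pdist(F));                        % illustrative distance matrix
Y = cmdscale(D, 2);                              % classical (metric) MDS in two dimensions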
Phylogeography of eastern Eurasian mtDNA haplogroups infrequent in populations of northern Eurasia
Haplogroups R11'B6 and B4'B5. Haplogroup B is found at relatively high frequencies in Mainland southeastern Asia (20.6%), Island southeastern Asia (15.5%), Oceania (10.2%), eastern Asia (10.5%) and America (24%), but occurs as rarely as 0.1-1% in the Volga-Ural region, the Caucasus, and western and southern Asia. It is detected at a very low frequency in some populations of Europe. Haplogroup B is found at ~3% overall in northern and central Asia, although it reaches >10% in a few Siberian populations (Table S2). Haplogroup B is identified by the presence of a 9-bp deletion in the COII/tRNALys intergenic region of mtDNA. Although the 9-bp deletion has a high recurrence, it seems that, together with the transition at 16189, it defines fairly well a monophyletic cluster, which consists of two subhaplogroups, B4 and B5. A sister clade of B4'B5, keeping the 16189 mutation and having an additional polymorphism at np 12950, has been detected in eastern and Island southeastern Asia and has been designated R11'B6 [18,19]. The R11'B6 cluster is further subdivided into R11, lacking the 9-bp deletion, and B6, having this deletion. It is worthwhile to mention that R11 mtDNAs have been detected mainly in China, whereas B6 lineages are present both in eastern and Island southeastern Asia (Figure S1). Previous studies have proposed that haplogroup B4 arose ~44 ka, most likely on the eastern Asian or southeastern Asian mainland, where it is dispersed especially around the coastal regions from Vietnam to Japan. It subdivided ~35 ka into three main subclades: B4a, B4b'd, and B4c (with a subclade of B4b, B2, found exclusively in Native Americans and dated to ~16 ka [5]). Subclades B4a and B4a1 are also likely to have arisen on the mainland, ~24 ka and ~20 ka, respectively, but B4a1a is restricted to offshore populations in Taiwan, Island southeastern Asia, and the Pacific [20]. Subclade B4a1a1a, defined by a transition at the control-region position 16247, also known as the Polynesian motif, is the most frequent subclade within B4a1a and approaches fixation in Polynesians. Based on complete mtDNA analysis data, it has been shown that the motif most likely originated >6 ka in close proximity to the Bismarck Archipelago, and its immediate ancestor is >8 ka old and virtually restricted to Near Oceania [20].
While there has been considerable recent progress in studying complete mitochondrial DNA variation of haplogroup B lineages in America [5], eastern [21] and southeastern Asia [22][23][24][25] and Oceania [20,26], little comparable data are available for northern Asia. To date, only five haplogroup B complete mtDNA genomes from Siberian populations are known, and these were sequenced and analyzed only with the aim of searching for the ancestors of Native American mtDNA haplogroups [27].
Here we present the reconstructed phylogeny of haplogroups R11'B6 and B4'B5 based on 247 complete mtDNA genomes, including twenty-three newly sequenced haplogroup B samples from different populations of northern Asia (Buryats, Khamnigans, Altaians-Kizhi, Yakuts and Shors), eastern Asia (Barghuts) and eastern Europe (Chuvashes from the Volga-Ural region), as well as one rare Altaian R11 sample. As can be seen from the phylogeny presented in Figure S1, the only Altaian R11 sample (Alt_158) and a Han individual (QD8168) from Kong et al. [28] share a transition at np 16390 and an insertion of four cytosines at np 8278 and may therefore be ascribed to a new subclade, R11b1, within the R11b branch of haplogroup R11. Unfortunately, because of the small number of available R11b mtDNA genome sequences, we are unable to obtain unbiased age estimates for this subcluster, but taking into account the nearly exclusively Chinese distribution of R11 mtDNA lineages, we may suppose that this specific Altaian R11b sequence points to gene flow from China to southern Siberia, which might have occurred not earlier than 13-20 kya (Table S3).
Notably, the addition of a substantial set of completely sequenced mtDNAs from northern Asian populations has allowed us to reveal several new subclusters within haplogroup B4 showing a predominantly northern Asian distribution, i.e., B4b1a3, B4c1a2 and B4j (Figure 3, Figure S1). For example, identical Khamnigan and Buryat samples (Khm_21 and Br_336) bearing variants 16223 and 16362, as well as a series of specific mutations, apparently belong to a previously unreported branch of haplogroup B, B4j, which is at the same phylogenetic level as the nine other subclades (B4a-B4i) defined previously within B4 [18]. Ten of the new sequences and one previously published sequence (a Tubalar from southern Siberia [27]) clustered into an uncommon B4b1a branch, named B4b1a3, harboring the control-region diagnostic motif 146-16086 (Figure S1). With the exception of the Tubalar mtDNA, which has an additional coding-region transition at np 15007, all other B4b1a3 mtDNAs are characterized by the 408A-9055-9388T-9615 motif defining subcluster B4b1a3a, which in turn can be further subdivided into two sister subclusters. The relatively large amount of internal variation accumulated in the northern Asian branch of B4b1a would mean that B4b1a3 arose in situ in southern Siberia after the arrival of the B4b1a3 founder mtDNA from somewhere else in eastern Asia. The phylogeny depicted in Figure S1 provides additional information concerning the entry time of the founder mtDNA: the age of the B4b1a3 node is estimated at ~18-20 kya using different mutation rates, thus pointing to a pre-LGM/LGM, and apparently pre-Holocene, origin of this subcluster (Table S3).
Inside haplogroup B4 one more novel subgroup, B4c1a2, specific to northern Asian populations has been revealed (Figure 3, Figure S1). It is characterized by a transition at np 16527 and a back mutation at np 16311, which, together with the transition at np 3497, is thought to be diagnostic for the whole subclade B4c1 [18]. Subgroup B4c1a2 dates to 6-8 kya, demonstrating a Holocene time of divergence, like the neighbouring eastern Asian-specific subcluster B4c1a1, which has a slightly older coalescence time estimated at 9.5-11 kya (Figure 3, Table S3). The remaining completely sequenced haplogroup B mtDNA lineages identified in the present work belong to different branches of the B4 and B5 subgroups. Thus, the Barghut sample (Bt_67) bears the B4d1 diagnostic mutation at np 15038, whereas the Buryat (Br_301) and Khamnigan (Khm_1) mtDNAs share variants 207 and 15758, suggesting their status as haplogroup B5b2b, which is distributed exclusively in eastern Asia; likewise, the Altaian sample (Alt_196) is assigned to the eastern Asian subgroup B5b*. It is intriguing that a unique haplogroup B mtDNA variant revealed in eastern European Chuvashes (CT_45) precedes subcluster B4c1b2b1, which is characteristic of some Island southeastern Asian populations (Figure S1). Meanwhile, the remaining B haplotypes detected in Chuvashes belong to the southern Siberian subcluster B4b1a3a1a, pointing to Siberian ancestry for some maternal lineages in eastern European ethnic groups.
It should be noted that we have not found in northern Asia any haplogroup B mtDNA lineages ancestral to the Amerindian-specific B2 branch. The only Tubalar mtDNA described previously by Starikovskaya et al. [27], designated there as B1 and interpreted as "closely related to the Amerindian-specific B2 branch", belongs in fact to the northern Asian-specific subcluster B4b1a3 (Figure S1), which in turn is part of the major subcluster B4b1, distributed predominantly in eastern Asia. Thus, there is no evidence at this time for the occurrence of haplogroup B2 mtDNA ancestors in Siberia, in contrast to the situation for haplogroup A2 and D2 mtDNAs [4,8,12,29].
Another haplogroup shared by eastern Asians and Mainland southeastern Asians is F2. This haplogroup has a slightly higher frequency in China (1.9-3.3%) and Thailand (2.4-5.4%) [30,32] compared to Laos (0.5%) [33], Taiwan (0.5%) [30], Vietnam (0.7%) and Formosa (0.1%) [24]. It should be noted that the majority of F2 HVS1 haplotypes revealed so far in eastern and southeastern Asia exhibit a base change at np 16291, whereas the single F2 sequence found in Barghuts bears a characteristic mutation at np 16260. The complete mtDNA sequence analysis shows that this variant (sample Bt_124) apparently belongs to a previously unreported branch of haplogroup F2, which we propose to label F2e (Figure S2).
In the current study we have reconstructed the phylogeny of haplogroup N9a based on 59 complete mtDNA genomes, including ten newly sequenced samples, and revised the classification of this haplogroup, which was defined earlier as having seven main branches: N9a1'3, N9a2'4'5, and N9a6-N9a10 [18]. Information from complete mtDNA sequencing reveals that the Buryat sample (Br_623) and the previously published Japanese sample (HNsq0240) from Tanaka et al. [21] share mutations at nps 11368 and 15090 and therefore belong to the rare N9a8 haplogroup (Figure S3). It should be noted that these two sequences show deep divergence from each other, being characterized by unique sets of seven and six mutations, respectively. As follows from the phylogenetic analysis, our Barghut sample (Bt_81) shares transversions at nps 4668 and 5553 with two published Japanese samples [21] and can therefore be ascribed to the previously reported subcluster N9a2a3; the Tatar sample (Tat_411G), which is identical to Japanese sample KAsq0018 [21], is part of N9a2a2; Khamnigan (Khm_36) and Korean (Kor_87) mtDNAs belong to N9a1, whereas Korean (Kor_92) and Buryat (Br_433) variants can be identified as members of N9a3. Interestingly, the Russian (Rus_BGII-19) and Czech (CZ_V-44) samples bearing transitions at nps 4913 and 12636 apparently belong to a new subbranch, N9a3a, within haplogroup N9a3. Despite the low coalescence time estimate obtained for N9a3a (~1.3-2.3 kya), it is quite probable that its founder was introduced into eastern Europe much earlier, taking into account the age of the whole N9a3 clade, estimated at 8-13 kya, and the discovery of N9a haplotypes in Neolithic skeletons from several sites in Hungary belonging to the Körös Culture and the Alföld Linear Pottery Culture, which appeared in eastern Hungary in the early 8th millennium B.P. [41,42].
Figure caption (doi:10.1371/journal.pone.0032179.g003): Time estimates (in kya) shown for mtDNA subclusters are based on coding region substitutions [11], coding region synonymous substitutions [19] and complete genome substitutions [19]. The size of each circle is proportional to the number of individuals sharing the corresponding haplotype, with the smallest size corresponding to one individual. Geographical origin is indicated by different colors: northern Asian in blue, central Asian in pink, eastern Asian in red, Indian in grey, European in white, Mainland southeastern Asian in orange, Island southeastern Asian in yellow, Oceania in green, and Native American in purple.
Until now, only ten completely sequenced M10 mtDNAs were available. The addition of our Shor sequence (Sh_27) to the tree (Figure S4) gives a branching point for M10a1, now defined by the single transition at np 16129. An Altaian sample (Alt_164), nested with a Japanese sample (SCsq0008 [50]), forms a subclade, M10a1a2a, characterized by a coding-region mutation at np 10529 and a back mutation at np 16129. Interestingly, our eastern European M10 mtDNAs (Rus_Vo-78 and Km_27), together with a Japanese sequence (ONsq0096 [21]), clustered into another branch, M10a2a, within the second major M10a subclade, M10a2. It should be noted that the results of mtDNA control-region studies in central Asian populations demonstrate the presence of M10a2a haplotypes in Kazakhs at a frequency of 0.8% [36]. In general, the coalescence time estimate for M10a2a corresponds to 6-11 kya (Table S3), suggesting a relatively recent (post-Neolithic or later) origin and diffusion of M10a2a lineages from central Asia to eastern Europe.
We have also sequenced three complete M11 Siberian mtDNA genomes and compared them with all published M11 complete sequences. Figure S5 displays the reconstructed phylogeny of this haplogroup, from which it follows that our Buryat sequence (Br_444) falls into subhaplogroup M11a, whereas the Altaian mtDNA genome (Alt_33) shares an insertion of a cytosine at np 459 and a transition at np 5192 with a Japanese mtDNA (HO1019 [51]) and forms a separate subclade, M11b2, within subhaplogroup M11b. It should be noted that one more subclade, M11b1, characterized by one control-region (146) and two coding-region (10685 and 14790) transitions, can be identified within M11b. Interestingly, a single M11 mtDNA sequence found in our Teleut samples (Tel_20) looks highly divergent, being characterized by a unique set of twelve mutations, and probably belongs to a previously unreported branch of haplogroup M11, which we propose to designate M11d.
As reported earlier, haplogroup M13 encompasses two major subclades: M13a and M13b [18]. While subhaplogroup M13a is widely present in eastern Asia and reaches its greatest frequency and diversity in Tibet [45,46], lineage M13b is restricted to aboriginal populations of the Malay Peninsula [47] and India [48]. In addition, subhaplogroup M13a has been detected at very low frequencies (<1%) in southern Siberian Buryats and Khamnigans [8] and central Asian Kirghizs [36], as well as in the Barghuts studied here. Phylogenetic analysis showed that our Buryat (Br_389) and Barghut (Bt_43) samples share a transition at np 5045 and form a separate branch within the eastern Asian-specific subhaplogroup M13a1b (Figure S6). The coalescence time estimate for subcluster M13a1b corresponds to 3-5 kya, suggesting a relatively recent (late Holocene or later) expansion of this lineage in eastern Asia and an even more recent arrival of the M13a1b mtDNAs into northern Asia.
Haplogroup M9. The eastern Eurasian haplogroup M9 encompasses two subclades, E and M9a'b, showing very distinctive geographic distributions. While subhaplogroup E is detected mainly in Island southeastern Asia and Taiwan, haplogroup M9a'b is widely distributed in mainland eastern Asia and Japan and is relatively concentrated in Tibet and the surrounding regions, including Nepal and northeastern India [31,45,46,48,52,53]. It has been proposed recently that haplogroup M9 as a whole most likely originated in southeastern Asia approximately 50 kya, whereas M9a'b itself spread northward into the eastern Asian mainland about 15 kya, after the LGM [31]. The complete mtDNA sequence analysis and the obtained coalescence time estimates suggest that certain subclades of M9a'b were likely associated with some post-LGM dispersals in eastern Asia, especially in Tibet [31,45,46,53].
To further assess the variability of haplogroup M9a'b mtDNAs found in the mitochondrial gene pools of eastern and northern Asians, we have completely sequenced ten M9a samples representing Mongolians, Koreans, Kalmyks, Altaian Kazakhs, Khamnigans and Tuvinians (Table S4). Combining all published haplogroup M9a'b mtDNA genomes and our newly collected samples, we reconstructed a tree of 132 complete sequences (Figure S7). According to this updated phylogenetic tree, we have not found any northern Asian-specific subclades of M9a, but we were able to allocate our new M9a variants into already defined and some newly identified subclades of this haplogroup (Figure S7). For instance, our Korean (Kor_30), Mongolian (Mn_16) and Kalmyk (Km_68) samples appear as singletons within the major subclades M9a1, M9a1b1 and M9a1a1a1, respectively. Meanwhile, the Altaian Kazakh (Kz_69) and Kalmyk (Km_79) samples bear a transversion at np 10951 and belong to subcluster M9a1b2, revealed recently in southwestern Chinese representatives [53], whereas the Korean (Kor_10) mtDNA and the complete genome of a Vietnamese individual (Kinh_88 [53]) share a transition at np 6815 and may therefore represent a new subcluster, M9a4b, within M9a4, which is distributed both in southeastern Asia and in southern and northern China (Figure S7). Interestingly, the remaining M9a mtDNA sequences in our sample (Br_377, Khm_15, Tv_351c) fall into subclades that were mainly found in Japan (M9a1a1a1), Japan and China (M9a1a1c1a1), and southwestern China and Tibet (M9a1a1c1b). Thus, the M9a1a1 lineages revealed in northern Asian populations could be regarded as traces of the northward Late Glacial dispersal(s) originating in southern China about 14-17 kya, proposed on the basis of the phylogeographic pattern of haplogroup M9a1a1 [53].
Conclusions
In order to achieve thorough coverage of the DNA lineages revealed in the northern Asian matrilineal gene pool, we have completely sequenced the mtDNA of 55 samples representing haplogroups R11, B4, B5, F2, M9, M10, M11, M13, N9a and R9c1, which were pinpointed from a massive collection of northern and eastern Asian, as well as European, control region mtDNA sequences. By comparison with all available complete mtDNA sequences, these mtDNAs have been assigned to known haplogroups, and a number of novel lineages were identified through comprehensive phylogenetic analysis.
Overall, the new data confirm that the dissection of mtDNA haplogroups into subhaplogroups of younger age and more limited geographic and ethnic distributions might reveal previously unidentified spatial frequency patterns, which could be further correlated to prehistoric and historical migratory events. Thus, the addition of a large number of completely sequenced haplogroup B mtDNAs from northern and eastern Asian populations to the available data sets has allowed us to reveal a few new subclusters within haplogroup B4 (B4b1a3, B4b1a3a, B4c1a2 and B4j) showing a predominantly northern Asian distribution. The whole subcluster B4b1a3 showed a coalescence time of approximately 18 to 20 kya, whereas subclusters B4b1a3a and B4c1a2 emerged around 9 to 13 kya and 7 to 8 kya, respectively. As a result, the coalescence age estimates place the origin of subcluster B4b1a3 in the LGM episode, while subclusters B4b1a3a and B4c1a2 fall in a more recent post-glacial period (the end of the Pleistocene and the early Holocene). Our findings confirm our previous conclusion that the northern Asian maternal gene pool consists predominantly of post-LGM components of eastern Asian ancestry, though some genetic lineages may have a pre-LGM/LGM origin [12].
Notably, the observation that the most ancestral B4b1a3 sequence preceding subcluster B4b1a3a, as well as some of our newly recognized highly divergent mtDNA haplotypes (i.e. within subclusters R11b, M10a1 and M11d), originated from the Altai region of southern Siberia further suggests that the southern mountain belt of Siberia served as a likely main route for the pioneer settlement of northern Asia [54-57].
The results of our study provide additional support for the existence of limited maternal gene flow between eastern Asia/southern Siberia and eastern Europe, as revealed previously by analysis of modern and ancient mtDNAs [12,37,39,48,42,58,59]. Two more mtDNA subclusters that may be indicative of an eastern Asian influx into the gene pool of eastern Europeans have been revealed within haplogroups M10 and N9a. The presence of the N9a3a subcluster only in eastern European populations may indicate that it arose there after the arrival of a founder mtDNA from eastern Asia about 8-13 kya. It is noteworthy that another eastern Asian-specific lineage, C5c1, revealed exclusively in some European populations (Poles, Belorussians, Romanians), shows evolutionary ages in the range of 6.6-11.8 kya, depending on the mutation rate values used [12]. In addition, a recent molecular genetic study of Neolithic skeletons from archaeological sites in the Alföld (Hungary) has demonstrated a high frequency of eastern Asian mtDNA haplogroups in the ancient inhabitants of the Carpathian Basin [42]. Specifically, haplogroups N9a and C5 were also revealed in these remains, indicating that genetic continuity of some eastern Asian mtDNA lineages in Europeans since the Neolithic period is possible. Prehistoric migrations associated with the spread of the pottery-making tradition, which initially emerged in the forest-steppe belt of northern Eurasia starting at about 16 kya and spread westward to reach the southeastern confines of the eastern European Plain by about 8 kya [60], could be suggested as a potential cause of the appearance of eastern Asian mtDNA haplogroups in Europe. More information from complete mtDNA sequences, as well as from other genetic markers, in contemporary and extinct populations of Eurasia would be helpful to validate our conclusions.
Sampling, HVS1 Sequencing and RFLP Typing
Blood samples from 149 unrelated Barghuts were collected in different localities of Hulun Buir Aimak, Inner Mongolia, China. Hair samples from 98 unrelated Altaian Kazakhs were collected in different localities of the Kosh-Agach district of the Altai Republic. Total DNA was extracted by the standard phenol/chloroform method. The hypervariable segments HVS1 (positions 15999 to 16400) and HVS2 (positions 30 to 407) were sequenced in all samples, followed by RFLP screening to resolve haplogroup status in a hierarchical scheme as described earlier [8].
Data Analysis
Descriptive statistical indexes and Tajima's D [70] and Fu's FS [71] neutrality tests (for HVS1 sequence data) were calculated using Arlequin software, version 3.01 [72]. Principal component (PC) analysis was performed using mtDNA haplogroup frequencies as input vectors with STATISTICA 6.0 software (StatSoft, Inc., USA). Nonparametric multidimensional scaling (MDS) analysis based on FST statistics calculated from HVS1 sequences was also performed with STATISTICA 6.0 to visualize the relationships between the Altaian Kazakhs and Barghuts studied here and other surrounding Asian populations. Published data on mtDNA diversity in western, eastern, central and northern Asian populations [8,73-77], as well as in Mongolic-speaking Kalmyks [8], who now reside in eastern Europe but are descended from western Mongolians (Oirats), were included in the comparative analysis.
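To illustrate the two ordination steps described above, a minimal Python sketch is given below; it is not the authors' code, and the population names, haplogroup frequencies and FST values are invented purely for demonstration (the study itself used STATISTICA 6.0). PCA operates directly on haplogroup frequency vectors, while the non-metric MDS takes a precomputed FST matrix as its dissimilarity input.

```python
# Minimal sketch of the two ordination analyses described above, with invented data.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import MDS

populations = ["Barghuts", "Altaian Kazakhs", "Kalmyks", "Buryats"]

# Hypothetical haplogroup frequency vectors (rows: populations, columns: haplogroups).
freqs = np.array([
    [0.12, 0.30, 0.05, 0.20, 0.33],
    [0.08, 0.25, 0.10, 0.22, 0.35],
    [0.15, 0.28, 0.07, 0.18, 0.32],
    [0.10, 0.35, 0.04, 0.21, 0.30],
])

# Principal component analysis on the haplogroup frequencies.
pca = PCA(n_components=2)
pc_scores = pca.fit_transform(freqs)
print("PC scores:\n", pc_scores)
print("Explained variance ratio:", pca.explained_variance_ratio_)

# Hypothetical pairwise FST matrix from HVS1 sequences (symmetric, zero diagonal).
fst = np.array([
    [0.000, 0.012, 0.020, 0.015],
    [0.012, 0.000, 0.018, 0.022],
    [0.020, 0.018, 0.000, 0.010],
    [0.015, 0.022, 0.010, 0.000],
])

# Non-metric multidimensional scaling on the precomputed FST distances.
mds = MDS(n_components=2, metric=False, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(fst)
for name, (x, y) in zip(populations, coords):
    print(f"{name}: ({x:.3f}, {y:.3f})")
```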
Supporting Information
Figure S1 Phylogenetic tree of haplogroups R11'B6 and B4'B5, constructed using the program mtPhyl. Numbers along links refer to substitutions scored relative to the rCRS [69]. Transversions are further specified; ins and del denote insertions and deletions of nucleotides, respectively; back mutations are underlined, and parallel mutations are marked with a dedicated symbol. Sequences indicated in red print are new (Table S4), while the others have been taken from previously published data sets [24] as well as from FamilyTreeDNA project data available at PhyloTree [18]. The particular sequences from these sources are referred to as MI, QK, MT, ES, HU, KT, QPK, EB, JL, AK, MJP, AH, KTH, CN, QP, ET, AA, RJ, YZ, AT, DM, VM, HR, MSP and FTDNA, respectively, followed by the number sign (#) and the original sample code. Established haplogroup labels are shown in black; blue labels denote redefined and red labels newly identified haplogroups in the present study. (XLSX)
Figure S2 Phylogenetic tree of haplogroup R9c, constructed using the program mtPhyl. Numbers along links refer to substitutions scored relative to the rCRS [69]. Transversions are further specified; ins and del denote insertions and deletions of nucleotides, respectively; back mutations are underlined, and parallel mutations are marked with a dedicated symbol. Sequences indicated in red print are new (Table S4), while the others have been taken from Kong et al. [28], Tanaka et al. [21], Tabbada et al. [23], Bilal et al. [50], Gunnarsdottir et al. [88] and Wang et al. [90], as well as from FamilyTreeDNA project data available at PhyloTree [18]. The particular sequences from these sources are referred to as QK, MT, KT, EB, EG, CW and FTDNA, respectively, followed by the number sign (#) and the original sample code. Established haplogroup labels are shown in black; blue labels denote redefined and red labels newly identified haplogroups in the present study. (XLS)
Figure S3 Phylogenetic tree of haplogroup N9a, constructed using the program mtPhyl. Numbers along links refer to substitutions scored relative to the rCRS [69]. Transversions are further specified; ins denotes insertions of nucleotides; back mutations are underlined, and parallel mutations are marked with a dedicated symbol. Sequences indicated in red print are new (Table S4), while the others have been taken from Kong et al. [28], Tanaka et al. [21], Ueno et al. [86] and Kazuno et al. [80], as well as from FamilyTreeDNA project data available at PhyloTree [18]. The particular sequences from these sources are referred to as QK, MT, HU, AK and FTDNA, respectively, followed by the number sign (#) and the original sample code. Established haplogroup labels are shown in black; blue labels denote redefined and red labels newly identified haplogroups in the present study. (XLS)
Figure S4 Phylogenetic tree of haplogroup M10, constructed using the program mtPhyl. Numbers along links refer to substitutions scored relative to the rCRS [69]. Ins and del denote insertions and deletions of nucleotides, respectively; back mutations are underlined, and parallel mutations are marked with a dedicated symbol. Sequences indicated in red print are new (Table S4), while the others have been taken from Kong et al. [28], Tanaka et al. [21], Bilal et al. [50], Kong et al. [44] and Chandrasekar et al. [48]. The particular sequences from these sources are referred to as QK, MT, EB, QP and AC, respectively, followed by the number sign (#) and the original sample code. Established haplogroup labels are shown in black; blue labels denote redefined and red labels newly identified haplogroups in the present study. (XLS)
Figure S5 Phylogenetic tree of haplogroup M11, constructed using the program mtPhyl. Numbers along links refer to substitutions scored relative to the rCRS [69]. Transversions are further specified; ins denotes insertion of a nucleotide; back mutations are underlined, and parallel mutations are marked with a dedicated symbol. Sequences indicated in red print are new (Table S4), while the others have been taken from Kong et al. [28], Tanaka et al. [21], Bilal et al. [50], Nohira et al. [51], Chandrasekar et al. [48] and Qin et al. [46], as well as from FamilyTreeDNA project data available at PhyloTree [18]. The particular sequences from these sources are referred to as QK, MT, EB, CN, AC, ZQ and FTDNA, respectively, followed by the number sign (#) and the original sample code. Established haplogroup labels are shown in black; blue labels denote redefined and red labels newly identified haplogroups in the present study. (XLS)
Figure S6 Phylogenetic tree of haplogroup M13'46'61, constructed using the program mtPhyl. Numbers along links refer to substitutions scored relative to the rCRS [69]. Transversions are further specified; ins and del denote insertions and deletions of nucleotides, respectively; back mutations are underlined, and parallel mutations are marked with a dedicated symbol. Sequences indicated in red print are new (Table S4), while the others have been taken from Tanaka et al. [21], Kong et al. [44], Macaulay et al. [47], Dancause et al. [84], Fornarino et al. [52], Chandrasekar et al. [48], Qin et al. [46] and Zhao et al. [45]. The particular sequences from these sources are referred to as MT, QP, VM, KD, SF, AC, ZQ and MZ, respectively, followed by the number sign (#) and the original sample code. Established haplogroup labels are shown in black; blue labels denote redefined and red labels newly identified haplogroups in the present study. (XLS)
Figure S7 Phylogenetic tree of haplogroup M9a'b, constructed using the program mtPhyl. Numbers along links refer to substitutions scored relative to the rCRS [69]. Transversions are further specified; ins and del denote insertions and deletions of nucleotides, respectively; back mutations are underlined, and parallel mutations are marked with a dedicated symbol. Sequences indicated in red print are new (Table S4), while the others have been taken from Ingman et al. [78], Kong et al. [28], Tanaka et al. [21], Ingman and Gyllensten [58], Ueno et al. [86], Chandrasekar et al. [48], Bilal et al. [50], Kong et al. [44], Qin et al. [46], Zhao et al. [45], Peng et al. [53] and Soares et al. [20]. The particular sequences from these sources are referred to as MI, QK, MT, IG, HU, AC, EB, ZQ, MZ, MP and PS, respectively, followed by the number sign (#) and the original sample code. Established haplogroup labels are shown in black; blue labels denote redefined and red labels newly identified haplogroups in the present study. (XLS)
"Biology"
] |
Size-resolved simulation of particulate matters and CO2 concentration in passenger vehicle cabins
The main aim of this study is to develop a mathematical, size-dependent vehicle cabin model for particulate matter concentration, including PM2.5 (particles of aerodynamic diameter less than 2.5 μm) and UFPs (ultrafine particles of aerodynamic diameter less than 100 nm), as well as CO2 concentration. The ventilation airflow rate and cabin volume parameters are defined from a previously developed vehicle model for climate system design. The model simulates different filter statuses, application of pre-ionization, different airflow rates and recirculation degrees. Both particle mass and count concentrations within 10–2530 nm are simulated. Parameters in the model are defined either from available component test data (for example, filter efficiencies) or from assumptions based on corresponding studies (for example, particle infiltration and deposition rates). To validate the model, road measurements of particle and CO2 concentrations outside two vehicles were used as model inputs. The simulated inside PM2.5, UFP and CO2 concentrations were compared with the inside measurements. Generally, the simulation agrees well with the measured data (Pearson's r 0.89–0.92), and the simulation of the aged filter with ionization shows higher deviation than the others. The simulation using medium airflows agrees better than the simulations using other airflows, both lower and higher. The reason for this may be that the filter efficiency data used in the model were obtained at airflows close to the medium airflow. When all size bins are compared, the sizes of 100–300 nm are slightly overestimated. The results indicate that, among other things, expanded filter efficiency data as a function of filter ageing and airflow rate would possibly enhance the simulation accuracy. An initial application sample study on recirculation degrees demonstrates the model's possible application in developing advanced climate control strategies. Supplementary Information The online version contains supplementary material available at 10.1007/s11356-022-19078-1.
Nomenclature
Cin: Inside particle count (N/cm³) or mass concentration (μg/m³) in one size channel, or inside CO2 concentration (ppm)
dPaero: Pressure difference due to aerodynamic characteristics (Pa)
dPmech: Pressure difference due to mechanical ventilation (Pa)
FAC2: Fraction of predictions within a factor of 2 of observations
Background
During the past decades, we have seen rising air quality problems, especially increased levels of airborne particulate matter. High particle concentrations have been found to affect human health. Smaller particles like PM2.5 (particles of aerodynamic diameter less than 2.5 μm) and UFPs (ultrafine particles, with aerodynamic diameter less than 100 nm) have attracted particular attention because they penetrate more easily into the human respiratory system and are thus associated with higher risks of lung and cardiovascular diseases (Mitsakou et al. 2007).
Considering the elevated particle concentrations on the road and the increasing time spent in traffic, vehicle passengers face even higher particle exposures (Zhu et al. 2007). Thus, there is a demand for a better understanding of the air quality in vehicle cabins and of its influencing factors, to support the development of corresponding protection systems, including filters and HVAC (heating, ventilation and air conditioning) system design.
Research on indoor air quality in buildings and different workplaces has been ongoing for decades, while research on in-vehicle cabin air quality has developed more recently. Previous studies have reported field measurements of in-cabin air quality in the form of particle concentrations, as well as the relations between inside and outside particle concentrations (Xu et al. 2018). There is also research based on modelling of the particle concentration, in turn based on field as well as laboratory measurements.
Complete vehicle measurements are relatively straightforward to perform, but they are expensive in terms of time and human resources. Measurements also include some uncontrollable variables, for example the ambient conditions, and measurement uncertainties. Alternatively, a simulation model could be implemented to mitigate the physical limitations of vehicle measurements, and moreover, to investigate scopes that cannot be realized in vehicle testing. A simulation model has, however, to be validated with complete vehicle measurements and laboratory measurements, to give trustworthy results.
Previous studies have modelled the in-cabin particle concentration and its influential factors, such as driving speeds, outdoor particles, ventilation airflows and infiltration (Gong et al. 2009; Joodatnia et al. 2013; Lee et al. 2015a; Ding et al. 2016).
However, there appears to be a lack of studies that include different filter statuses, pre-ionization, size-resolved filtration and air recirculation degrees, as well as a connection between air quality and climate energy consumption modelling.
Aim of the study
The aim of the current study is to develop and evaluate a model of cabin PM2.5, UFP and CO2 concentration, to study the influence of filter performance and recirculation on cabin air quality and energy use. The model is based on a previous model of the vehicle climate system energy consumption (Nielsen et al. 2015). The previous model, developed in the software GT-SUITE, is complemented with a mass balance model where particle sizes, filter status, ventilation airflow and air recirculation degrees are considered. The influences of deposition and infiltration are also included. The cabin CO2 concentration, an established indicator of air quality, is also modelled, considering that air recirculation might cause CO2 to accumulate (Kilic and Akyol 2012; Luangprasert et al. 2017). The model is validated against previous vehicle measurements performed under various conditions.
Field measurements, as well as model simulations, suggest that improved filtration is the most important action to improve the air quality in passenger car cabins. The main requirement of a model, besides the possibility to evaluate different filtering arrangements, is the possibility to evaluate different ventilation strategies to improve the air quality, e.g. by recirculation, and to reduce the energy used for air conditioning.
Methods
The research comprises literature studies, model development and validation, and an initial sample demonstration of the model capabilities. First, the basic model concept is explained. Second, the details of model parameter definition are explained. Third, the model validation process using previous road-testing data is explained.
Background: climate system model
The model development is based on an extension of a previously developed vehicle climate system model (Nielsen et al. 2015). In that model, the vehicle climate components and control strategies were simulated in detail. The software GT-SUITE, which solves the Navier-Stokes equations in one dimension, was used to simulate the climate systems. The climate system model focused on the energy consumption of the climate system, and pure air without pollutants was assumed as incoming air. Filtration of particles was neglected, while the pressure drop at the filter was considered. For more details, please see the 'Methods' chapter of the published paper. In that model, the relevant sub-modules for this study are the passenger compartment and air handling modules, i.e. the airside model. The next sections explain how the particle and CO2 model is developed based on the climate system model.
The vehicles modelled in this study are the same as in a previous vehicle measurement study, i.e. a Volvo XC90 (model year 2018) with an estimated cabin volume of 4.1 m³, and a Volvo S90 (model year 2018) with an estimated cabin volume of 2.9 m³. The two test vehicles share the same HVAC system design and climate control systems.
Airside model with particles and CO 2
Figure 1 illustrates the basic particle/CO2 transport in vehicle cabins. For particles, the outside ventilation airflow (Qoa) with a particle concentration of Cenv enters the vehicle cabin through the HVAC system and mixes with the recirculated airflow (Qrec) before passing the filter. In addition, the passive ventilation airflow (Qps) refers to air entering the HVAC system that is not induced by the operation of the fan but by, for example, the vehicle's speed or the wind speed. This flow is accounted for in the total ventilation flow in the studied vehicles, and it also passes the filter (Ott et al. 2008; Lee et al. 2015a); thus, it is not considered infiltration. The filter removes particles with an efficiency η (between 0 and 1), which is size dependent. The recirculation degree (%) defines the ratio of Qrec to Qrec + Qoa. The infiltration airflow (Qinf) here refers to the uncontrolled air leakage through cracks and leaks in the vehicle envelope, for instance cracks between the frame and the doors (Xu et al. 2010). The particle deposition flow onto interior surfaces like seats and carpets is described as Qdep. Cin and Cenv are the inside and outside particle concentrations.
For CO2, the transport is similar, except that CO2 is not removed by the HVAC filter and does not deposit on surfaces. In addition, the internal source from human breath is added, where N is the number of passengers, Vbr is the minute ventilation in litres per minute and Cbr is the carbon dioxide concentration in the exhaled air. Cin and Cenv are the inside and outside concentrations of CO2.
Based on the transport mechanisms in Fig. 1, the corresponding mass balance equations for the vehicle cabin are given in Eqs. (1) and (2), where the in-cabin concentrations (Cin) of particles and CO2 are estimated, respectively. The estimation uses inputs from parameters including outdoor particle/CO2 levels, vehicle speed, ventilation airflow (climate settings), filter status, ionization status, passenger numbers, etc. The penetration loss coefficient α accounts for the loss of particles at the cracks through which the infiltration flow passes, and an empirical value from previous studies is adopted. It should be noted that the particles considered in this study vary between 10 nm and 2.5 μm, i.e. PM2.5 except for particles smaller than 10 nm. The lack of the smallest particles is due to the instrument detection limit in the road measurements. The size range is divided into 25 size channels in accordance with the instruments. Cenv and Cin are the concentrations of particles in a given size channel, and η is the corresponding filtration efficiency for particles in that size channel. The balance equation was originally formulated for particle count concentration, considering that the definition of filtration efficiency is based on particle counts (number of particles per unit volume). Under this condition, Cenv and Cin represent the count concentration per size channel (N/cm³), while the equation can also be used for mass concentration, since it is size dependent, i.e. assuming particles in the same size channel have the same average aerodynamic diameter and density. Thus, the particle mass concentration per size channel (μg/m³) was simulated with the same equation.
To solve Eqs. (1) and (2), the parameters require definitions based on the application conditions. In this study, the parameters are either defined from available test data (η, Cenv for particles, N), defined from a previously developed model (Qoa, Qrec, Vcabin) or based on experience from relevant studies (Qdep, Qinf, Vbr, Cbr, Qps, α, Cenv for CO2). Then, the steady-state solution of Cin under the given conditions can be calculated. The details of all parameter definitions in Eqs. (1) and (2) are presented in the corresponding sections below.
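Because the bodies of Eqs. (1) and (2) are not reproduced in this extract, the sketch below implements one plausible steady-state reading of the particle balance described around Fig. 1; the exact source and loss terms (for example, treating exfiltration as balancing the incoming outside air, and Qdep = β × Vcabin) are assumptions, and all numeric values are illustrative.

```python
# Steady-state cabin particle balance sketch based on the transport description in Fig. 1.
# The exact form of Eq. (1) is not shown in this extract; the source and loss terms below
# are an assumed, simplified reading and every numeric value is illustrative only.

def steady_state_particle_conc(c_env, q_oa, q_ps, q_rec, q_inf, eta, beta, v_cabin, alpha=1.0):
    """Steady-state in-cabin concentration for one particle size channel.

    c_env   : outside concentration in this channel (N/cm3 or ug/m3)
    q_oa    : outside mechanical ventilation airflow (m3/s)
    q_ps    : passive ventilation airflow (m3/s)
    q_rec   : recirculated airflow (m3/s)
    q_inf   : infiltration airflow (m3/s)
    eta     : size-dependent filter efficiency (0..1)
    beta    : deposition rate for this channel (1/s)
    v_cabin : cabin volume (m3)
    alpha   : penetration loss coefficient for infiltrating air (assumed)
    """
    q_dep = beta * v_cabin                                   # assumed: Qdep = beta * Vcabin
    source = (q_oa + q_ps) * (1.0 - eta) * c_env + alpha * q_inf * c_env
    # Losses: air leaving the cabin, filtration of the recirculated flow, and deposition.
    loss = (q_oa + q_ps + q_inf) + q_rec * eta + q_dep
    return source / loss

if __name__ == "__main__":
    # Illustrative numbers only: 59 L/s outside air, no recirculation or infiltration,
    # filter efficiency 0.8 in this channel, deposition rate 5 per hour, 4.1 m3 cabin.
    c_in = steady_state_particle_conc(c_env=30.0, q_oa=0.059, q_ps=0.005, q_rec=0.0,
                                      q_inf=0.0, eta=0.8, beta=5.0 / 3600.0, v_cabin=4.1)
    print(f"Steady-state in-cabin concentration: {c_in:.1f} (same unit as c_env)")
```

Applying such a per-channel function to every size bin and summing the relevant bins would give PM2.5 or UFP totals in the same way as described for the validation.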
Ventilation airflow and cabin volume
For the two modelled vehicles in this study, the previous climate system model calculates steady-state results of the airflow rate simulation (Qoa, Qrec) based on relevant model inputs, including HVAC fan speed, vehicle speed, recirculation degrees, HVAC flap positions, ambient temperatures, etc. These values were obtained from the validation measurement data, through either the vehicle's own logs or the instruments. The cabin volume Vcabin was also estimated from the same model.
The passive ventilation airflow Qps entering the cabin has been found to be linearly related to the vehicle driving speed vspeed (Ott et al. 2008). Linear regression of measured passive ventilation data has reported an empirical coefficient of 0.21 m⁻¹ (Lee et al. 2015b). Thus, Qps is calculated as in Eq. (3):
Qps = 0.21 × vspeed × Vcabin    (3)
Incoming particles and filtration of particles
To investigate the air quality features in this study, particles were added to the incoming air in the previous climate model in the software GT-SUITE, to simulate the particles from the environment. In addition, the filtration of particles was implemented in the filter component, where η is defined based on available component data. Details are given in the next paragraphs.
Atmospheric particles have varying compositions depending on particle sources, types, locations, etc. To simulate the particle species in the software GT-SUITE, a FluidGas template is used to simulate particles as a Tracer Gas, whose concentration can be defined regardless of chemical composition (Gamma Technologies llc 2019). An injection template is used at the HVAC inlet position to mix incoming air with particles, where the outside particle concentration Cenv (μg/m³), the outside airflow rate Qoa (m³/s) and the passive ventilation airflow Qps (m³/s) define the injection rate (μg/s).
On the other hand, the filtration process is simulated with an ejection template at the filter component, where the ejection rate of particles is defined by the size-dependent filter efficiency η. This study simulated the same filters used in the validation measurements, which are a newly manufactured filter and a 500-h-aged (end of service interval) filter of the same type. For the new filter status, the efficiency values were taken from several available supplier component tests. For the 500-h-aged status, several component test data sets are also available for the same filter model type, although fewer than for the new filter status. Similarly, the filter efficiency data with pre-ionization are based on a restricted number of tests, which means the efficiency for an aged filter with ionization was partially estimated based on the ionization improvement measured on new filters. The tests were mainly performed at an airflow of 288 m³/h (80 L/s), and thus the influence of airflow on filter efficiency is not considered in the initial model.
During the simulation, given the filter status and ionization status, the corresponding upper and lower limits of all the available efficiencies are used for η; these are given in Table 1. With the above implementation in the model, the terms (Qoa + Qps) × (1 − η) are exported from the GT-SUITE steady-state simulation results as input to Eqs. (1) and (2). The simulated Cin values obtained with the two sets of η are then averaged to give the average simulated in-cabin particle concentration.
Deposition and infiltration
In this section, the definitions of the particle deposition flow (Qdep) and the infiltration airflow (Qinf) in Eqs. (1) and (2) are explained. Qdep in the vehicle cabin can be modelled using the deposition rate β (h⁻¹), as in Eq. (4). The deposition rate has been reported to be 0.6-12 h⁻¹ for PM2.5 by Harik et al. (2017) and 3.2-11.8 h⁻¹ on average for UFPs by Gong et al. (2009). The variation is due to vehicle type, airflow rate and particle size. Based on the vehicles and airflows studied here, the size-dependent deposition rates deployed for particles between 10 nm and 2.5 μm are summarized in Appendix Table A1. Lee et al. (2015b) have studied the infiltration airflow through both experimental measurements and modelling analysis. They derived methods to model the infiltration flow, as in Eqs. (5) to (8). Equation (5) describes the pressure difference caused by mechanical ventilation of outside air (Qoa) and the passive ventilation airflow (Qps). The leakage parameters kf and n depend on the vehicle type and were measured for 10 vehicles in cabin pressurization tests (Lee et al. 2015a). While driving, the differential pressure on the outer surface of the vehicle caused by aerodynamic changes (dPaero) can be derived as in Eq. (6) from the vehicle speed (vspeed) and the vehicle characteristic parameters a, b and kp. When the pressure at the outer surface is higher than the cabin pressure (ΔPinf > 0), infiltration can occur due to the pressure difference. This infiltration flow Qinf is calculated as in Eq. (8), where Frev is the reverse leakage flow correction factor, which accounts for the difference between infiltration and exfiltration flow (Fletcher and Saunders 1994). In this study, the vehicle-related parameter values (kf, n, a, b, kp, Frev) adopted from previous studies are presented and explained in Appendix Table A2.
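The published forms of Eqs. (4)-(8) are not reproduced in this extract, so the following sketch only mirrors the described logic: a power-law cabin leakage relation with parameters kf and n, an aerodynamic pressure term built from the vehicle speed and the characteristic parameters a, b and kp, and infiltration only when ΔPinf > 0, corrected by Frev. The functional forms and every number here are assumptions for illustration, not the paper's equations or its Table A2 values.

```python
# Illustrative infiltration logic following the description of Eqs. (5)-(8).
# The functional forms are assumed here; the published equations and the real
# parameter values (Appendix Table A2) are not reproduced in this extract.

def mechanical_pressure(q_oa, q_ps, kf, n):
    """Assumed inverse of a power-law leakage relation Q = kf * dP**n:
    the cabin over-pressure created by the mechanical and passive ventilation flow (Pa)."""
    return ((q_oa + q_ps) / kf) ** (1.0 / n)

def aerodynamic_pressure(v_speed, a, b, kp):
    """Assumed speed dependence of the pressure change on the vehicle's outer surface (Pa)."""
    return kp * (a * v_speed ** 2 + b * v_speed)

def infiltration_flow(q_oa, q_ps, v_speed, kf, n, a, b, kp, f_rev):
    """Infiltration occurs only when the outer-surface pressure exceeds the cabin pressure."""
    dp_inf = aerodynamic_pressure(v_speed, a, b, kp) - mechanical_pressure(q_oa, q_ps, kf, n)
    if dp_inf <= 0.0:
        return 0.0
    return f_rev * kf * dp_inf ** n        # assumed reverse-corrected leakage relation

if __name__ == "__main__":
    # Purely illustrative values: low fan flow and roughly 108 km/h driving speed.
    q_inf = infiltration_flow(q_oa=0.016, q_ps=0.0, v_speed=30.0,
                              kf=0.005, n=0.6, a=1.0, b=0.0, kp=0.05, f_rev=0.7)
    print(f"Illustrative infiltration flow: {q_inf:.4f} m3/s")
```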
Assumptions on respiration losses/gains
The passengers' respiration losses/gains of particles in the cabin are considered negligible compared with the losses from filtration and deposition. This assumption is supported by Xu and Zhu (2009), who showed that the respiration airflow is nearly zero under driving conditions and that, even under extreme idling conditions, the deposition losses are 40-210 times higher than the respiration losses. In addition, no phase change of particles is included in the model. The model assumes that the air in the cabin is well mixed, i.e. the particle concentration is the same at different positions, as reported in previous four-point particle measurements in vehicles (Joodatnia et al. 2013).
CO 2 parameters
The outside and recirculation airflow rate parameters (Qoa, Qps, Qrec) used in the CO2 model in Eq. (2) are the same as in the particle model in Eq. (1), while the CO2-specific parameters that require definition are Cenv, Cbr, Vbr and N. Cenv represents the outside CO2 concentration (ppm). The internal generation of CO2 from passenger respiration is another source; it is simulated based on the number of passengers (N), the average CO2 concentration in exhaled air (Cbr) and the average exhaled air volume (Vbr). The definitions of these parameters are explained further in the following paragraphs.
Atmospheric CO2 concentration
The outside CO2 concentration Cenv can be determined either from measurement data or from estimations based on the vehicle's incoming air conditions. As will be explained in the next section, the validation process of this study utilized road measurement data which contain the in-cabin CO2 concentrations throughout the whole campaign, but not the simultaneous outside CO2 concentration. Thus, an estimation of the outside CO2 (Cenv) is used in the validation. The vehicle measurement was performed in a road tunnel in Gothenburg, Sweden, in 2018. The tunnel environment was identified as having elevated CO2 concentrations compared to the open-road environment, due to the weakened diffusion of vehicle-emitted pollutants as well as the higher traffic density (De Fré et al. 1994; Cong et al. 2017; Wei and Wang 2020). Considering the tunnel length, sampling location, traffic density and vehicle composition, the corresponding data from Ho et al. (2009) and Zhang et al. (2015), 710 ppm and 722 ppm respectively, were considered comparable. The average of the two, 716 ppm, was used for the parameter Cenv in the model validation.
Respiration-exhaled CO2
As described in 'Model validation', the respiration-exhaled CO2 source is added in Eq. (2) as N × Vbr × Cbr. Vbr represents the minute ventilation (or respiratory minute volume) in litres per minute, which is the gas volume exhaled from a person's lungs per minute. It varies with physical activity level and personal characteristics. Minute ventilation under normal sitting conditions varies between 5 and 8 L/min (Levitan 2015). An average Vbr of 6.5 L/min is used in this study, since passengers sitting in a standstill car were almost at rest. Cbr is the carbon dioxide concentration in the exhaled air (ppm); according to a previous study on carbon dioxide exposure (Scott et al. 2009), Cbr is set to 40,000 ppm. N is the number of passengers, which is defined from the measurement logs and is either 2 or 3 people in the vehicle.
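As a rough plausibility check of the respiration source term, the short sketch below evaluates an assumed steady-state simplification of Eq. (2) with the parameter values quoted above (Cbr = 40,000 ppm, Vbr = 6.5 L/min, two occupants, Cenv ≈ 716 ppm); the outside airflows used are illustrative, not the study's settings.

```python
# Rough steady-state CO2 estimate using the respiration parameters quoted in the text.
# The balance form is an assumed simplification of Eq. (2); the airflows are illustrative.

N_PEOPLE = 2
V_BR = 6.5 / 1000.0 / 60.0        # minute ventilation: 6.5 L/min converted to m3/s
C_BR = 40_000.0                   # CO2 concentration in exhaled air (ppm)
C_ENV = 716.0                     # estimated outside CO2 in the tunnel (ppm)

def steady_state_co2(q_outside):
    """q_outside: total non-recirculated airflow entering (and leaving) the cabin, m3/s."""
    generation = N_PEOPLE * V_BR * C_BR          # ppm * m3/s
    return (q_outside * C_ENV + generation) / q_outside

for q in (0.020, 0.040, 0.059):                  # 20, 40 and 59 L/s of outside air
    print(f"Qoa = {q * 1000:.0f} L/s -> cabin CO2 ~ {steady_state_co2(q):.0f} ppm")
```

With 20-59 L/s of outside air the estimate stays in the high hundreds of ppm, which is consistent in magnitude with the cabin levels discussed later, but it is only a sanity check and not a reproduction of the validation.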
Model validation
The model validation uses results from previous vehicle measurements on roads. The measurements were performed in two locations, Sweden and northern China, under similar test setups: varied filter status, airflow rates, recirculation degrees, utilization of pre-ionization, etc. After the parameters were varied for each test case, the steady-state values were obtained. The measured values are both in-cabin and outside particle count and mass concentrations for 41 size channels from 10 nm to 35 μm, as well as PM2.5 and UFP concentrations. The simultaneous in-cabin CO2 concentration was also logged in Sweden for all the test cases. More detailed descriptions can be found in the methods section of the previously published paper.
The validation process is illustrated in Fig. 2. To reproduce the road measurements, the actual HVAC fan speed, flap positions, vehicle speeds, and the ventilation setup of recirculation degrees and air distribution were read from the test data and input into the air quality model in GT-SUITE, to simulate the measurement conditions. Based on the filter used, the corresponding efficiencies from Table 1 are applied in the model as well. With these setups, the HVAC outside airflow rates (Qoa), the recirculation airflow rates (Qrec) and the term Qoa × (1 − η) are obtained from the steady-state simulation results. In addition, the measured outside particle or CO2 concentrations were read as Cenv. Together with the other input parameters explained in the previous sections (Qdep, Qinf, Vbr, Cbr, α, N), the steady-state solution of Cin, i.e. the simulated in-cabin particle concentration for each particle size, or the CO2 level, can be compared with the actual measurements. When the particle concentrations within certain sizes are summed, the simulated PM2.5 (μg/m³) and UFP counts (N/cm³) are obtained and can be compared with the real road measurements.
Overall validation of PM 2.5 , UFP and CO 2
The simulation data are compared with steady-state road measurements, regarding the total particle concentration (PM2.5, UFP), the particle concentration per size channel and the CO2 concentration. As mentioned previously, a filter efficiency range is adopted for the parameter η, which results in a model concentration range. In Fig. 3, the measured and simulated (average of the model concentration range) inside PM2.5 values (μg/m³) of all steady-state test cases are presented; both new and aged filter statuses are included, and the ionization and no-ionization groups are marked. The UFP counts (N/cm³) are also compared. The two particle metrics show similar trends: most of the simulated values correlate well with the measurements, with a few separate cases showing deviation. This similarity is expected since particles smaller than 100 nm account for a large part of the total measured counts, owing to road vehicular emission sources (Qi et al. 2008). For part of the tests in Sweden, when the outdoor air is relatively clean and the new filter is installed, the inside PM2.5 is lower than 10 μg/m³. In this range, the simulation scatters relatively more due to the low absolute particle levels. When comparing the filter statuses, the aged filter generally results in higher in-cabin particle levels due to deteriorated filtration, while the overall agreement between simulation and measurement is quite similar when the two locations are compared. The Pearson's correlation coefficient (r) between simulation and measurement is calculated for the different groups in Fig. 3. The new filter group has an r of 0.87, compared with 0.90 in the aged filter group. The ionization group has an r of 0.92, and the no-ionization group 0.96 (all p < 0.05).
Similarly, the comparison of CO 2 concentration prediction and measurement is shown in Fig. 4. The prediction generally agrees well within the measurement range and the Pearson's r is 0.89.
To further evaluate the model performance, several model performance factors, listed in Table 2, are calculated. The definitions and explanations of these parameters are given in Eqs. (10)-(15) in Appendix B. These parameters are selected to reflect both mean bias and random scatter (Patryl and Galeriu 2011). In particular, since the in-cabin particle concentration (PM2.5) data span a wide range of magnitudes due to the different measurement locations, the use of logarithmic forms (MG and VG) is considered appropriate (Hanna et al. 1993). When using these parameters, Joodatnia et al. (2013) proposed that a good atmospheric model prediction should meet the following criteria:
• The mean bias should be within 30% of the mean: 0.7 < MG < 1.3 and |FB| < 0.3.
• Random scatter of predictions within a factor of 2 of the mean: VG < 1.6 and NMSE < 4.
Fig. 3 Comparison of simulated and measured in-cabin PM2.5 values and UFP counts. Data include all test cases (128 samples), covering both new and aged filter statuses, with and without ionization
The calculated factors for this model are presented in Table 3. The predictions of particles and CO2 meet all the above criteria well. Generally, the prediction shows good performance. The CO2 prediction shows slightly less deviation, possibly because the CO2 model does not contain the variance from filtration that the particle models do.
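The paper's own definitions of these factors are in its Appendix B (Eqs. (10)-(15)), which is not reproduced here; the sketch below therefore uses the standard Hanna-style forms of FB, NMSE, MG, VG and FAC2, which should be read as assumed equivalents rather than the paper's exact equations, applied to invented numbers.

```python
# Standard air-quality model performance metrics (assumed Hanna-style definitions;
# the paper's own Eqs. (10)-(15) are in its Appendix B and are not reproduced here).
import numpy as np

def performance_metrics(observed, predicted):
    o = np.asarray(observed, dtype=float)
    p = np.asarray(predicted, dtype=float)
    fb = 2.0 * (o.mean() - p.mean()) / (o.mean() + p.mean())      # fractional bias
    nmse = np.mean((o - p) ** 2) / (o.mean() * p.mean())          # normalized mean square error
    mg = np.exp(np.mean(np.log(o)) - np.mean(np.log(p)))          # geometric mean bias
    vg = np.exp(np.mean((np.log(o) - np.log(p)) ** 2))            # geometric variance
    ratio = p / o
    fac2 = np.mean((ratio >= 0.5) & (ratio <= 2.0))               # fraction within a factor of 2
    return {"FB": fb, "NMSE": nmse, "MG": mg, "VG": vg, "FAC2": fac2}

# Illustrative values only (not the study's data).
obs = [12.0, 25.0, 48.0, 8.0, 30.0]
pred = [10.0, 28.0, 41.0, 9.5, 33.0]
for name, value in performance_metrics(obs, pred).items():
    print(f"{name}: {value:.3f}")
```

The acceptance thresholds quoted above (|FB| < 0.3, 0.7 < MG < 1.3, VG < 1.6, NMSE < 4) can then be checked directly against the returned dictionary.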
Validation of filter statuses, ventilation airflows and recirculation
As Fig. 3 provides an overall profile of the simulated and measured particle concentrations in the cabin, these results can be further investigated within different categories. For example, different filter ages and ionization statuses were included, which have a significant influence on the filtration performance, i.e. the filter efficiency (η). The simulation performance could thus be influenced by these parameters.
So, the results were classified into 4 categories, considering new and aged filters, as well as ionization on and off. It was noted that within each category, the outside particle concentrations (Cenv) were distributed in a wide range. To be able to compare, the indoor to outdoor ratio (I/O ratio) is considered, which is the inside concentration divided by outside concentration for PM 2.5 or UFP counts.
As shown in Fig. 5a, the simulated and measured average PM2.5 I/O ratios of each category are compared. The simulations for the new filter give average I/O ratios close to the measurements, with the differences between the averages all within 5%. On the contrary, the aged filter with ionization category shows the largest deviation of the simulated I/O ratio (31%), and this group also shows a larger variance (Std = 0.19) of the measured I/O ratio compared with the others, possibly because the particle accumulation is not even across the aged filter surface, leading to more unstable performance. It can also be seen from the graph that the simulation tends to overestimate the I/O ratio for the aged filter with ionization group, i.e. to underestimate the filtration performance.
Furthermore, in graph b, the difference between the simulated and measured PM2.5 I/O ratios for each sample is calculated and then summarized for the four categories using box-and-whisker plots. Each column thus shows the distribution of the deviation between simulation and measurement in terms of the I/O ratio. It confirms the observation from graph a that the aged filter with ionization group has a higher simulation deviation, and its I/O ratio difference also lies in a wider range.
Similarly, the results were also analysed for UFPs, and the trends are similar to those for PM2.5; they are given in Appendix C. In Fig. 6, the simulated and measured PM2.5 I/O ratios are compared under 4 different ventilation levels (Xlow, Low, Medium and High). The estimated airflow rates at these 4 levels are around 23, 40, 59 and 86 L/s, respectively. It should be noted that the relatively large standard deviations are due to the variation in filter statuses, ionization, etc.
When comparing the simulation and the measurement, the Medium airflow category is closest to reality, which could be related to the filter efficiency (η) estimation shown in Table 1. These filter efficiency values are mainly from filter component tests under a standardized airflow rate of 288 m³/h (80 L/s), which is between the Medium and High levels in the simulated cars. Since the filter efficiency is influenced by the ventilation airflow in reality (Knibbs et al. 2010; Shi 2012), this estimation could cause deviations for the other airflows when only efficiencies under one airflow are utilized. Furthermore, four paired-samples t-tests between the simulated and measured PM2.5 I/O ratios at each airflow level were performed, at a significance level of 0.05. The corresponding p values are 0.00, 0.05, 0.56 and 0.01, which confirms that the Medium airflow category showed no statistically significant difference between the simulation and measurement averages.
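A paired-samples t-test of this kind can be reproduced with SciPy; the values below are invented I/O-ratio pairs used only to show the call, not the study's data.

```python
# Paired t-test between simulated and measured PM2.5 I/O ratios (invented example values).
from scipy import stats

simulated_io = [0.18, 0.22, 0.35, 0.41, 0.27, 0.30]
measured_io  = [0.20, 0.21, 0.33, 0.45, 0.29, 0.28]

t_stat, p_value = stats.ttest_rel(simulated_io, measured_io)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
# A p value at or above 0.05 would indicate no statistically significant difference between
# the two means, as reported for the Medium airflow category in the text.
```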
Moreover, the general trend is that higher ventilation airflow rates lead to higher I/O ratios, since a shorter residence time deteriorates the filtration capability (Qi et al. 2008). Another possible cause could be that the relative importance of the deposition effect diminishes as the ventilation rate increases. However, the Low ventilation category in Fig. 6 contains half of the samples with recirculation, while the other three categories do not, which may affect this comparison. Similarly, the 4 air recirculation degrees (%) are compared in Fig. 6, where higher recirculation degrees lead to lower measured I/O ratios. It is also seen that the average of the simulation is close to the average of the measurement at all 4 levels, and the recirculation estimation does not influence the simulation performance to a high degree. This agrees with the simulation process, since the recirculation does not influence the estimated filter efficiency.
Validation of particle size
The simulation of particle concentration is also evaluated for each size channel, which includes 25 size channels from 10 nm to 2.5 μm. Firstly, two example test cases are chosen for visualization of comparison in Fig. 7. The measured inside particle count concentration (N/cm 3 ) per size channel, the predictions and the corresponding outside measurement are shown. The grey area represents the model range due to variations in the filter efficiency parameter, and the model average is also given. The measurement value mostly lies in the prediction range, and the range around 100-200 nm is slightly overestimated in case b. It is obvious that the aged filter is less efficient at removing outside particles.
To further investigate the bias in each size channel, the fractional bias (FB) between the average simulated and measured particle counts was calculated per size channel for the new and the aged filter separately; sizes larger than 352 nm are excluded since their particle counts are nearly zero. The dimensionless FB is used since the inside particle concentrations (Cin) vary widely between sizes. FB reflects the mean bias between prediction and measurement, i.e. an evaluation of over- or underestimation, as in Eq. (9), where P is a predicted concentration, O is the corresponding observed concentration and the means are taken over all samples in a size channel:
FB = (mean(O) − mean(P)) / (0.5 × (mean(O) + mean(P)))    (9)
Thus, a negative FB value indicates overprediction and, conversely, a positive FB indicates underprediction. An FB equal to 0.67 is equivalent to underprediction by a factor of 2 (Patryl and Galeriu 2011).
The FB values of each size channel for the two filter types are shown in Fig. 8. The FB values are mostly within ± 0.3, which could be considered good. The 52-352-nm range normally contains the most particle counts, and the model prediction here mostly shows a slight overestimation of concentrations in both new and aged filter groups.
Sensitivity analysis on parameters in the simulation
The parameters utilized in the simulation for the estimation of filter efficiency, ventilation airflow, infiltration and deposition are investigated in this section, and the corresponding sensitivity analyses are presented. The UFP results are similar to those for PM2.5 and are thus not presented.
Filter efficiency
Filter efficiency (η) is a crucial factor in the filtration modelling (Xu et al. 2011; Shi 2012), especially within the size range where most particles are distributed. As shown in Fig. 7, the highest inside particle concentrations are normally found in a narrow size range around 100 nm. One reason for this is that the outdoor concentration is high in that size range. Another reason is that the most penetrating particle size range of the cabin air filter is within, or close to, that size range (Xu et al. 2013). On average, the particle count concentration (N/cm³) in the size range 52-352 nm comprises 68% of the total inside count concentration, and the mass concentration (μg/m³) comprises 83% of the total inside mass concentration in our measurements.
Fig. 7 Two example cases of measured particle count concentration (inside and outside) and simulated particle count concentration range/average per size channel. The two examples show the mean of two 5-10-min stabilization measurements under the following settings: a aged filter, no ionization, Low airflow rate, Sweden; b new filter, no ionization, Medium airflow rate, Sweden
Thus, to investigate the influence of the filter efficiency estimation, the filter efficiency parameter (η) is altered by ± 0.05 in the size range 52-352 nm. The changes in the simulation results are shown in Fig. 9.
Based on Fig. 5, the average simulated PM2.5 I/O ratios obtained using the altered filter efficiencies are added to the same four filter categories. For the two categories comprising new filters, the original simulation has an average I/O ratio closer to the measurement than the altered simulations. For the categories with aged filters, the results differ. An increased η performs closer to the measurement for the aged filter with ionization category, while a decreased η is better for the aged filter no-ionization category. This agrees with the validation results in Fig. 5, where η for the aged filter with ionization group is originally underestimated considerably, whereas the new filter groups already perform well.
Furthermore, all 128 sample cases were compared individually in terms of the original and altered simulations of the PM2.5 I/O ratio when the filter efficiency was changed. This is presented in Fig. 10. When the efficiency was decreased by 0.05, 47 cases reported an absolute change in the PM2.5 I/O ratio larger than 0.05, while all of them were smaller than 0.1. Conversely, when the efficiency was increased by 0.05, all the cases reported an absolute change in the PM2.5 I/O ratio smaller than 0.05.
It could be concluded that the filter efficiency is a relatively crucial parameter for predicting PM 2.5 and UFPs in the cabin, especially when the aged filter is simulated.
Ventilation airflow
The ventilation airflow from outside air (Qoa) and the ventilation airflow from recirculation (Qrec) in this model are simulated based on the vehicle climate model developed in a previous study (Nielsen et al. 2015), which uses the same control strategies as the climate control unit of the vehicles tested in the present study. Within the four airflow levels simulated, the Low airflow level is common in normal user setups and contains more available data samples. To investigate the influence of the airflow estimation on the model, the airflow rates were varied for all the Low cases in a sensitivity study. Qoa and Qrec were altered together by ± 10%, ± 30%, ± 50% and ± 70% of their original values. This also ensures that the recirculation degree (%), i.e. the relationship between the two, is maintained the same as in the original simulation. The results showed that the simulated particle concentration in each size bin (Cin), PM2.5 and UFPs, as well as the I/O ratios, were nearly unchanged for all the cases compared to the original simulation results.
To conclude, ventilation airflow variation within a common deviation range does not influence the simulation in this study to a high degree. One reason is that the influence of ventilation airflow on filter efficiency is not considered in this model due to limited data. Furthermore, when solving Eqs. (1) and (2), Qoa appears in both the source and loss terms and is mostly more than 10 times larger than the other airflow terms; thus, changing Qoa does not directly influence the particle results dramatically. When high recirculation and infiltration occur simultaneously, the particle simulation results would be more sensitive to the Qrec variation.
Infiltration and deposition
The infiltration airflow (Qinf) was estimated using vehicle characteristic values from relevant studies, as given in Appendix Table A2, for the two cars in this study (Lee et al. 2015b). The adopted leakage flow coefficient kf and pressure exponent n correspond to reported values for similar vehicle types and cabin volumes, and are relatively low. The validation results showed that the infiltration values (Qinf) were almost all zero in all 128 data samples, except for a few cases with Qinf of the order of 10⁻⁴ m³/s. This is expected since, in general, newer cars are assumed to have better sealing performance, and the measurement cases always had the ventilation fan on, which pressurizes the cabin. The cases with positive Qinf all have high recirculation degrees, where cabin pressurization from outside air (Qoa) is smaller.
In our validation data, the ventilation airflows from outside air (Qoa) are between 44 and 291 m³/h (12-81 L/s), and in 116 of 128 cases they are higher than 58 m³/h (16 L/s). The driving speeds are between 13 and 114 km/h in China (S90), with an average of 76 km/h, and are all zero in Gothenburg (XC90). Of the 128 cases, 120 are below 103 km/h. This supports that our results mainly agree with the studies by Lee et al. (2015a, b), who concluded that average outside air ventilation airflows between 58 and 133 m³/h could prevent infiltration when driving speeds are correspondingly below 103-123 km/h.
Although the adopted kf and n values were from similar vehicles and showed expected results, they were still varied in the sensitivity analysis to investigate the variation of Qinf.
Higher kf values from two other vehicle models (Lee et al. 2015b) were utilized, where a kf of 69.39 is the maximum among all vehicle models. Figure 11 presents the Qinf variations of all test cases using the various kf values. For comparison, the HVAC ventilation airflow (Qoa + Qrec) ranges of all test cases are shown; the range represents the four (Xlow to High) ventilation levels that were measured.
When kf equals 18.78, the results are similar to those of the original simulation. However, the infiltration is positive for some cases with driving speeds higher than 105 km/h, and these cases all have Xlow or Low fan settings, i.e. lower ventilation airflows. When kf is 69.39, a few cases above 80 km/h would give Qinf values almost equal to the ventilation airflow, and they also have lower fan settings. Since this highest kf value corresponds to cabin volumes and vehicle types different from our studied vehicles, it can be considered less relevant. To conclude, the simulation of infiltration flow would possibly be more sensitive under high-speed conditions, when the kf and n values deviate from those relevant for the studied vehicles.
The particle deposition rates β (h⁻¹) used in the simulation were taken from relevant studies, as given in Appendix Table A1. The results showed that deposition has a relatively small contribution, and the deposition flow Qdep was 30-170 times smaller than the ventilation airflow (Qoa + Qrec).
During the sensitivity analysis, the deposition rate β (h⁻¹) was varied within the literature-reported range of 0.5-12.6 h⁻¹ (Ott et al. 2008; Ding et al. 2016; Harik et al. 2017). A higher β leads to a higher Qdep. Even when using the highest value of 12.6 h⁻¹, Qdep is still on average 3-4 times smaller than the ventilation airflow for the two studied vehicles.
Fig. 11 Simulation of infiltration airflow Qinf (m³/s) vs. vehicle driving speed, when using different kf and n values, compared with the original simulation. At lower speeds of 0-80 km/h, the infiltration airflows are mainly all zero; thus, all the markers overlap at y = 0 in the figure. The HVAC ventilation airflow (Qoa + Qrec) range is given as the grey area for comparison. Data include analysis of all test cases (128 samples)
Sample modelling for air quality and energy use
Most of today's passenger cars have a climate system designed to heat up and cool down the cabin in a rather short period of time and then keep the temperature at a desired level. Possible future requirements on in-cabin air quality in passenger cars may require more advanced climate system controls, including sensors (e.g. for particle and CO2 concentrations). Such developments will likely rely on system simulation models that include the relevant parameters. The ventilation settings affect the particle and CO2 concentrations in the cabin and may also affect the energy used by the climate system in the car. For example, the air recirculation degree could potentially benefit energy use and reduce the particle concentration under certain outdoor conditions, since the HVAC-treated cabin air is reused. Meanwhile, it could also increase CO2 levels in the relatively confined cabin. The following is an initial example of how the developed model can be used to further investigate these relationships under common user-case outdoor conditions.
The studied example case has a measured indoor PM2.5 of 48 μg/m3 when using the installed aged filter, which is higher than the WHO-recommended 24-h mean of 25 μg/m3. The measured inside CO2 level is 968 ppm with two persons in the cabin. Given the guideline from ASHRAE (the American Society of Heating, Refrigerating and Air-Conditioning Engineers) (ASHRAE 2018) that inside CO2 levels should not be more than 700 ppm higher than the outdoor levels, the target value of CO2 is set to 1500 ppm for the vehicle cabin, which is also considered the reference in the development of the studied vehicle's climate strategy.
This case has the setting of aged filter, Low airflow rate and ionization off. The original measured outdoor particle distributions and outdoor temperature are used as model input. The in-cabin desired temperature is 22 °C, and the airflow rate is low (the same as in the measurement), while the recirculation degree was varied, as 0, 30%, 50% and 70%. The corresponding inside PM 2.5 , CO 2 and steady-state power consumptions are compared. The power consumption refers to the major components in the climate system, i.e. air compressor, blower, air heater and cooling fan.
The results are shown in Fig. 12. As recirculation increases, the PM2.5 in the cabin clearly decreases while the CO2 increases. Considering the CO2 target value of below 1500 ppm, 70% recirculation reduced the PM2.5 level below the target of 25 μg/m3 while maintaining an acceptable CO2 level. The blower and cooling fan consumptions under these settings vary only slightly. The heater power and compressor power were reduced by around 13% and 18%, respectively, as the recirculation increased from 0 to 70%.
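The kind of trade-off shown in Fig. 12 can be reproduced qualitatively with a generic steady-state, well-mixed cabin balance. The sketch below is a simplified stand-in for the paper's model; all numeric inputs (cabin volume, filter efficiency, deposition rate, CO2 generation and outdoor levels) are assumed placeholders, not values from this study.

```python
# Generic steady-state, well-mixed cabin balance: filtered outside air in;
# exfiltration, filter removal of recirculated air and deposition out (PM2.5);
# only outside-air exchange dilutes occupant CO2 generation.
V = 3.0            # m^3 cabin volume (assumed)
Q_SUP = 150.0      # m^3/h total supply airflow, Qoa + Qrec (assumed)
ETA = 0.6          # single-pass PM2.5 efficiency of the aged filter (assumed)
BETA = 1.0         # 1/h deposition rate (assumed)
C_OUT_PM = 60.0    # ug/m^3 outdoor PM2.5 (assumed)
CO2_OUT = 400.0    # ppm outdoor CO2 (assumed)
G_CO2 = 0.04       # m^3/h CO2 generated by two occupants (assumed)

for r in (0.0, 0.3, 0.5, 0.7):                 # recirculation degree
    q_oa, q_rec = (1 - r) * Q_SUP, r * Q_SUP
    pm_in = q_oa * (1 - ETA) * C_OUT_PM / (q_oa + ETA * q_rec + BETA * V)
    co2_in = CO2_OUT + 1e6 * G_CO2 / q_oa
    print(f"recirc={r:.0%}  PM2.5={pm_in:5.1f} ug/m^3  CO2={co2_in:6.0f} ppm")
```

With these placeholder inputs the qualitative trend matches the figure: PM2.5 falls and CO2 rises as the recirculation degree increases.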
However, advanced ventilation strategies will require further model development, as well as suitable driving cycles that enable evaluation of the value of such strategies.
Discussion
The filter efficiency in real road conditions depends on many factors. Outside particle concentration and distribution (Knibbs et al. 2009), pollutant sources (Kaur et al. 2005; Qiu et al. 2017), filter ageing status and ventilation airflow rate (Abi-Esber and El-Fadel 2013) are among the factors that influence the actual filtration in the vehicle. Thus, using laboratory component data obtained under exclusive standardized test conditions to estimate road performance inherently includes possible deviations. Component tests with extended scenarios would be beneficial for better predicting the filter performance. Specifically, regarding this study, filter efficiency and pressure drop data as a function of ageing status and airflow rate would improve the simulation of particle concentrations and of the fan power in the HVAC system.
Fig. 12 Simulated inside PM2.5, CO2 concentration and major climate power consumption at varied recirculation degrees (%). The studied example case has these settings: aged filter, Low airflow rate and ionization off.
The 500-h-aged filter was simulated using filter efficiency data available from companion aged filters of the same model type aged in different environments. The ageing sources of these filters are not the same, since they were all aged with outdoor pollutants instead of standardized dusts. This ageing method aims at achieving conditions as close to real road pollutants as possible, but it naturally introduces more variance and makes each aged filter not entirely the same. A better controlled ageing environment would help to improve repeatability, as well as provide meaningful data for further predictive use.
The influence of airflow rate, or face velocity, on filter efficiency was not investigated in this study due to the lack of corresponding data. Xu et al. (2011) reported that changing the vehicle fan level from 1 to 5 decreases the filtration efficiency by 10-20% for particles in the 10- to 50-nm size range. A further simulation considering this effect could be achieved either with more filter component data or with estimates based on such studies.
The CO2 concentration in the cabin is simulated based on a well-mixed assumption, while it was observed that mixing requires a longer time than the sampling period of 5-10 min. This would have contributed to the deviation, because the measured CO2 concentration had not yet reached its stabilized value. In a future study, the sampling period could be extended. On the other hand, the transient solution of the CO2 mass equation can be solved and compared with road measurements for further validation. It would also be of interest to test a 100% recirculation degree, to understand the CO2 accumulation in this condition and to provide inputs for designing the running duration of recirculation in the vehicle, for example during a quick heat-up period or in tunnel environments.
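For reference, the transient, well-mixed CO2 balance mentioned here has a simple closed-form solution. The sketch below uses assumed placeholder values for the cabin volume, outside-air flow and occupant CO2 generation rather than measured ones.

```python
import numpy as np

# Transient CO2 balance for a well-mixed cabin: V*dC/dt = Qoa*(C_out - C) + G.
# All numeric values are assumed placeholders, not measurements from the study.
V = 3.0          # m^3, assumed cabin volume
Q_OA = 60.0      # m^3/h, assumed outside-air flow
C_OUT = 400.0    # ppm, assumed outdoor CO2
G = 0.04e6       # ppm*m^3/h, assumed CO2 generation of two occupants
C0 = 400.0       # ppm, initial in-cabin level

c_ss = C_OUT + G / Q_OA                       # steady-state level
for t_min in (0, 5, 10, 20, 40):
    t = t_min / 60.0                          # hours
    c = c_ss + (C0 - c_ss) * np.exp(-Q_OA * t / V)
    print(f"t={t_min:3d} min  CO2={c:6.0f} ppm   (steady state {c_ss:.0f} ppm)")
```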
Conclusion
In this study, a vehicle cabin air quality model was developed for particles, including PM2.5 and UFPs, and for CO2. Particle mass and count concentrations for particle sizes between 10 nm and 2.5 μm were simulated. The model uses inputs such as outdoor particle/CO2 levels, vehicle speed, ventilation airflow (climate settings), filter status, ionization status and passenger numbers. A previously developed model for the same vehicle platform climate system was used to provide the airflow inputs to the air quality model. The filter efficiency is size dependent and varies according to filter age and ionization status. The study also estimates particle deposition and infiltration using vehicle characteristic parameters taken from corresponding studies.
Previous road tests with the two modelled vehicles were used to validate the model. The results show that, in general, the model simulation correlates well with the measured data with regard to PM2.5, UFPs, particle concentration per size channel and CO2, even though the filter data are incomplete.
Different filter statuses and ionization statuses exist in the validation data. In general, the estimations are good and reflect the road measurements, except for aged filters with ionization, which exhibit a relatively higher overestimation of particle concentrations. Regarding the different airflows, the predictions for Medium airflow are better, because the filter efficiency values adopted in the model were tested at airflows close to the Medium level. Looking at the individual size channels, the model predictions within the particle size range 52-352 nm mostly show a general overestimation of concentrations for both the new and aged filter groups.
The model was further used to investigate sample cases with different recirculation degrees, and the corresponding particle and CO2 concentrations were simulated. The climate model simulates the corresponding power consumption of the climate system. This indicates the usefulness of the model in providing inputs on the use of recirculation in the vehicle to improve both air quality and energy efficiency.
Further studies would benefit from improved aged-filter efficiency estimations, including the influence of airflow on filter efficiency.
Author contributions
All authors contributed to the study design. DW and FN contributed to the model development. The model validation, data analysis and writing of the first draft of the manuscript were performed by DW. All authors read and approved the final manuscript.
Funding Open access funding provided by Chalmers University of Technology. This project is funded by the Swedish Energy Agency (Energimyndigheten). They are not involved in the design, data collection, analysis and manuscript writing of this study.
Data availability
The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request.
| 11,167.8 | 2022-02-10 | ["Environmental Science", "Engineering"] |
Adaptive Output Tracking for Nonlinear Network Control Systems with Time-Delay
The problem of adaptive output tracking is studied for a class of nonlinear network control systems with parameter uncertainties and time-delay. In this paper, a new scheme is proposed to design a state-feedback controller for this system. To handle the time-delay and parameter uncertainty problems in network control systems, the backstepping recursive method is applied and Young's inequality is used to process the time-delay term of the system; a robust adaptive output tracking controller is thus designed to achieve robust control of a class of nonlinear time-delay network control systems. According to Lyapunov stability theory, Barbalat's lemma and the Gronwall inequality, it is proved that the designed state-feedback controller not only guarantees that the states of the system are uniformly bounded, but also ensures that the tracking error of the system converges to a small neighborhood of the origin. Finally, a simulation example for nonlinear network control systems with parameter uncertainties and time-delay is given to illustrate the robust effectiveness of the designed state-feedback controller.
Introduction
A network control system is a real-time closed-loop feedback control system composed of sensors, controllers, actuators, etc. The advantages of network control systems are their easy installation and maintenance and their high reliability and flexibility [1,2]. In recent decades, much progress has been made in the study of the stability of network control systems [3][4][5][6].
However, in the closed-loop control of a network control system, the data transmission process often produces time-delay. The time-delay of a network control system often affects the stability and performance of the system, and may even cause the instability of the entire system [7]. Therefore, the impact of the time-delay needs to be considered when studying network control systems and designing controllers. In [8], the authors analyzed the sources of time-delay in network control systems. For the time-delay problem of network control systems, a maximum allowable delay bound satisfying the stability requirement was proposed in [9], and the maximum delay caused by the network was estimated in [10]. For the design of controllers for network control systems, the authors in [11] discussed an adaptive control scheme for a class of uncertain systems, and in [12] the authors analyzed the robust stability of networked control systems with uncertainty. Although some progress has been made for linear network control systems, nonlinear network control systems with parameter uncertainties and time-delay still need to be studied. For example, in [13][14][15][16][17], the authors study the problems of adaptive robust control for uncertain systems and high-order uncertain nonlinear systems, and analyze the stability of the systems by Lyapunov stability theory. However, these papers did not consider systems with time-delay.
Therefore, in this paper, the system is modeled as a class of nonlinear network control systems with parameter uncertainties and time-delay. A new scheme is proposed to design a controller for this system, and a robust controller is designed by using the backstepping method. According to Lyapunov stability theory, Barbalat's lemma and the Gronwall inequality, it is proved that the designed controller not only guarantees that the states of the nonlinear network control system with parameter uncertainties and time-delay are uniformly bounded, but also ensures that the tracking error of the system converges to a small neighborhood of the origin. The rest of the paper is organized as follows: in Section 2, a class of nonlinear network control systems is introduced, and the assumptions and lemmas are proposed. In Section 3, the controller is designed by using the backstepping method. In Section 4, a simulation example is presented. Finally, a conclusion is given in Section 5.
Problem Description
In this paper, we consider a class of nonlinear network control systems with parameter uncertainties and time-delay; this system is described as , where are respectively the states, the control input and the system output, is an unknown smooth function, τ is the time-delay, and τ ≥ 0.
The objective of this paper is to design an adaptive feedback controller. The designed controller ensures that the states of the closed-loop system are bounded and that the output trajectory y(t) can asymptotically track the reference signal y r (t).
Assumption 1 For the smooth function Assumption 2 Since h i (0) = 0, h i (x 1 (t)) can be expressed as h i (x 1 (t)) = γ i (x 1 (t)), and γ i (x 1 (t)) satisfies the following assumption, where p i (x 1 (t)) is a known and sufficiently smooth function.
Lemma 1 If the real numbers a ≥ 0, b ≥ 0 and m ≥ 1, then the following inequality holds . Proof: for any real numbers x ≥ 0, y > 0, n > 0, by Young's inequality we have , from which Lemma 1 follows.
Adaptive Controller Design
In this section, using the backstepping recursive method, we design a robust adaptive output tracking controller. The design idea of this method is as follows: for the i-th equation of the system, a suitable Lyapunov function is constructed and a virtual control law α i is designed such that the subsystem consisting of the first i equations is stable; therefore, in step n, the designed controller u that stabilizes the system consisting of all n equations is the actual controller that makes the closed-loop control system globally stable.
Step 1 The reference signal y r is a smooth and bounded function, and its derivative is also bounded. The output tracking error is defined by , and is the estimate of the unknown constant parameter θ. Calculating the derivative of V 1 along system (1), we have . Because is bounded, there exists a non-negative smooth function . By Lemma 1, for any real number σ greater than zero, let . By using Young's inequality, with constant ξ 1 > 0 we have , where 1 (•) is a smooth function that is greater than zero.
Thus we arrive at , where η 1 = 0.
Step 2 Let ε 2 = x 2 − α 2 and construct the Lyapunov function as . Calculating the derivative of V 2 along system (1), we have . There exists a non-negative smooth function . By Lemma 1, let . Because , combined with Lemma 1, there exists a smooth function . By using Young's inequality, with constants ξ 2 > 0 and μ 2 > 0, we have . Then we have . Substituting (7), (8), (9) into (6), we have , where 2 (•) is a smooth function that is greater than zero.
By Assumption 1, we have . Step i After the recursive design of step i−1, we obtain a group of smooth virtual controllers as . Construct the Lyapunov function as . Similar to step 2, we can prove that (10) also holds in step i.
Construct the Lyapunov function as . Its derivative is given by . There exists a non-negative smooth function w i (•) that satisfies . By Lemma 1, there exists a smooth function . Similar to step 2, there exists a smooth function . By using Young's inequality, with the constants , we have , where i (•) is a smooth function that is greater than zero.
By Assumption 1, we have . Step n After repeated recursion and proof, in step n, construct the Lyapunov function as
Its derivative is given by
From (14), we can obtain the adaptive control law u and the parameter update law as follows, where n (•) is a smooth function that is greater than zero.
Then, we have . When n is large enough, we have , so that the entire design procedure is reasonable. Theorem 1 Considering the closed-loop system (1), under the above assumptions and lemmas, there exist a state-feedback control law u and a control-law parameter . The closed-loop system is bounded for all allowable uncertainties, and the output tracking error converges to a relatively small region, which satisfies . In summary, for any real number ε 0 > 0 and finite time T > 0, the closed-loop system satisfies
Simulation Example
In order to show the effectiveness of the design scheme, we choose the following nonlinear network control system with parameter uncertainties and time-delay: . In the simulation, for the closed-loop system (17), we choose the reference signal y r (t) = sin t, time-delay τ = 0.01 s, θ = 0.2, ξ 1 = 1, ξ 2 = 2, σ = 0.02, λ = 1, and the initial conditions x 1 (0) = 1, x 2 (0) = 0.5, (0) = 0.1. According to (15) and (16), the control law u and the parameter update law are obtained. The simulation results are shown in Figures 1 and 2. It can be observed that the output of the closed-loop system tracks the reference signal well, and the tracking error converges to a small neighborhood of the origin. Therefore, the robust adaptive controller is effective.
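Since the system equations and the explicit controller expressions are not reproduced above, the following is a generic adaptive-backstepping sketch for a simple second-order strict-feedback system tracking y_r = sin t. It illustrates the overall design pattern (tracking-error coordinates, virtual control, adaptation law) rather than the paper's exact controller, and it omits the time-delay term that the paper handles via Young's inequality; the gains and the nonlinearity f are illustrative assumptions.

```python
import numpy as np

# Generic adaptive-backstepping sketch for
#   x1' = x2,  x2' = u + theta * f(x1),  y = x1,
# tracking y_r(t) = sin t.  NOT the paper's exact system or controller; the
# time-delay term is omitted and f, k1, k2, gamma are chosen for illustration.
k1, k2, gamma = 2.0, 2.0, 5.0      # design gains (illustrative)
theta = 0.2                        # true (unknown) parameter, as in the example
f = lambda x1: x1**2               # assumed nonlinearity, for illustration only

dt, T = 1e-3, 15.0
x1, x2, theta_hat = 1.0, 0.5, 0.1  # initial conditions from the example
for i in range(int(T / dt)):
    t = i * dt
    yr, yr_d, yr_dd = np.sin(t), np.cos(t), -np.sin(t)
    z1 = x1 - yr
    alpha1 = -k1 * z1 + yr_d                  # virtual control for x2
    z2 = x2 - alpha1
    alpha1_d = -k1 * (z2 - k1 * z1) + yr_dd   # time derivative of alpha1
    u = -k2 * z2 - z1 - theta_hat * f(x1) + alpha1_d
    theta_hat += dt * gamma * z2 * f(x1)      # adaptation law
    # plant update (forward Euler)
    x1_new = x1 + dt * x2
    x2_new = x2 + dt * (u + theta * f(x1))
    x1, x2 = x1_new, x2_new
print(f"tracking error at t={T:.0f}s: {x1 - np.sin(T):+.4f}, theta_hat={theta_hat:.3f}")
```

With this standard design the Lyapunov derivative reduces to −k1·z1² − k2·z2², so the tracking error converges to zero for the delay-free sketch, mirroring the qualitative behaviour reported for the full controller.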
Conclusion
By using the backstepping method, we design a controller for a nonlinear network control system with parameter uncertainties and time-delay. Through theoretical analysis, it is shown that the designed robust adaptive output tracking controller is feasible. The simulation results further express the effectiveness of the scheme.
Acknowledgements part by the Natural Science
| 2,100 | 2012-09-28 | ["Mathematics", "Computer Science"] |
Physicochemical Performance of Collagen Modified by Melissa officinalis Extract
: Collagen-based materials are widely used as adhesives in medicine and cosmetology. However, for several applications, their properties require modification. In this work, the influence of Melissa officinalis on the properties of collagen films was studied. Collagen was extracted from Silver Carp skin. Thin collagen films were prepared by solvent evaporation. The structure of films was researched using infrared spectroscopy. The surface properties of films were investigated using Atomic Force Microscopy (AFM). Mechanical properties were measured as well. Antioxidant activity was determined by spectrophotometric methods using DPPH free radicals, FRAP, and CUPRAC methods. Total phenolic compounds were determined by the Folin – Ciocalteau method. It was found that the addition of Melissa officinalis modified the roughness of collagen films and their mechanical properties. Moreover, the obtained material has antioxidant properties. The parameters mentioned above are very important in potential applications of collagen films containing Melissa officinalis in cosmetics.
Introduction
Collagen is one of the most suitable biomaterials due to its excellent biocompatibility, biodegradability, natural origin, and low antigenicity. It is used in medical applications such as drug delivery systems, material matrices, and scaffolds in tissue engineering [1][2][3][4][5]. Collagen is the main component of the extracellular matrix, containing fibrils and microfibrils that enable cell attachment and migration within the materials as well as modifying their mechanical properties [6]. From a biomaterial and cosmetic perspective, the most important types of collagen are type I, which constitutes the major component of skin, ligament, and tendon tissue; type II, the cartilage collagen; and type III, which is prominent in blood vessels [7][8][9]. Evaluating the suitability of a material depends on the role and function that the potential device needs to perform. The structure of collagen type I is described by three chains that form a triple-helical conformation. Each polypeptide chain adopts a polyproline II type helix, which is stabilized by the amino acid content, and the chains wind together in a right-handed sense to form the triple helix. The amino acid chain can be described as a Gly-X-Y sequence, where X is typically proline, while Y is represented by hydroxyproline [10][11][12][13][14][15]. The resistance of collagen requires enhancement and, optionally, the addition of functional substances [16]. Nowadays, the possibilities of green chemistry are vast and may offer several solutions. Testing new natural cross-linkers may lead to new solutions that can meet the expectations of modern collagen biomaterials. However, the structure of modified collagen is both plant-dependent and type-of-extract-dependent, so it is necessary to research each plant extract and its influence on collagen properties separately.
The antioxidative potential of lemon balm has been documented and is attributed to its chemical composition [31][32][33][34]. Melissa officinalis extract contains flavonoids, gallic acid, phenolic acids, and rosmarinic acid. Focusing on Melissa officinalis leaf extract, the main flavonoids are quercetin, rhamnocitrin, and luteolin. Polyphenolic compounds found in Melissa officinalis leaf extract include caffeic acid, protocatechuic acid, and rosmarinic acid [35]. Mono-, sesqui-, and triterpenes are also present, alongside tannins and essential oils [36].
The antiviral effect of Melissa officinalis against HSV-1 and HSV-2, due to the presence of monoterpenaldehydes and citronellal, has been examined. The results have shown that melissa affects the virus before its adsorption onto the host cell [37]. This effect is also due to the caffeic, rosmarinic, and ferulic acids present in melissa.
Topically, melissa has also been tested in balm form, which turned out to be effective in the treatment of herpes simplex infection. It prevented the infection from spreading and also alleviated symptoms such as itching, straining, and redness of the skin [38,39]. As the research has shown, a hydroalcoholic extract of lemon balm leaves indicated activity against herpes simplex virus type 2. The antiviral activity was compared to that of acyclovir. It has also been reported that Melissa officinalis extract reduced the cytopathic effect of HSV-2 at non-toxic concentrations [40].
The antimicrobial activity of Melissa officinalis has been documented against Escherichia coli, Pseudomonas aeruginosa, Proteus mirabilis [41], and the resistant strain Shigella sonnei [42]. Ethanol, water, or ethyl acetate extracts of Melissa officinalis enhance the antibiotic activity of streptomycin, amoxicillin, tetracycline, and chloramphenicol. The increased biological activity is attributed to the phenol and flavonoid content [43][44][45].
From the biomaterial perspective, the most desirable functions of the adhesive material are antimicrobial, antiviral, antifungal, and antioxidative activity, which can be provided by the addition of the Melissa officinalis extract. Additionally, the anti-inflammatory properties of the mentioned natural extract help to reduce potential swelling [46].
Melissa officinalis has various applications in the pharmacognosy, cosmetics, and biomaterial fields due to its antimicrobial, antifungal, antiviral, and antioxidative properties. In cosmetic formulations, melissa protects the skin from oxidative stress, irradiation, and blue light. Due to its high content of rosmarinic acid, melissa shows antioxidant activity by reducing ROS, which prevents UV damage. Polyphenols and flavonoids in Melissa officinalis demonstrate radical scavenging activity comparable to that of ascorbic acid, whereas the tyrosinase inhibitory activity of melissa extract was higher than that of arbutin. This makes Melissa officinalis an effective antioxidant, anti-inflammatory, and whitening cosmetic ingredient.
The aim of this research was to prepare collagen materials modified by Melissa officinalis. The above-mentioned materials are designed for cosmetic applications. In this work, fish collagen was used, which is already commercially applied in cosmetic products. To the best of our knowledge, the influence of Melissa officinalis on collagen properties has not been studied yet.
Materials
Fish collagen was delivered by WellU sp.z.o.o. (Gdynia, Poland). Such collagen is used in cosmetic formulations. Melissa officinalis dry extract was delivered by Greenvit (Poland).
Collagen Solution Preparation
Collagen (Col) was extracted from Silver Carp skin. Residues such as fat tissue, meat, or scales were removed manually, and the skin was washed with chilled tap water to get rid of the clinging tissues. Then, the material was disinfected with a 3% hydrogen peroxide water solution, and the residues were rinsed away thoroughly. The cleaned skin was placed in 0.1 M acetic acid solution and left for three days to extract the collagenous proteins. The obtained solution was pressed through a properly chosen material, which allowed for collagen separation [47]. In the next stage, the samples were lyophilized (ALPHA 1-2 LDplus, CHRIST, −20 °C, 100 Pa, 48 h); the lyophilized collagen was then dissolved in 0.1 M acetic acid at a concentration of 5 mg/mL.
Melissa Solution Preparation
First, 0.3702 g of the dry Melissa officinalis (ML) extract was transferred quantitatively into a 10 mL volumetric flask, filled to 10 mL with water, and mixed until dissolved. The collagen solution prepared in the previous stage was then transferred to a 25 mL volumetric flask. Melissa water solution in a volume of 1 mL was added to the collagen solution and mixed. The amount of melissa extract in collagen was 29.62%.
Film-Forming Stage
The collagen solution (as control) and the mixed collagen-melissa solution were poured into appropriate plates, after first checking that the surfaces were level. The solutions filled the plates evenly. The samples were then left to dry for six days. The dried films were carefully detached from the plates, and their properties were investigated.
Infrared Spectroscopy (IR)
Infrared spectra were examined by Nicolet iS10 spectrophotometer equipped with an ATR device with a germanium crystal (Thermo Fisher Scientific, Waltham, MA, USA). All the spectra were recorded with the resolution of 4 cm −1 with 64 scans. The spectra were evaluated in the range of 400-4000 cm −1 . The data were obtained using the Omnic Spectra 2009 program.
Scanning Electron Microscopy (SEM)
Scanning Electron Microscopy (SEM) was carried out with a scanning electron microscope (LEO Electron Microscopy Ltd., Cambridge, England, UK). Micrographs of all samples were taken at 300× magnification.
Energy-Dispersive X-ray Spectroscopy (EDX)
Energy-Dispersive X-ray Spectroscopy (EDX) was performed using an Energy-Dispersive X-ray Spectrometer EDX Quantax 200 with an XFlash 4010 detector (Bruker AXS, Germany) to assess the elemental composition of the material.
Atomic Force Microscopy (AFM)
The surface structure of the collagen/melissa materials was examined by Atomic Force Microscopy. The images were obtained with a MultiMode Scanning Probe Microscope Nanoscope IIIa (Digital Instruments, Veeco Metrology Group, Santa Barbara, CA, USA) operating in tapping mode, in air, at room temperature. Surface images were acquired at a fixed resolution (512 × 512 data points) with a scan rate of 1.97 Hz. Silicon tips with a spring constant of 2-10 N/m were used. Roughness parameters were calculated from 10 μm × 10 μm scanned areas using Nanoscope software.
Mechanical Properties
The shaped pieces cut from the collagen and collagen-melissa films were prepared using a manual press Optimum DDP10 (Germany). Mechanical properties of the collagen and collagen/melissa films, such as Young's modulus and tensile strength, were tested using a Zwick&Roell Z0.5 testing machine under constant conditions at room temperature. Parameters of the program: 200 mm/min starting-position speed, 0.1 N initial force, 5 mm/min speed of the initial force. In this study, seven samples of each kind of film were measured to evaluate the average mechanical parameters.
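For context, the sketch below shows how tensile strength and Young's modulus are typically extracted from a recorded force-displacement curve; the sample geometry and the synthetic data are invented placeholders, not measurements from this study.

```python
import numpy as np

# Extracting tensile strength and Young's modulus from a force-displacement curve.
# Sample geometry and the force-displacement data below are invented placeholders.
width, thickness, gauge_length = 4.0e-3, 0.05e-3, 20.0e-3   # m (assumed)
area = width * thickness                                     # cross-section, m^2

displacement = np.linspace(0, 1.0e-3, 200)                   # m (synthetic)
force = np.clip(8.0 * displacement / 1.0e-3, 0, None)        # N (synthetic)

stress = force / area                    # Pa
strain = displacement / gauge_length     # dimensionless
tensile_strength = stress.max() / 1e6    # MPa, maximum stress before failure
# Young's modulus from the initial (here: first 20%) quasi-linear region
E = np.polyfit(strain[:40], stress[:40], 1)[0] / 1e9          # GPa
print(f"tensile strength = {tensile_strength:.1f} MPa, E = {E:.3f} GPa")
```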
Preparation of Samples
Collagen film and collagen film with lemon balm (Melissa officinalis) extract weighing from 0.0030 to 0.0060 g were placed in a 10 mL graduated flask. About 5 mL of 0.1 M acetic acid solution was added to dissolve the samples. The samples were placed in an ultrasonic bath for approximately 0.5 h. After dissolving, the contents of the flask were diluted with distilled water to the mark.
Spectrophotometric Method for Determination of the Total Polyphenols Content Using the Folin-Ciocalteu Reagent (F-C Method)
A Shimadzu UV-1601 (Japan) double-beam UV-Vis spectrophotometer was used to measure the absorbance. The following reagents were used: Folin-Ciocalteu reagent, caffeic acid (50 µg/mL), and Na2CO3 (0.13 g/mL). The measurements were done in standard glass cuvettes.
Preparation of the Calibration Curve
To the 10 mL volumetric flask 0.00, 0.10, 0.20, 0.30, 0.60, 0.70, and 0.80 mL of 50 µg/mL caffeic acid solution were added. Then 0.5 mL of Folin's reagent was added and set aside in a dark place for 5 min. After this time, 4 mL of water was added, mixed, and 1 mL of a sodium carbonate solution was added. The flasks were made up to the mark with water. The absorbance of the sample was measured after 30 min at λ = 725 nm against a blank reference (0.5 mL F-C reagent + 1 mL Na2CO3 solution and make up to 10 mL with distilled water). On the basis of the measurement and the obtained results, the dependence of absorbance on the concentration of caffeic acid was plotted.
Sample Analysis
The volume of 1 mL of the previously prepared collagen film solution and collagen film with lemon balm extract solution was taken into 10 mL volumetric flasks, 0.5 mL of the F-C reagent was added and left in a dark place. After 3 min, 1 mL of Na2CO3 solution was added and made up to the mark with distilled water. After 30 min, the absorbance at λ = 725 nm was measured against a reference blank. For each tested film, five parallel determinations were made.
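As an illustration of how the calibration line is used, the sketch below fits absorbance against the standard concentrations and reads back unknown film-extract concentrations from the fitted line; the absorbance values are invented placeholders, not the measured data of this study.

```python
import numpy as np

# Calibration line (absorbance vs. standard concentration) and read-back of
# unknown sample concentrations. All absorbance values are invented placeholders.
conc_std = np.array([0.5, 1.0, 1.5, 3.0, 3.5, 4.0])       # ug/mL caffeic acid (example)
abs_std = np.array([0.06, 0.12, 0.19, 0.37, 0.44, 0.50])   # A at 725 nm (example)

slope, intercept = np.polyfit(conc_std, abs_std, 1)
r2 = np.corrcoef(conc_std, abs_std)[0, 1] ** 2
print(f"A = {slope:.4f}*c + {intercept:.4f}   (R^2 = {r2:.4f})")

abs_sample = np.array([0.21, 0.22, 0.20, 0.23, 0.21])      # five parallel readings
conc_sample = (abs_sample - intercept) / slope             # ug/mL in the cuvette
print(f"mean = {conc_sample.mean():.2f} +/- {conc_sample.std(ddof=1):.2f} ug/mL")
```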
Determination of Antioxidant Activity by FRAP Method
For the determination of antioxidant capacity by the FRAP method, the UV-Vis spectrophotometer mentioned previously was used. The following reagents were used: acetic buffer solution, pH = 3.6; 20 mM iron(III) chloride solution; and 10 mM solution of 2,4,6-tripyridyl-s-triazine (TPTZ). The MR-FRAP reaction mixture was prepared as follows: 25 mL of the acetic buffer solution at pH 3.6 was pipetted into a 50 mL beaker, followed by 2.5 mL of TPTZ solution (10 mmol/L) and 2.5 mL of iron(III) chloride solution (20 mmol/L). All the reagents were mixed and incubated at 40 °C for 15 min. A 0.001 M solution of 6-hydroxy-2,5,7,8-tetramethylchroman-2-carboxylic acid (Trolox) was used as the standard.
Preparation of the Calibration Curve
Into 10 mL volumetric flasks, 0.05, 0.10, 0.15, 0.20, and 0.25 mL of the Trolox solution at a concentration of c = 0.001 M was pipetted. Then, 2 mL of the reaction mixture was pipetted into each of them and made up to the mark with distilled water. The prepared solutions were left for 20 min in a dark place. After this time, the absorbance of the solutions was measured at the wavelength λ = 593 nm, using the blank as a reference.
Sample Analysis
Into 10 mL volumetric flasks, 3 mL of analyzed solution and 2 mL of the reaction mixture were added and next they were filled up to the mark with distilled water. The prepared solutions were placed for 15 min in a dark place. After this time, the absorbance of the solutions was measured at the wavelength λ = 593 nm, using the blank as a reference.
Determination Antioxidant Activity by CUPRAC Method
For the determination of antioxidant capacity by the CUPRAC method, the UV-Vis spectrophotometer previously mentioned was used. The following reagents were used: 0.0075 M neocuproine solution, 0.01 M copper chloride solution, 1 M acetate buffer (pH = 7.0), and caffeic acid solution at a concentration of 50 mg/L as standard.
Preparation of the Calibration Curve
The volume of 2 mL of copper(II) chloride solution, neocuproine solution, and acetate buffer were pipetted into 10 mL volumetric flasks. Then 0.05, 0.10, 0.25, 0.30, and 0.35 mL of caffeic acid was added and made up to the mark with distilled water. The flasks were placed in a dark place for 30 min. After this time, the absorbance was measured at a wavelength of λ = 450 nm against the blank.
Sample Analysis
For the measurement of the antioxidant activity of the studied films, 2 mL copper(II) chloride, neocuproine, and buffer were pipetted into 10 mL volumetric flasks. Then, 2 mL of the tested film solutions were added to the flasks. The flasks were made up with distilled water and set aside in a dark place for 30 min. After this time, the absorbance was measured at a wavelength of λ = 450 nm against the blank.
Preparation of the Calibration Curve
In order to prepare a calibration curve, the following volumes of Trolox were pipetted into 10 mL volumetric flasks: 0.00, 1.00, 4.00, 7.00, 8.00, and 10.00 mL. Then the flasks were made up to volume with ethanol. Next, 1.5 mL of ethanol, 0.5 mL of the previously prepared DPPH solution, and 0.5 mL of each Trolox solution of increasing concentration were added to plastic measuring cuvettes. A blank test was also made by adding 2 mL of ethanol and 0.5 mL of DPPH solution to the measuring cuvette. The solutions prepared in this way were placed for 15 min in a dark place. After this time, the absorbance was measured at a wavelength of λ = 517 nm. Ethanol was used as a reference. In order to draw the calibration curve, the percentage of the scavenged radical was calculated, which is expressed by the formula: %scavenging = ((A0 − An)/A0) × 100%, where A0 is the absorbance of the blank sample (Trolox volume = 0.00 mL) and An is the absorbance of the sample.
Sample Analysis
In order to test the antioxidant activity of the tested collagen films, 1.5 mL of ethanol, 0.5 mL of DPPH solution and 0.5 mL of the tested solution were pipetted into plastic cuvettes. A blank test was also performed by measuring 2 mL of ethanol and 0.5 mL of DPPH solution into a plastic cuvette. The blank test was performed separately for each measurement. The solutions prepared in this way were placed in a dark place for 15 min. After this time, the absorbance against ethanol as reference was measured at a wavelength of λ = 517 nm.
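A short sketch of this calculation follows: the percent scavenging relative to the blank, and a read-back against an assumed Trolox calibration line. All numbers, including the fitted slope and intercept, are illustrative placeholders.

```python
# Percent DPPH scavenging relative to the blank, %inh = 100*(A0 - An)/A0,
# followed by conversion to Trolox equivalents via an assumed calibration line.
a0 = 0.820                                    # blank absorbance at 517 nm (example)
a_sample = [0.512, 0.498, 0.505]              # film-extract readings (example)
inhibition = [100 * (a0 - a) / a0 for a in a_sample]
print([round(i, 1) for i in inhibition])      # percent scavenging per replicate

# assumed Trolox calibration: %inh = m*c_trolox + b (fit as in the earlier sketch)
m, b = 310.0, 1.5                             # %/(mM) and %, placeholder fit values
trolox_eq = [(i - b) / m for i in inhibition] # mM Trolox equivalents
print([round(c, 3) for c in trolox_eq])
```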
Physicochemical Properties
To confirm the presence of melissa in the collagen films, infrared spectra were recorded. The IR band assignments are discussed in Table 1, and the IR spectra are shown in Figure 1. The band at 3291 cm−1 represents amide A (N-H stretching) and OH groups in collagen [48][49][50][51][52]. Collagen bands appear at 1631 cm−1 for amide I (C=O bond), at 1541 cm−1 for amide II (N-H), and at 1233 cm−1 for amide III (C-N). As can be seen in Figure 1, for the collagen/melissa sample the following bands are observed: amide A and O-H groups at around 3298 cm−1 and a new band at 1377 cm−1. The new band may represent O-H bending in carboxylic acids and O-H stretching due to the phenol groups in gallic acid, phenolic acids, and rosmarinic acid. For collagen with the addition of melissa extract, a shift of the amide A band was observed (from 3291 cm−1 to 3298 cm−1). This may suggest an interaction between collagen and the extract components via hydrogen bonds. In fact, the melissa extract contains flavonoids, gallic acid, phenolic acids, and rosmarinic acid, which can form several hydrogen bonds with collagen molecules [35][36][37]. The shift of amide A can also be caused by a different amount of water bound to collagen in the presence of melissa. For the amide I and amide II bands, no shift was observed. This fact may suggest that the secondary structure of collagen type I was not changed after the addition of melissa to the collagen solution. The structure of collagen before and after the addition of melissa is, in general, very similar, except for the new band at 1377 cm−1.
Morphological Properties
For investigation of the surface structure of the prepared films, SEM and EDX analyses were performed. Figure 2 represents the SEM image of the collagen/melissa film. The film exhibits a smooth and heteroclite surface, and the collagen fibrils are in a loose conformation. Energy-Dispersive X-ray Spectroscopy (EDX) was conducted to examine the elemental composition of the material. The percentage range of elements in the sample was measured and is presented in Figure 3, with the signal given in counts per second per electron-volt over the accelerating voltage range (keV) of the EDX analysis. The mean mass percentage of the C element in the Col/ML sample was 38.34%, whereas for N it was 19.77% and for O it was 39.72%. The mass percentages in the control sample (collagen) were similar, indicating 37.79% for the C element, 21.22% for N, and 39.76% for O.
Atomic Force Microscopy (AFM)
Atomic Force Microscopy (AFM) was used to assess the surface structure of the materials. The AFM image of the surface of the collagen film is shown in Figure 4. In Figure 5, one can see the AFM visualization of the surface of the collagen/melissa (Col/ML) film. The root mean square average of height deviations (Rq) was measured for each film. The Rq value of the collagen film was 172.05 nm, while that of the melissa-incorporated collagen film was 170.9 nm. The values of Ra (the arithmetic roughness average of the surface) for the collagen film and the collagen film with melissa were 205.9 nm and 140 nm, respectively. These results indicate that the addition of melissa changes the superficial properties of collagen films; the roughness of the collagen films changes markedly as a result of melissa addition.
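For reference, Ra and Rq are computed from the AFM height map as the mean absolute and root-mean-square deviations from the mean plane. The sketch below uses a synthetic height map rather than the measured AFM data.

```python
import numpy as np

# Roughness parameters from a height map: Ra is the mean absolute deviation and
# Rq the root-mean-square deviation from the mean plane. Synthetic data only.
rng = np.random.default_rng(0)
z = rng.normal(0.0, 170.0, size=(512, 512))   # nm, synthetic 512x512 height map

z_centred = z - z.mean()
ra = np.mean(np.abs(z_centred))               # arithmetic average roughness
rq = np.sqrt(np.mean(z_centred**2))           # root-mean-square roughness
print(f"Ra = {ra:.1f} nm, Rq = {rq:.1f} nm")
```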
Mechanical Properties
The tensile strength of the collagen and melissa-incorporated collagen films was examined (Table 2). The arithmetic mean from seven samples was assessed for the collagen film (41.7 MPa) and the melissa-incorporated collagen film (9.8 MPa), which indicates that the addition of melissa decreases the mechanical properties of collagen films. The mean Young's modulus for the collagen film was 0.627 GPa, whereas for the melissa-incorporated collagen film it was 0.321 GPa (Table 2). Tensile stiffness is therefore weaker in the collagen film with the addition of melissa. The decrease of the mechanical properties of collagen films after melissa addition indicates that the structure of the material has been changed. The mechanical properties of the collagen film modified with melissa extract are much worse than those of the pure collagen films. This may suggest that the hydrogen bonds between the components of the melissa extract are stronger than the hydrogen bonds between the melissa extract and collagen. The above results show that the addition of melissa to collagen does not improve the mechanical properties of the collagen film. However, melissa is known for its antioxidant properties [53,54]. In the next step, the antioxidant properties of the melissa-incorporated collagen films were assessed.
Spectrophotometric Method for Determination of the Total Polyphenols Content of Using the Folin-Ciocalteu Reagent (F-C Method)
The Folin-Ciocalteu method is used to determine the content of phenolic compounds. The phenolic concentration can be read from the gallic acid (or caffeic acid) calibration curve, which is used as the phenol reference standard [55]. The reaction of gallic acid with molybdenum, a component of the Folin-Ciocalteu reagent, is presented in Figure 6. It is a simple and sensitive method; however, it is not selective, and the reaction is slow at low pH. The Folin-Ciocalteu reagent can react with various compounds contained in the sample, especially sugars, aromatic amines, sulfur dioxide, ascorbic acid, and many other phenolic and non-phenolic compounds (e.g., amino acids, hydrazine, proteins, urea), which may ultimately affect the final result of the analysis of polyphenolic compounds [55,56]. The most important stage in this method is the preparation of a proper calibration curve for the experiment. For the Folin-Ciocalteu method, the data obtained for the calibration curve are collected in Table 3. Based on the obtained results, a standard curve was drawn as the dependence of the absorbance value on the concentration of caffeic acid (Figure 7). In Table 4, the statistical parameters of the calibration curve are presented. Based on the parameters of the reference curve, the polyphenol content in terms of caffeic acid equivalents in the tested samples was calculated. The results are presented in Table 5.
Determination of Antioxidant Activity by FRAP Method
The FRAP (ferric ion reducing antioxidant parameter) method was proposed by Benzie et al. in 1996 to determine the antioxidant activity of plasma, and a few years later, it was used to study plant antioxidants [56]. It is based on the determination of AA through the ability to reduce Fe 3+ ions to Fe 2+ ions under the influence of an antioxidant, and Fe(II) is complexed by TPTZ (2,4,6-tripyridyl-S-triazine) (Figure 8). The reduction reaction leads to the formation of a blue complex (λmax = 595 nm) [55,57].
AA is determined by comparing the value of the change in absorbance of the analyzed sample and the standard solution. The FRAP unit determines the ability to reduce 1 mole of Fe(III) to Fe(II). The change in the absorbance value is linear over a wide range of concentrations, which is an advantage of this method [57]. The optimum pH for this method, necessary to stabilize the iron ions, is 3.6, and the redox potential of the samples must be lower than 0.7 V, the approximate redox potential of the Fe(III)-TPTZ/Fe(II)-TPTZ couple. The FRAP method does not require time-consuming sample preparation, is simple and quick to perform, and ensures repeatability of the obtained results. FRAP has been used in the determination of the antioxidant capacity of cells and tissues; however, it cannot measure the main thiol antioxidant, glutathione. Moreover, Fe(II) ions are easily oxidized, creating a very harmful OH• radical [56].
The results obtained for the reference curve have been shown in Table 6. Based on the obtained results, the dependence of the absorbance value on the concentration of Trolox was plotted (Figure 9). In Table 7, the statistical parameters of the calibration curve are presented. Based on the parameters of the calibration curve, the total antioxidant content in terms of Trolox equivalent in the tested samples were calculated. The results have been shown in Table 8.
Determination Antioxidant Activity by CUPRAC Method
The CUPRAC (cupric ion reducing antioxidant capacity) method is based on the same operating mechanism as the FRAP method. In the CUPRAC method, copper ions are reduced instead of iron ions. Under the influence of antioxidants in the tested sample, the copper(II) complex is reduced to a colored copper(I) complex, for which the absorbance value is measured spectrophotometrically. In this method, two compounds are used interchangeably: a) bathocuproine (2,9-dimethyl-4,7-diphenyl-1,10-phenanthroline) as a copper(I) complexing compound in a ratio of 2:1 to form an orange complex with an absorption maximum at 490 nm; b) neocuproine (2,9-dimethyl-1,10-phenanthroline) forming yellow-orange complexes with copper(I), the highest absorbance of which is at 450 nm [58] (Figure 10). Antioxidant activity is expressed as the amount of uric acid, caffeic acid equivalents in the sample or in Trolox equivalents. The redox potential of the Cu(II) -Nc/Cu(I) -Nc complex is 0.6 V and is higher than the standard Cu(II)/Cu(I) −0.16 V potential, which positively affects the speed and efficiency of polyphenol oxidation. The CUPRAC method is quick, easy, and selective. It allows the determination of hydrophobic and hydrophilic antioxidants as well as compounds contained in samples of plant origin and thus well reflects the total power of antioxidants contained in the sample. It does not require the use of an acidic reaction medium (such as FRAP) or basic (as in the F-C method) [55,58].
The results obtained for the calibration curve by the CUPRAC method are shown in Table 9. Based on the obtained results, the dependence of the absorbance value on the concentration of caffeic acid was plotted (Figure 11: the calibration curve for the CUPRAC method, absorbance vs. caffeic acid concentration). In Table 10, the statistical parameters of the calibration curve are presented. Based on the parameters of the calibration curve, the total antioxidant content in terms of caffeic acid equivalents in the collagen and collagen/melissa films was calculated and is presented in Table 11.
Determination of Antioxidant Activity by the DPPH Method
This method uses a strong and stable DPPH radical (2,2′-diphenyl-1-picrylhydrazyl), which in an alcoholic solution has an intense purple color with maximum absorption at a wavelength of 517 nm (for a methanol solution) ( Figure 12). The DPPH radical captures electrons from substances with antioxidant properties, which causes the color of the solution to change from violet to yellow, and the absorbance of the tested sample decreases, which is measured spectrophotometrically. The stronger the antioxidant properties of a given sample, the greater the decrease in absorbance reflecting the reduction of the DPPH radical [59]. The antioxidant activity of test samples is expressed as the percentage of reduction of the DPPH radical by the sample with respect to the control sample. The content of antioxidants can also be expressed as the amount of reference substance equivalents (e.g., Trolox, ascorbic acid) or as the degree of DPPH radical scavenging [59,60]. This method is fast and accurate. The obtained results are reproducible and comparable with the results obtained by other methods. It is widely used to measure the antioxidant capacity of natural raw materials such as fruit, juices, food, and plant extracts [59].
The results of the measurements by the DPPH method are presented in Table 12. In Figure 13, the dependence of the percentage of the scavenged radical on the Trolox concentration is presented. Based on the parameters of the calibration curve, the total antioxidant content in terms of Trolox equivalents in the tested samples was calculated (Table 14). The antioxidant activity of the collagen/melissa-based materials has thus been confirmed by several independent methods. The antioxidative properties are promising for future applications in cosmetics.
Discussion
Melissa officinalis exhibits several properties which can be exploited in biomaterial preparation [53,54]. The increasing attention to the topical application of Melissa officinalis extracts and oils as novel antimicrobial and antiviral pharmaceuticals motivates research on the incorporation of Melissa officinalis into natural biomaterials that are compatible with human skin and can comprise a matrix for the active substance. In this research, we aimed to obtain a collagen material modified by melissa. The infrared spectroscopy analysis confirmed the presence of collagen and indicated a band at 1377 cm−1, which represents O-H stretching in carboxylic acids and the O-H band of the phenol groups present in the rosmarinic, gallic, and phenolic acids of the Melissa officinalis extract. The shift of amide A observed in the collagen and Melissa officinalis sample may be caused by the formation of hydrogen bonds between the natural extract and collagen. As the mechanical properties of the collagen films were worse after the addition of melissa, we can conclude that only weak hydrogen bonds can be formed between collagen and the components of melissa, so melissa is not a good crosslinking agent for collagen. Atomic Force Microscopy showed that the addition of melissa modifies the superficial and film-forming properties of collagen. The roughness of collagen films varied depending on the Melissa officinalis addition, which may influence the adhesion of collagen to the skin.
The research on the antioxidant activity of collagen films without the extract and with lemon balm extract clearly shows that biologically active compounds with an antioxidant nature have been associated with the collagen matrix without losing their properties. This increases the potential anti-aging effect of the collagen film on the skin surface. The antioxidant activity (AA) determines the ability of the tested material to counteract a specific oxidation reaction, i.e., it determines the measure of the ability of the substance to delay oxidative processes. This value describes the antioxidant properties of a given system better than the concentrations of all antioxidants contained in the sample determined separately [55]. Radical scavenging activity is significant because of the damaging role of free radicals to the skin, food, and biological systems. Tests based on the capacity to scavenge free radicals employ diverse radical-generating methods for detection of the oxidation end point. Using in vitro assays such as FC, FRAP, CUPRAC, and DPPH the antioxidant activity of Melissa officinalis was confirmed. The antioxidant ability of Melissa officinalis may be utilized as an effective factor in anti-aging cosmetics. As the literature has shown, such materials like silk fibroin, mulberry, and melissa constitutes potential antioxidative ingredients for cosmetic products [61] as well as Lentinus edodes, Acacia dealbata flowers, or grape pomace [62]. Confirmation of the antioxidant activity and the proper choice of active plant extract as well as synergic effect with other ingredients in the cosmetic formulation comprises an important factor to create an effective cosmetic product. The quantity of the Melissa officinalis extract used in this study modifies the structure of collagen films and changes its superficial and mechanical properties. The variety of natural active substances present in the Melissa officinalis extract makes it the valuable agent of broad application in the fields of cosmetics, pharmacy, and medicine. When creating a biomaterial including Melissa officinalis extract, it should be taken into account that the hydrogen bonds created among the gallic acid, phenolic acid, and rosmaric acid may be the reason for the impediments in forming a new biomaterial.
Conclusions
Melissa officinalis extract can be incorporated into collagen solutions and films. However, the results showed that the addition of melissa extract led to a decrease in the mechanical properties of the collagen films. The tensile strength of the melissa-incorporated collagen films is significantly lower than that of the collagen film, and likewise for the tensile stiffness. The addition of melissa extract slightly modifies the superficial properties of the collagen films. The decrease of the mechanical properties and the only slight modification of the surface properties are probably caused by hydrogen bonding between the melissa components, of which only a part can form hydrogen bonds with collagen. The addition of melissa to collagen leads to materials with antioxidant activity, which can be potentially useful in anti-aging cosmetic products. Further research is required to study the biological activity and the cosmetic potential of melissa-extract-incorporated collagen films.
Institutional Review Board Statement: Ethical review and approval were waived for this study because waste from food production was used.
Data Availability Statement:
The data presented in this study are available on request from the corresponding author.
| 7,246.4 | 2021-09-30 | ["Materials Science"] |
Performance Analysis of Graph Heuristics and Selected Trajectory Metaheuristics on Examination Timetable Problem
ABSTRACT
INTRODUCTION
In the last few decades, the examination timetabling problem has been studied extensively in the artificial intelligence (AI) and operational research (OR) communities due to its complexity and practical significance in educational institutions [1]. Academic institutions frequently face considerable challenges in effectively scheduling their examinations with limited resources in a reasonable time. Examination timetabling is the process of allocating a set of examinations to a limited number of timeslots and rooms so as to satisfy all hard constraints and to minimize the soft constraint violations as much as possible. The satisfaction of all hard constraints leads to a feasible solution, while the quality of a solution depends on soft constraint satisfaction. It is observed that these constraints can differ according to the requirements and resources of educational institutions [2,3].
Examination timetabling problems can be categorized as capacitated and un-capacitated problems [4]. In the un-capacitated branch, room capacity is not considered. In the capacitated variant, however, room capacity is considered as a hard constraint. Usually, capacitated datasets are more complex to solve than un-capacitated datasets. Capacitated datasets also closely resemble real-world timetabling problems and are highly constrained. An example of an un-capacitated problem is the Toronto datasets, whereas the ITC2007 datasets are a capacitated problem [5]. Generating an examination timetable requires considerable work from educational institutions because accommodating examinations, which frequently conflict with each other, within limited resources is inherently difficult. Moreover, the number of different constraints and the size of the examination instances make examination timetabling more complex to solve. Violating constraints, such as assigning two conflicting examinations at the same time, is fatal and directly affects students' careers. Considering all of the constraints, an optimal solution is rarely obtainable in a reasonable time, and therefore researchers focus on sub-optimal, i.e. good-quality, solutions [6]. Examination timetabling is an ideal example of a combinatorial optimization problem where the best solution has to be searched for in a very large solution space. Lately, considerable attention has been given to heuristic and meta-heuristic search techniques for addressing the examination timetabling problem. This is because, for many larger instances (examinations) in real-world settings, these search techniques are capable of finding a good quality solution in reasonable time and with limited resources. These include graph-based sequential techniques [7], trajectory-based meta-heuristics [8], population-based meta-heuristics [9], and, recently, hyper-heuristics [10]. Detailed descriptions of the various methods related to examination timetabling can be found in several surveys in the examination timetabling domain [11][12,13] as well as in the PATAT series of conference proceedings held from 1995 to 2018 (available at http://www.patatconference.org/). This paper focuses on graph colouring heuristics and well-known trajectory metaheuristics for addressing the examination timetabling problem. In the first step, feasible solutions are constructed using six graph heuristic algorithms. The first three approaches are largest degree (LD), largest weighted degree (LWD), and largest enrolment degree (LE). The remaining three are hybridizations of the above three with saturation degree (SD), producing SD-LD, SD-LWD, and SD-LE, respectively. In the second step, the graph heuristic algorithm that produces the best quality solution is used to construct an initial solution for each of the trajectory metaheuristic algorithms. Each meta-heuristic then optimizes the solution vector and produces quality solutions. Five trajectory algorithms, namely tabu search (TS), simulated annealing (SA), late acceptance hill-climbing (LAHC), great deluge algorithm (GDA), and variable neighborhood search (VNS), are tested on two popular and complex benchmark datasets, namely the Toronto datasets and the ITC2007 datasets.
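To make the construction step concrete, the sketch below shows a minimal saturation-degree ordering with largest-degree tie-breaking (the SD-LD hybrid named above) on a toy conflict graph; the data structures and the tie-breaking details are simplified assumptions, not the exact implementation used in this study.

```python
import random
from collections import defaultdict

# Minimal saturation-degree (SD) constructive heuristic with largest-degree (LD)
# tie-breaking. `conflicts` maps each exam to the set of exams sharing at least
# one student; the toy data below are illustrative only.
def construct(exams, conflicts, n_slots):
    slot_of = {}
    while len(slot_of) < len(exams):
        def feasible(e):
            used = {slot_of[c] for c in conflicts[e] if c in slot_of}
            return [s for s in range(n_slots) if s not in used]
        unassigned = [e for e in exams if e not in slot_of]
        # pick the most constrained exam (fewest feasible slots), ties broken by degree
        e = min(unassigned, key=lambda e: (len(feasible(e)), -len(conflicts[e])))
        slots = feasible(e)
        if not slots:
            return None            # construction failed; a restart/repair would follow
        slot_of[e] = random.choice(slots)
    return slot_of

exams = list(range(6))
conflicts = defaultdict(set, {0: {1, 2}, 1: {0, 3}, 2: {0}, 3: {1, 4}, 4: {3, 5}, 5: {4}})
print(construct(exams, conflicts, n_slots=3))
```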
The remainder of this paper is organized as follows. Section 2 describes the examination timetabling problem and its mathematical formulation. Section 3 presents related works on the examination timetabling problem. Section 4 describes the heuristic and metaheuristic approaches being investigated. Section 5 contains the materials and methods of the study, while Section 6 presents the experimental results. Finally, Section 7 draws the overall conclusion.
EXAMINATION TIMETABLING PROBLEM AND FORMULATION
In this paper, two examination timetabling benchmark datasets are used to assess the performance of the proposed approaches: the Toronto benchmark datasets, which are un-capacitated, and the ITC2007 benchmark datasets, which are capacitated. The motivation for using these datasets is that they are widely studied in the research community, and ITC2007 is more realistic and more complex. A description of the two examination timetabling benchmarks is given below.
Un-capacitated benchmark datasets
The most widely used examination benchmark datasets were introduced by Carter and Laporte [14]. These datasets are also known as Toronto datasets. It is available from http://www.asap.cs.nott.ac.uk/resources/data.shtml. These datasets are un-capacitated examination timetabling benchmark datasets, assuming an unlimited number of seats are available during exam assignments. The Toronto datasets consist of 13 problem instances. Table 1 summarises the datasets.
The Toronto examination timetable hard constraint insists that no students are allowed to attend two or more exams simultaneously (also known as the clashing constraint). The soft constraint (i.e., how the quality of the timetable is measured) is to spread the exams evenly for all students.
The objective function is shown in Eq. 1. A penalty value of 16 is given when two examinations of a given student are assigned consecutively. A penalty value of 8 is assigned if there is one timeslot between the exams, followed by penalty values of 4, 2 and 1 for gaps of 2, 3 and 4 timeslots between exams, respectively.
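For illustration, the sketch below evaluates this proximity penalty on a toy timetable. The weights 16, 8, 4, 2, 1 follow the description above, and the normalisation by the number of students follows the common Carter formulation (an assumption here, since Eq. 1 itself is not reproduced).

```python
from itertools import combinations

# Carter-style proximity penalty: two exams of the same student placed s timeslots
# apart (1 <= s <= 5) incur a weight of 2^(5-s); the total is normalised by the
# number of students (assumed normalisation). Toy data only.
WEIGHTS = {1: 16, 2: 8, 3: 4, 4: 2, 5: 1}

def proximity_cost(slot_of, students, n_students):
    total = 0
    for exams in students:                      # exams taken by one student
        for e1, e2 in combinations(exams, 2):
            gap = abs(slot_of[e1] - slot_of[e2])
            total += WEIGHTS.get(gap, 0)        # gap 0 is a hard-constraint clash,
    return total / n_students                   # handled separately

slot_of = {0: 0, 1: 2, 2: 5, 3: 6}
students = [[0, 1], [0, 2], [1, 3], [2, 3]]     # toy enrolments
print(proximity_cost(slot_of, students, n_students=len(students)))
```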
Capacitated benchmark datasets
The 2nd international timetable competition (ITC2007) examination datasets were established to facilitate researchers to explore real-world examination timetabling problem and to reduce the gap between theory and practice. The ITC2007 examination datasets contain eight instances (see Table 2), comprising a variety of hard and soft constraints. Referring to Table 2, A1 is the number of students registered, A2 shows the number of exams, A3 is the number of timeslots, A4 indicates the number of available rooms, A5 is the period hard constraints, A6 is the room hard constraints, and A7 is the conflict density. They are available for download from the link http://www.cs.qub.ac.uk/itc2007/examtrack. The hard constraints for ITC2007 examination datasets are defined as follows: H1. One student can sit only one exam at a time.
H2. The capacity of the exam will not exceed the capacity of the room. H3. The exam duration will not violate the period duration. H4. Three types of exam ordering must be respected.
-Precedences: exam i will be scheduled before exam j.
-Exclusions: exam i and exam j must not be scheduled at the same period.
-Coincidences: exam i and exam j must be scheduled in the same period. H5. Room exclusiveness must be maintained. For example, exam i must take place only in room number 206.
Soft constraints for the ITC2007 examination datasets are summarized as follows: S1. Two Exams in a Row: a penalty is imposed if a student sits two exams in consecutive periods on the same day. S2. Two Exams in a Day: a penalty is imposed if a student sits two exams (not in consecutive periods) on the same day. S3. Period Spread: a penalty is imposed if a student's exams are not spread over a specified number of periods. S4. Mixed Durations: a penalty is imposed if exams of different durations are assigned to the same room and period. S5. Larger Exams Constraint: a penalty is imposed if larger exams are assigned towards the end of the timetable. S6. Room Penalty: some rooms carry an associated penalty when exams are assigned to them. S7. Period Penalty: some periods carry an associated penalty when exams are assigned to them.
The objective of solving these instances is to satisfy the hard constraints and to minimize the soft constraint violations (penalty) as much as possible in order to produce a good quality timetable. The objective function can be formulated as in Eq.2 [15].
Where W indicates the weight for each soft constraint, and S defines the set of students. Table 3 shows the weights for the ITC2007 examination datasets. Note that two of the soft-constraint weights do not appear explicitly in the equation, because these weights are already included in the corresponding constraint definitions. A more detailed description of this examination track, as well as its objective functions, can be found in [15,16].
RELATED WORKS
The examination timetabling problem has been widely investigated, and a wide range of approaches have been reported in AI or OR literature over the last few decades. Popular techniques used often for solving examination timetabling are described below.
The examination timetabling literature focuses on graph heuristics frequently because they are simple and tend to be useful in constructing a feasible solution quickly. Kahar and Kendall [3] used four graph heuristics, which are LD, LWD, LE, and SD, to solve University Malaysia Pahang examination timetabling problem. The authors reported that these heuristics produced feasible solutions for all instances of the datasets and better quality solutions than the university's existing software. Sabar et al. [17] used graph colouring hyperheuristic for constructing an examination timetable. In their approach, four lists were prepared using hybridization of low-level heuristics, and 'difficulty index' (a parameter) was issued for the selection of examinations for scheduling. Abdul Rahman et al. [18] proposed the adaptive ordering strategy whereby adaptive mechanism was enabled by adding a heuristic modifier to graph heuristics. Another work is a fuzzy graph heuristic, where a fuzzy combination of LD, SD, and LE was investigated for ordering examinations [19]. Besides, several graph colouring approaches were hybridized with hill climbing for successfully solving both ITC 2007 and Toronto datasets [20,21].
Pais and Amaral [22] implemented an improved tabu search for the examination timetabling. Here, tabu list is automatically tuned by a fuzzy inference rule-based system (FIRBS), and this improves tabu search in exploring a promising area of solution space. Abdullah et al. [23] hybridized tabu search with a memetic algorithm for solving university timetabling problem. This algorithm employed a set of neighbourhood structures that was controlled using a tabu list.
Simulated annealing has been extensively used for examination timetabling problem. Battistutta et al. [24] presented simulated annealing with a feature-based tuning approach for solving ITC2007 examination timetabling. The tuning stage was started by selecting the most important parameters. Then a regression model was developed that correlated the value of the most important parameter to the features of the instances. Results indicate that proper tuning can produce competitive results. Simulated annealing for solving examination timetabling can also be found in [25] and [26]. Burke and Bykov [27] observed that late acceptance hill-climbing produced better solutions than other local search methods while implemented in Toronto and ITC2007 datasets. Subsequently, Alzaqebah and Abdullah [28] combined late-acceptance hill-climbing as well as simulated annealing with bee colony optimization algorithms for solving Toronto and ITC2007 datasets. Besides, Bykov and Petrovic [29] recently have proposed another single-parameter local search similar to LAHC, which is step counting hill climbing. The approach was tested on all instances of ITC2007 examination datasets and produces some good results.
Another successful metaheuristic for the examination timetabling problem is great deluge algorithm. Kahar and Kendall [30] proposed a modified-great deluge for solving UMP examination timetabling problem. They used a dynamic decrease of boundary level, and when there was no improvement for certain iterations, the boundary level was increased. They conducted experiments with different initial solutions and neighbourhood structures. Mandal and Kahar [8] proposed a novel approach where partially selected examinations were constructed using hybrid graph heuristics, and then these scheduled examinations were improved using a modified great deluge algorithm. Great deluge algorithm was hybridized with an electromagnetic-like mechanism for moving simple solution(s) to high-quality solution(s) avoiding local optima [31].
Population-based metaheuristics work with more than one solution for the optimization process. Pillay and Banzhaf [32] implemented an informed genetic algorithm (IGA) to solve Toronto benchmark datasets. A two-phase approach was used. In the first phase, the timetable was evolved with satisfying hard constraints, and soft constraints were considered during the improvement phase. In both cases, genetic algorithm was employed to be guided by some domain knowledge. Results indicate that IGA tends to produce better examination timetabling than other evolutionary algorithms. Hosny and Al-Olayan [33] proposed a mutation-based genetic algorithm for examination timetabling problem whereby crossover was avoided, and mutation was used as the main genetic operator during the evolutionary process. Alinia Ahandani et al. [34] investigated the discrete PSO algorithm for solving the examination timetabling problem. The particles' positions were updated using genetic operations like mutation and crossover. The quality of particles' position was improved using three approaches of local search applied to hybridize discrete particle swarm optimization. The approach showed satisfactory results while tested on the Toronto datasets. Sainte and Larabi [35] proposed a hybrid PSO that produces stable solutions for examination timetabling. Alzaqebah and Abdullah [28] used the artificial bee colony algorithm by incorporating late acceptance hill-climbing, adaptive approach in neighbourhood selection, and disrupting selection strategy. They observed that disrupting selection strategy diversifies the population and prevents early convergence. Bolaji et al. [36] proposed a hybridization of an artificial bee colony with a local search and harmony search algorithm for solving un-capacitated examination timetabling.
Memetic algorithms are well-known population-based approaches that are known as the hybridization of evolutionary algorithms and local search methods. The combination of genetic algorithm with modified violation directed hierarchical hill climbing (VDHC) was used for solving examination timetabling problem [37]. Memetic algorithm for examination timetabling problem was also found in research conducted by Abdullah and Turabieh [38], Lei et al. [39], and Leite et al. [40].
Hyper-heuristics form a relatively new domain that can effectively solve educational timetabling problems. A hyper-heuristic is a domain-independent high-level search strategy that modifies solutions indirectly by appropriately selecting and employing low-level heuristics (Pillay [41]). Anwar et al. [42] investigated a harmony search hyper-heuristic approach for solving the ITC2007 examination problem. A basic harmony search algorithm was employed as the high-level heuristic, which controlled the low-level heuristics; these were two neighbourhood structures, move and swap operations on examinations. Results revealed that the approach can produce competitive results compared to state-of-the-art approaches. Demeester et al. [43] presented a hyper-heuristic framework based on the mechanism of tournament selection in genetic algorithms. In the low-level heuristics, a number of move operations were selected based on the problem type. Recently, Muklason [10] has proposed a hyper-heuristic for the multi-objective examination timetabling problem.
GRAPH COLORING HEURISTICS AND TRAJECTORY META-HEURISTICS
In the examination timetabling literature, AI techniques such as heuristics and meta-heuristics are used for solving examination timetabling. For instance, heuristics are often employed for solution construction. These heuristics contain domain-specific knowledge that guides the search in finding feasible, or even better, timetables within the solution space. Meta-heuristics are usually used to optimize the timetable. They are problem-independent algorithms that guide subordinate heuristics with intelligent strategies for exploring and exploiting the search space so that efficient solutions (optimal or near-optimal) can be found [44]. Meta-heuristics are often classified into two main branches: trajectory-based (i.e., local search meta-heuristics) and population-based approaches. Trajectory-based methods take one single solution at a time and explore the search space to generate near-optimal solutions. Simulated annealing and tabu search are two examples of trajectory-based methods. In population-based search, more than one initial solution (i.e., a population) is considered at a time for generating near-optimal solutions. Genetic algorithms and ant colony algorithms are two population-based approaches. The algorithms used in this study are described below.
Graph Heuristics (GH)
The examination timetabling problem can be modeled using graph colouring, and therefore it exhibits similarity with the graph colouring problem [45]. In the graph colouring problem, an undirected graph G = (V, E) comprises a set of vertices V = (v1, v2, v3, ..., vn) and a set of edges E. If (vi, vj) is an edge in a graph G = (V, E), then vertex vi is adjacent to vertex vj. The graph colouring problem involves assigning k colours C = (c1, c2, c3, ..., ck) to the vertices such that no two adjacent vertices are assigned the same colour. It is straightforward to convert the graph colouring problem to the examination timetabling problem (and vice versa) by considering all vertices as events (i.e., examinations) and an edge between any pair of vertices as a pair of conflicting examinations, that is, examinations taken by at least one common student, which therefore must not be placed in the same time slot. Finally, the k colours are equivalent to the number of time slots.
In graph theory, colouring a graph with a predefined limited number of colours (k-colouring of a graph) is a complex task. This graph problem is in the NP-complete class [46,47]. Graph colouring heuristics (graph heuristics) are popular sequential approaches that are used for constructing an initial timetable [17]. As the brute-force approach to solving the graph colouring problem is NP-hard, graph heuristics encompass heuristic colouring techniques, such as vertex ordering, to find optimal or near-optimal colourings in polynomial time. In the context of examination timetabling, graph heuristics are based on ordering strategies where the most 'difficult' examination is chosen for scheduling first so that, finally, a feasible solution can be obtained. The examination difficulty is measured with various graph heuristic techniques. The most commonly used graph heuristic ordering strategies in the literature are described as follows:
Largest degree (LD): the number of conflicts is counted for each examination by checking its conflict with all other examinations. Then, examinations are ordered in decreasing manner such that exams with the largest number of conflicts come first.
Largest weighted degree (LWD): this ordering is similar to LD. The difference is that, in the ordering process, the number of students associated with each conflict is considered.
Largest enrolment (LE): the examinations are ordered decreasingly by the number of students registered for them.
Saturation degree (SD): examination ordering is based on the availability of remaining time slots, where unscheduled examinations with the lowest number of available time slots are given priority for scheduling first. The ordering is dynamic, as it is updated after scheduling each exam.
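To make the static orderings concrete, the following minimal Java sketch computes LD, LWD and LE scores from a conflict matrix and enrolment counts and sorts exams by decreasing score. The class name, method names and the tiny instance in main() are hypothetical illustrations, not the implementation used in this work.

```java
import java.util.Arrays;

// Sketch: the three static orderings (LD, LWD, LE) computed from a conflict
// matrix and enrolment counts; exams are referred to by index 0..n-1.
public class GraphHeuristicOrdering {

    // Largest degree: number of exams that conflict with exam i.
    static int ld(int[][] conflict, int i) {
        int count = 0;
        for (int j = 0; j < conflict.length; j++)
            if (j != i && conflict[i][j] > 0) count++;
        return count;
    }

    // Largest weighted degree: total number of conflicting students for exam i.
    static int lwd(int[][] conflict, int i) {
        int sum = 0;
        for (int j = 0; j < conflict.length; j++)
            if (j != i) sum += conflict[i][j];
        return sum;
    }

    // Returns exam indices sorted in decreasing order of the given scores.
    static int[] orderByDecreasingScore(int[] score) {
        int n = score.length;
        int[] order = new int[n];
        boolean[] used = new boolean[n];
        for (int k = 0; k < n; k++) {
            int best = -1;
            for (int i = 0; i < n; i++)
                if (!used[i] && (best == -1 || score[i] > score[best])) best = i;
            used[best] = true;
            order[k] = best;
        }
        return order;
    }

    public static void main(String[] args) {
        int[][] conflict = {{0, 2, 1}, {2, 0, 0}, {1, 0, 0}}; // hypothetical matrix
        int[] enrolment = {5, 3, 4};                           // hypothetical LE data
        int n = conflict.length;
        int[] ldScore = new int[n], lwdScore = new int[n];
        for (int i = 0; i < n; i++) { ldScore[i] = ld(conflict, i); lwdScore[i] = lwd(conflict, i); }
        System.out.println("LD : " + Arrays.toString(orderByDecreasingScore(ldScore)));
        System.out.println("LWD: " + Arrays.toString(orderByDecreasingScore(lwdScore)));
        System.out.println("LE : " + Arrays.toString(orderByDecreasingScore(enrolment)));
    }
}
```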
Tabu search (TS)
Tabu search is a local search meta-heuristic algorithm first proposed by Glover [48]. The basic mechanism of tabu search is based on the hill-climbing algorithm; however, it can avoid becoming trapped in local optima by accepting worse solutions. A memory structure called the tabu list is used to avoid exploring the same neighbourhood solutions for a certain number of iterations. In other words, the tabu list stores recently visited solutions and keeps them tabu to avoid cycling. A mechanism known as the aspiration criterion is also used to accept a tabu solution if its penalty value is better than that of the current best-known solution. Figure 1 illustrates the simple tabu search approach.
1. Create initial solution s
2. Initialize tabu list T
3. while termination criterion not satisfied do
4.   Determine complete neighbourhood N of current solution s
5.   Choose best non-tabu solution s' from N
6.   Switch over to solution s' (current solution s is replaced by s')
7.   Update tabu list T
8.   Update best found solution (if necessary)
9. end while
10. return best found solution
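As an illustration of the loop above, the hedged Java sketch below applies tabu search to a toy integer minimisation problem rather than to a timetable; the objective function, neighbourhood, tabu tenure and iteration budget are illustrative assumptions, not values or code from the paper.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.Random;

// Minimal sketch of tabu search: minimise f(x) = (x - 7)^2 over integers,
// with neighbourhood {x-1, x+1}. The tabu list stores recently visited values.
public class TabuSearchSketch {
    static int f(int x) { return (x - 7) * (x - 7); }

    public static void main(String[] args) {
        Random rnd = new Random(1);
        int current = rnd.nextInt(100);
        int best = current;
        Deque<Integer> tabu = new ArrayDeque<>();
        int tenure = 5;                               // tabu list length

        for (int iter = 0; iter < 200; iter++) {
            int chosen = Integer.MIN_VALUE;
            int chosenCost = Integer.MAX_VALUE;
            for (int candidate : new int[]{current - 1, current + 1}) {
                boolean isTabu = tabu.contains(candidate);
                boolean aspiration = f(candidate) < f(best);   // aspiration criterion
                if ((!isTabu || aspiration) && f(candidate) < chosenCost) {
                    chosen = candidate;
                    chosenCost = f(candidate);
                }
            }
            if (chosen == Integer.MIN_VALUE) break;   // all neighbours tabu
            current = chosen;
            tabu.addLast(current);
            if (tabu.size() > tenure) tabu.removeFirst();
            if (f(current) < f(best)) best = current;
        }
        System.out.println("best x = " + best + ", f = " + f(best));
    }
}
```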
Simulated annealing (SA)
Simulated annealing (SA) is a local search meta-heuristic technique based on the physical annealing process that probabilistically accepts some worse solutions to escape from local optima. It was introduced by Kirkpatrick and Vecchi [49]. Simulated annealing starts with a randomly generated initial solution, and in each iteration it tries to improve the solution quality. If the neighbouring solution is better than or equal to the current solution, it replaces the current one. Otherwise, acceptance of the neighbouring solution is decided by the probability function exp(-(f(s') - f(s))/T), where f(s') is the cost of the neighbouring solution, f(s) is the cost of the current solution, and T is a parameter known as the temperature. The algorithm starts with a high T and periodically decreases its value using a cooling schedule until the temperature reaches zero or another terminal condition is met. Figure 2 illustrates the simulated annealing process for the minimization problem.
Figure 2. Simulated annealing algorithm.
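The following minimal Java sketch illustrates the acceptance rule described above on a toy continuous minimisation problem; the objective function, initial temperature, cooling rate and move generator are illustrative assumptions rather than the paper's settings.

```java
import java.util.Random;

// Minimal sketch of simulated annealing for minimisation, using the
// Metropolis acceptance rule exp(-(f(s') - f(s))/T).
public class SimulatedAnnealingSketch {
    static double f(double x) { return (x - 3.0) * (x - 3.0); }

    public static void main(String[] args) {
        Random rnd = new Random(42);
        double current = rnd.nextDouble() * 20.0;
        double best = current;
        double temperature = 10.0;     // initial temperature T0
        double cooling = 0.95;         // geometric cooling schedule

        while (temperature > 1e-3) {
            for (int i = 0; i < 50; i++) {                       // moves per temperature level
                double neighbour = current + rnd.nextGaussian();  // random neighbour
                double delta = f(neighbour) - f(current);
                // Always accept improving moves; accept worse moves with probability exp(-delta/T).
                if (delta <= 0 || rnd.nextDouble() < Math.exp(-delta / temperature)) {
                    current = neighbour;
                    if (f(current) < f(best)) best = current;
                }
            }
            temperature *= cooling;
        }
        System.out.println("best x = " + best + ", f = " + f(best));
    }
}
```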
Late acceptance hill-climbing (LAHC)
LAHC is a single-point meta-heuristic inspired by hill-climbing search, proposed by Burke [50]. Unlike hill-climbing, LAHC can escape local optima by maintaining a list of a given length L, which acts as a kind of memory unit. This list retains the costs of solutions from several iterations earlier for comparison with the current candidate solution. LAHC starts with a single feasible solution and iteratively improves it in order to obtain a new, improved one. Each time, the candidate solution is compared with the last value of the list and, if better, it is accepted. When the acceptance procedure is activated, the new cost is added at the beginning of the list and the last element is deleted. The list position is computed as v = I mod L, where L is the length of the list, I is the i-th iteration and v is the position. Figure 3 shows the LAHC procedure.
Figure 3. Late acceptance hill-climbing search algorithm.
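A minimal Java sketch of the late-acceptance rule is given below for a toy minimisation problem; the objective function, list length and move generator are illustrative assumptions, not the configuration used in the experiments.

```java
import java.util.Random;

// Minimal sketch of late acceptance hill-climbing (LAHC) for minimisation.
// A candidate is accepted if it is no worse than the cost recorded at list
// position v = I mod L, or no worse than the current cost.
public class LahcSketch {
    static double f(double x) { return Math.abs(x - 5.0) + 0.1 * x * x; }

    public static void main(String[] args) {
        Random rnd = new Random(7);
        int listLength = 20;                       // L
        double current = rnd.nextDouble() * 10.0;
        double best = current;
        double[] costList = new double[listLength];
        for (int k = 0; k < listLength; k++) costList[k] = f(current);

        for (long iteration = 0; iteration < 100000; iteration++) {
            double candidate = current + (rnd.nextDouble() - 0.5);  // random neighbour
            int v = (int) (iteration % listLength);
            if (f(candidate) <= costList[v] || f(candidate) <= f(current)) {
                current = candidate;                                // accept
            }
            costList[v] = f(current);                               // update memory
            if (f(current) < f(best)) best = current;
        }
        System.out.println("best x = " + best + ", f = " + f(best));
    }
}
```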
Great deluge algorithm (GDA)
Great deluge algorithm (GDA) is a local search algorithm developed by Dueck [51]. The inspiration for this algorithm originated from the behaviour of a hill climber who seeks a higher place to avoid the rising water level during a deluge. Like SA, this algorithm devises a mechanism to avoid local optima by accepting worse solutions. However, SA uses a probabilistic function for accepting worse solutions, whereas GDA uses a more deterministic approach for this purpose. It is also considered that GDA depends less on parameter tuning compared to SA. The only parameter in GDA is the decay rate, which is used for controlling the boundary or acceptance level. In a minimization problem, the initial boundary level (water level) usually starts at the cost of the initial solution. During the search, a new candidate solution is accepted if it is better than or equal to the current solution. However, a solution worse than the current one will also be accepted if the quality of the candidate solution is less than or equal to a predefined boundary level B. The boundary level is then lowered by the decay rate. Figure 4 describes the procedure of the GDA algorithm in a minimization context (its initial steps are: set the initial solution s, calculate the initial cost function f(s), and set the initial level B0 = f(s)).
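The hedged Java sketch below illustrates the boundary-based acceptance rule on a toy minimisation problem; the objective function, iteration budget and linear decay of the boundary are illustrative assumptions, not the settings used in this study.

```java
import java.util.Random;

// Minimal sketch of the great deluge algorithm (GDA) for minimisation. A
// candidate is accepted if it improves on the current solution or if its cost
// is at or below the boundary (water) level B, which is lowered each iteration.
public class GreatDelugeSketch {
    static double f(double x) { return (x + 2.0) * (x + 2.0) + 1.0; }

    public static void main(String[] args) {
        Random rnd = new Random(3);
        double current = rnd.nextDouble() * 50.0;
        double best = current;
        double level = f(current);                  // initial boundary B0 = f(s)
        long iterations = 100000;
        double decayRate = f(current) / iterations; // linear decay of the boundary

        for (long i = 0; i < iterations; i++) {
            double candidate = current + (rnd.nextDouble() - 0.5);
            if (f(candidate) <= f(current) || f(candidate) <= level) {
                current = candidate;                // accept
                if (f(current) < f(best)) best = current;
            }
            level -= decayRate;                     // lower the water level
        }
        System.out.println("best x = " + best + ", f = " + f(best));
    }
}
```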
Variable neighbourhood search (VNS)
Variable neighbourhood search (VNS), a local search descent method, was first introduced by Mladenović and Hansen [52]. VNS does not accept non-improving neighbours. The basic idea of VNS is to change the landscape of the problem by using more than one neighbourhood structure. The algorithm has three basic steps: shaking, local search, and move. Initially, a set of neighbourhood structures Nk (k = 1, ..., kmax) and an initial solution x are defined. In each iteration, the solution is shaken within the current neighbourhood (a random solution x' is generated in Nk). A local search approach is then used to transform this solution x' into a solution x''. When f(x'') is better than f(x), the current solution is replaced by x'' and the search starts over from the first neighbourhood (k = 1). Otherwise, the algorithm moves to the next neighbourhood (k = k + 1). Several variations of VNS can be found in the literature. Figure 5 presents a basic variable neighbourhood search approach.
Figure 5. Variable neighbourhood search algorithm.
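The following minimal Java sketch illustrates basic VNS on a toy minimisation problem; the shaking scheme (neighbourhood radius growing with k), the simple hill-climbing local search and all parameters are illustrative assumptions rather than the paper's implementation.

```java
import java.util.Random;

// Minimal sketch of basic variable neighbourhood search (VNS) for minimisation.
// Shaking draws a random point from the k-th neighbourhood, local search is a
// simple hill-climber, and k resets to 1 whenever an improvement is found.
public class VnsSketch {
    static double f(double x) { return Math.cos(x) + 0.05 * (x - 4.0) * (x - 4.0); }

    static double localSearch(double x, Random rnd) {
        for (int i = 0; i < 200; i++) {
            double candidate = x + 0.05 * (rnd.nextDouble() - 0.5);
            if (f(candidate) < f(x)) x = candidate;   // accept improving moves only
        }
        return x;
    }

    public static void main(String[] args) {
        Random rnd = new Random(11);
        int kMax = 5;
        double current = rnd.nextDouble() * 20.0;

        for (int restart = 0; restart < 100; restart++) {
            int k = 1;
            while (k <= kMax) {
                double shaken = current + k * (rnd.nextDouble() - 0.5);  // shaking in N_k
                double improved = localSearch(shaken, rnd);              // local search
                if (f(improved) < f(current)) {
                    current = improved;   // move, restart from first neighbourhood
                    k = 1;
                } else {
                    k = k + 1;            // try the next neighbourhood
                }
            }
        }
        System.out.println("best x = " + current + ", f = " + f(current));
    }
}
```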
Hybridization of Graph heuristics
The first step is to construct initial feasible solutions for the examination timetabling problem. We use LD, LWD, and LE, which are static orderings. Besides these, we also hybridize each of LD, LWD, and LE with SD as a dynamic ordering heuristic. It has been observed that SD tends to perform better than the other three heuristics on many occasions [18,53]. However, at the very beginning of the solution construction, SD may not be as efficient as LE, LWD, and LD, because most of the time slots are unoccupied, which makes it difficult for SD to order the examinations appropriately [17]. Here we describe the three hybridized graph heuristics SD-LD, SD-LWD, and SD-LE and provide an illustrative example for better understanding of the procedure.
Definition
SD-LD: It means ordering the examinations according to SD followed by LD and taking the most crucial examination from the top of the list for scheduling. SD-LWD: It indicates ordering the examinations according to SD followed by LWD and taking the most crucial examination from the top of the list for scheduling. SD-LE: It denotes ordering the examinations according to SD, followed by LE and taking the most crucial examination from the top of the list for scheduling.
An illustrative example
Conflicting examinations and the ordering procedure can be illustrated using the following example. Consider a conflict matrix M consisting of 9 examinations (e1, e2, ..., e9) in Figure 6. Here the entry M(e2, e7) has value 2, meaning that there is a conflict between examination e2 and examination e7 and that 2 students have taken these two courses. Similarly, the entry M(e1, e4) has value 0, indicating no conflict between these two examinations (i.e., no common students have taken these two courses). The other entries can be read in the same way. Note that M(e2, e7) = M(e7, e2), as the matrix is symmetrical, and the diagonal entries of the matrix have zero values, meaning no conflict exists between an examination and itself. Each row of the matrix is a vector dedicated to a particular examination (e.g., e1), and the columns with non-zero values identify all the examinations that conflict with it. For example, e2, e3, e5, e6, e7 and e9 conflict with e1. To obtain the LD ordering, for each row of the matrix the conflicting column examinations (non-zero entries) are considered first, and then the number of conflicting exams is counted. This gives the LD value of the examination in that row. The LD ordering is then obtained by sorting these LD values in decreasing order. Figure 7(a) illustrates the LD ordering: here examination e2 is at the top of the LD ordering as it has the maximum value 7, whereas e4 is at the bottom of the list because of its lower ordering value 1. Figure 7 lists the resulting orderings (each exam followed by its ordering value(s)): (a) LD: e2(7), e6(7), e7(7), e1(6), e3(6), e5(6), e8(4), e9(4), e4(1); (b) LWD: e7(18), e6(12), e1(11), e5(9), e3(8), e2(8), e9(6), e8(5), e4(1); (c) LE: e7(6), e1(4), e3(4), e4(4), e6(4), e2(3), e5(3), e9(3), e8(2); (d) SD-LD: e7(2,7), e5(3,6), e3(3,6), e9(3,4), e1(4,6), e8(4,4), e4(4,1), e6(5,7), e2(5,7); (e) SD-LWD: e7(2,18), e5(3,9), e3(3,8), e9(3,6), e1(4,11), e8(4,5), e4(4,1), e6(5,12), e2(5,8); (f) SD-LE: e7(2,6), e3(3,4), e5(3,3), e9(3,3), e1(4,4), e4(4,4), e8(4,2), e6(5,4), e2(5,3). In the case of the LWD ordering, all column examinations that conflict with each row examination are collected. Next, the sum of all conflicting values of the column examinations in each row of the matrix produces the LWD value of that row examination. Finally, when all the LWD values are arranged in decreasing order, the LWD ordering is found. Figure 7(b) describes the LWD ordering, where e7 is at the top of the list due to its largest value of 18, followed by e6 with the second largest value of 12.
LE ordering, however, considers student enrolment data and avoids conflict matrix for the ordering process. Examination with the largest enrolment of students is considered at the top of the list. For instance, the enrolment of students is like this: e1 has been taken by 4, e2 has been taken by 3, e3 has been taken by 4, e4 has been taken by 4, e5 has been taken by 3, e6 has been taken by 4, and e7 has been taken by 6 students. Arranging them in decreasing order based on enrolments, LE ordering of these examinations is obtained, which is shown in Figure 7 (c).
SD is a dynamic process that needs information about the current timetabling state. At a particular time, each unscheduled examination checks the number of available time slots in which it can be scheduled without violating hard constraints. This number is the SD value of that examination. For example, if e6 has SD value 5, it means that e6 has 5 free time slots where it can be assigned. Unlike the other orderings, the SD ordering is obtained by sorting the unscheduled examinations in ascending order so that the examination with the least number of available time slots gets first priority for scheduling. From Figure 7(d-f), it can be seen that e7 is at the top of the SD ordering list because it has the least number of free time slots, with only two time slots available. The SD-LD, SD-LWD and SD-LE orderings are produced by hybridizing SD with the other heuristic orderings in such a way that SD is employed for ordering the examinations first, followed by the other ordering. Figure 7(d), Figure 7(e), and Figure 7(f) indicate the orderings of SD-LD, SD-LWD, and SD-LE, respectively. In these cases, SD orders the examinations first, and then the adjacent heuristic is employed for ordering. For example, in the SD-LWD ordering, e7 is at the top of the list because it has the lowest SD value. It is also observed that e5, e3, and e9 have the same SD value but different LWD values. If two or more examinations have the same SD value, then LWD is considered: examination e5 comes first because its LWD value is higher than those of e3 and e9, so the ordering of these three examinations is e5 followed by e3 and then e9. Since SD alone is unable to order the examinations e5, e3, and e9 properly, the second ordering (in this example LWD) assists in producing a robust ordering.
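To make this tie-breaking rule concrete, the following hedged Java sketch sorts a few exams by ascending SD with descending LWD as the tie-breaker; the Exam class and the hard-coded SD/LWD values are taken from the illustrative example above and are not part of the authors' implementation.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.Comparator;
import java.util.List;

// Sketch of the hybrid SD-LWD ordering: exams are ordered by ascending
// saturation degree (fewest remaining feasible timeslots first), and ties
// are broken by descending largest weighted degree.
public class SdLwdOrdering {

    static class Exam {
        final String name; final int sd; final int lwd;
        Exam(String name, int sd, int lwd) { this.name = name; this.sd = sd; this.lwd = lwd; }
    }

    public static void main(String[] args) {
        List<Exam> exams = new ArrayList<>();
        exams.add(new Exam("e5", 3, 9));
        exams.add(new Exam("e3", 3, 8));
        exams.add(new Exam("e9", 3, 6));
        exams.add(new Exam("e7", 2, 18));

        // Ascending SD; for equal SD, descending LWD.
        Collections.sort(exams, new Comparator<Exam>() {
            public int compare(Exam a, Exam b) {
                if (a.sd != b.sd) return Integer.compare(a.sd, b.sd);
                return Integer.compare(b.lwd, a.lwd);
            }
        });

        for (Exam e : exams) System.out.println(e.name + " (SD=" + e.sd + ", LWD=" + e.lwd + ")");
        // Expected order: e7, e5, e3, e9 -- matching Figure 7(e).
    }
}
```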
Improvement with Trajectory search
In this step, the initial feasible solution is further improved by trajectory-based methods in order to produce a near-optimal solution(s). The initial solution for trajectory metaheuristic is calculated using a graph heuristic that produces the best solution during the construction phase. Five trajectory-based methods comprising of tabu search (TS), late acceptance hill-climbing search (LAHC), simulated annealing(SA), great deluge algorithm(GDA), and variable neighbourhood search (VNS) have been used during improvement phase.
Experimental setup
We have considered two commonly used benchmark datasets in examination timetabling research, which are Toronto and ITC2007 datasets, to assess the performance of our approach. We have used 12 instances of Toronto benchmark datasets and 8 instances of ITC2007 benchmark datasets.
The neighbourhood structures used for the Toronto datasets during the improvement phase are described below: N1: Move - an examination is selected randomly and moved to a random time slot. N2: Swap - two examinations are selected randomly, and their time slots are swapped. N3: Swap time slot - two time slots are selected randomly, and all examinations in the two time slots are swapped. The above three (3) neighbourhood structures are used during the improvement phase. However, a neighbourhood move is only accepted if it gives an improvement in the penalty value in each iteration.
The neighbourhood operations employed in the improvement phase for the ITC2007 exam datasets are as follows: N1: an examination is selected randomly and moved to a random time slot and room. N2: two examinations are selected randomly, and their time slots and rooms are swapped. N3: an examination is selected and moved to a different room within the same time slot. N4: two random examinations are selected and moved to different time slots and rooms. During the improvement phase, a neighbourhood move from these neighbourhood structures is selected randomly and applied only if the resulting solution is feasible; otherwise, a different neighbourhood move is selected. The stopping criteria for Toronto and ITC2007 are set to 30 minutes and one hour, respectively. Finally, each experiment is run 30 times using different random seeds to obtain the computational results.
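As an illustration of the simplest of these operators, the hedged Java sketch below implements the move (N1) and swap (N2) neighbourhoods on a slot-per-exam representation together with a feasibility check against the conflict matrix; the representation, class name and the tiny instance in main() are assumptions for illustration, not the paper's implementation.

```java
import java.util.Random;

// Sketch of simple neighbourhood moves on a timetable represented as
// slot[exam] = timeslot, with a hard-constraint (clash) feasibility check.
public class NeighbourhoodMoves {

    // A timetable is feasible if no two conflicting exams share a timeslot.
    static boolean feasible(int[] slot, int[][] conflict) {
        for (int i = 0; i < slot.length; i++)
            for (int j = i + 1; j < slot.length; j++)
                if (conflict[i][j] > 0 && slot[i] == slot[j]) return false;
        return true;
    }

    // N1: move a randomly selected exam to a random timeslot.
    static int[] moveExam(int[] slot, int numSlots, Random rnd) {
        int[] neighbour = slot.clone();
        neighbour[rnd.nextInt(slot.length)] = rnd.nextInt(numSlots);
        return neighbour;
    }

    // N2: swap the timeslots of two randomly selected exams.
    static int[] swapExams(int[] slot, Random rnd) {
        int[] neighbour = slot.clone();
        int a = rnd.nextInt(slot.length), b = rnd.nextInt(slot.length);
        int tmp = neighbour[a]; neighbour[a] = neighbour[b]; neighbour[b] = tmp;
        return neighbour;
    }

    public static void main(String[] args) {
        Random rnd = new Random(5);
        int[][] conflict = {{0, 1, 0}, {1, 0, 2}, {0, 2, 0}};
        int[] slot = {0, 1, 0};            // current feasible assignment
        int[] candidate = moveExam(slot, 4, rnd);
        // Only accept a move if the resulting timetable is still feasible.
        System.out.println("move candidate feasible: " + feasible(candidate, conflict));
        System.out.println("swap candidate feasible: " + feasible(swapExams(slot, rnd), conflict));
    }
}
```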
The programs were implemented in Java (Java SE 7) and run on Intel Core-i7 PCs with 8 GB RAM running Windows 7 Professional SP3. To obtain appropriate values for the parameters of the metaheuristic algorithms, some preliminary experiments were conducted. Table 4 shows the details of the parameters used in the study.
RESULTS AND DISCUSSION
A comparative study of the different graph colouring algorithms on the Toronto datasets for constructing initial solutions is presented in Table 5. In this table, the best and the corresponding average value produced by the graph colouring algorithms for each instance are highlighted. Note that, from here on, the best results obtained across all approaches for each problem instance are highlighted in bold font, while '-' indicates that no solution was obtained. As observed from the table, SD-LD achieved the best results on 5 instances (car-f-92, kfu-s-93, rye-s-93, ute-s-92, yor-f-83), whereas SD-LWD outperformed the others on 4 instances (ear-f-83, lse-f-91, sta-f-83, tre-s-92). For the remaining 3 instances, SD-LE produced the best results. It is also noticed that, without hybridization with SD, the 3 individual heuristics LD, LE, and LWD could not produce the best solution for any of the instances. Table 6 shows the comparison of the best and average results of the six graph colouring approaches on the ITC2007 datasets. It is observed that for 4 out of 8 instances (Exam_2, Exam_4, Exam_5, Exam_8) SD-LD reported the best results, while SD-LWD reported the best results for the other four instances. LD, LE, and LWD, however, could not produce any solution for Exam_4. Moreover, for the rest of the instances, they did not perform as well as their hybridizations with SD, which indicates the strength of the hybridization approaches for solution construction. Table 7 shows the performance of the different trajectory algorithms employed on the Toronto datasets in obtaining quality solutions. The best and average values are shown for each instance. It is apparent from the table that GDA outperformed the other algorithms because, in 7 out of 12 cases, it produced the best results. SA is the second best algorithm, reporting the best values for four instances. There are two instances (kfu-s-93 and ute-s-92) in which LAHC produced the best solutions. However, VNS and TS could not produce better results in comparison with GDA, LAHC, and SA. Table 8 highlights the best and average results on the instances of the ITC2007 datasets when the trajectory metaheuristic approaches are employed for improving solution quality. It is observed that the algorithms under investigation produce comparable results. However, a closer look reveals that GDA is the most successful in producing quality solutions: it produced the best results for Exam_1, Exam_3, Exam_5, and Exam_6. The next best metaheuristic is LAHC, which produced the best results for three datasets (Exam_6, Exam_7, Exam_8). The remaining instances (Exam_2 and Exam_4) had the best solutions with the SA approach. The results also reveal that, during the improvement process, TS and VNS are not as robust as the other approaches in terms of producing the best solutions. Tables 9 and 10 show the best results obtained in our experiments and a selection of the best results available in the literature on the Toronto and ITC2007 datasets, respectively. As shown in Table 9, our results are better than both Carter et al. [14] and Pillay and Banzhaf [54] for 8 problem instances, Sabar et al. [17] for 10 problem instances, Abdul Rahman et al. [18] for 7 problem instances, Caramia et al. [55] for 6 problem instances, and Turabieh and Abdullah [56] for 4 problem instances. Finally, from Table 10, it is observed that our method obtains the best results on five out of 13 instances compared to Pillay [57].
Besides, our results are better than Atsuta et al. [58] and Abdul Rahman et al. [18] for 3 and 2 problem instances, respectively. De Smet [59] produced better results than our method, but they could not produce feasible solutions for three instances. Overall, our results are competitive with other approaches in the literature. | 8,470 | 2020-01-01T00:00:00.000 | [
"Computer Science",
"Mathematics",
"Engineering"
] |
Surveying Financial Decisions? From Homo Economicus to Historical Specificity and J.S. Mill’s Cultural–Institutional Individualism
The aim of this paper is to evidence that non-economic factors, such as culture, emotions and ethics, can be seen as an important force in influencing human economic behavior and human action. This is conducted by putting the homo economicus notion into the perspective of the history of economic thought and, more specifically, of John Stuart Mill. More specifically, Mill’s institutional individualism, as is presented in his System of Logic (1843), and his relativity of economic doctrines construction, as is included in his Principles of Political Economy (1848), are synoptically delineated. Through Mill’s analysis, it is supported that cultural differences between different states of societies are determinant in understanding different behaviors. The paper concludes that Mill’s historical specificity and his more pluralistic version of cultural–institutional methodological individualism are more compatible in understanding human decision making.
Introduction
According to the mainstream neoclassical economic theory, financial decisions (and capital markets) are driven by rational human beings who either maximize their utility, as consumers, or their profit, as producers-investors. The rational individual, named homo economicus in the economic literature, is considered as an ideal decision maker who is a master of rationality and logic. Homo economicus is completely self-interested, knows his wants better than anyone else, has concrete preferences and possesses (frequently) full information. This methodological point of departure was transformed into one of the (fundamental) pillars of the neoclassical paradigm in economics. However, as many scholars indicated, this super-hero, economic rationality is incompatible with the influence of other non-economic factors such as culture, customs, ethics and emotions. This literature points out that a great variety of non-cognitive factors bind the economic rationality of the individual (Urbina and Villaverde, 2019, p. 67). Thus, the role of culture, customs and institutions in the economic literature is currently being rediscovered. More specifically, regarding individual decisions and actions, the cultural framework is of prime importance in determining humans' behavior (De Jong, 2009).
The homo economicus notion has been under heated discussion for decades (Urbina and Villaverde, 2019, p. 63). However, before recent fields such as behavioral economics, institutional economics, economic anthropology and ecological economics, it was nineteenth-century political economy which provided the first critical appraisal of homo economicus. One of the leading economists of the classical period of political economy, who delineated homo economicus's limitations, was John Stuart Mill. John Stuart, the first child of the great philosopher and historian, James Mill, was raised by his beloved father on Ricardian political economy, on utilitarian philosophy and on Benthamite principles. Being Bentham's apprentice, he was the first political economist who was influenced by him and applied his hedonistic views in political economy (Drakopoulos, 1990, p. 361). In his first essays, such as The Definition of Political Economy (1843), the methodological and epistemological dominance of the self-interested individual is more than explicit. However, during his intellectual maturity, he began to regard culture, customs and institutions as decisive factors in influencing individual decisions and the course of economic development (Mill, [1848] 1909). Mill, already from his Principles of Political Economy (1848), was ready to point out that "national" character is an important factor in influencing human behavior and decision making. Naturally, therefore, this view impelled him to differentiate himself from Bentham's abstract universalism. In his critical appraisal of the British utilitarian philosopher, he notes that Bentham "was precluded from considering, except a very limited extent, the laws of a country as an instrument of national culture: one of their most important aspects, and in which they must of course vary according to the degree and kind of culture already attained; as a tutor gives his pupil different lessons according to the progress already made in his education" (Mill, 1969, p. 105, emphasis added).
The aim of this paper is to discuss Mill's views on the importance of non-economic factors in studying economic phenomena and economic decisions. More specifically, in Section 2, we present Mill's "state of society" notion while we stress how his political economy is (tightly) associated with non-economic factors such as culture, emotions and ethics. This association impelled Mill to note that a (new) social science, ethology, is of crucial importance in understanding social and economic phenomena. In Section 3, we propose that Mill's cultural-institutional individualism is methodologically proper in surveying individual economic (and financial) decisions. The final section summarizes our discussion.
2. "State of Society" and the Science of Ethology: The Importance of Culture In his System of Logic, which is his chief methodological essay, Mill introduced the notion of the "state of society". This notion crystallizes the importance of the socio-historical context regarding individual decisions and actions. More specifically, culture constitutes one of the main tenets of this notion. For Mill, the individual, in spite of its evident agency, which is animated by certain instincts (i.e., self-interest), is also influenced by culture. Mill defined the "state of society" as: the simultaneous state of all the greater social facts or phenomena. Such are: the degree of knowledge, and of intellectual and moral culture, existing in the community, and in every class of it; the state of industry, of wealth and its distribution; the habitual occupations of the community; the division into classes; and the relations of those classes to one another; the common beliefs which they entertain on all the subjects most important to mankind, and the degree of assurance with which those beliefs are held; their tastes, and the character and degree of their aesthetic development; their form of government, and the more important of their laws and customs. The condition of all these things [ . . . ] constitute the state of society, or the state of civilisation, at any given time (Mill, [1848] 1909, p. 595, emphasis added).
Thus, the rational wealth maximization assumption presented is improper in Mill's political economy. Homo economicus is super-historical despite the fact that different "states of society" produce different decisions. Mill solved this analytical inadequacy by introducing an interrelated (and relational) relation between structure and human agency. Mill moved against the classical universalism which denied the role of institutions, culture and social norms and believed that every human act or a given social structure of a society could be, at once, the effect and the cause of the interaction between men and their social context (De Mattos, 2005). On the other hand, he believed that individuals are not passive (historical) beings shaped univocally by their "state of society". They are active creatures, with intrinsic instincts, which are acting in a given temporal and spatial framework. Thus, Mill proposed a via media between the Scylla of extreme individualism and the Charybdis of doctrinaire necessity. On the one hand, Mill's analysis moved against the ultra-utilitarian tradition of both Jeremy Bentham and his father, James Mill, while, on the other hand, he also speared "The Doctrine of Philosophical Necessity" (Mill, 1977, p. 217).
This methodological stance had a direct influence in his political economy. Mill was consistent with the classical tradition and had pointed out, already from the Preliminary Remarks of his Principles, that the subject of political economy is the investigation of the causes and the subsequent typification of the laws concerning the production and distribution of wealth. However, he remained skeptical of the universal character of these laws and devoted the first page of his magnum opus to illustrate his skepticism by noting that there are many causes that determine political economy: Not that any treatise on Political Economy can discuss or even enumerate all these causes; but it undertakes to set forth as much as is known of the laws and principles according to which they operate (Mill, [1848] 1909, p. 1).
Mill believed that the laws of political economy are not rigid theorems but are associated with an inborn relativity. Their hypothetical (and non-rigid) nature is connected with the fact that individual decisions and actions "could not be predicted with scientific accuracy, were it only because we cannot foresee the whole of the circumstances in which those individuals will be placed" (Mill, [1843] 1889, p. 554). For Mill, political economy is the social science which "concerns itself only with such of the phenomena of the social state as take place in consequence of the pursuit of wealth", and "It makes entire abstraction of every other human passion or motive, except those which may be regarded as perpetually antagonising principles to the desire of wealth" (p. 588). However, as Mill himself believed, in many (historical) instances, custom, culture and even history are antagonizing principles which have to be taken into account by economists. A thorough understanding of these principles will render economic science methodologically and epistemologically valid. Thereof, it is impossible to obtain general principles, embracing the complication of circumstances which may affect the final result in any individual decision, without taking into account factors such as culture, ethics and other emotions (Mill, [1848] 1909).1 [Footnote 1: Mill believed that political economy, like the science of tides, and unlike astronomy, is an inexact science. He noted that in an inexact science: "the only laws as yet accurately ascertained are those of the causes which affect the phenomenon in all cases, and in considerable degree; while others which affect it in some cases only, or, if in all, only in a slight degree, have not been sufficiently ascertained and studied to enable us to lay down their laws, still less to deduce the completed law of the phenomenon, by compounding the effects of the greater with those of the minor causes" (Mill, [1843] 1889, p. 553).] According to Mill's political economy, different historical, social, political and cultural conditions imply different economic conditions and are associated with different theoretical conclusions. Mill was always conscious of this "relativity": It often happens that the universal belief of one age of mankind-a belief from which no one was, nor, without an extraordinary effort of genius and courage, could at that time be free-becomes to a subsequent age so palpable an absurdity, that the only difficulty then is to imagine how such a thing can ever have appeared credible (Mill, [1848] 1909). We may say that the Millian political economy is historically specific, as it is bounded to historically and spatially (specific) socioeconomic systems. As such, in manifold historical situations, the behavioral axiom of wealth maximization is historically violated. In these situations, according to Mill's epistemology, the science of political economy requires the science of ethology, as he defines it, to understand the influence of culture, traditions, habits, thoughts and mores of a given society. This understanding is crucial as long as these factors determine people's decisions and actions. As Zouboulakis (2001, p. 33) observed, according to Mill, when historical conditions are transformed, economists need to modify their conclusions and "take account of circumstances almost peculiar to the particular case or era". Mill stressed the importance of ethology.
In his own interesting verba: The more highly the science of Ethology is cultivated, and the better the diversities of individual and national character are understood, the smaller, probably will the number of propositions become, which it will be considered safe to build on as universal principles of human nature (Mill, [1843] 1889, p. 591).
Cultural-Institutional Individualism
Thus, according to Mill's political economy, non-economic factors (i.e., culture or institutions) play a prominent role in forming individual decisions. These decisions are either activated or bounded by the "state of society". Naturally, therefore, "different" decisions are partially influenced by social structures, institutions, etc. Following Mill's analysis, we may say that various "abnormalities" in markets do not reflect irrational decisions but are associated with different cultural and institutional frameworks.
Following this (heterodox) epistemology, Mill proposed a different sort of methodological individualism from both Bentham and modern (neoclassical) economists. This kind of individualism, which is highly institutional and cultural in its content, moves away from the in extremis motif of homo economicus. Mill's individualism is workable within the limits set by institutions, customs and cultures (Wilson, 1998, p. 219). Mill's methodological crystallization is a methodological individualism which is not reduced to simplistic psychological terms. Mill accepted the priority of the individual in taking decisions. His locus classicus, On Liberty, "is famously concerned to shield eccentricity, particularly of opinion, and notably of religious heterodoxy" (Claeys, 1987, p. 192). In principle, Mill accepted the basic tenet of classical methodological individualism, namely, the fact that individuals are greatly motivated by the "desire of obtaining the greatest quantity of wealth with the least labour and self-denial" (Mill, [1836] 1844). However, he believed that political economy is the science of collectives, as "we shift our point of view, and consider not individual acts, and the motives by which they are determined, but national and universal results" (Mill, [1848] 1909). In this way, the eccentricity of the individual is subsumed into the intricate spectrum of social, cultural and moral relations. As such, his individualism does not resemble the Benthamite psychological calculus type; rather, it is an individualism of an institutional and cultural kind, as women (and men) are still acting in a social, institutional, cultural and historical context (Zouboulakis, 2002, p. 2). 2 Therefore, in Mill's analysis, agents, either as producers or as consumers and investors, "do not act in conditions of social vacuum but inside a pre-existing and anticipated 'particular state of society' " (p. 7). Naturally, therefore, Mill recanted the views of an abstract human nature free from the cultural, institutional and social conditions in which men are historically situated (Bouton, 1965, pp. 569-570). Thus, his approach impelled him to stress the fact that the consequences of human actions are not always intentional but may be unintended. Therefore, their final outcome cannot be predicted with predefined accuracy. For Mill, the economic man (and woman) searches to fulfill his (her) interests but (always) "wears the clothes of a particular society" (Bonar, 1911, p. 720). Mill himself, when he came to elaborate his own political economy, deserted the monotonal and abstract economic man in favor of a broader (social and cultural) approach (Persky, 1995, p. 224). In a typical Aristotelian fashion, he paid attention to the portrayal and evaluation of human beings as social and not lonesome animals. For Blaug (1980, p. 56): What Mill says is that we shall abstract certain economic motives, namely, those of maximizing wealth subject to the constraints of a subsistence income and the desire for leisure, while allowing for the presence of noneconomic motives (such as habit and custom) even in those spheres of life that fall within the ordinary purview of economics (emphasis added).
Moreover, Mill departed from the Benthamite calculus, which regards the total sum of private interests as identical to society's general interest. In his discussion of Representative Government (1869), he observed that: Whenever the general disposition of the people is such, that each individual regards those only of his interests which are selfish, and does not dwell on, or concern himself for, his share of the general interest, in such a state of things good government is impossible (Mill, 1869).
Conclusion
The aim of this paper was to show that Mill's historically specific political economy and his (idiosyncratic) version of cultural and institutional individualism are tightly connected with his relativity of economic doctrines construction. According to Mill, economic theorems are always connected with the historical, cultural and social framework of individuals. More specifically, his formulation of tendency laws-through which "Political Economy is able to explain only what people tend to do during their economic activities" (Zouboulakis, 2016, p. 5)-is one of the most astonishing expressions of his proposed relativity of economic doctrines and illustrates the relative character of economic theorems. According to Mill, due to the persistent presence of both economic and non-economic disturbing causes, the laws of political economy should be viewed as tendencies and not rigid formulations. These views move against the mainstream neoclassical economic theory. Evidently, they provide the methodological and epistemological ground to test more complicated scenarios in analyzing individual decisions (and actions). | 3,736.6 | 2021-03-09T00:00:00.000 | [
"Economics"
] |
The effect of RAFT polymerization on the physical properties of thiamphenicol-imprinted polymer
The need to overcome the limitations of conventional free radical polymerization has shifted attention towards an effective method for polymer synthesis called controlled radical polymerization (CRP). One of the most studied controlled radical systems is reversible addition-fragmentation chain transfer (RAFT) polymerization. The method relies on efficient chain-transfer processes which are mediated typically by thiocarbonyl-containing RAFT agents, e.g., dithioesters. The presented study revealed the potential benefit of applying RAFT polymerization to the synthesis of a molecularly imprinted polymer for thiamphenicol. The polymers were synthesized in monolithic form using methacrylic acid, ethylene glycol dimethacrylate, azobisisobutyronitrile and acetonitrile as the functional monomer, cross-linker, initiator and porogen, respectively. The surface morphology was studied by scanning electron microscopy (SEM), structural characterization was performed by Fourier transform infrared (FTIR) spectroscopy, and the pore structures of the polymers produced were characterized by nitrogen sorption porosimetry. SEM analysis showed that MIPs produced by RAFT have a smoother surface, while porosity analysis showed that the specific surface area was slightly larger compared to conventional polymerization methods. However, FTIR showed the same pattern of spectra due to the same co-monomers being used in the production. The results show that the use of RAFT polymerization enables the production of imprinted polymers with enhanced physical properties compared to conventional polymerization.
Introduction
Molecular imprinting is a facile and versatile approach for the generation of synthetic receptors with tailor-made recognition sites [1]. Molecularly Imprinted Polymers (MIPs) are normally prepared by conventional free radical polymerization (FRP) due to the tolerance of FRP for a wide range of functional groups in the monomers and templates, but also because conventional FRP can normally be carried out in a facile manner under mild reaction conditions. However, conventional FRP allows for only limited control over the polymer growth processes with regard to chain propagation and termination, as well as the chemical structures of the polymeric products [2,3]; in addition, polymer networks with heterogeneous structures are normally produced when such networks are synthesised using FRP.
The necessity to overcome these limitations urged synthetic polymer chemists to develop new concepts which would permit the preparation of MIPs with more homogeneous network structures, a better understanding of the structure-property relationship of MIPs, and the production of MIPs with improved binding properties [1]. In this respect, controlled/living radical polymerization (CRP) techniques have evolved, and it is well understood that CRP processes offer many benefits [4,5]. Reversible addition-fragmentation chain transfer (RAFT) is one of the most versatile ways to confer "living" characteristics onto radical polymerization [6].
As an alternative to conventional FRP for the production of MIPs, our hypothesis was that the controlled nature of 'living' radical polymerization would translate into MIPs with properties superior to those displayed by MIPs prepared by conventional FRP, e.g., improved homogeneity of binding sites and enhanced chromatographic performance.
The aim of this work was to explore the potential benefits in applying RAFT polymerization techniques towards the synthesis of MIPs, with thiamphenicol as a model template. In the present study, polymers were prepared via conventional free radical polymerization and controlled radical polymerization in the form of polymer monolith. The RAFT agent used was 2-(2′cyanopropyl)dithiobenzoate (CPDB). CPDB was selected as the RAFT agent because it has been used previously for the successful polymerization of methacrylates and styrenes [7,8].The physical properties of these polymers were studied using SEM and porosimetry analysis by Brunauer-Emmett-Teller (BET) technique.
The synthesis of the polymers was based upon a procedure reported by [9][10][11]. Thiamphenicol (0.5 mmol), MAA (2.3 mmol), EGDMA (11.6 mmol) and AIBN (0.76 mmol) were dissolved in acetonitrile (4 mL) in a thick-walled glass Kimax culture tube together with CPDB (1.5 mmol). The solution was deoxygenated by sparging with oxygen-free nitrogen for 10 minutes while cooling in an ice-bath. The tube was sealed under nitrogen by means of a screw-cap and placed in an oilbath for 48 hours with the temperature maintained at 60 ºC. The thiamphenicol-imprinted polymer, P1_MIP, was obtained as a monolith; the monolith was subsequently crushed, mechanically ground and wet-sieved using acetone. Particles of < 25 μm were collected after sedimentation (3x) from acetone. In order to remove traces of unreacted monomers and the template, the polymer was extracted overnight in a Soxhlet apparatus using methanol, and then dried at 40 ºC under vacuum. A non-imprinted control polymer (NIP), P2_NIP, was prepared in the same manner as P1_MIP but in the absence of thiamphenicol.
The thiamphenicol-imprinted polymer synthesised via conventional free radical polymerization, P3_MIP, was prepared in the same manner as P1_MIP but in the absence of CPDB. A non-imprinted control polymer, P4_NIP, was prepared in the absence of both CPDB and thiamphenicol.
Physical characterizations
All polymers obtained were characterized using a scanning electron microscope (SEM), model JEOL JSM-6360LA. The nitrogen gas adsorption method was applied using a surface area and porosity analyzer (ASAP 2020 V4.02) manufactured by Micromeritics. This instrument operates by adsorbing nitrogen gas onto the surface of the polymer samples at 77 K. Each sample was degassed at 150 °C for 8 hours before measurement. The data obtained were evaluated through the Brunauer-Emmett-Teller (BET) technique to calculate the specific surface area and the Barrett-Joyner-Halenda (BJH) technique for the specific pore volume.
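As a rough illustration of how the BET technique converts adsorption data into a specific surface area, the hedged Java sketch below fits the linearised BET equation p/(v(p0 - p)) = 1/(vm c) + ((c - 1)/(vm c))(p/p0) over the usual relative-pressure window and converts the monolayer capacity vm into a surface area; the isotherm points, class name and all numerical choices are illustrative assumptions, not measured values from this study.

```java
// Sketch of the BET calculation: least-squares fit of the linearised BET
// equation over a 0.05-0.30 relative-pressure window, then conversion of the
// monolayer capacity vm (cm^3 STP/g) to a specific surface area in m^2/g.
public class BetSurfaceArea {

    public static void main(String[] args) {
        double[] relP = {0.05, 0.10, 0.15, 0.20, 0.25, 0.30}; // p/p0 (illustrative)
        double[] vAds = {28.0, 33.0, 36.5, 39.5, 42.5, 45.5};  // cm^3 STP per gram (illustrative)

        // Linearised variables: x = p/p0, y = (p/p0) / (v * (1 - p/p0)).
        int n = relP.length;
        double sx = 0, sy = 0, sxx = 0, sxy = 0;
        for (int i = 0; i < n; i++) {
            double x = relP[i];
            double y = x / (vAds[i] * (1.0 - x));
            sx += x; sy += y; sxx += x * x; sxy += x * y;
        }
        double slope = (n * sxy - sx * sy) / (n * sxx - sx * sx);
        double intercept = (sy - slope * sx) / n;

        double vm = 1.0 / (slope + intercept);        // monolayer capacity, cm^3 STP/g
        double c = slope / intercept + 1.0;           // BET constant

        // S_BET = vm * N_A * sigma / V_molar, with sigma(N2) = 0.162 nm^2.
        double avogadro = 6.022e23;
        double sigmaM2 = 0.162e-18;                   // m^2 per N2 molecule
        double molarVolume = 22414.0;                 // cm^3 STP per mole
        double surfaceArea = vm * avogadro * sigmaM2 / molarVolume; // m^2 per gram

        System.out.printf("vm = %.2f cm3/g, c = %.1f, S_BET = %.1f m2/g%n", vm, c, surfaceArea);
    }
}
```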
Physical characterizations
The size and shape of the various polymer particles were analysed by SEM. As expected for polymer particles produced through the mechanical grinding of monoliths, the particles obtained were irregularly shaped. The particle sizes were defined by the grinding, sieving and sedimentation processes. Only particles with sizes of <25 μm were collected and used in this study. P1_MIP and P2_NIP had the typical appearance of a gel-type polymer in the dry state and they were optically transparent (Fig. 2). In contrast, P3_MIP and P4_NIP scattered white light, suggestive of well-developed pore structures (Fig. 3). This indicates that the presence of the RAFT agent in P1_MIP had a profound impact upon the morphology of the product [9]. These observations were confirmed by nitrogen sorption porosimetry experiments, which are explained further below. Porosimetry analysis was performed by the BET technique, based on the adsorption of gas onto the polymer surface. Table 2 summarizes the data from the nitrogen sorption porosimetry experiments. The specific surface area of P3_MIP was slightly larger than that of P1_MIP. Moreover, the average pore diameter in P3_MIP is also larger than in P1_MIP, and the polymers obtained were classified as mesoporous (pore diameters between 2-50 nm) [12]. The larger average pore diameter in P3_MIP might be due to the lower degree of control during the polymerization process. In the presence of the RAFT agent, however, the morphology of the polymer could be controlled; the main equilibrium of the RAFT polymerization process leads to a pore volume smaller than that obtained by conventional polymerization. This is also in agreement with the SEM results, which showed a rougher surface for P3_MIP with a larger pore volume compared to P1_MIP. Furthermore, since the template molecule (thiamphenicol) is a small molecule (<1 nm) and the pore aperture in P1_MIP is less than 4 nm, a more effective adsorption capacity might be expected. On the other hand, the smaller size distribution could raise the selectivity and increase the imprinting factor value of the polymer [13]. Adsorption/desorption isotherm studies revealed more about the porosity characteristics of the MIPs. The isotherm plots clearly showed that the MIP/NIP pairs synthesized using RAFT polymerization and conventional FRP have different isotherm types (Fig. 4 and Fig. 5). By comparison with the standard isotherm types [14], the adsorption of thiamphenicol was significantly affected by the average pore diameter.
According to the nitrogen adsorption/desorption isotherms and pore size distributions, P1_MIP and P2_NIP (Fig. 4) belong to the Type IV isotherm. A Type IV isotherm is an indication of a porous material containing micropores (< 2 nm) and mesopores (2 to 50 nm), meaning that the polymer forms a monolayer followed by multilayers. In a Type IV isotherm, at the low-pressure end, monolayer adsorption and micropore filling occur until the adsorption levels off as the micropores are filled. The mesopores then continue filling by capillary condensation, and adsorption levels off once again as the mesopores are filled. During desorption, as the pressure is lowered, the mesopores are emptied by capillary evaporation; when capillary condensation and capillary evaporation do not take place at the same pressure, a hysteresis loop is created [13,15]. However, the hysteresis loop obtained was much steeper for P2_NIP than for P1_MIP, which might be due to the template effect. For the polymers produced via conventional FRP (P3_MIP and P4_NIP), the isotherms belong to Type III (Fig. 5). This isotherm also describes the formation of multilayers, with adsorption and desorption of the gas occurring at the same rate, and indicates that the adsorbed layer interacts only weakly with the polymer surface. The lack of a knee in the plot represents an extremely weak adsorbate-adsorbent interaction, indicating that the polymers produced by conventional FRP interact only weakly with the adsorbate.
The monolithic polymers were characterized by FTIR spectroscopy. The results showed that P1_MIP/P3_MIP and P2_NIP/P4_NIP have rather similar FTIR spectra (Fig. 6 and Fig. 7); as expected, this is because the same comonomers were used in their production. The presence of bands around 1738 cm-1 (C=O ester stretch) and at 1228 cm-1 and 1217 cm-1 (C-O ester stretch) indicated the presence of EGDMA residues in the polymers produced. The signal at 1365 cm-1 was assigned to the C=C stretching vibration of the pendant vinyl groups.
The imprinting process begins with complexation between MAA and thiamphenicol. The broadening at 3400-3500 cm-1 indicated that hydrogen-bonding interactions take place between the hydroxyl and amide groups of thiamphenicol and the carbonyl group of the MAA residues. In the FTIR spectra the O-H and N-H stretching absorptions overlap each other. However, this signal was more intense for both non-imprinted polymers, suggesting that they have a lower cross-link density than the MIPs. Unfortunately, the C=S and C-S stretches derived from the RAFT agent (CPDB), expected around 1050-1200 cm-1, could not be seen clearly in the FTIR spectra, probably because only a small amount of RAFT agent was used. Moreover, since the particles obtained were irregular in size and shape, some interaction sites may have been destroyed during grinding, giving weak stretching absorptions in the FTIR spectra.
Conclusion
This study successfully demonstrated the production of thiamphenicol-imprinted polymers prepared by both RAFT and conventional free radical polymerization. These preliminary studies suggest that additional benefits may arise from exploiting the "living" character of controlled radical polymerization to produce MIPs. Further research will focus on the adsorption behaviour of the polymers towards the target molecule.
We would like to thank Universiti Malaysia Terengganu for project funding.
"Chemistry",
"Materials Science"
] |
Predictive mathematical modeling for EC 50 calculation of antioxidant activity and antibacterial ability of Thai bee products
Article history: Received on: 12/06/2017 Accepted on: 24/07/2017 Available online: 30/09/2017. The antioxidant activities of bee products from Thailand (honey, bee pollen and propolis) were determined via the 2,2-diphenyl-1-picrylhydrazyl (DPPH) and 2,2'-azinobis-(3-ethylbenzothiazoline-6-sulphonate) (ABTS) assays. The prediction of the EC50 (the half maximal effective concentration) was studied using the logistic, sigmoidal, dose-response, and asymmetric five-parameter (5P) regression models. The antimicrobial ability was tested against Staphylococcus aureus (TISTR 517), Bacillus cereus (TISTR 687), and Escherichia coli (TISTR 1261). Propolis extract, with the highest total phenolic content (TPC), exhibited the most effective antiradical action against DPPH and ABTS, followed by bee pollen extract and honey. All four regression models could be used to estimate the EC50 of the bee products; however, the dose-response and 5P models provided better EC50 predictions than the others, based on the comparability of their results to those of the right-angled triangle method. The Thai bee products had effective antimicrobial activities against each test microorganism, with antimicrobial potency in the order propolis > bee pollen > honey. The results revealed that the antioxidant activity and antimicrobial ability of the bee products correlated with their TPC values.
INTRODUCTION
Anti-oxidative action is one of the physiological functions of many compounds found in foods (Nagai et al., 2001).This action is assumed to protect living organisms from oxidation, resulting in the prevention of various diseases such as cancer and diabetes (Nagai et al., 2001).The antimicrobial activity of chemical compounds, including antibacterial, antifungal and antiviral activity, is important against infections incited by microorganisms (Bogdanov, 2011).Plant polyphenols are potential natural alternatives to synthetic antioxidant and antimicrobial compounds (Siripatrawan et al., 2013).Bee products, one of the essential sources of polyphenols, are well known in traditional medicine dating back to ancient times.
Man has utilized bee products in many ways, and nowadays their applications have expanded from healthy foods to medicinal products. Bee products such as honey, bee pollen, royal jelly and propolis from various geographical locations around the world have been found to possess antioxidant and antimicrobial activities (Buratti et al., 2007; Choi et al., 2006; Graikou et al., 2011). Honey, the nectar that honey bees collect and process from many plants (Ferreira Isabel et al., 2009), has been used in food as a sweetening agent (Nagai et al., 2001) and as a food preservative since ancient times (Ferreira Isabel et al., 2009; Nagai et al., 2001). Honey normally consists of more than 150 substances, including a complex mixture of sugars and small amounts of polyphenolic compounds such as flavonoids and cinnamic acid derivatives (Buratti et al., 2007; Ferreira Isabel et al., 2009). Bee pollen is a fine, powder-like material produced by flowering plants and collected by worker honey bees, which form it into granules with added honey or nectar (Bogdanov, 2011). Bee pollen contains lipids, proteins, sugars, amino acids, vitamins, carotenoids and polyphenolics (Graikou et al., 2011).
Bee pollen is considered to be a nutrient-rich, near-complete food and is commercially promoted as a dietary supplement. Propolis is a sticky substance derived from plant resins collected by honeybees (Bogdanov, 2011).
Propolis contains more than 300 constituents such as polyphenols, sesquiterpene quinones, coumarins, steroids, amino acids, and inorganic compounds (Choi et al., 2006;Siripatrawan et al., 2013).The composition of propolis varies depending on the season and on the botanical origin from which the plant resins have been collected (Bosio et al., 2000).Propolisis now recognized to have a wide range of biological activities, such as antibacterial, anti-inflammatory, antioxidative, hepatoprotective, and tumoricidal activities (Bosio et al., 2000;Miorin et al., 2003).
Determination of the antioxidant power of the bee products involved the use of different methods such as the DPPH (2,2-diphenyl-1-picrylhydrazyl), ABTS [2,2'-azino-bis(3-ethylbenzthiazoline-6-sulphonate)], TBARS (thiobarbituric acid reactive substances) and β-carotene bleaching assays (Buratti et al., 2007; Ferreira Isabel et al., 2009; Lachman et al., 2010; Siripatrawan et al., 2013). The DPPH and ABTS assays have been widely used to determine the antioxidant activity of various plants and other materials because they rely on stable free radicals and the determination is simple. Antioxidant results are commonly expressed as the half maximal effective concentration (EC50), the concentration of antioxidant that causes a 50% decrease in the radical absorbance. Alexander et al. (1999) presented a simple and accurate mathematical method for calculation of the EC50, the right-angled triangle method, which they proposed as a simple, accurate and non-computational technique for calculating the EC50. Nowadays, a number of methods and software packages provide functions for non-linear curve-fitting of experimental data and estimation of the EC50 value, making this determination fast and particularly useful for laboratory tests. Chen et al. (2013) studied the EC50 estimation of antioxidant standards (quercetin, catechin, ascorbic acid, caffeic acid, chlorogenic acid and acetylcysteine) with the DPPH assay using various computer programs and mathematical models. All the statistical programs they used provided similar EC50 values; however, the asymmetric five-parameter equation in the GraphPad Prism software was found to give the best fit for their experiment. Recently, the estimation of EC50 values for fungi with different methods using computer programs was also reported by Li et al. (2015). Their results showed that, among all the statistical programs they used, IBM SPSS, GraphPad Prism and DPS were appropriate for EC50 calculations of their samples.
To the best of our knowledge, there has been no published research on the EC50 prediction of antioxidant activity for honey, bee pollen and propolis comparing different regression models.
Thus, the objectives of the present work are: (1) to evaluate the antioxidant activity of the extracts of propolis, bee pollen and honey from Thailand, and (2) to identify the best model for the prediction of EC50 from experimental data obtained via the DPPH and ABTS assays. The in vitro antimicrobial activity was also investigated and is reported here.
Preparation of bee pollen extracts
Bee pollen extract was prepared according to the procedure described by Morais et al. (2011).Bee pollen was soaked in methanol at pollen-to-methanol ratio of 1:2 (w/v).The mixture was left to macerate for 72 h at room temperature and shaken by hand for 5 min twice a day.The pollen extract was filtered through a Whatman filter paper No. 4 using a Buchner funnel.The methanol extract was evaporated in a vacuum evaporator (Thailand) and stored in an amber glass bottle at 4ºC for further analysis.
Determination of total phenol content
The total phenol content (TPC) of the bee product samples was determined using the Folin-Ciocalteau method as described by Ahn et al. (2004) with slight modifications.The sample (0.3 mL) was put in a test tube, and 3 mL of distilled water, 0.25 mL of 2.0 N Folin-Ciocalteu reagent and 2.5 mL of 7% (w/v) sodium carbonate were added.Each tube was covered with a cap and shaken with a vortex mixer (Dragon Lab, China).After 30 min of incubation in a dark place at 25˚C, the absorbance was measured at 760 nm with a spectrophotometer (Labomed, USA) and compared to a calibration curve of gallic acid.The results are presented as means of triplicate analyses and expressed in mg gallic acid equivalents/g of sample (mg GAE/g sample).
Determination of antioxidant activity
The DPPH (2,2-diphenyl-1-picrylhydrazyl) scavenging capacity of the bee products was monitored according to the method described by Brand-Williams et al. (1995). Different dilutions of the samples (0.3 mL) were mixed with 0.06 mM DPPH-methanolic solution (2.7 mL). The mixture was placed in a dark room for 30 min and the absorbance at 516 nm was then determined with a spectrophotometer (Labomed, USA). The activity was expressed as % DPPH scavenging and calculated using equation 1:

% DPPH scavenging = [(A_control - A_sample) / A_control] × 100 (1)

where A_control is the absorbance of the DPPH solution and A_sample is the absorbance of the test sample. The half maximal effective concentration (EC50) is the amount of sample necessary to decrease the absorbance of DPPH by 50%. It was calculated by interpolation from the graph of inhibition percentage against sample concentration using a simple mathematical method based on the principle of the right-angled triangle (Alexander et al., 1999). Ascorbic acid and α-tocopherol were used as positive controls. All the analyses were carried out in triplicate.
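A minimal sketch of the two calculations described above, percent scavenging from the absorbances (Equation 1) and EC50 by linear interpolation between the two concentrations bracketing 50% inhibition (the right-angled triangle principle), is given below; the absorbance values are hypothetical.

```python
import numpy as np

def scavenging_percent(a_control: float, a_sample) -> np.ndarray:
    """Equation 1: %DPPH scavenging = (A_control - A_sample) / A_control * 100."""
    return (a_control - np.asarray(a_sample)) / a_control * 100.0

# Hypothetical dilution series of one extract (mg/mL) and measured A516 values.
conc = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
a_control = 0.820
a_sample = np.array([0.760, 0.690, 0.560, 0.360, 0.150])
inhibition = scavenging_percent(a_control, a_sample)

def ec50_triangle(conc, inhibition):
    """EC50 by linear interpolation between the two points bracketing 50% inhibition.

    Assumes the dilution series actually brackets 50% (at least one point below and
    one at or above 50%)."""
    above = np.argmax(inhibition >= 50.0)            # first point at or above 50 %
    x1, x2 = conc[above - 1], conc[above]
    y1, y2 = inhibition[above - 1], inhibition[above]
    return x1 + (50.0 - y1) * (x2 - x1) / (y2 - y1)

print("inhibition (%):", np.round(inhibition, 1))
print("EC50 (mg/mL):", round(ec50_triangle(conc, inhibition), 2))
```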
Determination of Trolox equivalent antioxidant capacity (TEAC)
For the TEAC assay, the procedure followed the method described by Re et al. (1999). The TEAC assay is based on the scavenging of the 2,2'-azinobis-(3-ethylbenzothiazoline-6-sulphonate) (ABTS) radical (ABTS•+). ABTS•+ was produced by reacting 7 mM ABTS solution with 2.45 mM potassium persulfate solution (1:1, v/v) and storing the mixture in the dark at room temperature for 16 h before use. The ABTS•+ solution (1 mL) was diluted with methanol (40 mL) to an absorbance of 0.700 ± 0.025 at 734 nm. Bee product sample (0.3 mL) was added to the ABTS•+ solution (2.7 mL) and the absorbance was measured after 6 min. The % inhibition of the sample was calculated using the formula given for the DPPH assay. The result was then compared with a standard curve made from the corresponding readings of Trolox (0-0.2 mM). Results were expressed as mg Trolox equivalents/g dried sample (mg TE/g sample).
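The read-back through the Trolox standard curve can be illustrated in the same way; the calibration points below are hypothetical and serve only to show the arithmetic.

```python
import numpy as np

# Hypothetical Trolox calibration for the ABTS (TEAC) assay: %inhibition vs mM Trolox.
trolox_mM  = np.array([0.00, 0.05, 0.10, 0.15, 0.20])
inhibition = np.array([0.0, 14.0, 27.5, 41.0, 55.0])      # invented calibration points

slope, intercept = np.polyfit(trolox_mM, inhibition, 1)    # linear standard curve

def trolox_equivalents_mM(sample_inhibition: float) -> float:
    """Read a sample's %inhibition back through the Trolox standard curve."""
    return (sample_inhibition - intercept) / slope

# Example: a bee-product dilution giving 33 % inhibition under the same assay conditions.
te = trolox_equivalents_mM(33.0)
print(f"{te:.3f} mM Trolox equivalents in the reaction mixture")
# Converting to mg TE per g of dried sample then requires only the mass of sample
# actually present in the 0.3 mL aliquot (mg TE = mmol TE in the mixture x 250.29 g/mol).
```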
EC 50 prediction using statistical models
Data analysis for the free radical scavenging activity of the bee product samples was performed using the logistic, Boltzmann (sigmoidal), log(agonist) vs. normalized response-variable slope (dose-response), and asymmetric sigmoidal (five parameter, 5P) mathematical models indicated in equations 2 to 5, respectively, using JMP 10 (SAS Institute Inc., Cary, NC, USA) and SciDAVis (version 2, Boston, MA).
Dose-response
Asymmetric sigmoidal: where x is the log of concentration, y is the response, y' is the normalized response (0 to 100%), A1 is the baseline, A2 is the maximum response, x0 is the centre (logEC50), p is the power, dx is the time constant, Hillslope is the steepness of the curve (unitless), s is the symmetry parameter (also unitless) and xb is the concentration at the inflection point. For the asymmetric sigmoidal model, the EC50 can be calculated from the xb, Hillslope and s parameters using equation 6; a code sketch of these model forms is given below.
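Because Equations 2-6 did not survive the conversion of this article, the sketch below writes out the standard forms of the four models as they are usually implemented in curve-fitting software, using the parameter names defined above, together with the back-calculation of the EC50 from the 5P parameters; these are assumed conventional definitions, not transcriptions of the authors' exact expressions.

```python
import numpy as np

# x: log10(concentration); A1: baseline; A2: maximum response; x0: centre; p: power;
# dx: slope ("time constant"); Hillslope: steepness; s: symmetry; xb: inflection point.

def logistic(x, A1, A2, x0, p):
    return A2 + (A1 - A2) / (1.0 + (x / x0) ** p)

def boltzmann(x, A1, A2, x0, dx):                      # sigmoidal (Boltzmann)
    return A2 + (A1 - A2) / (1.0 + np.exp((x - x0) / dx))

def dose_response(x, log_ec50, hillslope):             # normalized, runs 0-100 %
    return 100.0 / (1.0 + 10.0 ** ((log_ec50 - x) * hillslope))

def asymmetric_5p(x, xb, hillslope, s):                # normalized asymmetric sigmoid
    return 100.0 / (1.0 + 10.0 ** ((xb - x) * hillslope)) ** s

def ec50_from_5p(xb, hillslope, s):
    """Equation 6 analogue for the parameterization above: solve asymmetric_5p(x) = 50.

    logEC50 = xb - log10(2**(1/s) - 1) / hillslope; the sign of the correction term
    depends on how a given software package parameterizes the curve."""
    log_ec50 = xb - np.log10(2.0 ** (1.0 / s) - 1.0) / hillslope
    return 10.0 ** log_ec50

# Sanity check: with s = 1 the 5P curve reduces to the dose-response model,
# and the EC50 collapses to 10**xb.
print(round(ec50_from_5p(xb=0.3, hillslope=1.2, s=1.0), 3))   # -> 1.995, i.e. ~10**0.3
```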
Antimicrobial ability Preparation of inoculums
Gram-positive (Staphylococcus aureus TISTR 517 and Bacillus cereus TISTR 687) and Gram-negative (Escherichia coli TISTR 1261) organisms were obtained from the Division of Biotechnology, Faculty of Agro-Industry, Chiang Mai University (Chiang Mai, Thailand). S. aureus, B. cereus and E. coli were cultured in NB at 30 °C for 24 h. The optical density (OD) of the bacteria was adjusted to the McFarland No. 0.5 standard (Hindler et al., 1992) with 0.85 g sodium chloride/100 mL sterile solution to achieve a concentration of approximately 10^8 CFU/mL. The final cell concentration of approximately 10^5-10^6 CFU/mL was obtained by diluting 100 times with sterile sodium chloride solution.
Determination of minimum inhibitory concentrations (MIC) and minimal bactericidal concentration (MBC)
The MIC of the bee product samples was determined using a broth dilution assay according to the procedure described by Mazzola et al. (2009). One mL of NB medium was dispensed into each of 12 numbered test tubes (16 mm x 150 mm), except for tube #1. The tubes were autoclaved (IWAKI, Japan) at 121 °C. One mL of test sample was introduced into tubes #1 and #2; tube #2 was mixed and 1 mL was withdrawn and transferred to tube #3. This serial dilution was repeated for all tubes up to tube #11, and 1 mL was then removed from tube #11. One mL of each test microorganism was added to each tube. All tubes were incubated at 30 °C for 24 h and the results were evaluated. The MIC was defined as the lowest concentration at which no growth occurred. Tube #12 served as the positive control (NB + inoculum). The resulting two-fold dilution series is illustrated in the sketch following this paragraph.
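The two-fold broth dilution halves the sample concentration at each transfer, so MIC values map directly onto tube numbers; the short sketch below lists the concentration in each tube for a hypothetical starting concentration.

```python
# Concentration in each tube of the two-fold broth dilution series described above.
# Tube 1 holds the undiluted sample; tubes 2-11 each receive 1 mL from the previous
# tube into 1 mL of broth, halving the concentration. The starting value is hypothetical.
start_mg_per_ml = 680.0                  # e.g. an undiluted honey solution

concentrations = {1: start_mg_per_ml}
for tube in range(2, 12):                # tubes 2..11
    concentrations[tube] = concentrations[tube - 1] / 2.0

for tube, c in concentrations.items():
    print(f"tube {tube:2d}: {c:8.2f} mg/mL")
# The MIC is the lowest of these concentrations showing no visible growth after incubation.
```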
The MBC test determines the lowest concentration at which an antimicrobial agent will kill a particular microorganism.The MBC is defined using a series of steps, undertaken after the MIC test has been completed.The dilution representing the MIC and at least two more concentrated test sample dilutions was touched with a loop and streaked on a NA plate and incubated at 30 ºC for 24 h.The MBC was determined as the lowest concentration at which no growth appeared (Taemchuay et al., 2009).The plates with streaking of each inoculation were used as the control.
Statistical analysis
The data were analyzed by one-way analysis of variance (ANOVA) and Tukey's HSD multiple range test (p ≤ 0.05) using the SPSS software (Version 11, SPSS Inc., Chicago, IL).
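For readers reproducing the statistics outside SPSS, the same one-way ANOVA followed by Tukey's HSD can be run in Python; the triplicate values below are placeholders, not the measured data.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Placeholder triplicate TPC values (mg GAE/g) for the three bee products.
honey    = [0.55, 0.58, 0.59]
pollen   = [23.9, 24.2, 24.6]
propolis = [235.1, 237.5, 238.9]

f_stat, p_value = stats.f_oneway(honey, pollen, propolis)
print(f"one-way ANOVA: F = {f_stat:.1f}, p = {p_value:.2e}")

values = np.concatenate([honey, pollen, propolis])
groups = ["honey"] * 3 + ["pollen"] * 3 + ["propolis"] * 3
print(pairwise_tukeyhsd(values, groups, alpha=0.05))   # pairwise group comparisons
```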
Total phenol content
Thai honey, bee pollen and propolis were analyzed by the proposed procedure and the results were expressed as mg GAE/g. Polyphenols were found in our bee products. The values of their total phenol content are shown in Table 1, and they are in the following decreasing order: propolis extract > pollen extract > honey. Buratti et al. (2007) also found a similar trend for bee products. The TPC of our propolis (237.18 mg GAE/g sample) agreed with values obtained by Ahn et al. (2004) in Korean propolis (85-283 mg GAE/g) and by Moreira et al. (2008) in Portuguese propolis (151-329 mg GAE/g). However, the TPC of our propolis was higher than the values reported by Siripatrawan et al. (2013) and Kumazawa et al. (2004) in Thai propolis, 22.8-77.5 mg GAE/g and 31.2 mg GAE/g, respectively, and by Choi et al. (2006) in Brazilian propolis (120 mg GAE/g). On the other hand, the TPC of Chinese propolis showed slightly higher values, ranging from 262 to 299 mg GAE/g (Kumazawa et al., 2004). Meanwhile, the TPC of our bee pollen extracts (24.22 mg GAE/g) was in agreement with those of bee pollen reported from Portugal (25.3-28.8 mg GAE/g) and Spain (18.6-32.2 mg GAE/g) (Pascoal et al., 2014). The TPC of our honey (0.57 mg GAE/g) was comparable to honey collected in Thailand (0.23-0.73 mg GAE/g) by Jantakee and Tragoolpua (2015) and in Portugal (0.23-0.73 mg GAE/g) by Ferreira et al. (2009). However, this value was lower when compared to honey from Brazil (1.05 mg GAE/g) (Sant'ana et al., 2014). The variation of TPC among bee products from various origins could be attributed to climate and environmental factors such as humidity, temperature and soil composition.
Antioxidant activity
Antioxidants from natural sources are attractive alternatives to synthetic antioxidants. Antioxidants can be used to prevent diseases and the oxidation of food products (Morais et al., 2011). Given the complex nature of natural antioxidants, Sakanaka and Ishihara (2008) suggested that the use of at least two methods is recommended to evaluate and compare the antioxidant capacity of a sample. In this research, we used procedures based on the reduction of DPPH and ABTS, both stable free radicals, to investigate the free radical-scavenging activity of the bee products. The DPPH and ABTS assays have been widely employed to determine the free radical scavenging ability of a variety of natural antioxidants (Chen et al., 2013; Lachman et al., 2010; Siripatrawan et al., 2013). The underlying mechanisms can be represented as Reactions 1 and 2, respectively (Boligon et al., 2014). In the DPPH assay, the purple DPPH• is reduced by hydrogen donation from the antioxidant to the pale yellow DPPH-H. In the ABTS scavenging process, ABTS•+ is first generated by reacting a strong oxidizing agent, potassium persulfate, with the ABTS salt; the blue-green ABTS•+ is then converted back to colorless ABTS by hydrogen donation from the antioxidant.
The free radical scavenging activity of the methanolic fractions of the honey, pollen extract and propolis extract was measured at various sample concentrations by the DPPH and ABTS assays and expressed as the EC50 values in Table 1. Ascorbic acid and α-tocopherol, well-known natural antioxidants, were used as standards. The EC50 values calculated from the DPPH and ABTS assays for the bee products ranged between 0.159-286.8 mg/mL and 0.059-93.19 mg/mL, respectively. The average EC50 values determined by the ABTS assay were two to three times lower than those determined by the DPPH assay (Lachman et al., 2010). This is probably because the DPPH assay has limitations and shows lower sensitivity to the bee products than ABTS. ABTS•+ is applicable to both hydrophilic and lipophilic antioxidants due to its solubility in both aqueous and organic solvents, while DPPH• is suited to hydrophobic systems since it is only soluble in organic media (Floegel et al., 2011). Therefore, the ABTS method is reactive towards most antioxidants, whereas only some compounds react rapidly in the DPPH assay.
A lower EC50 value indicates a higher antioxidant activity for the product. The antioxidant activity values determined by these two different assays (Table 1) revealed that, among the bee products, propolis had the strongest antioxidant power compared to bee pollen and honey, although it was lower than the standards. This outcome may be attributed to the large concentration of phenolic compounds in propolis (Nagai et al., 2001). The data collected by Buratti et al. (2007) via the DPPH assay showed that, within the Italian bee products, propolis (IC50 = 1.0-2.1 mg/mL) had the highest antioxidant capacity, followed by royal jelly (IC50 = 1.4-2.3 mg/mL) and honey (IC50 = 5.0-15.5 mg/mL). Nagai et al. (2001) studied the anti-oxidative effects of some honeys, royal jelly and propolis from Japan using a lipid peroxidation model. They found that the superoxide scavenging activities of the bee products decreased in the following order: propolis > royal jelly > honey.
Interestingly, the extracts of the bee products that exhibited higher activity were those containing a high phenol level. Propolis was clearly the most active among all the bee product samples, and the antioxidant activity seemed to be related to the total phenol content of the extract. Similar phenomena have been reported for propolis from Korea (Choi et al., 2006), Italian bee products (Buratti et al., 2007) and Thai propolis (Siripatrawan et al., 2013). Flavonoids and phenolic components play an important role in the free radical scavenging capacity of the extract (Graikou et al., 2011). Likewise, the different origins of the extracts may provide different types and contents of phenolic compounds in propolis. Rutin, quercetin and naringenin were found to be the main phenolic compounds in propolis collected from Nan province, Thailand (Siripatrawan et al., 2013). Kumazawa et al. (2004), who studied the antioxidant activity of propolis of various geographic origins, found that propolis contained antioxidative compounds such as kaempferol and phenethyl caffeate showing strong antioxidant activity. Phenols are excellent at reducing the spontaneous autoxidation of organic molecules (Ingold, 1961) through a general class of mechanism called chain-breaking. Chain-breaking antioxidants operate by neutralizing peroxyl radicals to stop chain propagation: to inhibit the oxidation, an H atom from the phenol is transferred to the chain-carrying peroxyl radical (ROO•), as exemplified in Reaction 3 (Foti, 2007). The phenoxyl radical (Phenol-O•) generated as a product is normally unreactive towards oxygen (O2) and substrates (RH) (Reactions 4 and 5), which reduces the rate of the oxidation reaction (Ingold, 1961). The Phenol-O• is then consumed via bimolecular self-reaction or by reaction with another ROO• radical (Reactions 6 and 7). Figure 1 shows the correlation of the total phenol content with the antioxidant activity, expressed as 1/EC50 values measured from the DPPH assay of the tested bee products. Correlations for some natural products from previous studies (Barreira et al., 2008; Harzallah et al., 2016; Phomkaivon & Areekul, 2009) are also shown (Figure 1). The antioxidant activity of the products correlated with the total phenolic contents. Ferreira et al. (2009) tested honey from Northeast Portugal and found that higher antioxidant contents and lower EC50 values for antioxidant activity were obtained in the darker honey, which contained higher total phenolics. In the study of Moreira et al. (2008), propolis from the northeast and centre of Portugal was analyzed; lower EC50 values in the DPPH scavenging assay were obtained for the northeast of Portugal, which could be related to the higher total phenol content. However, a strong relation between the phenolic compounds and antioxidant activity was not found for the bee pollen studied by Pascoal et al. (2014) and Morais et al. (2011), and they did not give any reason. Previous studies have presented the effect of phenolic compounds on the antioxidant activity of other natural products. He et al. (2015) studied the antioxidant activity of Pyrus pashia flowers in China and found that the antioxidant effect of P. pashia flowers was related to the phenolics content. Barreira et al.
(2008) determined the antioxidant activity and polyphenol content of extracts from various parts of chestnut, such as flowers, leaves and fruits. They found that chestnut flowers and leaves presented very good antioxidant activity, while chestnut fruits showed the highest EC50 values. Their results are in agreement with the phenol contents determined for each sample. In this work, we likewise show a correlation of the EC50 value with the total phenolic contents, as illustrated in the sketch below.
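A quick way to quantify the relationship plotted in Figure 1 is a correlation between TPC and 1/EC50; the sketch below does this with Pearson's r on values partly taken from Table 1 and partly invented, so the output is illustrative only.

```python
import numpy as np
from scipy import stats

# (TPC, EC50) pairs for honey, bee pollen and propolis. TPC values are those quoted
# in the text; the EC50 values are placeholders chosen within the reported range.
tpc_mg_gae_per_g = np.array([0.57, 24.22, 237.18])
ec50_mg_per_ml   = np.array([286.8, 20.0, 0.159])

antiradical_power = 1.0 / ec50_mg_per_ml            # higher = stronger activity
r, p = stats.pearsonr(tpc_mg_gae_per_g, antiradical_power)
print(f"Pearson r = {r:.3f}, p = {p:.3f}")
# With only three products the p-value carries little weight; this simply shows
# how the correlation underlying Figure 1 can be computed.
```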
EC 50 prediction using statistical models
EC50 is an important parameter for evaluating the antioxidant activity of materials and can be used to compare the antioxidant capacity of various materials. The EC50 can be determined by interpolating data from an appropriate curve or by non-linear regression of the data using different models (Chen et al., 2013). Various models can be used to determine EC50. In this work, we fit the experimental data to the logistic, Boltzmann (sigmoidal), log(agonist) vs. normalized response-variable slope (dose-response), and asymmetric sigmoidal (five parameter, 5P) models to predict EC50.
Figure 2 shows the effect of different concentrations of honey, bee pollen and propolis in the free radical scavenging tests: (a) DPPH assay and (b) ABTS assay. The data were fitted with the sigmoidal model, as an example. The results showed that the relationship between radical inhibition and the logarithm of the bee product concentration is not a straight line but a sigmoidal, or S-shaped, curve. Four mathematical models, namely logistic, sigmoidal, dose-response and 5P, were selected to fit the curves and estimate the EC50 of the bee products. The results indicated that these four mathematical models could be used to fit our antioxidant data sets and provide EC50 values. The EC50 values of the same sample did not differ greatly among the four models (Table 2), which might be because they are all log-logistic based equations. For the DPPH assay, no statistical differences were found between the EC50 of each model and that of the right-angled triangle method (simple method) for honey (P>0.05). For bee pollen, the dose-response and 5P models showed no significant difference between their EC50 values and that of the simple method (P>0.05), while the logistic and sigmoidal models showed the opposite result (P<0.05). Significant differences were found between the EC50 values of propolis estimated by the four models and that of the simple method (P<0.05); however, the EC50 value obtained from the 5P model was the closest to the simple method. The ABTS-assay estimates of the EC50 of the bee products are also presented in Table 2. The results of honey for the ABTS assay are similar to those for the DPPH assay (P>0.05). For bee pollen, there was no statistically significant difference between the EC50 from the dose-response and 5P models and that of the simple method (P>0.05). For propolis, only the dose-response model showed no significant difference between its EC50 and that of the simple method (P>0.05). These results indicated that it might be better to use the dose-response and 5P models for prediction of the EC50 of the bee products via the DPPH assay, and the dose-response model via the ABTS assay. The reason why the 5P model was more appropriate than the logistic and sigmoidal models in estimating EC50 for the DPPH assay might be the number of parameters in its log-logistic form. The dose-response model may have been more appropriate than the other models for the ABTS assay because normalized responses are used in its equation. Dose-response, a log-logistic model, has four parameters like the logistic and sigmoidal models, but the response is normalized to run from 0% to 100%. This model assumes that the data have been normalized and thus forces the curve to run between 0 and 100%; the EC50 is then read off as the concentration giving a response equal to 50%. Non-linear modeling with data normalization and constraints has been found to produce more sigmoidal curves than non-linear modeling without data manipulation (Wenner et al., 2011). Wenner et al. (2011) reported that normalizing and constraining parameters increased statistical power and minimized the need to exclude data because of poor curve fitting, although the overall interpretation of the two modeling approaches, with and without normalization, was similar. A fitting sketch along these lines is given below.
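As an illustration of the non-linear fitting step, the sketch below fits hypothetical inhibition data with the normalized dose-response model in scipy and reads off the EC50; it is not a re-analysis of the authors' data.

```python
import numpy as np
from scipy.optimize import curve_fit

def dose_response(logc, log_ec50, hillslope):
    """Normalized log(agonist) vs response model: y runs from 0 to 100 %."""
    return 100.0 / (1.0 + 10.0 ** ((log_ec50 - logc) * hillslope))

# Hypothetical inhibition data for one extract (concentrations in mg/mL).
conc  = np.array([0.25, 0.5, 1.0, 2.0, 4.0, 8.0, 16.0])
inhib = np.array([6.0, 13.0, 25.0, 44.0, 63.0, 81.0, 92.0])

popt, pcov = curve_fit(dose_response, np.log10(conc), inhib,
                       p0=[np.log10(2.0), 1.0])          # initial guesses
log_ec50, hillslope = popt
perr = np.sqrt(np.diag(pcov))                             # standard errors

print(f"EC50 = {10 ** log_ec50:.2f} mg/mL, Hill slope = {hillslope:.2f}")
print(f"standard errors (log EC50, slope): {perr.round(3)}")
```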
Antimicrobial ability
S. aureus is a gram-positive bacterium and an important cause of gastroenteritis resulting from the consumption of contaminated food (Loir et al., 2003). B. cereus is a gram-positive bacterium, a common soil saprophyte, and is easily spread to many types of foods (Granum & Lund, 1997). E. coli is a gram-negative bacterium that can be found in contaminated water or food, especially raw vegetables and raw meat products (Siripatrawan et al., 2013). It has been identified as a particularly dangerous pathogen due to its resistance to many commonly used antibiotics (Rahman et al., 2010). These three bacteria are commonly recognized as causes of food poisoning or food spoilage.
The MIC values of Thai honey, bee pollen and propolis against S. aureus, B. cereus and E. coli are listed in Table 3. A turbidity assay was used to identify the growth of microorganisms compared to the positive control. Each bee product tested inhibited bacterial growth. The MIC values of honey were 340 mg/mL for S. aureus and B. cereus, and 680 mg/mL for E. coli. Concentrations of 153.71-614.83 mg/mL of pollen extract inhibited the growth of these three microorganisms. The MIC values of the propolis were 1.88, 0.94 and 3.75 mg/mL against S. aureus, B. cereus and E. coli, respectively. The MBC values of the three types of bee products against the microorganisms grown on NA plates are also listed in Table 3. Digital photographs of the MBC determination for honey, bee pollen and propolis against S. aureus, B. cereus and E. coli are shown in Figures 3, 4 and 5, respectively. NA plates streaked from positive control tubes show the appearance of colonies of S. aureus (Figure 3d), B. cereus (Figure 4d) and E. coli (Figure 5d). The MBC assays showed growth, no growth, or inhibition of growth following the bacterial streaks (Figures 3-5). Honey was lethal to B. cereus and E. coli at the same concentration (680 mg/mL), but did not kill S. aureus. A concentration of 614.83 mg/mL of the pollen extract was effective in killing S. aureus, while only 307.42 mg/mL was sufficient to kill B. cereus and E. coli. The MBC of the propolis extract was 15.01 mg/mL for all three bacteria, S. aureus, B. cereus and E. coli. The propolis extract solution thus had the lowest MIC and MBC values for these bacteria. The MIC and MBC techniques are simple and easily used to investigate the inhibitory doses of antibiotics or disinfectants for particular bacteria (Rahman et al., 2010). In this work, honey, pollen and propolis demonstrated antibacterial activity against S. aureus, B. cereus and E. coli via the MIC and MBC techniques; however, honey did not show an MBC against S. aureus. The antimicrobial activity of honey is due to its osmotic effect, natural acidity, hydrogen peroxide, phenolic acids, flavonoids and lysozyme (Boukraâ et al., 2013). The antimicrobial activity of pollen is attributed to its phenolic compounds (Boukraâ et al., 2013), and the antimicrobial activity of propolis is caused by phenolic compounds such as flavonoids (Boukraâ et al., 2013). Different antimicrobial agents possess different mechanisms.
The antimicrobial mechanism of honey, bee pollen and propolis involves degrading the cytoplasmic membrane of the bacteria (Bellik & Boukraâ, 2012). This leads to a loss of potassium ions, and the damage caused provokes cell autolysis (Bellik & Boukraâ, 2012). Quercetin, a flavonoid found in both honey and propolis, can increase the membrane permeability of bacteria and dissipate the membrane potential (Mirzoeva et al., 1997). This makes the bacteria lose their motility, membrane transport and capacity to synthesize adenosine triphosphate (ATP). The present study revealed that these bee products seemed to inhibit gram-positive bacteria more than gram-negative bacteria. This is in agreement with the Brazilian propolis studied by Schmidt et al. (2014). In general, plant extracts have a higher activity against gram-positive bacteria than gram-negative bacteria (Rahman et al., 2010). Gram-negative bacteria are more resistant than gram-positive bacteria because they have more complex chemical structures (Morais et al., 2011). This bacterial group has a polysaccharide as one of the components of the cell wall, which is involved in the antigenicity, toxicity and pathogenicity of the microorganisms. Furthermore, gram-negative bacteria possess a higher lipid content than gram-positive bacteria (Morais et al., 2011). This lipid is a component of an endotoxin, which is responsible for toxicity in the cell wall of gram-negative bacteria.
Graikou et al. (2011) reported that the MIC values of a Greek pollen-methanol extract were 0.74 and >10 mg/mL against S. aureus and E. coli, respectively. Morais et al. (2011) found that honeybee-collected pollen from Portuguese Natural Parks gave MICs of 0.17% (w/v) for B. cereus, 0.21% (w/v) for S. aureus, and <5% (w/v) for E. coli. Choi et al. (2006) verified that propolis from Korea had much more powerful antimicrobial activity than propolis from Brazil. Rahman et al. (2010) investigated propolis and honey from Canada against S. aureus and E. coli. Propolis and honey concentrations of 2.74-5.48 and 375.0 mg/mL, respectively, could inhibit S. aureus. For E. coli, negative growth was found with propolis at a concentration of only 5.48 mg/mL, but honey was not effective. These MIC and MBC values differ from the MIC and MBC values found to be active in this work. These variations in the antimicrobial properties of honey, bee pollen and propolis may be due to the different plants growing in the areas where the bees live. In addition, the results showed that the antimicrobial activity of the bee products in this study was related to the total phenol content of the extracts, as shown in Table 1. This is because phenolic compounds are the main sources of the antimicrobial action of honey, bee pollen and propolis. Other work on bee products by Choi et al. (2006) and Morais et al. (2011) is also in agreement. Miorin et al. (2003) suggested that the effectiveness of honey or propolis depends on differences in chemical composition, bee species and geographic region. Other factors, such as the nature of the phenolic fraction, might be involved (Morais et al., 2011), and these should be studied further.
CONCLUSIONS
This work emphasized the value of honey, bee pollen and propolis from Thailand as important sources of natural antioxidants and antimicrobials. Among these bee products, propolis possessed the most powerful anti-free-radical and antibacterial activities, followed by bee pollen and honey, respectively. Phenolic compounds play an important role in their strong effectiveness. For prediction of the EC50, four mathematical models (logistic, sigmoidal, dose-response and 5P) could be applied to calculate EC50 values. Among these four models, dose-response and 5P gave the results closest to the right-angled triangle method used as a reference. We therefore recommend the dose-response and 5P models as effective methods for curve fitting and prediction of the EC50 via the DPPH and ABTS assays. In future work, we are interested in incorporating the bee products into packaging materials in order to extend the shelf life of the prospective product.
Fig. 2 :
Fig. 2: Effect of different concentration of honey, bee pollen and propolis in free radical scavenging tests: (a) DPPH assay and (b) ABTS assay.Data are fitted with sigmoidal model.
Fig. 3 :
Fig. 3: Digital photographs of the MBC of (a) honey, (b) bee pollen and (c) propolis against S. aureus, and (d) control.
Table 1 :
Total phenol content and antioxidant activity determined by DPPH and ABTS assays of bee product samples.
Table 3 :
MIC values (mg/mL) and MBC values (mg/mL) of Thai bee products against foodborne microorganisms. Abbreviations: N.D., without efficacy.
"Environmental Science",
"Mathematics"
] |
Electrofusion Stimulation Is an Independent Factor of Chromosome Abnormality in Mice Oocytes Reconstructed via Spindle Transfer
Oocytes reconstructed by spindle transfer (ST) are prone to chromosome abnormality, which is speculated to be caused by mechanical interference or premature activation, the mechanism is controversial. In this study, C57BL/6N oocytes were used as the model, and electrofusion ST was performed under normal conditions, Ca2+ free, and at room temperature, respectively. The effect of enucleation and electrofusion stimulation on MPF activity, spindle morphology, γ-tubulin localization and chromosome arrangement was compared. We found that electrofusion stimulation could induce premature chromosome separation and abnormal spindle morphology and assembly by decreasing the MPF activity, leading to premature activation, and thus resulting in chromosome abnormality in oocytes reconstructed via ST. Electrofusion stimulation was an independent factor of chromosome abnormality in oocytes reconstructed via ST, and was not related to enucleation, fusion status, temperature, or Ca2+. The electrofusion stimulation number should be minimized, with no more than 2 times being appropriate. As the electrofusion stimulation number increased, several typical abnormalities in chromosome arrangement and spindle assembly occurred. Although blastocyst culture could eliminate embryos with chromosomal abnormalities, it would significantly decrease the number of normal embryos and reduce the availability of embryos. The optimum operating condition for electrofusion ST was the 37°C group without Ca2+.
INTRODUCTION
Spindle transfer (ST) is considered to be the most valuable therapeutic strategy for mitochondrial diseases and senile infertility, especially those with aging oocytes. Electrofusion ST has become the preferred method in mitochondrial replacement technology, because it doesn't involve exogenous substances (1). Due to chromosome abnormalities in some of the reconstructed oocytes, the efficiency of ST technology is low. It is speculated that ST may cause mechanical interference in the spindle. Since the spindle along with chromosomes is not membrane-wrapped (2)(3)(4)(5)(6)(7)(8)(9), enucleation or electrofusion stimulation may disrupt the function of the cytoskeleton, which may lead to abnormal chromosome segregation when the reconstructed oocyte is activated by subsequent fertilization (7,8,10,11). It is also suspected that premature activation may lead to abnormal chromosome segregation (4), but this remains controversial.
At present, there is little research on the effective inhibition of chromosome abnormality in oocytes reconstructed via spindle transfer. Daniel Paull (4) suspected that temporary room temperature treatment is beneficial for maintaining chromosome stability, possibly because it makes the spindle disappear temporarily and thereby inhibits premature activation, but it remains unknown whether there is a correlation between the temporary disappearance of the spindle and the inhibition of premature activation. Oocytes are particularly sensitive to temperature: cooling treatment for more than 10 minutes may cause irreversible spindle damage (12)(13)(14), and a change in incubation temperature of as little as 0.5°C significantly affects mouse embryo development (15). In addition, a slight increase in incubation temperature may promote tubulin assembly, enhance spindle birefringence and make the spindle clearer under the microscope (16). With increasing temperature or prolonged exposure to high temperature, spindle microtubule aggregation occurs; the spindle disappears but does not reappear after the temperature returns to normal, causing irreversible effects on oocytes (16)(17)(18). Furthermore, studies have shown that the intracellular calcium oscillations triggered by sperm penetration during fertilization (19) and the Ca2+ influx induced by mechanical or chemical manipulation (20) can both lead to decreased kinase activity, activation of oocytes and resumption of meiosis (21). Thus, some studies have speculated that ST manipulation in a Ca2+-free medium may avoid spontaneous activation, but this has not been confirmed (3,22).
In addition, MPF plays an important role in oocyte activation (23,24). When oocytes are fertilized or parthenogenetically activated, the Ca2+ concentration increases instantaneously and cytostatic factor (CSF) expression decreases, resulting in the decrease or even disappearance of MPF activity and in chromosome segregation, prompting oocytes to enter meiotic anaphase II (25). Moreover, premature activation in somatic cell nuclear transfer (SCNT) reconstructed embryos leads to abnormal spindles and chromosomes, as well as altered expression of spindle-related proteins (26,27). γ-tubulin is an important regulatory protein involved in microtubule nucleation and spindle assembly that is located at the poles of the spindle in MII oocytes. If abnormal, γ-tubulin dissociates from the poles and becomes irregularly scattered in the spindle microtubules or in the cytoplasm (24).
In this study, C57BL/6N oocytes were used as the model, and electrofusion ST was performed under normal conditions, under Ca2+-free conditions, and at room temperature, respectively. The effects of enucleation and electrofusion stimulation on MPF activity, spindle morphology, γ-tubulin localization and chromosome arrangement were compared to verify whether and when premature activation occurs and to clarify the factors and mechanism of chromosome abnormality in mice oocytes reconstructed via ST, which would help optimize ST technology and promote its clinical translation.
Oocyte Retrieval and Culture
C57BL/6N mice (female, 6-8 weeks old; male, 7-8 months old) were purchased from Beijing Vital River Laboratory Animal Technology Co. Ltd. This study was reviewed and approved by the Institutional Animal Care and Use Committee of the Sixth Medical Center of China PLA General Hospital (HZKY-PJ-2019-3). The number of mice, oocytes and replications used in each group in this study were shown in Supplemental Table S1. Female mice were administered 10 IU of pregnant-mare serum gonadotropin (PMSG) and 48 h later 10 IU of human chorionic gonadotropin (HCG) (28). To avoid the effects of anesthesia, euthanasia was performed via cervical dislocation 14-16 hr after HCG injection, and the oviducts were isolated. Following the removal of cumulus cells with 40 IU hyaluronidase, MII oocytes were collected and incubated in a fertilization medium (FM) under liquid paraffin oil in a 37°C, 6% CO 2 , 5% O 2 humidified incubator (29).
Spindle Transfer
Oocytes were exposed to gamete buffer with 7.5 μg/ml Cytochalasin B (CB) for 5 min at 37°C before manipulation. The dish was then placed onto the warm stage of an Olympus IX71 inverted microscope equipped with micromanipulators. A slot was made in the zona pellucida using the Saturn Active Laser System (RI, Saturn Active, 6-47-500, UK) with several pulses of 100-200 ms. The spindle was then gently aspirated into the micromanipulation needle and transferred into the perivitelline space of an enucleated donor cytoplast. After that, the reconstructed oocytes were transferred into CB-free gamete buffer and incubated for 10 min in a 37°C humidified incubator, as shown in Supplemental Figure S1.
Notes: in the 37°C treatment group and the 25°C treatment group, the ST reconstructed oocytes were treated after enucleation at 37°C or 25°C, respectively, for 5 min before electrofusion.
Electrofusion
Membrane fusion between the spindle and the donor cytoplast was initiated by placing it into BTXpress Cytofusion Medium C between gold electrodes (BEX, LF501G1, Japan). Different electrical pulse in each group listed in Table 1 was delivered by an Electro Cell Fusion System (BEX, CFB16-HB, Japan) at room temperature. The reconstructed oocytes were then washed twice and transferred to FM for 20-30 min to check the fusion status.
Notes: in the Ca2+ group, the operating media (the gamete buffer used in the spindle transfer process and the Cytofusion Medium used in the electrofusion process) both contained Ca2+. In the Ca2+-free group, the operating medium used in the spindle transfer and electrofusion processes (G-PGD) contained no Ca2+. The operating media used in the spindle transfer and electrofusion processes in all other experiments were the same as those in the Ca2+ group.
Fertilization and Culture
After successful fusion, the reconstructed oocytes were transferred into FM and co-incubated with sperm that had been obtained from the cauda epididymis of C57BL/6N males and cultured in a sperm medium for 1 h. Eight hours later, the zygotes were transferred into a cleavage medium for 2 days and then into a blastocyst medium for a further 2 days.
MPF Assay Procedure
The oocytes in each group were washed 3 times in Ca2+-free PBS with 0.1% PVA, placed in tubes containing 15 μl of radio-immunoprecipitation assay (RIPA) buffer containing a protease inhibitor cocktail tablet (Roche), vortexed on ice for 4-5 min, and then centrifuged at 4°C at 12,000 rpm for 15 min. The supernatant was collected and stored at -20°C until use. Assays of the MPF level were performed using the Mouse MPF ELISA kit (DOGESCE, China) following the manufacturer's protocol.
Immunofluorescence Staining
Immunofluorescence staining followed the methods used by Zi-Yun Yi et al. (24). In brief, the oocytes in each group were fixed in 4% paraformaldehyde in PBS with 0.5% Triton X-100 for 1 h at 4°C, followed by blocking in 3% BSA for 1 h at 37°C. Thereafter the oocytes were incubated with mouse monoclonal anti-γ-tubulin antibody (4D11, MA1850, Invitrogen, 1:30) overnight at 4°C. After two washes (10 min each) in a washing buffer (0.1% PVA in PBS), the oocytes were labeled with Goat Anti-Mouse IgG H&L (DyLight® 594, ab96881, Abcam, 1:30) for 1 h at 37°C. After two washes, the oocytes were stained with monoclonal anti-β-tubulin-FITC (F2043-2ML, Sigma, 1:30) for 1 h at 37°C, then co-stained with DAPI for 10 min at room temperature, followed by two more washes. Finally, the oocytes were mounted on glass slides with an antifade mounting medium (Sigma) and visualized with a confocal laser-scanning microscope (Nikon Ti2, Japan).
Karyotype Analysis
After 4 days of culture, blastocysts were washed 2-3 times in PBS, and each blastocyst was then transferred into a labeled centrifuge tube containing 2 μl of PBS. After brief centrifugation, the samples were immediately transferred to a freezer and stored at -80°C. All samples were then sent for genetic testing.
Statistical Methods
At least three replications were performed for each treatment, and results obtained in different replications were pooled and analyzed together. The data were analyzed with SPSS 23.0 statistical software, and GraphPad Prism 8.0 was used for plotting. Enumeration data, such as oocyte/embryo proportions, were expressed as percentages (%) and compared between groups by the chi-square test; measurement data, such as the MPF activity, were expressed as mean ± standard deviation and analyzed by univariate ANOVA. P<0.05 indicated a statistical difference, P<0.01 a significant statistical difference, P<0.001 an extremely significant statistical difference, and P>0.05 no statistical difference.
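The chi-square comparison of proportions (for example, fusion rates between two groups) can be reproduced from the raw counts; the 2x2 table below uses hypothetical counts, not the study's data.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical 2x2 table: fused vs not-fused oocytes in two treatment groups.
#                  fused   not fused
table = np.array([[78,      12],        # e.g. Ca2+-free group
                  [55,      35]])       # e.g. Ca2+ group

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
# P < 0.05 would be reported as a significant difference between the two groups.
```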
Electrofusion Stimulation Rather Than Enucleation, Was the Key Factor Causing Premature Activation in Mice ST Reconstructed Oocytes
To investigate whether and when premature activation occurred in the ST process, we first detected MPF activity in each procedure, including MII oocytes (Ctrl), reconstructed oocytes before electrofusion (pre-ST), unfused ST reconstructed oocytes (Unfused ST) and fused ST reconstructed oocytes (ST), with the results shown in Figure 1A. There was no significant difference in MPF activity between the Ctrl and pre-ST groups (P=0.3421). Compared with Ctrl, the MPF activity was significantly decreased in Unfused ST (P=0.0107) and ST (P=0.0012), and the decline was most significant in ST. Meanwhile, MPF activity in ST was also significantly lower than that in pre-ST (P=0.0097), indicating that electrofusion stimulation significantly reduced the MPF activity in ST reconstructed oocytes. Additionally, the effects of enucleation and electrofusion stimulation on the chromosomes, spindle morphology and γ-tubulin were compared, as shown in Figures 1B-E. In the Ctrl and pre-ST groups, the chromosomes of the oocytes were all arranged in the center of the spindle, and the spindle was normal, forming a typical bipolar, symmetrical, spindle-shaped structure (P=0.895); γ-tubulin was located at both poles of the spindle (P=0.808). Conversely, the chromosomal nondisjunction rate (CN), normal spindle morphology rate (NSM) and normal γ-tubulin rate (NR) in Unfused ST and ST were all significantly lower than those in pre-ST (P<0.01 for each). There was no significant difference in CN (P=0.533), NSM (P=0.557) or NR (P=0.414) between Unfused ST and ST. Thus electrofusion stimulation, whether fusion occurred or not, rather than enucleation, caused reduced MPF activity, abnormal chromosome behavior and disrupted spindle organization during meiosis, indicating that premature activation occurred in ST reconstructed oocytes after electrofusion stimulation.
Furthermore, in Figure 2B, the MPF activity in SEF (P=0.0350) and DEF (P=0.0326) was statistically lower than that in Ctrl, with no statistical difference between SEF and DEF (P>0.05). Compared with SEF (P=0.0020), DEF (P=0.0021) and Ctrl (P<0.0001), MPF activity in TEF was the lowest one. The immunofluorescence staining shown in Figures 2C, D indicated that CN (P TEF/SEF <0.01, P DEF/TEF < 0.01, P DEF/SEF =0.812), NSM (P TEF/SEF <0.01, P DEF/TEF <0.01, P DEF/SEF =0.679) and NR (P TEF/SEF <0.01, P DEF/TEF <0.01, P DEF/SEF =0.772) in SEF and DEF were all significantly higher than that in TEF, with no statistical difference between the two groups. We concluded that single or double electrofusion had little effect on mice oocytes reconstructed via ST, while triple electrofusion might have a negative effect, and that a threshold might exist in MPF inactivation.
In addition, as the electrofusion stimulation number increased, several typical abnormalities in chromosome arrangement and spindle assembly occurred, especially in the TEF group, as shown in Figure 3. Chromosome abnormalities mainly included misaligned chromosomes in the metaphase-plate region of the spindle and disrupted chromosomes spread throughout the whole spindle region, with others at the poles of the spindle. There were several types of aberrant spindle organization, including fractured spindle microtubules, disordered arrangement, broadened spindles, over-elongated spindles, the absence of spindles, and abnormal spindle poles, including spindles with no poles, monopoles and multipoles. The localization of γ-tubulin, an important regulator of spindle organization at the spindle poles, was also disrupted, with γ-tubulin dissociated from the poles of the spindle and irregularly scattered in the spindle microtubules or in the cytoplasm.
Embryos in the Ctrl, MII-SEF, MII-DEF, and MII-TEF groups were cultured for 4 days. Figure 5A (with Supplemental Table S2 and Figure S2) showed that there was no statistical difference in the chromosome abnormality rate in blastocysts among the 4 groups (P MII-TEF/Ctrl =1.000, P MII-TEF/SEF =0.801, P MII-TEF/DEF =0.744, P MII-Ctrl/SEF =0.801, P MII-Ctrl/DEF =0.744, P MII-SEF/DEF =0.941). It can be seen that blastocyst culture could eliminate embryos with chromosomal abnormalities in the MII-TEF group, but it would significantly decrease the number of normal embryos and reduce the availability of embryos.
FIGURE 2 | (A) The fusion rate in DEF and TEF was significantly higher than that in SEF. The blastocyst rate in TEF decreased significantly, with no significant difference between Ctrl, SEF and DEF. SEF, DEF, and TEF are the single, double, and triple electrofusion groups, respectively. Data are expressed as percentages (%). *P<0.05, ***P<0.001, ns P>0.05. (B) Single or double electrofusion had little effect on the MPF activity, while triple electrofusion resulted in a very significant decrease. Data are expressed as mean ± standard deviation. *P<0.05, ****P<0.0001, ##P<0.01, ns P>0.05. Value, the MPF activity per 100 oocytes (ng/ml). (C) Immunofluorescence staining indicated that CN, NSM, and NR in TEF were significantly lower than those in SEF and DEF, with no statistical difference between SEF and DEF. Data are expressed as percentages (%). **P<0.01, ns P>0.05. (D) Representative images: SEF and DEF were both normal in terms of chromosomes, spindle morphology and γ-tubulin location; in the TEF group, the chromosomes were separated prematurely, the spindle morphology was abnormal, and γ-tubulin localization was disordered. Scale bar, 20 μm.
Transient Room Temperature Treatment After Enucleation Did Not Inhibit Premature Activation
Next, the effects of temperature (37°C vs. 25°C) on mice oocytes reconstructed via ST were compared, with the results shown in Figure 6. The culture results showed that transient room temperature treatment after enucleation had an adverse effect on fertilization (Figure 6A). There was no statistical difference in the fusion rate (P=0.442), cleavage rate (P=0.446), blastocyst rate (P=0.879) or hatching blastocyst rate (P=0.093) between the two groups, while the fertilization rate at 37°C was significantly higher (76.62% vs 62.03%, P=0.014). Moreover, the immunofluorescence staining results (Figures 6B, C) indicated that no statistical difference existed between the two groups in terms of CN (P=0.886). NSM (P=0.001) and NR (P<0.001) at 37°C were both significantly higher than those at 25°C, indicating that transient room temperature treatment after enucleation did not inhibit premature activation and might affect spindle function in ST reconstructed oocytes, reducing their fertilization rate.
FIGURE 3 | As the electrofusion stimulation number increased, several typical abnormalities in the chromosome arrangement and spindle assembly were generated, especially in TEF. Representative immunofluorescence images are shown. (A) The spindle was significantly wider, while the chromosomes and γ-tubulin localization were normal. (B) Chromosomes were multilaterally arranged outside the spindle, and microtubules were disordered, with disordered γ-tubulin localization, forming a multipolar spindle. (C) Chromosome distribution was disordered, some in the center of the spindle and others at one pole; microtubules at one pole of the spindle were fractured, with γ-tubulin dissociated from the poles and irregularly scattered in the spindle microtubules or in the cytoplasm, forming a unipolar spindle. (D) Chromosomes were disorderly arranged throughout the whole spindle region; the spindle microtubules were fractured and disordered, with γ-tubulin dissociated from the poles and dispersed irregularly on the microtubules, forming a poleless spindle. (E) The arrangement of the microtubules was disrupted, with the chromosomes disorderly arranged in the center of the spindle. (F) The spindle was over-elongated. Scale bar, 20 μm.
A Ca 2+ Free Manipulation Medium Did Not Inhibit Premature Activation
Afterwards, we performed electrofusion ST in a Ca2+-free medium to explore whether premature activation was inhibited; the results are shown in Figure 7. Interestingly, the fusion rate of the Ca2+-free group was significantly higher (P<0.001), while there was no statistical difference in the fertilization rate (P=0.121), cleavage rate (P=0.166), blastocyst rate (P=0.674), or hatching blastocyst rate (P=0.955) (Figure 7A). Moreover, the MPF activity, as shown in Figure 7B, decreased significantly in the Ca2+ group (P=0.0211), with no statistical difference between the Ca2+ group and the Ca2+-free group (P=0.8060), or between the Ctrl group and the Ca2+-free group (P=0.1405). Additionally, no statistical difference existed in terms of CN (P=0.469), NSM (P=0.789), or NR (P=0.820) between the Ca2+ group and the Ca2+-free group (Figure 7C), indicating that a Ca2+-free manipulation medium did not inhibit premature activation, and that extracellular Ca2+ might not be the key factor causing calcium oscillations in ST reconstructed oocyte activation.
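For orientation only, a comparison of two such proportions can be reproduced with a contingency-table test. The oocyte counts in the sketch below are made-up placeholders, and the study's own statistical software and exact test are not restated here.

```python
# Hedged sketch of a two-proportion comparison such as the fusion-rate difference above;
# the counts are placeholders, not data from the study.
from scipy.stats import fisher_exact

fused_ca, total_ca = 62, 100            # placeholder: Ca2+ medium group
fused_free, total_free = 88, 100        # placeholder: Ca2+-free medium group

table = [[fused_ca, total_ca - fused_ca],
         [fused_free, total_free - fused_free]]
odds_ratio, p_value = fisher_exact(table)
print(f"fusion rate {fused_ca/total_ca:.0%} vs {fused_free/total_free:.0%}, P = {p_value:.4f}")
```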
DISCUSSION
Mitochondrial diseases and senile infertility, particularly infertility involving aging oocytes, are both closely related to mitochondrial dysfunction (30-39). Current treatment methods mainly include complementary therapies such as supplementation with Coenzyme Q10, NAD, growth hormone, and other substances that can enhance mitochondrial function (40-43), mitochondrial transfer from aged adipose-derived stem cells (44), and autologous mitochondrial transfer (45, 46); these approaches can only temporarily relieve symptoms, while ST can fundamentally eliminate the influence of abnormal mitochondria. Thus, ST is considered to be the most valuable therapeutic strategy for clinical translation. If ST can be used in the clinic, it will bring hope to patients with mitochondrial genetic diseases and to patients with senile infertility, especially those with aging oocytes; the key is to prove the safety and effectiveness of ST technology. Compared with human and non-human primate oocytes, mice oocytes are easier to obtain and can be used for large-scale experiments. Therefore, mice oocytes were used as the model to clarify the factors and mechanisms underlying chromosome abnormality in oocytes reconstructed via ST. In this study, we demonstrated that electrofusion stimulation was an independent factor for chromosome abnormality in mice oocytes reconstructed via ST, and that this abnormality was unrelated to enucleation, fusion status, temperature, and Ca2+. Electrofusion stimulation could induce premature chromosome separation and abnormal spindle morphology and assembly by decreasing MPF activity, leading to premature activation and thus to chromosome abnormality in mice oocytes reconstructed via ST. The optimum operating condition for electrofusion ST was found to be the 37°C group without Ca2+.
In order to explore whether premature activation occurred in the ST process and in which procedure (enucleation or electrofusion stimulation), we first measured MPF activity across the ST process. No significant difference in MPF activity existed between pre-ST and Ctrl, while MPF activity decreased significantly in the Unfused ST and ST groups, with the decline being most pronounced in the ST group. During the enucleation process, chromosomes were arranged in the center of the spindle and the spindle morphology was normal, showing a typical bipolar, symmetrical, spindle-shaped structure, with γ-tubulin located at both poles of the spindle. However, after electrofusion stimulation, CN, NSM, and NR in both the unfused and the fused group were all significantly reduced, which is consistent with previous studies (4, 47, 48). Based on these observations, we inferred that it was electrofusion stimulation, whether fusion occurred or not, rather than enucleation, that caused MPF inactivation and abnormal chromosome behavior and spindle organization during meiosis, leading to premature activation. To further explore the induction of premature activation, we subdivided electrofusion into three groups (SEF, DEF, and TEF). Culture results showed that the fusion rate in DEF and TEF was significantly higher than that in SEF, but in TEF the blastocyst rate decreased significantly, as did MPF activity, CN, NSM, and NR, with no significant difference between Ctrl, SEF, and DEF. This indicates that single or double electrofusion had little effect on mice oocytes reconstructed via ST, while triple electrofusion might have a negative effect, and that a threshold might exist for MPF inactivation. Besides, precise spindle assembly, in particular spindle pole assembly, spindle morphology, and spindle length, is the guarantee for normal chromosome separation (49, 50). As the electrofusion stimulation number increased, several abnormalities were generated, especially in TEF. Thus, increasing the number of electrofusion stimulations can promote the fusion of ST reconstructed oocytes, but the number should be minimized, with no more than 2 stimulations being appropriate.

FIGURE legend | The MPF activity decreased significantly in +Ca2+, with no statistical difference between -Ca2+ and +Ca2+/Ctrl. Data are expressed as mean ± standard deviation. ns P>0.05, *P<0.05. Value, MPF activity per 100 oocytes (ng/ml). (C) There was no statistical difference in CN, NSM, and NR between +Ca2+ and -Ca2+. ns P>0.05. Data are expressed as percentages (%). +Ca2+ represents the Ca2+ medium group, and -Ca2+ represents the Ca2+-free medium group. Representative images are shown in (C): the 37°C images are normal; the middle 25°C images show that, after transient room temperature treatment, the chromosomes were disordered and irregularly localized at the equatorial plate, with abnormal spindle morphology and γ-tubulin localization; in the bottom 25°C images, the chromosomes separated prematurely, the spindle microtubules disappeared, and γ-tubulin aggregated towards the middle of the spindle. 37°C and 25°C respectively represent the 37°C and 25°C treatment groups. Data are expressed as percentages (%). Scale bar, 20 μm.
Next, we directly stimulated MII oocytes with different electrofusion protocols. The results showed that electrofusion stimulation also resulted in premature activation in MII oocytes, especially in the three-shock group (TEF) and the low-intensity multiple-shock group (4*MII-DEF1/3). Thus, premature activation had nothing to do with fusion state, and electrofusion stimulation was the key factor triggering premature activation. In addition, the blastocyst rate was significantly reduced in the MII-TEF group, but there was no statistical difference in the chromosome abnormality rate in blastocysts among the 4 groups. We hypothesized that the blastocyst culture process eliminates embryos with chromosomal abnormalities in the MII-TEF group, which would significantly decrease the number of normal embryos and reduce embryo availability.
To explore whether there was a correlation between the temporary disappearance of the spindle and inhibition of premature activation, ST reconstructed oocytes were treated at 37°C and 25°C for 5 min before electrofusion, respectively, and fused oocytes were used for immunofluorescence staining after recovery for 30 minutes. There was no difference in CN between the two groups, but NSM and NR in the 25°C group were significantly lower, along with the fertilization rate, which indicated that room temperature treatment before electrofusion did not inhibit premature activation, and might affect spindle function in ST reconstructed oocytes.
Furthermore, we investigated the effect of a Ca2+-free medium on electrofusion ST reconstructed oocytes in mice. MPF activity decreased in both the Ca2+ group and the Ca2+-free group, and no statistical difference existed in CN, NSM, and NR. Thus, removal of extracellular Ca2+ did not inhibit premature activation. Studies have also found that the rise in intracellular Ca2+ is caused by intracellular Ca2+ release and that Ca2+ shock waves are not affected by external Ca2+ (3). Oocyte activation cannot be initiated by a single Ca2+ rise, and its propagation is mediated by Ca2+-induced calcium release (CICR) (3). Interestingly, the fusion rate in the Ca2+-free group was significantly higher, and the mechanism behind this requires further study. All the above indicated that a Ca2+-free manipulation medium did not inhibit premature activation, and that extracellular Ca2+ might not be the key factor causing calcium oscillations in oocyte activation. Electrofusion stimulation might induce premature activation in the reconstructed oocytes by changing the open-close state of calcium channels and the regulatory pathway of CICR (51-54).

FIGURE 8 | The optimum operating condition for electrofusion ST technology was the 37°C group without Ca2+. (A) Culture results showed that the fusion rate was highest in 37-, the fertilization rate in 25- was the lowest, and no difference in the fertilization rate existed between 37+, 37-, and 25+. *P<0.05, **P<0.01, ***P<0.001, ns P>0.05. (B) 25°C treatment after enucleation had adverse effects on spindle morphology and γ-tubulin localization in mice oocytes reconstructed via ST. NSM and NR in 37+ and 37- were both significantly higher than those in 25+ and 25-, with no difference between 37+ and 37-. **P<0.01, ***P<0.001, ns P>0.05. 37+, 37-, 25+, and 25- respectively represent the 37°C group containing Ca2+, the 37°C group without Ca2+, the 25°C group containing Ca2+, and the 25°C group without Ca2+. Data are expressed as percentages (%).
To further optimize ST technology, we conducted a cross experiment. The NSM and NR in 37+ and 37- were both significantly higher than those in the other two groups. Meanwhile, the fusion rate was highest in 37-, the fertilization rate in 25- was the lowest, and there was no difference in the fertilization rate between 37+, 37-, and 25+. In addition, there was no statistical difference in developmental potential among the groups. Therefore, the optimum operating condition for electrofusion ST technology was determined to be the 37°C group without Ca2+.
In conclusion, the present study revealed that electrofusion stimulation was an independent factor for chromosome abnormality in mice oocytes reconstructed via ST, and that it was unrelated to enucleation, fusion status, temperature, and Ca2+. Electrofusion stimulation could induce premature chromosome separation and abnormal spindle morphology and assembly by decreasing MPF activity, leading to premature activation, and thus resulting in chromosome abnormality in mice oocytes reconstructed via ST. The optimum operating condition for electrofusion ST was determined to be the 37°C group without Ca2+.
DATA AVAILABILITY STATEMENT
The original contributions presented in the study are included in the article/Supplementary Material. Further inquiries can be directed to the corresponding authors.
Analysis of Binding Determinants for Different Classes of Competitive and Noncompetitive Inhibitors of Glycine Transporters
Glycine transporters are interesting therapeutic targets as they play significant roles in glycinergic and glutamatergic systems. The search for new selective inhibitors of particular types of glycine transporters (GlyT-1 and GlyT-2) with beneficial kinetics is hampered by limited knowledge about the spatial structure of these proteins. In this study, a pool of homology models of GlyT-1 and GlyT-2 in different conformational states was constructed using the crystal structures of related transporters from the SLC6 family and the recently revealed structure of GlyT-1 in the inward-open state, in order to investigate their binding sites. The binding mode of the known GlyT-1 and GlyT-2 inhibitors was determined using molecular docking studies, molecular dynamics simulations, and MM-GBSA free energy calculations. The results of this study indicate that two amino acids, Gly373 and Leu476 in GlyT-1 and the corresponding Ser479 and Thr582 in GlyT-2, are mainly responsible for the selective binding of ligands within the S1 site. Apart from these, one pocket of the S2 site, which lies between TM3 and TM10, may also be important. Moreover, selective binding of noncompetitive GlyT-1 inhibitors in the intracellular release pathway is affected by hydrophobic interactions with Ile399, Met382, and Leu158. These results can be useful in the rational design of new glycine transporter inhibitors with desired selectivity and properties in the future.
Introduction
Glycine is one of the major inhibitory neurotransmitters in the central nervous system (CNS). Once glycine is released from the presynaptic terminals, it binds to ionotropic glycine receptors, resulting in hyperpolarization of the postsynaptic membrane and neuronal inhibition [1]. Thus, glycine regulates the transmission of sensory and pain signals and ensures proper motor activity during movement [2]. In addition, it modulates the activity of the excitatory glutamatergic system by acting as a coagonist of the N-methyl-D-aspartate (NMDA) receptors [3] and consequently influences memory, learning, synaptic plasticity, and neuronal development [4].
The level of glycine in inhibitory and excitatory synapses is regulated by the glycine transporters of the sodium-dependent solute carrier family (SLC6). These transporters symport substrates against their concentration gradient based on the difference between the concentration of sodium and chloride ions on either side of the cell membrane [5]. There are two primary types of glycine transporters: GlyT-1 and GlyT-2 [2]. GlyT-2 transporters are found in the presynaptic terminals of glycinergic neurons located in the spinal cord, brainstem, and cerebellum, while GlyT-1 transporters are found mainly in membranes of glial cells surrounding the synapses of both glycinergic and glutamatergic systems. GlyT-1 transporters are also present near the NMDA receptors in the pre-and postsynaptic membranes of neurons [2]. Therefore, GlyT-1 transporters, in addition to their coincident localization with GlyT-2 transporters, can also be found in other regions of the CNS, such as the hippocampus, striatum, and prefrontal cortex [6].
As glycine transporters modulate the activity of both inhibitory and excitatory pathways, they have long been regarded as important therapeutic targets [6][7][8][9]. Considering the localization of GlyT-1 transporters in the vicinity of NMDA receptors, using the inhibitors of GlyT-1 transporters may be a new approach in the treatment of schizophrenia, which is i.a. characterized by impaired glutamatergic transmission [10]. The increased glycine level in the synaptic cleft after the administration of a GlyT-1 inhibitor leads to the saturation of coagonist binding sites at the NMDA receptors, thus facilitating their activation. In addition, modulating the NMDA receptor activity indirectly affects the mesolimbic dopaminergic transmission, which may be beneficial in the treatment of drug addiction, including alcoholism [9,10]. On the other hand, enhancement of glycinergic transmission by GlyT-1 inhibitors may aid in the treatment of epilepsy or neuropathic pain [6,7,9]. In the case of neuropathic pain, the approach of blocking GlyT-2 transporters seems to be of higher potential. Due to the restrictive localization of GlyT-2, which is largely limited to the structures responsible for nociception, i.e., the spinal cord and brainstem, GlyT-2 inhibitors appear to be safer and more advantageous [11]. It is worth noting that the complete impairment of GlyT-2 function is one of the causes of hyperekplexia-a serious hereditary neurological disease [12]. This disease is characterized by tremors, spasms, and episodes of apnea triggered by sudden and unexpected sensory stimuli. However, animal studies and subsequent clinical trials in humans have shown that a low therapeutic dose of GlyT-2 inhibitors is well tolerated and potentially alleviates various types of pain [7,11,13,14].
The first reported glycine transporter inhibitors were sarcosine and its derivatives with an attached aromatic fragment, such as ORG-24598, NFPS, or its R-isomer ALX-5407 (Figure 1) [6,8,15,16]. They were found to be selective GlyT-1 inhibitors. None of the sarcosine derivatives have been approved for therapeutic use, mainly due to their toxic effects, which are manifested as respiratory depression, ataxia, and coma [17,18]. Interestingly, although sarcosine is a competitive inhibitor (substrate), its lipophilic derivatives are generally noncompetitive. In addition, many of these derivatives cause irreversible inhibition, which is indicated to be one of the causes of their toxic effects [6,16]. Therefore, several chemical groups of GlyT-1 inhibitors without the sarcosine fragment were synthesized and tested. These include aminophenethylbenzamide derivatives, such as SSR-504734, or other compounds containing a benzamide fragment with additionally substituted sulfonamide or sulfonyl groups, such as ACPPB (Figure 1) [6,8]. Among the best-investigated inhibitors with a nonsarcosine structure are the benzoylpiperazine derivatives bitopertin and iclepertin [19][20][21]. They reached phase 3 clinical trials for the treatment of negative symptoms of schizophrenia. Bitopertin was finally found to have no significant therapeutic effect, while iclepertin is still on trial [22]. Compounds with a nonsarcosine structure, which include both competitive (most of the inhibitors) and noncompetitive (e.g., bitopertin and iclepertin) inhibitors, cause reversible inhibition [8,19,23-28]. Selective GlyT-2 inhibitors are limited in number and include compounds with an amino acid structure, of which the best known is ALX-1393, as well as those with a nonamino acid structure, such as ORG-25543 (Figure 2) [29][30][31]. ORG-25543 exhibits very high selectivity for GlyT-2; however, it blocks the transporter in an apparently irreversible manner, thus resulting in motor and respiratory side effects. Several ORG-25543 derivatives show more favorable reversible kinetics [13,32,33]. One among them is opiranserin, which is in phase 3 of clinical trials for the treatment of postoperative pain [14]. ALX-1393 is a reversible inhibitor, but exhibits lower selectivity for GlyT-2 and poor permeability through the blood-brain barrier, which limits its clinical application [11,32]. Another interesting group of GlyT-2 inhibitors are derivatives with a lipid structure. The precursor of this group is N-arachidonyl glycine (NAGly), an endogenous lipid found mainly in the spinal cord, where it may regulate nociceptive pathways [34]. Modifications in the acyl tail and head group of NAGly resulted in a series of highly active and selective derivatives, such as C18 ω9 L-Lys (Figure 2) [35][36][37]. The first relevant information about the structure of glycine transporters and the supposed binding mode of the substrate and inhibitors was obtained from the analysis of the crystal structures of related proteins from the SLC6 family, i.e., the leucine transporter (LeuT), dopamine transporter (DAT), and serotonin transporter (SERT) [38][39][40][41]. All SLC6 transporters have 12 transmembrane domains with the N- and C-termini located intracellularly. Other characteristic features of these transporters are the long extracellular loop EL2, which contains a glycosylation site, and the V-shaped extracellular loop EL4.
TM1 and TM6 have nonhelical fragments (hinge regions) in about half of their length, which-along with the adjacent fragments of TM3, TM8, and TM10-form the main binding site (S1). In the vicinity of the S1 site, binding sites for sodium and chloride ions involved in substrate transport are also present [38][39][40]42,43]. For the transport of one glycine molecule, GlyT-1 requires two sodium ions and one chloride ion, whereas GlyT-2 needs three sodium ions and one chloride ion [44]. During transport, the transporter assumes the following conformational states: outward-open, allowing the ions and the substrate to bind from outside the cell; occluded; and inward-open, allowing the ions and the substrate to be released into the cell [5,42,45,46]. Crystal structure analysis and in vitro studies showed that inhibitors approach transporters from the extracellular side, binding at the main binding site or at the allosteric binding site within the vestibule ( Figure 3A). Thus, most of the inhibitors block the transporters in an outward-open state [39,40,42,45]. An interesting exception is ibogaine, a noncompetitive inhibitor, which inhibits the SERT in its inward-open state, while remaining within the S1 site [47,48]. A breakthrough in the study of the structure of glycine transporters was the recent obtaining of the crystal structure of GlyT-1 in complex with an analog of bitopertin [49]. This crystal structure confirmed the overall compliance of the GlyT-1 structure with that of the previously explored SLC6 transporters and revealed an unusual binding mode of the inhibitor. The bitopertin analog (a noncompetitive inhibitor) blocks GlyT-1 in the inward-open state being located partially within the main binding site and partially at the intracellular site, which was not found before ( Figure 3B) [49].
Despite these new findings, information on the binding modes of the inhibitors from other chemical groups and about the amino acids that determine the selectivity of glycine transporters is limited. Only a few articles based on mutagenesis data and molecular modeling studies have been published postulating the binding sites for ORG-25543, ALX-1393, and lipid-based GlyT-2 inhibitors [29,50,51]. There is a lack of studies that extensively compare the structure of the two types of glycine transporters, taking into account both recent experimental data and a larger ligand library.
Therefore, to investigate the exact structure of the binding sites of GlyT-1 and GlyT-2, we decided to build a pool of homology models of both transporters using the recently released structure of GlyT-1 in the inward-open state and the crystal structures of DAT and SERT in other conformational states. Then, using docking studies and molecular dynamics (MD) simulations, we identified the binding modes of different groups of inhibitors, as well as the amino acids that determine the selectivity of glycine transporters.
Model Building
To build the models of GlyT-1 and GlyT-2 in the outward-open state, we used three crystal structures of DAT (PDB codes: 4M48, 4XP4, and 4XP9) and one structure of SERT (PDB code: 5I73) as templates. In addition, a model in the partially occluded state was also created based on DAT (PDB code: 4XPH). We used several templates to increase the diversity of our pool of models to achieve a higher possibility of finding the structure most similar to the real one. LeuT was rejected as a template due to its significantly lower amino acid sequence homology with glycine transporters. The models were built using Modeller program and the SWISS-MODEL server and were then evaluated by tools checking their quality and similarity to real protein structures. The best GlyT-1 and GlyT-2 models were chosen by docking a pool of selective ligands. The following criteria were considered for choosing the best models: The consistency of poses obtained for structurally related compounds, their docking scores, and the ability to explain structure-activity relationships. For competitive inhibitors, substrate-specific interactions with sodium ions and Gly121 were also taken into account. The best GlyT-1 model turned out to be the one built on the 4M48 template, and the best GlyT-2 model was the one built on the 4XP9 template, both with SWISS-MODEL server. Models of GlyT-2 in the inward-open state were built on both available GlyT-1 templates (PDB codes: 6ZBV and 6ZPL). Ligand docking in accordance with the aforementioned criteria revealed that the best model was the one built on the 6ZPL template using the SWISS-MODEL server. The crystal structures of GlyT-1 in the inward-open state possess a rather low resolution (3.40 and 3.94 Å) and lack a fragment of the long loop EL2, as well as the amino acids forming the intracellular loop between TM4 and TM5. Moreover, they have four-point mutations in the amino acid sequence. To fill in/replace the missing fragments and to slightly optimize the residue side chains, we decided to build a complete model of GlyT-1 in the inward-open state based on the 6ZBV and 6ZPL crystal structures. For detailed analyses, we used the model built on the 6ZPL template with the SWISS-MODEL server, which proved to be the most universal one for redocking of the compound from the crystal structures, as well as the analogs of this compound.
Structure of GlyT-1 and GlyT-2 Binding Sites
As the general structure of glycine transporters is well known, we focused on analyzing the structure of the binding sites: the main binding site (S1); the transporter vestibule including the S2 site; and the binding site overlapping with the intracellular release pathway.
Analyzing the structure of the main binding site in glycine transporters, it can be seen that the volume of the binding site is significantly reduced compared with transporters for dopamine, serotonin, or even leucine and gamma-aminobutyric acid (GABA). This is mainly due to the presence of a tryptophan residue (Trp376 in GlyT-1 and Trp482 in GlyT-2) in place of the phenylalanine residue in DAT, SERT, and LeuT or leucine/glutamine in GABA transporters. To a lesser extent, the volume of S1 is also reduced by the presence of threonine (Thr472 in GlyT-1 and Thr578 in GlyT-2) in place of serine in the other transporters ( Figure 4A). This tighter binding site is clearly an adaptation for the selective transport of small glycine molecules. Mutagenesis studies indicate that the replacement of Trp482 in GlyT-2 with phenylalanine decreases the affinity for glycine itself while extending its ability to transport other amino acids, including alanine, leucine, methionine, and even phenylalanine [52]. The replacement of Thr578 with serine in GlyT-2 reduced its selectivity to glycine, thus extending the possibility of transport of other amino acids, although their affinity remained low (EC 50 > 1 mM). An important difference at the S1 site between GlyTs and monoamine transporters is the presence of Leu476 in GlyT-1 or Thr582 in GlyT-2, at the position of glycine in DAT and SERT. This leads to a significant reduction in the volume of one of the pockets within the S1 site where the aromatic fragments of DAT and SERT inhibitors bind [39,40,42]. Therefore, it is unlikely that similar fragments of inhibitors could be accommodated in this pocket in GlyTs. A similar reduction in this pocket, followed by the obstruction of ligand binding therein, is observed in other transporters for aliphatic amino acids, such as LeuT (Ile359), and GABA transporters (Thr400 in GAT-1, Cys399 in BGT-1, Cys394 in GAT-2, and Cys414 in GAT-3) [38,45,53,54]. Interestingly, the replacement of Thr582 with leucine in GlyT-2 leads to substantial or even complete impairment of glycine transport [29,52]. It is worth mentioning that GlyT-2, in contrast to GlyT-1, is much more selective for glycine [52]. Besides glycine, GlyT-1 can transport sarcosine (N-methylglycine) and N-ethylglycine. A major determinant of this selectivity is the presence of Ser479 in GlyT-2 at the position of Gly373 in GlyT-1. Ser479 further limits the volume of the S1 site in GlyT-2, creating a steric clash for the methyl/ethyl fragment attached to the amino group of glycine. Mutagenesis studies indicate that the replacement of Ser479 with glycine reduces several times the affinity of GlyT-2 for glycine, but restores its ability to transport sarcosine and, to a lesser extent, N-ethylglycine [52,55]. A reverse mutation in GlyT-1 (Gly373Ser) leads to a complete blockade of the transporter. Another difference in the S1 site between GlyT-1 and GlyT-2 concerns the amino acid that forms the extracellular gate: Tyr370 in GlyT-1 versus Phe476 in GlyT-2. According to mutagenesis studies, the replacement of tyrosine with phenylalanine does not affect the function of GlyT-1 and even increases its affinity for glycine. Interestingly, the reverse mutation in GlyT-2 (Phe476Tyr) results in significant impairment or even a complete blockade of the transporter function [29,55].
Significant structural differences between GlyT-1 and GlyT-2 are found at the S2 site, which is located above the extracellular gate, within the vestibule of the transporter. The volume of this site is much larger in GlyT-1. This is mainly due to the presence of Leu524 and Val199 in GlyT-1 in place of Phe629 and Ile290 in GlyT-2, respectively ( Figure 4B). In addition, in GlyT-1, above these residues, there is an aliphatic residue Ile202, which is replaced in GlyT-2 by a larger aromatic residue Tyr293, which also reduces the volume of the S2 site in GlyT-2. These differences, as confirmed by the docking results described further, have a substantial impact on the selective binding of ligands.
A recently released crystal structure of GlyT-1 in the inward-open state revealed the possibility of binding inhibitors within the intracellular release pathway. This new site partially covers the main S1 binding site. Therefore, the presence of Ser479 in GlyT-2 in place of Gly373 in GlyT-1 at the S1 site, affects ligand binding also in the inward-open state. Comparing the structure of both transporters in the area closer to the intracellular side, it can be observed that this region is tighter in GlyT-1 due to the side chains of Ile399, Met382, and Leu158, which correspond to Val505, Leu488, and Val249, respectively, in GlyT-2 ( Figure 4C). This enables GlyT-1 to create stronger hydrophobic interactions with ligands.
Binding Mode of Noncompetitive GlyT-1 Inhibitors
The crystal structures of GlyT-1 in the inward-open state represent the binding mode of compound 1, which is a noncompetitive inhibitor with a benzoylisoindoline scaffold [49]. After docking other derivatives with the same chemotype, including bitopertin, a binding mode highly consistent with that observed in the crystal structures was obtained ( Figure 5A, Supplementary Figure S3). The methylsulfonyl group of the compounds forms a hydrogen bond with Gly121 from the nonhelical fragment of TM1 within the S1 site. In the MD simulations carried out for compound 1 and bitopertin, hydrogen bonds with adjacent Leu120 as well as the hydroxyl group of Tyr196 from TM3 were observed ( Figure 6B). It is worth noting that this pocket is tight enough to prevent the binding of derivatives that contain larger than methyl substituents. The position of the sulfonyl group and its interactions correspond to those of the carboxylic group of leucine or tryptophan in the LeuT crystal structures, as well as the predicted binding mode of the carboxylic group of nipecotic acid derivatives in GABA transporters [38,45,53]. However, it should be noted that tryptophan or GABA inhibitors bind to the transporters from the extracellular side, thus blocking them in the outward-open state. The alkyl or cycloalkyl fragments of the compounds, often substituted with fluorine atoms, as well as other cyclic moieties such as morpholine, are located between TM6 and TM8 and form hydrophobic interactions mainly with Trp376, Leu379, Leu476, and Cys475. The benzoyl fragment creates hydrophobic interactions with Trp376, and in the case of some derivatives, also CH-π stacking with this residue. It is also worth pointing out that within the nonhelical fragment of TM6, there is Gly373. As described earlier, this amino acid in GlyT-2 is replaced with a serine residue (Ser479), which may create a steric clash and significantly obstruct the binding of the compounds in GlyT-2, thus affecting their selectivity. The remaining component of the inhibitors, corresponding to 3-fluoro-5-(trifluoromethyl)pyridine-2-yl)piperazine fragment in bitopertin, is directed toward the inside of the cell. This fragment fits into the hydrophobic pocket formed by Tyr116 in TM1, Leu379 and Met382 in TM6, Leu158 and Phe154 in TM2, and Ile399 in TM7. Additionally, the aromatic ring can create π-π or CH-π stacking with Tyr116. As mentioned earlier, this pocket is tighter in GlyT-1 compared with GlyT-2, which enables better fitting of the inhibitors and, together with the Gly373 and Ser479 substitution, is responsible for their selective binding. In the MD simulation, the position of compound 1 was very stable (Supplementary Figure S5). Bitopertin showed a higher RMSD change; however, this was mainly due to the rotation of the aromatic ring ( Figure 6A). The most relevant interactions were preserved ( Figure 6B). When bitopertin analogs were docked to the S1/S2 site of the GlyT-1 models in the outward-open state, less consistent binding modes were obtained. Some compounds, such as compound 1 and bitopertin, interact with Gly121 and the sodium ion via a sulfonyl group (Supplementary Figure S4). The fluorinated alkyl substituents at position 2 of the benzoyl fragment were located at the level of the extracellular gate (Tyr196-Tyr370 line), being directed toward TM10 or within the S2 site reaching Trp124. The remaining fragment of the compounds was located above Tyr370 and, in some cases, it creates π-π stacking with this residue. 
In the MD simulation, compound 1 did not retain its initial position (Supplementary Figure S5). The interactions within the S1 site were broken, and the compound moved closer to the vestibule entrance. For bitopertin, the position, as well as the interactions, was preserved during the MD simulation. However, the MM-GBSA energy value of both compounds was significantly lower (more beneficial) for complexes in the inward-open state, which was particularly true in the case of compound 1 (Table 1). This may confirm the preferential binding of bitopertin analogs within the intracellular release pathway. Noncompetitive GlyT-1 inhibitors also include sarcosine derivatives. While docking to the GlyT-1 structure in the inward-open state, these inhibitors adopt an arrangement similar to that of the analogs of bitopertin ( Figure 5B,C). The carboxyl group is located in the position of the sulfonyl moiety, forming hydrogen bonds with Gly121 and Tyr196, as well as with Leu120 in some cases. In the MD simulations for ORG-24598 and ALX-5407, the protonated amino group of these compounds can create a hydrogen bond with the main chain of Ser371 ( Figure 6C,D). Their aromatic rings occupy the same areas as the corresponding fragments of bitopertin analogs. The most significant difference is the lack of a hydrogen bond with Thr472. Kinetics studies indicate that although ORG-24598 inhibits glycine uptake in a noncompetitive manner, it blocks the binding of bitopertin in a competitive way [19]. Thus, the finding that the binding mode of ORG-24598 and its analogs is highly consistent with that of bitopertin seems reasonable. While docking to the GlyT-1 model in the outward-open state, sarcosine derivatives also adopted a consistent arrangement ( Figure 5D, Supplementary Figure S4D). The carboxyl group coordinates the sodium ion and forms a hydrogen bond with Gly121. A hydrogen bond with the hydroxyl group of Tyr196 can also be observed, although it was significantly less stable in MD simulations ( Figure 6E). The protonated amino group creates a hydrogen bond with the main chain of Ser371 (as in the inward-open state), being arranged similarly to the amino moiety of leucine in the crystal structures of LeuT [38]. One of the aromatic rings is positioned at the level of the extracellular gate forming hydrophobic interactions mainly with Trp376, Tyr116, and Ile192. For some derivatives, it additionally creates CH-π stacking with Trp376 ( Figure 6E). The second aromatic fragment is located within the S2 site, where it forms numerous hydrophobic interactions with Val199, Leu524, Trp124, Tyr195, and Tyr196, as well as π-π and/or CH-π stacking with Trp124 or Tyr195 and Tyr196 residues (Figures 5D and 6E). The volume of the pocket in which this aromatic fragment binds is significantly reduced in GlyT-2 mainly by the side chains of Ile290 (Val199 in GlyT-1) and Phe629 (Leu524 in GlyT-1). The RMSD for ORG-24598 located in S1/S2 is more variable than that for the same compound bound within the intracellular release pathway (Supplementary Figure S5). Furthermore, the MM-GBSA energy value is significantly lower for the complex in the inward-open state, indicating that this transporter state and binding site are preferred for ORG-24598 ( Figure S5). Some studies suggest that the binding of NFPS to GlyT-1 is independent of the presence of sodium ions, which may be in disagreement with the presented binding mode within the S1/S2 site [56,57]. 
At the same time, other studies indicate that TM1 and TM3 contain determinants that affect the affinity of NFPS to GlyT-1 [56]. TM3 contributes to S1 and S2 sites, while it is distant from the intracellular release pathway. Interestingly, replacing TM3 in GlyT-2 with that in GlyT-1 increases the affinity of NFPS to that of wild-type GlyT-1. Moreover, NFPS exhibits a mixed mode of inhibition against this chimera [56]. It is likely that the effect of NFPS on this chimera is related to the ability of the inhibitor to bind within the S1/S2 site since, together with the replacement of the entire TM3 in GlyT-2, Ile290 is substituted by Val, which increases the volume of the S2 site, thus allowing the accommodation of NFPS biphenyl fragment. Based on the results obtained, it can be concluded that NFPS/ALX-5407 probably has the ability to bind to GlyT-1 in both inward-open and outward-open states.
Many studies have reported the apparently irreversible modes of inhibition of sarcosine derivatives [15,16,57]. As these compounds do not contain chemical moieties that can form covalent bonds with proteins, other factors must be responsible for this phenomenon. The results of docking studies and MD simulations that these compounds bind from the intracellular side, together with data showing their ability to interfere with the cell membrane [57], may partially explain the difficulty in washing them out. However, this issue requires further biological study, as well as in silico analysis.
Binding Mode of Competitive GlyT-1 Inhibitors
Molecular docking and subsequent MD simulations revealed consistent binding modes for competitive GlyT-1 inhibitors in a model of this transporter in the outward-open state (Figures 7 and 8, Supplementary Figure S6). The inhibitors occupy the S1 and the S2 sites partially. For derivatives containing a sulfonamide or sulfonyl group, this fragment binds near the nonhelical fragment of TM1, creating a hydrogen bond with Gly121 and coordinating the sodium ion. It also forms a hydrogen bond with the hydroxyl group of Tyr196 from TM3. This arrangement is similar to that of the sulfonyl group present in bitopertin and its analogs. Alkyl (propyl, cyclopropylmethyl) or heteroaromatic (N-methyltriazole, N-methylimidazole) substituents attached to the sulfonamide/sulfonyl group are located deeper in the S1 site, occupying the hydrophobic pocket constituted mainly by Tyr116, Trp376, and Leu476. In GlyT-2, binding of this fragment may be hindered by the side chain of Ser479, which is in place of Gly373. A decrease in the hydrophobicity of this pocket may also affect the binding of these fragments in GlyT-2, which could be due to the presence of Thr582 in place of Leu476 in GlyT-1.
A large number of the discussed sulfonamide derivatives have two hydrophobic/aromatic fragments linked to the sulfonamide or sulfonyl group through saturated heterocyclic (piperidine, piperazine, pyrrolidine), bicyclic, or cyclohexane moieties. One of these fragments is usually a benzamide substituted with halogen atoms or other similar aromatic moieties. This fragment is located within the S2 site, where it forms numerous hydrophobic interactions with Trp124, Tyr195, and Tyr196, and with Val199 and Leu524, which are crucial for selectivity (Figure 7). In addition, π-π and/or CH-π stacking interactions involving either Trp124 residue (compound 2) or Tyr195 and Tyr196 residues (compound 3) can be observed ( Figure 8B,C). Docking results showed the presence of hydrogen bonds between the amide group of the compounds and the side chains of Asp528 and Trp124; however, they were not found to be stable in MD simulations. However, compound 2 forms another fairly stable hydrogen bond with the hydroxyl group of Tyr195, which is located close to the carboxyl moiety of Asp528 ( Figure 8B). The second hydrophobic fragment of the discussed compounds, which is usually a cycloalkyl or phenyl ring, is directed toward TM10, which lies at the level of the extracellular gate (Tyr196-Tyr370) or slightly above it. It creates hydrophobic interactions mainly with Tyr370, Trp376, Ile192, and Leu532, and, in the case of compound 2, with Tyr195. Compound 3 additionally forms π-π interactions with Tyr370 ( Figure 8C). Another group of competitive GlyT-1 inhibitors containing a sulfonamide moiety is the derivatives of tetrahydroquinoline, aminotetraline, and aminochromane, and their analogs. The difference between these compounds and the ones discussed earlier is the presence of a protonated amine group at a physiological pH. This group creates a salt bridge with the carboxyl moiety of Asp528 from the extracellular gate (Supplementary Figure S6B). Aromatic rings are involved in hydrophobic interactions with Tyr370, Trp376, and Ile192 and, to a lesser extent, with Trp124.
SB-733993 is a selective competitive GlyT-1 inhibitor that contains a naphthalene moiety attached directly to the sulfonamide group. This aromatic fragment forms hydrophobic interactions with Tyr116, Tyr196, and Trp376, and additionally CH-π stacking with Trp376 (Supplementary Figure S6D). The sulfonamide group is located close to the sodium ion and Gly121. The protonated piperidine, in turn, creates a salt bridge with Asp528. This residue is also involved in hydrogen bonds with the hydroxyl group of the compound. The 2,5-dimethylpiperidine fragment is positioned within the same area as the benzamide or phenyl rings of the previously discussed compounds. A large subset of competitive GlyT-1 inhibitors are aminophenethylbenzamide derivatives, such as SSR-504734. Their benzamide fragment is located in the hydrophobic pocket of the S2 site ( Figure 7D). MD simulation for SSR-50473 showed that this fragment creates CH-π interactions with Trp124 and Tyr195 ( Figure 8D). The amide nitrogen atom forms a strong and stable hydrogen bond with Asp528, whereas the protonated amine group is involved in a stable salt bridge with this residue. In addition, the piperidine ring can form cation-π and hydrophobic interactions with Tyr370. The phenyl ring is oriented toward the S1 site, creating hydrophobic interactions with Trp376, Tyr196, and Leu476. Docking studies also indicated CH-π stacking with Trp376, but it was not stable in MD simulations due to a slight shift in the phenyl fragment toward Tyr370 from TM6. These results suggest that the discussed derivatives, despite being competitive inhibitors, mainly bind to the S2 site, forming only hydrophobic interactions with some amino acids from the S1 site or the extracellular gate. Typical interactions with the sodium ion or Gly121 could not be observed. However, there are several arguments defending the demonstrated binding mode of aminophenethylbenzamide derivatives. The first argument stems from in vitro studies which have shown that SSR-504734 analogs inhibit GlyT-1 regardless of the presence of sodium ions, in contrast to sulfonamide-containing inhibitors (GSK931145 and ACPPB) that require these ions for their activity [8,23,58]. Another argument is that the arrangement of benzamide and the piperidine rings is consistent with the arrangement of the corresponding fragments of the sulfonamide/sulfonyl derivatives (Figure 7). Moreover, one of the aminophenethylbenzamide derivatives contains the attached sulfonyl group substituted with cyclopropylmethyl at position 4 of the phenyl ring. In docking studies, the fragment of the parent structure retained the arrangement and interactions characteristic of all derivatives, whereas the attached alkylsulfonyl moiety reached the sodium ion and Gly121, forming interactions within the S1 site (Supplementary Figure S6E). The same was observed for derivatives with an additional aromatic ring at position 3 of the phenyl ring, usually pyrazole or imidazole (including the substituted one with alkyl groups). An extra aromatic fragment extends toward Tyr116, intensifying hydrophobic interactions, and the free electron pair of the nitrogen atom from the ring coordinates the sodium ion (Supplementary Figure S6F). A parallel observation is the binding of the tetrahydroquinoline derivatives. Interestingly, the parent compound for this group was a hit (tetrahydroisoquinolin-7-ol derivative) that lacks the sulfonamide fragment [27]. 
However, according to the pose obtained in docking studies, this compound binds similarly to its sulfonamide derivative (Supplementary Figure S6C). It exhibits a competitive mode of inhibition, but a weak interaction with S1, as observed in the case of SSR-504734 analogs.
While docking structurally similar compounds to the GlyT-1 model in the inward-open state, we obtained significantly less consistent poses (Supplementary Figure S7). For some of the sulfonamide derivatives, the aromatic fragments overlapped with the arrangement of the aromatic fragments of compound 1 from the crystal structures of GlyT-1, but the sulfonamide group did not form analogous interactions within the S1 site. Other compounds adopted a different position, which indicates a mismatch with the binding mode observed for bitopertin analogs. This is primarily due to the larger alkyl/aromatic substituent at the sulfonamide/sulfonyl group found in competitive inhibitors, which cannot fit into a tight pocket within the nonhelical fragment of TM1. For SSR-504734, the aromatic rings generally overlapped with the aromatic fragments of compound 1, but in the case of its analogs, the poses varied widely. Even though in the MD simulation the tested compounds generally remained in the binding site and reached stability in the second part of the simulation, their RMSD plots were more variable compared with those obtained for complexes in the outward-open state (Supplementary Figure S8). Additionally, for all three tested representatives of the competitive inhibitors, the MM-GBSA energy values were significantly lower for the arrangement within the S1/S2 site in the outward-open state, which suggests that it is preferred (Table 1).
Binding Mode of GlyT-2 Inhibitors
In vitro studies indicate that compound ORG-25543 and its analogs are highly selective and noncompetitive inhibitors of GlyT-2. Additionally, this compound behaves as an irreversible inhibitor in many assays. However, its derivatives, which have only a slightly modified structure, are fully reversible inhibitors [29,32,33]. In the docking of ORG-25543 and its derivatives to the GlyT-2 model in the outward-open state, the compounds were mainly arranged within the S2 site ( Figures 9A and 10). The binding mode was very similar to that proposed by Benito-Munoz et al. [29]. The protonated amine group creates an ionic interaction with Asp633 and a hydrogen bond with the hydroxyl group of Ser638. It is worth mentioning that the Asp633Glu mutation almost completely abolished the activity of ORG-25543, whereas Ser638Cys caused a significant impairment [29]. A possible cation-π interaction with Phe476 was also observed for the protonated amine group, although it was not very stable in the MD simulation ( Figure 10B). The cyclopentane ring forms hydrophobic interactions with Phe476 and, as indicated by MD simulations, also with the side chain of Thr472. In GlyT-1, Thr472 residue is replaced by a serine residue, which weakens the possibility of hydrophobic interactions. One of the methoxy moieties attached to the benzamide fragment creates a hydrogen bond with the nitrogen atom (NH) of the Trp215 side chain ( Figure 10B). Additionally, the entire benzamide fragment forms hydrophobic interactions with this residue. As expected, the Trp215Phe mutant showed a reduced affinity for this compound [29]. The second methoxy group and the oxygen atom of the 4-benzyloxy fragment are located near the nonhelical fragment of domain 1; however, they do not interact with the sodium ion, and the hydrogen bond with Gly121 was found to be relatively unstable during MD simulations ( Figure 10B). This is in agreement with findings indicating that ORG-25543 binds to GlyT-2 regardless of the presence of sodium ions [29]. The benzyloxy fragment initially points toward Tyr207 but, during MD simulations, it moved slightly toward Tyr287, forming π-π stacking with this residue. The nearby Thr582 residue may play a key role in the selective binding of this fragment to GlyT-2, as observed in biological assays [29]. Replacement of this residue with leucine, found at this position in GlyT-1, impaired the binding of ORG-25543, probably due to the reduced volume of the cavity where the aromatic ring of the benzyloxy fragment is located. The structurally related compound GT-0198 shows an arrangement similar to that of ORG-2543 and its analogs ( Figure 9C). The protonated amine of the piperidine ring forms a salt bridge with Asp633, whereas the benzyl fragment creates hydrophobic and CH-π interactions with Trp215. The second aromatic fragment reaches the S1 site, interacting mainly with Trp482, Ile283, and Tyr287. While docking to the GlyT-2 model in the inward-open state, the aromatic fragments of ORG-25543 and its analogs were arranged similar to one of the aromatic fragments of compound 1 in the crystal structure of GlyT-1 (Supplementary Figure S9). However, unlike in GlyT-1, one of the hydrophobic pockets between TM6 and TM8 was empty. In addition, the cycloalkyl ring of the compounds was located at a position corresponding to that of the polar sulfonyl moiety in GlyT-1. The only beneficial interaction observed at this site was the hydrogen bond between the protonated amine group and the main chain of Ser477. 
However, this bond, as well as the position of the entire ligand, was unstable during MD simulation (Supplementary Figure S10). Compound GT-0198 did not seem to fit into the described binding site, extending toward the inside of the cell. Moreover, the MM-GBSA energy value was higher for the ORG-25543-GlyT-2 complex in the inward-open state than that calculated for the complex in the outward-open state ( Table 2). All these observations indicate that the described derivatives approach GlyT-2 from the extracellular side and bind to it in the outward-open state.
Compound ALX-1393, as well as other related glycine derivatives, binds consistently within the S1 and S2 sites ( Figure 9B,D). The carboxyl group exhibits characteristic interactions with the sodium ion and Gly212. Kinetics data suggest that although ALX-1393 is generally a noncompetitive inhibitor, at high concentrations it could compete with glycine. In addition, the sodium ion binding to GlyT-2 affects the affinity of ALX-1393 for this transporter [29]. Thus, the above-described interactions seem to be reasonable. The protonated amine group formed hydrogen bonds with the main chain of Ser477 and with the side chain of Ser479, as observed during MD simulations ( Figure 10C). Although Ser479 is one of the main differences in a sequence compared with GlyT-1, the Ser479Gly mutant displayed only a slightly reduced affinity for ALX-1393 [29,55]. This indicates that the hydrogen bond with this residue is one of the determinants for GlyT-2 selectivity only. The Ser479 side chain creates a steric clash for GlyT-1 inhibitors rather than enhanc-ing the binding of the currently known GlyT-2 inhibitors. The protonated amino group also forms a cation-π interaction with Tyr207; however, it is not very stable. Although not involved in direct interactions, Thr582 confers beneficial hydrophilic/hydrophobic properties on the S1 site for the binding of amino acid fragments of ALX-1393 and its analogs. Replacement of Thr582 with leucine, present in GlyT-1, almost completely inhibited the binding of ALX-1393 to GlyT-2. The aromatic fragments of the described glycine derivatives are located almost entirely in the S2 site. Docking and MD studies of ALX-1393 showed π-π stacking with Tyr286 and Trp482, and additional hydrophobic interactions with Trp215, Tyr287, Phe476, and Leu290 ( Figures 9B and 10C). For compound 9, intensified aromatic and hydrophobic interactions with Trp215, Phe476, and Ile290 were observed ( Figure 9D). These derivatives are shorter and more inflexible, contain less extended aromatic fragments compared with GlyT-1 inhibitors, and thus fit better to the reduced S2 site in GlyT-2. Figure S10). Thus, the preferred binding site for ALX-1393 and its analogs may be the S1/S2 site, which is consistent with the results of Benito-Munoz et al. [29].
Homology Modeling
The amino acid sequences of human GlyT-1 and GlyT-2 transporters were downloaded in FASTA format from the UniProt database. For GlyT-1, we used the sequence of isoform c (GlyT-1c), which occurs in the CNS. Among the crystal structures of the SLC6 family proteins available in the PDB database, four structures of DAT from Drosophila melanogaster (PDB codes: 4M48, 4XP4, 4XP9, and 4XPH), one structure of human SERT (PDB code: 5I73), and two structures of human GlyT-1 (PDB codes: 6ZBV and 6ZPL) were retrieved. The amino acid sequences of glycine transporters and templates were aligned using the ClustalW multiple alignment option in the BioEdit 7.2.6 program (Supplementary Figure S1). The obtained alignment was verified for the overlapping of conserved residues and compared with other alignments for the SLC6 protein family [53,59,60]. We omitted the N- and C-termini due to low sequence homology and a lack of direct engagement in ligand binding. Based on the alignment of sequences, pools of GlyT-1 and GlyT-2 models in the outward-open, partially occluded, and inward-open states were built using the Modeller 9.18 program and the SWISS-MODEL server. Using Modeller, 100 models were generated for each template using the MyModel class and the high optimization level. Cysteine residues forming the disulfide bridge within EL2 were defined. Using SWISS-MODEL, one model for each template was obtained. The models preserved sodium and chloride ions found within the binding sites in the templates, whereas ligands were rejected. In the case of models built on the 5I73 template, in which the chloride ion is missing, this ion was transferred to the models from the 4M48 template after superimposing the ion binding sites in the PyMOL program. Based on the DOPE and QMEAN scores, from the 100 models generated in Modeller, one best model was selected for each template for further evaluation with the Verify3D program and Ramachandran plots. The models built using the SWISS-MODEL server as well as the templates were assessed in the same manner for comparison of results. Given the relatively good scores, particularly considering the binding sites, the entire pool of 14 models for each type of glycine transporter was submitted to docking studies with the aim of choosing the best models. The assessment of the finally selected models is described in Table S1.
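For illustration, a minimal Modeller script of the kind described above is sketched below; it is not the authors' exact script. The alignment file, template code, sequence name, and EL2 cysteine residue numbers are placeholders, and the lowercase class names follow the Modeller 9.x Python API.

```python
# Minimal sketch of per-template model generation with Modeller 9.x, assuming a
# prepared alignment file; file names, codes, and residue numbers are placeholders.
from modeller import *
from modeller.automodel import *   # automodel, refine, assess

env = environ()

class MyModel(automodel):
    def special_patches(self, aln):
        # Define the disulfide bridge within EL2 (hypothetical residue numbers).
        self.patch(residue_type='DISU',
                   residues=(self.residues['220'], self.residues['229']))

a = MyModel(env,
            alnfile='glyt1_4m48.ali',   # alignment of GlyT-1c with the 4M48 template
            knowns='4m48',
            sequence='glyt1c')
a.starting_model = 1
a.ending_model = 100                    # 100 models per template
a.md_level = refine.slow                # "high" optimization level
a.assess_methods = (assess.DOPE,)
a.make()

# Rank successfully built models by DOPE score and keep the best one.
ok = [m for m in a.outputs if m['failure'] is None]
best = min(ok, key=lambda m: m['DOPE score'])
print(best['name'], best['DOPE score'])
```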
Docking Studies
The ligand 3D structures were created using the Maestro 11.8 program and optimized with the LigPrep module applying the OPLS3e force field. Ionization states were generated at physiological pH (7.4 ± 0.5) using the Epik 4.6 program. The predicted pKa values were verified with the Marvin 21.8 program. For compounds containing chiral atoms with undefined absolute configurations, all possible stereoisomers were generated and used in the docking process.
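The stereoisomer-enumeration step can be illustrated with an open-source sketch; the study itself used the Schrödinger LigPrep/Epik pipeline, so the RDKit code below is only a hedged, roughly equivalent illustration, and the SMILES string is a made-up placeholder.

```python
# Hedged illustration (RDKit) of enumerating stereoisomers for an undefined chiral
# center; the study itself used LigPrep/Epik. The SMILES below is a placeholder.
from rdkit import Chem
from rdkit.Chem.EnumerateStereoisomers import (EnumerateStereoisomers,
                                               StereoEnumerationOptions)

mol = Chem.MolFromSmiles("CC(c1ccccc1)N1CCN(CC1)S(C)(=O)=O")   # placeholder ligand

opts = StereoEnumerationOptions(onlyUnassigned=True, unique=True)
for isomer in EnumerateStereoisomers(mol, options=opts):
    print(Chem.MolToSmiles(isomer, isomericSmiles=True))
```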
Models were prepared in the Protein Preparation Wizard using default settings. In all GlyT-1 and GlyT-2 models built on the 4XPH template and GlyT-1 models built on the 5I73 template with SWISS-MODEL server, the side chain of Tyr370 in GlyT-1 or Phe476 in GlyT-2 was in a position that hinders access to the S1 site. Prior to ligand docking, the conformation of these residues was changed in accordance with the crystal structures with an open extracellular gate.
All docking processes were carried out in the Glide 8.1 program. For docking to S1/S2 site, the grid center was defined by the Tyr196 and Tyr370 residues in GlyT-1 and the corresponding Tyr287 and Phe476 residues in GlyT-2. In the case of docking to models in the inward-open state, the grid center was defined by the position of compound 1 transferred from the templates. In all docking processes, the inner box size was 15 × 15 × 15 Å, and the outer box size was 35 × 35 × 35 Å. Ligands were docked using standard precision. Five poses were written out for each ligand. The OPLS3e force field was applied during grid generation and Glide docking. The binding modes were visualized in the PyMOL 2.4.1 program.
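Defining the grid centre "by two residues" in practice means centring the box on their coordinates; a small sketch of that step is given below, assuming a Biopython parse of the GlyT-1 model. The file name and chain ID are placeholders.

```python
# Compute a docking-grid centre as the centroid of the two gating residues
# (Tyr196 and Tyr370 for GlyT-1, per the text). File name and chain ID are placeholders.
import numpy as np
from Bio.PDB import PDBParser

parser = PDBParser(QUIET=True)
model = parser.get_structure("glyt1", "glyt1_model.pdb")[0]
chain = model["A"]

coords = []
for resid in (196, 370):
    residue = chain[resid]
    coords.extend(atom.coord for atom in residue)     # all atoms of the residue

center = np.mean(np.asarray(coords), axis=0)
print("grid centre: %.2f, %.2f, %.2f" % tuple(center))
# the 15 x 15 x 15 A inner box and 35 x 35 x 35 A outer box would then be placed here
```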
Molecular Dynamics
MD simulations were performed in NAMD 2.13 using the CHARMM36m force field. Complexes were positioned in the membrane using the OPM server, and input files for MD simulations were prepared with the CHARMM-GUI online server following the same settings mentioned in our previous papers [53,61,62]. The system was equilibrated via a six-step protocol, and MD simulations were run at 303.15 K with a time step of 2 fs and a total duration of 50 ns. The interval for both the energy and trajectory recordings was 10 ps.
The RMSD for ligands and proteins was analyzed in VMD 1.9.3 after superposing all frames on the start frame for the protein backbone. The RMSD for the protein was calculated taking into account both the main and side chains of amino acids located within a distance of 7 Å from the initial ligand pose. Hydrogen bonds were mapped in VMD with a defined maximum donor-acceptor distance of 3.5 Å and an angle cutoff of 40°. In the case of ionic interactions, the cutoff for the distance between charged atoms was set to 5 Å. The presence of cation-π interactions was determined based on the distance between the positively charged nitrogen atom and the center of the aromatic ring (<6 Å), whereas aromatic interactions were determined based on the distance between the centers of the two aromatic rings (<5.5 Å). In addition, the relative orientations of the fragments involved in these interactions were visually examined.
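These cut-offs are simple geometric tests and can be reproduced directly from frame coordinates. The sketch below assumes the 40° hydrogen-bond cut-off refers to the allowed deviation of the D-H...A angle from linearity, which is how the VMD criterion is usually interpreted; all coordinates are placeholders.

```python
# Interaction cut-offs from the analysis: H-bond (<= 3.5 A, <= 40 deg from linearity),
# salt bridge (<= 5 A), cation-pi (<= 6 A), pi-pi (<= 5.5 A centroid-centroid).
import numpy as np

def dist(a, b):
    return np.linalg.norm(np.asarray(a) - np.asarray(b))

def dha_angle(donor, h, acceptor):
    """Angle D-H...A in degrees (180 deg = perfectly linear)."""
    v1 = np.asarray(donor) - np.asarray(h)
    v2 = np.asarray(acceptor) - np.asarray(h)
    cosang = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

def is_hbond(donor, h, acceptor):
    return dist(donor, acceptor) <= 3.5 and (180.0 - dha_angle(donor, h, acceptor)) <= 40.0

def is_salt_bridge(charged1, charged2):
    return dist(charged1, charged2) <= 5.0

def is_cation_pi(n_pos, ring_atoms):
    return dist(n_pos, np.mean(np.asarray(ring_atoms), axis=0)) <= 6.0

def is_pi_pi(ring1_atoms, ring2_atoms):
    c1 = np.mean(np.asarray(ring1_atoms), axis=0)
    c2 = np.mean(np.asarray(ring2_atoms), axis=0)
    return dist(c1, c2) <= 5.5
```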
The MM-GBSA binding free energy values were evaluated for 21 frames derived from the last 2 ns of the MD simulation (one frame for every 0.1 ns) using the Prime MMGBSA 3.0 program. The VSGB solvation model and OPLS3e force field were applied for this purpose. Protein flexibility, water molecules, and additional sodium and/or chloride ions (apart from the ions initially present at the binding site) were not considered in the calculations. The statistical significance of the difference in MM-GBSA energy values between the compared complexes was verified using a t-test in Statistica 13 software.
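The frame bookkeeping and the statistical comparison can be written out explicitly: with a 10 ps recording interval, one frame per 0.1 ns over the last 2 ns corresponds to every tenth saved frame (21 frames in total). The sketch below uses randomly generated placeholder energies, not values from the study.

```python
# Frame selection for the MM-GBSA step and a two-sample t-test on the resulting energies.
# Energies are placeholders (kcal/mol); frames are numbered 1..5000 (50 ns saved every 10 ps).
import numpy as np
from scipy import stats

n_frames = 5000
frames = np.arange(n_frames - 200, n_frames + 1, 10)    # 4800, 4810, ..., 5000
assert len(frames) == 21                                # 21 frames over the last 2 ns

dg_complex_a = np.random.normal(-62.0, 2.5, size=21)    # placeholder values
dg_complex_b = np.random.normal(-55.0, 3.0, size=21)    # placeholder values

t_stat, p_value = stats.ttest_ind(dg_complex_a, dg_complex_b)
print(f"mean dG A = {dg_complex_a.mean():.1f}, B = {dg_complex_b.mean():.1f}, p = {p_value:.3g}")
```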
Conclusions
Glycine transporters (GlyT-1 and GlyT-2) play a key role in the function of both the inhibitory glycinergic system and the excitatory glutamatergic system. Inhibitors that selectively block these transporters have great potential for application in the treatment of various CNS diseases, such as schizophrenia, drug abuse, epilepsy, and neuropathic pain. Many groups of glycine transporter inhibitors with different selectivity and inhibition kinetics have been discovered so far. However, due to their poor pharmacokinetic and/or pharmacodynamic characteristics, no compound has been approved for therapeutic use to date. Knowledge about the mode of binding of particular inhibitor groups within glycine transporters is still limited, which hinders the search for new compounds with the desired selectivity and properties. The recently released crystal structure of GlyT-1 in the inward-open state revealed an unusual binding mode for a noncompetitive inhibitor within the intracellular release pathway. In this study, we explored in detail the structure of glycine transporters by building homology models of these proteins in different conformational states and investigated their interactions with particular ligands through docking studies and MD simulations.
Our studies indicated that sarcosine-based noncompetitive GlyT-1 inhibitors, such as ORG-24598 and ALX-5407, can bind within the intracellular release pathway of GlyT-1, similar to the analogs of compound 1 from the crystal structures. The selective binding of these inhibitors at this site in GlyT-1 can be attributed to a tighter pocket, allowing stronger hydrophobic interactions with the aromatic fragments of the ligands, as well as to the presence of a Gly373 residue in place of Ser479 in GlyT-2. Interestingly, ALX-5407 fits equally well at the S1/S2 site of GlyT-1 in the outward-open state, indicating two potential binding sites that may differ in affinity. Competitive inhibitors of GlyT-1 bind coherently within the S1 and S2 sites. In the case of compounds containing a sulfonamide or sulfonyl moiety, the interaction of this fragment mimics that of the substrate carboxyl group by coordinating the sodium ion and forming a hydrogen bond with Gly373. The alkyl or heteroaromatic substituent is positioned in a hydrophobic pocket of the S1 site. Binding of these fragments is hindered by the side chain of Ser479 in GlyT-2 (Gly373 in GlyT-1). Additionally, in GlyT-1, Leu476 allows tighter binding and offers a more hydrophobic environment compared with Thr582 at the corresponding position in GlyT-2. The second region responsible for selectivity between GlyT-1 and GlyT-2, as well as other transporters, is the S2 site within the vestibule. The volume of this site is larger in GlyT-1 due to the presence of Leu524 and Val199 in place of Phe629 and Ile290 in GlyT-2. This enables large aromatic fragments of GlyT-1 inhibitors to bind within it. Competitive GlyT-1 inhibitors containing a protonated amino group form a stable salt bridge with Asp528 from the extracellular gate. ORG-25543, ALX-1393, and their derivatives bind in the outward-open state of GlyT-2. ORG-25543 is mainly located at the S2 site, and its interaction with the carboxyl group of Asp633 via a protonated amino group in this area seems to be crucial. The benzyloxy fragment of this compound is directed toward the S1 site. The selective binding of this fragment to GlyT-2 is likely due to Thr582, which provides more space compared with Leu476 in GlyT-1. ALX-1393 binds more tightly within the S1 site, where its amino acid fragment forms substrate-like interactions, which is reflected by slight differences in inhibition kinetics compared with ORG-25543. In the case of ALX-1393 and its analogs, the shorter linker and less extended aromatic fragments, compared with GlyT-1 inhibitors, allow them to bind within the limited space of the S2 site in GlyT-2.
The glycine transporter models investigated in this study, along with docking studies, MD simulations, and MM-GBSA energy calculations, enabled the determination of the binding modes of many inhibitors. In addition, they allowed us to identify the amino acids that are responsible for the selectivity of glycine transporters and understand the differences in transport kinetics for some groups of inhibitors. The information about the structure of glycine transporters and the algorithm for investigating the binding modes of compounds presented in this paper may be useful in the future to design new GlyT-1 and GlyT-2 inhibitors with desired properties, as well as compounds selective for other transporters from the SLC6 family.
"Chemistry",
"Medicine"
] |
Local nonsymmetrical postbuckling equilibrium path of the thin FGM plate (Niesymetryczna lokalna ścieżka równowagi pokrytycznej cienkiej płyty z materiału funkcjonalnie gradientowego)
The influence of the imperfection sign (sense) on local postbuckling equilibrium path of plates made of functionally graded materials (FGMs) has been analyzed. Koiter’s theory has been used to explain this phenomenon. In the case of local buckling, a nonsymmetrical stable equilibrium path has been obtained. The investigations focus on a comparison of the semi-analytical method (SAM) and the finite element method (FEM) applied to the postbuckling nonlinear analysis of thin-walled complex FG plated structures.
Introduction
Functionally Graded Materials (FGMs), developed since the mid-1980s, are a relatively new class of composite materials which have become a very popular research field and have been used in numerous engineering applications. A standard functionally graded material is an inhomogeneous composite made up of two constituents, typically metallic and ceramic phases. Within FGMs, different microstructural phases have different functions, and the overall FGM attains its multistructural status from the property gradation. In most cases, the phase content changes gradually along the thickness of the plate or shell. This eliminates adverse effects between the layers (e.g., shear stress concentrations and/or thermal stress concentrations) typical of layered composites, which generally improves the material's utility properties. The combination of a ceramic with a metal component improves the characteristics of FGM structures, i.e., better resistance to high temperature (ceramic) and good mechanical features (metal), further reducing the fracture possibility of the whole graded structure. These features make high temperature environments the leading application area of FGM structures.
The nonlinear analysis of plates and shells devoted to basic types of loads is covered in the monograph by Hui-Shen [4]. Author considers static bending and thermal bending as an introduction to buckling and postbuckling behaviour of FGM plates and shells. The shear deformation effect is employed in the framework of Reddy's higher order shear deformation theory (HSDT) [20].
In [19], alongside the HSDT for FGM plates, Reddy compares the application of the first order shear deformation theory (FSDT) and the classical laminated plate theory (CLPT) to functionally graded plate analysis. According to the presented results for thin-walled plates, it is obvious that an application of the FSDT gives practically the same results as the HSDT. The discrepancy between both theories is of 2% in the calculated deflections of the plates under analysis.
The buckling problem of functionally graded plates is discussed in the frame of different approaches and for different loads: in [21] -biaxial in-plane compression; thermal loads (constant temperature) with axial compression in [24]; biaxial in-plane compression in [2] and [16], and through the thickness temperature gradient in [23].
Birman and Byrd [1] give a wide review of theories employed for a description of grading material properties and focus on the principal developments in functionally graded materials (FGMs) with an emphasis on the recent works published since 2000 (up to 300 works cited).
In some papers (e.g., [14,27]), the concept of 'physical neutral surface' that allows one to uncouple the in-plane and out-of-plane deformations is introduced.
Due to the complexity of buckling problems of FG plates under compound mechanical and thermal loads, the finite element method (FEM) seems to be the only possible solution in many cases. Therefore, in the literature one can find many papers which present results of solutions to different problems of FG plate buckling, obtained with an application of the FEM, for example [15,17,22].
In the current paper, in the finite element method solution, FG plates were modelled as multilayered composite structures whose graded material properties were defined using 10-40 isotropic layers. After the convergence analysis, the model with twenty layers was accepted. For meshing, a shell element with four nodes and six degrees of freedom at each node was employed. The rotational DOF in the plane of the element was constrained via the penalty function.
While conducting the FEM nonlinear buckling analysis of a rectangular FGM plate subjected to one-directional in-plane compression, the authors of the present paper observed an intriguing influence of the imperfection sign (i.e., its direction) on the postbuckling equilibrium paths of the investigated FGM plates. Therefore, this work is aimed at an explanation of this phenomenon. The general asymptotic Koiter's theory of stability has been assumed as the basis of the investigation. Among all versions of the general nonlinear theory, Koiter's theory [6,7,25,26] of conservative systems is the most popular one, owing to its general character and development, even more so after Byskov and Hutchinson [3] formulated it in a convenient way. The theory is based on asymptotic expansions of the postbuckling path for the potential energy of the system.
The nonlinear stability of thin-walled multilayer structures in the first order approximation of Koiter's theory is solved with the modified analytical-numerical method (ANM) presented in [8]. The analytical-numerical method (ANM) should also consider the second order approximation in the postbuckling analysis of elastic composite structures. The second order postbuckling coefficients were estimated with the semi-analytical method (SAM) [12], modified by the solution method given in [11]. The investigation of stability of equilibrium states requires an application of a nonlinear theory that enables us to estimate an influence of different factors on the structure behaviour. The analysis of postbuckling behaviour of thin-walled composite plate structures using the SAM will be by far faster and more thorough than the FEM.
The initial imperfections were introduced by updating the finite element mesh with the first mode shape of the eigen-buckling solution, with a given magnitude corresponding to the plate thickness and an assumed sign (direction). The eigen-buckling analysis, in which the critical load was determined together with the eigen-mode, preceded the nonlinear buckling analysis.
Formulation of the problem
The square plate is supported at all its edges. It is assumed that the FG plate obeys Hooke's law. The material properties are assumed to be temperature-independent.
In the strain-displacement relations, all nonlinear terms are present in order to enable the consideration of both out-of-plane and in-plane bending of the plate [8,9,11]: ε_x = u,x + ½(u,x² + v,x² + w,x²), ε_y = v,y + ½(u,y² + v,y² + w,y²), γ_xy = u,y + v,x + u,x·u,y + v,x·v,y + w,x·w,y (1), together with the curvatures κ_x, κ_y, κ_xy (2), where u, v, w are the components of the displacement vector of the plate in the x, y, z axis directions, respectively, and the plane x-y overlaps the midplane before its buckling.
It should be highlighted that in the majority of publications devoted to the stability of structures, the nonlinear terms involving the in-plane displacement gradients (u,x, u,y, v,x, v,y) in the strain tensor components (1) are neglected. However, the main limitation of the assumed theory lies in the assumption of linear relationships between the curvatures (2) and the second derivatives of the displacement w. In such an approach, finite displacements and small or moderate rotations are considered [11].
In thin-walled FG structures - plates or shells - the ceramic volume fraction V_c and the metal volume fraction V_m are usually distributed throughout the structure thickness t according to a simple power law: V_c(z) = (z/t + 1/2)^q, V_m(z) = 1 - V_c(z), with -t/2 ≤ z ≤ t/2, where q ≥ 0 is the volume fraction exponent (i.e., for q = 0 the plate is fully ceramic and for q = ∞ the plate is metallic - see Fig. 1).
According to the rule of mixtures, the properties of the functionally graded material (E - Young's modulus, ν - Poisson's ratio, etc.) can be expressed as P(z) = (P_c - P_m)·V_c(z) + P_m, where P stands for the given property. In the present study, the classical plate theory is employed to obtain the governing equations of thin FG plate equilibrium. Using the classical laminated plate theory (CLPT), the stress and moment resultants (N, M) are defined as [5,8,9]: N = A·ε⁰ + B·κ, M = B·ε⁰ + D·κ, where A, B, D are the extensional, coupling and bending stiffness matrices, respectively. For the FG plate their components are obtained by through-thickness integration, (A_ij, B_ij, D_ij) = ∫ Q_ij(z)·(1, z, z²) dz over -t/2 ≤ z ≤ t/2, where Q_ij(z) are the reduced stiffnesses of the graded material. Due to the presence of the nontrivial submatrix B, coupling between extensional and bending deformations exists, as in the case of unsymmetrical laminated plates [5,8,9]. An extensional force results not only in extensional deformations but also in bending of the FG plate. Moreover, such a plate cannot be subjected to a moment without simultaneously suffering extension of the middle surface. The coupling between extension and bending results from the combination of the geometry and the FGM properties of the structure. The stretching-bending coupling strongly affects the constitutive equations and the boundary conditions, which take a complex form, and the solution procedures become difficult.
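A numerical sketch of these relations is given below, assuming the usual power-law ceramic volume fraction Vc(z) = (z/t + 1/2)^q (consistent with q = 0 giving a fully ceramic plate), the rule of mixtures, and plane-stress reduced stiffnesses of an isotropic layer; the Al/TiC constants and the thickness are illustrative placeholders, not the values used in the paper.

```python
# Effective FGM plate properties: power-law Vc(z), rule of mixtures for E and nu,
# and classical-plate-theory stiffnesses (A, B, D) = integral of Q(z) * (1, z, z^2) dz.
# Material constants and thickness are placeholders, not the values from the paper.
import numpy as np

t = 0.002                         # plate thickness [m] (placeholder)
q = 0.5                           # volume fraction exponent
E_m, nu_m = 70e9, 0.33            # aluminium (placeholder values)
E_c, nu_c = 440e9, 0.19           # titanium carbide (placeholder values)

z = np.linspace(-t / 2, t / 2, 2001)
Vc = (z / t + 0.5) ** q
E  = (E_c - E_m) * Vc + E_m       # rule of mixtures
nu = (nu_c - nu_m) * Vc + nu_m

# reduced stiffnesses of an isotropic layer under plane stress
Q11 = E / (1.0 - nu ** 2)

def integrate(f, x):
    """Simple trapezoidal rule, kept explicit for portability."""
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x)))

A11 = integrate(Q11, z)
B11 = integrate(Q11 * z, z)
D11 = integrate(Q11 * z ** 2, z)
print(f"A11 = {A11:.3e} N/m, B11 = {B11:.3e} N, D11 = {D11:.3e} N*m")
# a nonzero B11 reflects the extension-bending coupling discussed in the text
```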
The equations of stability of thin-walled structures have been derived using a variational method [8,9,11]. Koiter's asymptotic theory has been employed [3,6,7,8,9,10,13,18,25,26] after expanding the fields of displacements U and the fields of sectional forces N into power series with respect to the mode amplitude ζ1 (the dimensionless amplitude of the buckling mode): U = λ·U(0) + ζ1·U(1) + ζ1²·U(2) + ..., N = λ·N(0) + ζ1·N(1) + ζ1²·N(2) + ..., where λ is the load parameter. The postbuckling equilibrium path of an imperfect structure with the imperfection amplitude ζ1* for a single (i.e., uncoupled) buckling mode has the following form [3,8,9,13,26]: (1 - σ/σcr)·ζ1 + a111·ζ1² + a1111·ζ1³ = (σ/σcr)·ζ1* (9), where σcr is the critical (bifurcational) value of σ (used instead of λ). The coefficients a111 and a1111 in equilibrium equation (9) are given in papers [3,8,9,13,26]. It can easily be seen that the amplitude ζ1* is a small quantity (i.e., only linear terms with respect to ζ1* have been accounted for) and that the linear pre-buckling state is assumed. The corresponding expression for the total elastic potential energy of the structure, from which these coefficients can be precisely determined, is given in [3,8,9,13,26].
The considerations performed in [12] and the results of [11] allow for concluding that the approximated values of the a1111 coefficients correspond to an application of the following simply supported boundary conditions of the plate at both edges (i.e., x = 0, ...). The first condition in (11) means that the external loading is not subjected to any additional increment.
At the critical point, the relationship between the external loading and the displacement amplitude ζ1 for the ideal structure (i.e., without the imperfection, ζ1* = 0) is subject to bifurcation. Equilibrium path equation (9) can be treated as the first variation of the system potential energy, that is to say, as the condition necessary for the system equilibrium. To assess the stability of the equilibrium state, the second variation of the energy was calculated. Then the intersection points of the equilibrium path and the stability limit of the ideal structure were determined. Finally, the coordinates of the two points p1, p2 were arrived at. A sample equilibrium path for the ideal square FGM plate is presented in Fig. 2. There it is visible that in the case of the FG plate a nonsymmetrical stable equilibrium path exists. The unstable configuration corresponds to the postbifurcational equilibrium path of the plate without imperfection in the range indicated in the figure. In the linear problem, the critical stress σcr is the characteristic quantity, whereas in the nonlinear first order problem, the magnitude of the coefficient a111, determining the sensitivity to imperfections, should be accounted for.
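To see why a nonzero a111 produces branches that depend on the sign of the imperfection, the equilibrium-path relation can be solved directly for the load ratio. The sketch below assumes the single-mode form written in equation (9) above and uses arbitrary placeholder coefficients rather than values computed with the SAM.

```python
# Postbuckling path of the imperfect plate, assuming the single-mode Koiter form
# (1 - s)*z + a111*z**2 + a1111*z**3 = s*z_imp, with s = sigma/sigma_cr.
# Coefficients and the imperfection amplitude are arbitrary placeholders; the point
# is that a nonzero a111 makes the branches for z_imp > 0 and z_imp < 0 differ.
import numpy as np

a111, a1111 = -0.15, 0.35          # placeholder postbuckling coefficients

def load_ratio(z, z_imp):
    """sigma/sigma_cr as a function of the mode amplitude z for imperfection z_imp."""
    return (z + a111 * z ** 2 + a1111 * z ** 3) / (z + z_imp)

z = np.linspace(0.01, 1.5, 60)
for z_imp in (+0.05, -0.05):
    s = load_ratio(np.sign(z_imp) * z, z_imp)          # follow the branch of the imperfection sign
    s_at_1 = s[np.argmin(np.abs(z - 1.0))]
    print(f"z* = {z_imp:+.2f}: sigma/sigma_cr at |z| = 1.0 -> {s_at_1:.3f}")
```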
Analysis of the results
Detailed numerical computations were conducted only for a square FGM plate. The plate is subjected to uniform compression in the direction of the x axis. All plate edges are assumed to be simply supported. However, in the subsection devoted to the determination of the critical stress, some other boundary conditions along the unloaded edges are considered as well.
The geometrical dimensions of the square plate (Fig. 3) and the material constants for Al-TiC are assumed, where the indices m and c refer to the metal (Al) and the ceramic material (TiC), respectively. In [8], an unbending prebuckling state, i.e., a distribution field of the zero state according to (A1), has been assumed. For the displacements assumed in (A1) and (8), the force and moment dependence (5) for the zero state (i.e., prebuckling) results in the occurrence of nonzero inner sectional forces (A2). In the legends to the figures, F denotes free edges (for which appropriate conditions have been assumed), while S1, S2 and C denote the other boundary condition cases considered. The critical loads (Fig. 4) for the boundary conditions under consideration increase with a decrease in the value of the volume fraction exponent q, as can be expected. Differences in the values of critical loads for cases S1 and S2 become visible for q > 0.5. They are relatively inconsiderable (below 10%) and larger for case S2. The values of a1111 for case C are approximately 60% lower than for conditions S1 and S2. For condition S2, the values of a1111 are higher than for S1, and these differences become visible for q > 0.5. However, they do not exceed 5% for the range of variability 0.1 ≤ q ≤ 10 under consideration. As has been discussed in detail in [8], the sectional moments of the zero state (according to the notation introduced by Byskov and Hutchinson [3,26]) cause an appearance of first order internal forces. For cases F, S1, S2, and C one obtains different distributions of the inner forces of the first order, i.e., Nx(1), Ny(1), Nxy(1) and Mx(1), My(1), Mxy(1). The values of these inner forces are determined with accuracy up to a constant, as is the case for eigenproblems. The conditions on the loaded edges (11a) enforce a generation of a self-balancing system of forces Nx(1). Below, a few exemplary diagrams for S2 boundary conditions on the longitudinal edges, for q = 0.5, are shown.
In Fig. 7, distributions of the first-order inner membrane forces are presented. To verify the proposed SAM solution, finite element computations were performed for the FG plate under axial loading. The commercial ANSYS software was applied for the numerical calculations. The numerical model was created with an application of a shell finite element. It was a multi-layered four-node element with six degrees of freedom at each node (three translations in the directions of the local coordinate axes and three rotations around these axes). The rotational DOF around the normal to the plate midplane was constrained via the penalty function to relate this independent rotation with the in-plane components of displacements. This element is dedicated to modelling multi-layered structures and is equipped with the section option, which allows for easy tailoring of the lay-ups of the modelled plate. The sensitivity to shear strains in this element is governed by the first-order shear deformation theory, whereas the element formulation is based on the logarithmic strain measure. According to the current analysis requirements, the applied finite element was associated with linear elastic material properties. To discretize the model, a uniform mesh of elements was generated. The boundary conditions on the loaded plate edges, which followed from the S1 type analytical simple support, were introduced by displacement constraints in the appropriate directions as well as coupling of the edge node displacements to keep the edges straight.
The initial imperfection was introduced by updating the finite element mesh with the local mode shape of the eigen-buckling solution, with a given magnitude corresponding to the plate thickness. The eigen-buckling analysis, in which the critical load was determined together with the eigen-mode, preceded the nonlinear analysis. The numerical model therefore employed the large displacement formulation. The load was applied to the plate edges in the form of uniformly distributed node forces.
For the square plate under analysis and for q = 0.5, the results of the calculations obtained from the SAM and the FEM were compared. A comparison of the results is presented in Fig. 9. Since the postbuckling coefficients were determined only approximately, visible differences appear between the results obtained with the two methods (SAM and FEM) for the same value of imperfection. It can be seen that the sign of the imperfection exerts an influence on the postbuckling equilibrium path. The initial deflection ζ1* directed towards the ceramic side yields higher values of total deflections for a given value of load N/Ncr than the initial deflection directed towards the metal side.
In both cases the assumed absolute value of the imperfection ζ1* was the same. Thus, as discussed above, the application of Koiter's theory through the semi-analytical method enables an explanation of the phenomenon of different postbuckling equilibrium paths of the functionally graded plate for different signs of the imperfection with the same absolute magnitude ζ1*. In particular, it can be seen for ...
Conclusions
Analytical and numerical investigations on FGMs, a relatively novel class of materials, and their applications in plate and shell structures are presented. The effect of a gradually varying volume fraction of the constituent materials leads to a continuous change from one surface to another, eliminating interface problems, and gives smooth material properties of the final composite, which is especially important in thermal environment applications.
The influence of imperfection values on the postbuckling equilibrium paths of the FG plate has been analyzed. The basis for explaining the discussed behaviour is the nonlinear Koiter's theory of conservative systems. In the case of the FG plate, nonzero first-order sectional inner forces, which cause an occurrence of nonzero postbuckling coefficients, are responsible for the system sensitivity to imperfection. As a result, the postbuckling equilibrium paths of plate structures made of FGMs are unsymmetrically stable. This explains the observed dependence of the plate response on the imperfection sign (sense). In the appendix relations, Δ denotes the actual loading; the loading of the zero state is specified as a product of the unit loading and the scalar load factor. Taking into account relationship (5), the inner sectional forces of the prebuckling (i.e., unbending) state for the assumed homogeneous field of displacements (A1) are expressed by relationships (A2) before the redistribution of forces in the plate due to plate deformations. The assumed displacement field and the corresponding field of inner forces for the prebuckling state fulfil the equilibrium equations of the zero state as an identity.
The omission of the displacements of the fundamental state implies that we ignore the difference between the configuration of the non-deformed state and the fundamental state, and we may consequently regard the previously defined displacements u(0), v(0) as the additional ones from the fundamental state to the adjacent state.
The first order approximation, being the linear problem of stability, allows for determination of values of critical loads, buckling modes, and initial postbuckling equilibrium paths.
"Engineering"
] |
Review on the Removal of Dyes by Photodegradation Using Metal-Organic Frameworks Under Light Irradiation
Metal–organic frameworks (MOFs) are coordination networks/polymers with organic ligands containing potential voids. MOFs are a class of porous polymeric materials consisting of metal ions linked together by organic bridging ligands. Different metal-organic framework compounds of Zn, Fe, Al, Cr, Co, and Cd have been successfully synthesized using different synthesis methods under ambient conditions by different scholars. The photocatalytic activity of these MOFs was investigated through the degradation of different organic dyes (such as MO, MB, DTBP, Orange G, RhB, RBB, and phenol) in aqueous solution under light irradiation. These MOFs exhibit a promising photocatalytic activity for efficient dye degradation under UV-visible light, depending on their band gap differences. The effect of the addition of electron acceptors (H2O2, KBrO3 and (NH4)2S2O8) on the photocatalytic performance of MIL-53(Fe) towards MB dye was also evaluated. The photocatalytic performance of this MOF was enhanced by the presence of electron scavengers, which retard hole-electron recombination. As photocatalysts, the most remarkable feature of MOFs is the observation of reverse shape selectivity, in which large molecules that cannot access the interior of the micropores are degraded significantly faster than smaller ones that can enter the pores.
INTRODUCTION
Although dyes make our world beautiful, they bring us pollution. Color is the first contaminant to be recognized in wastewater and has to be removed before discharge into water bodies or onto land. The colored wastewaters of industrial effluents are unattractive because they contain significant concentrations of pollutants and thus become the source of increasingly acute complaints. Moreover, dye wastewater usually consists of a number of contaminants including acids, bases, dissolved solids, toxic compounds, and colored materials. They can have acute or chronic effects on exposed organisms, which depend on the concentration of the dye and the exposure time. In addition, many dyes are considered to be toxic and even carcinogenic (Rashed and El-Amin, 2007).
Dye wastewaters enter the environment from manufacturers and consumers such as the textile, leather, paper, printing, plastic, and food industries, usually in the form of a dispersion or a true solution and often in the presence of other organic compounds originating from operational processes (Pelizzetti, 1985). The presence of even small amounts of dyes in water (< 1 ppm) is highly visible; it spoils the pleasing appearance of the water and causes a significant loss in luminosity, and any increase in temperature will greatly deplete the dissolved oxygen concentration in the wastewater. This results in subsequent alteration of the aquatic ecosystem (Vakiti, 2012). Thus the presence of dye materials greatly influences the quality of water, and the removal of this kind of pollutant is of prime importance (Agarwal, 2013).
Conventional dye removal methods, including physical, chemical, and biological processes, have been used intensively as a solution for the problem. However, these methods have disadvantages such as impacts on health, high cost and difficulty in recycling. A further problem is that these dye compounds in wastewater ordinarily contain one or several benzene rings and cannot be decomposed easily in chemical and biological processes. Moreover, most dyes are found to be resistant to normal treatment processes, as they are designed to resist chemical and photochemical degradation.
Heterogeneous photocatalytic technology based on porous solids has been developed as a promising solution to this challenge. Recently, MOFs have attracted attention as catalysts for the degradation of dyes under visible irradiation owing to their high surface area, low density, high porosity, thermal stability and adjustable chemical functionalities (Mohanty, 2012). MOFs, which exhibit high surface areas and large pore volumes, have attracted considerable attention due to their elegant topology and potential applications in separation, gas storage, molecular sensing, and catalysis. In addition, MOFs behave as semiconductors when exposed to light, making them potential photocatalysts. More recently, MOFs that can act as photocatalysts have attracted much attention for exploiting new applications of MOFs (Du et al., 2010).
Since there are only a few reports on the photocatalytic activity of MOFs for the degradation of organic dyes, this review paper examines the photocatalytic activity of different MOFs for the removal of various organic contaminants and the factors affecting this activity.
Metal-Organic Frameworks
Definition of MOFs
Metal-organic frameworks are coordination network/polymer with organic ligands containing potential voids. Metal organic frameworks are highly crystalline materials built from inorganic and organic building blocks with infinite inorganic-organic connectivity, forming soluble complexes that then self-assemble into one-, two-, or three-dimensional frameworks consisting of metal ions linked together by organic bridging ligands. Sometimes they are referred to as hybrid inorganic-organic frameworks and a subset of which are inorganic coordination polymers (Wilkinson et al., 1987).
Composition of MOFs
A metal-organic framework (MOF) material can be thought of as the composition of two major components: a metal ion and an organic molecule called a linker (Hong, 2005;James, 2003). These organic molecules act as a linker to link the metal ions. The backbone of the compound is constructed from metal ions which act as connectors and organic bridging ligands as linkers (Adedibu and Isaac, 2012).
Metal ions + organic units (linkers/bridging ligands) = coordination polymers or MOF materials. Figure 1: Schematic representation of a MOF unit cell showing the arrangement of inorganic joints (red circles (a), black spheres (b) and red squares (c)) linked by organic struts (yellow rods (a), white rods (b) and black rods (c)). The yellow sphere represents the void space within the lattice framework (Ma, 2011).
The organic ligands or linkers are groups that can donate multiple lone pairs of electrons (polydendate) to the metal ions, whereas the metal ions are made up with vacant orbital shells that can accept these lone pairs of electrons to form metal-organic framework materials.
Primary Building Units of MOFs
The metal ions and organic ligands used in the synthesis of MOFs are considered as the "primary building units". The transition-metal ions are often used as versatile connectors in the construction of MOFs. The first-row transition metal ions, such as Cr3+, Fe3+, Cu2+, Zn2+, are especially commonly used (Subramanian and Zaworotko, 1995). Some alkali metal ions, alkaline-earth metal ions and rare-earth metal ions have also been employed as metal nodes for the construction of MOF structures. Depending on the metal and its oxidation state, coordination numbers can range from 2 to 7, giving rise to various geometries, such as linear, T- or Y-shaped, square-planar, tetrahedral, square-pyramidal, octahedral etc., which play an important role in directing the MOF structures (Long et al., 2006; Beck et al., 1995). The organic ligands, which are used for MOF construction, generally contain coordinating functional groups such as carboxylate, phosphate, sulfonate, amine, or nitrile (Adedibu and Isaac, 2012).
Secondary Building Units of MOFs
Coordination of the primary carboxylate ligands to metal ions can result in many metal-oxygen-carbon (M-O-C) clusters, which are called "secondary building units", i.e. SBUs. Instead of one metal ion at a network vertex, the SBUs serve as connecting points that are joined together by the linkers leading to the formation of a MOF network. SBUs have intrinsic geometries, which play an important role in directing the MOF topology and thus, the MOFs constructed by SBUs generally exhibit high structural stability (Beck et al., 1995;Subramanian and Zaworotko, 1995;James et al., 2006).
Synthesis Methods of MOFs
Some reaction parameters are needed to be considered in order to reproduce the MOF materials. These are composition of the reactants, pH of the reaction medium, reaction temperatures, reaction time, solubility of the reactants, solvent type, concentration of the reactants, heating and cooling rates and types of reaction container and they should be screened in order to identify the optimal reaction conditions. One starting approach is to vary one parameter at a time in a systematic way (Vakiti, 2012). Several preparation methods for the formation of MOFs have been developed throughout the years.The most commonly used synthetic approaches are given below.
Hydro-and Solvo-thermal Synthesis
Hydro- and solvothermal synthesis are the most common methods used for making MOFs. Hydrothermal synthesis involves water as the solvent, whereas solvothermal refers to the use of organic solvents. The choice of solvent is based on its ability to dissolve the organic linker. In both cases, the reactions take place inside a Teflon-lined stainless steel autoclave. Since it is a closed system, an autogenous pressure will build up (Yaghi et al., 2012; James et al., 2006; Mark et al., 2012). The reaction mixture is normally heated at temperatures ranging from 80 to 220 °C, over a time of several hours to several days. Compared with microwave, electrochemical and mechanochemical techniques, the use of autoclaves is a slow method.
Ionothermal Synthesis
Ionothermal synthesis involves the use of, as the name implies, ionic liquid, which act as the solvent. Ionic liquids have many attractive properties and have been used for producing many new structures. Their high polarity and pre-organized structure give them excellent solvating abilities. They are suitable for high temperature reactions, as in autoclaves and microwave ovens, since they have high thermal stability and possess little measurable vapour pressure (James et al., 2006;Yaghi and Li, 1995).
Microwave Synthesis
Microwave-assisted synthesis can be applied for making MOFs to reduce the reaction time and/or heating temperature and increase the purity of the product. Microwave synthesis in organic chemistry has attracted considerable attention over the last decade as a result of its short reaction times. Microwave synthesis was extended to the synthesis of zeolites and more recently has been applied for the production of MOFs. It was not until recently that this method was applied to the synthesis of MOFs. Fast reaction rates, high yields and selectivities, low amounts of waste and the possibility to control the size, shape and quality of the crystals are the main advantages of microwaves synthesis. The fast reaction times achieved with microwave heating can be explained by the increased number of nucleation sites due to the rapid heating (Mark et al., 2012;Yaghi and Li, 1995).
The particle size generally decreases using microwave heating compared with conventional methods. The promotion of uniform and rapid nucleation throughout the mixture results in high quality crystals with a narrow size distribution within a short time scale. Instead of days, as in solvothermal synthesis, it can take just a few minutes to synthesize a MOF. The product can be isolated within an hour. This method is applicable to functionalized devices for thin films, conductive materials, catalysis and gas sorption, storage and separation. The fact that the microwave instruments are device specific in terms of the irradiation power and the experimental setups, can lead to an uncertainty in the reproducibility of the experiments. This limitation is an ongoing discussion among scientists from different chemistry fields.
Sonochemical Synthesis
Sonochemical synthesis is a technique that could compete with microwave heating due to its ability to reduce both the time and temperature of the reaction of MOFs (Mikaela, 2012). In one case, where three different methods for preparing MOF-177 were compared, sonication showed better results than both solvothermal and microwave syntheses. A high yield of good quality MOF-177 ([Zn4O(BTB)2]), which gave greater CO2 uptake than samples prepared by conventional or microwave-assisted syntheses, was obtained after only half an hour (Yaghi et al., 2012; Yaghi and Li, 1995).
Electrochemical Synthesis
Electrochemical synthesis was used for preparing a MOF for the first time in 2006 by Müller. The electrochemical reaction was performed inside a glass reactor containing Cu plates as the electrode material, supplying metal ions to a solution of linker and methanol (time: 150 min, voltage: 12-19 V, current: 1.3 A). A pure solid of octahedral crystals was collected. By varying the voltage, the concentration of metal ions was changed and thereby the crystal sizes could be controlled. The short reaction times compared with hydrothermal synthesis, together with the fact that the system is solvent-free and can form coatings continuously, make it applicable in industry (Müller et al., 2009).
Mechanochemical Synthesis
Mechanochemical synthesis can work in the absence of solvent. Solvents are nearly always added to reactions to facilitate the diffusion and collision of the components. However, solvent is not always necessary. The first example of a solvent-free synthesis, or mechanochemical synthesis, of a MOF was Cu(ina)2 (Mark et al., 2012; Yaghi and Li, 1995). The metal salt and the acid, both solids, were ground using a ball mill without any addition of solvent or heat. The reaction was initiated by minimizing the particle size, which facilitated the interaction between the metal salt and the acid. The reaction was accelerated by further grinding (James et al., 2006).
Room Temperature Synthesis
In this method the metal salt solution in a specific solvent and the linker solution in the same/different solvent are prepared and mixed with stirring. The resultant solution is further stirred for longer hours upon magnetic stirrer and the product is separated by filtration, washed plenty times with the solvent and dried at room temperature (Negash, 2013;Tranchemontagne, 2008).
Applications of MOFs
Factors such as high surface areas together with outstanding porosity with nano sized pores and the possibility to produce MOFs from simple starting material in already established industrial processes are essential for the successful future of MOFs. MOFs have been studies widely and found application in many fields. In this review, some extensive studied applications of MOF materials are introduced briefly.
MOFs as Host Materials
Given the high porosity of MOF materials, it is expected that MOFs could serve as host materials with exceptional guest molecules loading capacity. All kinds of gas molecules, liquid adsorbates, and nano particles have been encapsulated into MOFs (Andrew et al., 2013;Müller, 2009). The most notable study of MOFs as loading materials is the storage of fuel gas H2/CH4 and greenhouse gas CO2 (Ma, 2011).The adsorption of other gas molecules, such as hydrogen sulfide (H2S), ammonia (NH3), acetylene (C2H2), and bioactive molecule nitric oxide (NO), in MOF materials have also been performed (Wilkinson et al., 1987). Liquid adsorbates (or dissolved in liquid), such as hydrocarbons, drugs, ferrocene, even large molecules like fluorescent dyes, C60 have also been successfully accommodated into MOFs and thereby generating fascinating properties (Adedibu and Isaac, 2012).
MOFs as Absorbents for Molecule Separation
Besides their high porosity, another feature of MOFs is their high structural tunability, with adjustable pore size, channel topology and inner surface properties, which makes them promising candidates as adsorbents for selective gas/solvent separation and purification. The pore size within the MOF is crucial for achieving highly efficient separation of guest molecules by the size/shape exclusion effect, in which large molecules are prevented from entering the pores while small molecules are allowed to pass through (Ma, 2011; Wilkinson et al., 1987).
Optimizing the pore surface, either by using customized metal nodes and organic linkers with the functionality already present or by post-synthetic covalent modification of the organic linkers, with the aim of controlling the adsorbate-framework interaction, is another way to facilitate selective molecule adsorption/separation (Adedibu and Isaac, 2012; Andrew et al., 2013). So far, MOFs with pores ranging from 2 to 48 Å have mainly been studied for separating gas molecules, such as CO2, CH4, H2, O2, and small-sized solvent molecules, such as methanol, ethanol, and water (Ma, 2011; Wilkinson et al., 1987).
MOFs as Template Materials
MOFs can be used as templates to direct the growth of embedded materials, with the aim of generating nanoscale particles. Braga et al. (2012) report that MOFs were first shown to be usable as templates for supporting the growth of metal nanoparticles in 2005. Since then, a range of nanoscale metals, e.g. Cu, Ru, Pd, Au, Ag, Pt, metal oxides such as TiO2, CuO, ZnO, as well as hybrids like NaAlH4, have been generated inside MOF cavities by loading their precursors into the framework through vapour deposition, solution impregnation or solid grinding methods (Braga et al., 2012).
MOFs for Biological Applications
The biological application of MOFs is still a very new field. Until now, only a few studies concerning MOFs as biomaterials have been reported, and several properties of MOFs, such as biocompatibility, efficacy, and imaging properties, remain to be investigated; extended biological applications of MOFs also need to be explored. The use of MOFs as potential drug carriers for biomedical and pharmaceutical applications, aiming at targeted drug delivery to specific sites with a controlled rate and avoiding the "burst effect", has attracted more interest recently. MOFs containing complexes of highly paramagnetic metal ions, such as Gd3+ or Mn2+, are often administered to enhance MRI contrast by increasing water proton relaxation rates (Ma, 2011).
MOFs as Magnet
Magnets are very important materials with an ever-increasing number of uses. The magnetic properties of polymetallic systems, such as ferromagnetism, antiferromagnetism, and ferrimagnetism, derive from the cooperative exchange interactions between the paramagnetic metal ions or organic radicals through diamagnetic bridging entities. Therefore, their magnetic behaviour depends on the intrinsic nature of both the metal and the organic ligands, as well as on the particular level of organization created by the metal-ligand coordination interaction. As a result, in pursuing the magnetism of MOFs, ligand design is crucial both to organize the paramagnetic metal ions in a desired topology and to efficiently transmit exchange interactions between the metal ions in a controlled manner (Adedibu and Isaac, 2012; Müller et al., 2009).
Magnetic studies of MOFs are embedded in the area of molecular magnets and the design of low-dimensional magnetic materials, magnetic sensors, and multifunctional materials. Indeed, closed shell organic ligands that are typically used in MOFs mostly give rise to only weak magnetic interactions. Furthermore, the porosity of MOFs provides additional interesting phenomena in regards to magnetic properties. The use of chemical coordination or crystal engineering techniques allows for the systematic design of MOFs with adjustable magnetic properties (Ma, 2011).
Mechanism for Degradation of Dyes
The mechanism of dye degradation using MOFs under light irradiation will be discussed by considering the MB degradation using MIL-53(Fe) under visible irradiation. The reaction mechanism for MB decoloration could be discussed based on semiconductor theory. Illumination of the MIL-53(Fe) photocatalyst by photons with energy equal to or greater than its band gap excites electrons (e-) from the valence band to the conduction band and produces holes (h+) in the valence band. The photogenerated holes (h+), with strong oxidant capacity, can directly oxidize adsorbed organic molecules or react with water molecules or hydroxyl ions (OH-) to generate hydroxyl radicals (•OH). The formed hydroxyl radicals (•OH) also possess strong oxidation ability and can react readily with surface adsorbed organic molecules. Meanwhile, photogenerated electrons can be trapped by molecular oxygen to form the superoxide radical (•O2-), which also possesses strong oxidant ability to decolorize the MB molecules. The low efficiency of MB photodegradation over the MIL-53(Fe) photocatalyst could be ascribed to the fast electron-hole recombination. The electron transfer process is more efficient if the molecules are preadsorbed on the surface within a reasonable range and with an appropriate orientation (Du et al., 2011). For heterogeneous catalysis, the overall process can be decomposed into five independent steps (Herrmann, 1999). These steps are:
1. Transfer of the reactants (contaminants) from the fluid to the surface of the catalyst
2. Adsorption of the organic contaminants on the activated photocatalyst surface
3. Photocatalysis reaction of the adsorbed molecules on the surface of the catalyst
4. Desorption of the reaction products (intermediates) from the surface of the catalyst
5. Transfer of the products (intermediates) from the interface region to the solution
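Which part of the spectrum can drive this band-gap excitation follows from E = hc/λ, i.e. a threshold wavelength of roughly 1239.84/Eg nm; a small sketch is given below, with illustrative band-gap values that are not taken from the review.

```python
# Threshold excitation wavelength from the band gap: lambda(nm) = hc/E ~ 1239.84 / Eg(eV).
# Only photons with wavelengths below this edge can promote a valence-band electron.
# The band-gap values are illustrative placeholders, not data from the review.
example_band_gaps = {"MOF A": 2.7, "MOF B": 3.5, "MOF C": 4.0}   # eV

for name, eg in example_band_gaps.items():
    edge_nm = 1239.84 / eg
    region = "visible" if edge_nm >= 400 else "UV"
    print(f"{name}: Eg = {eg:.1f} eV -> absorption edge ~ {edge_nm:.0f} nm ({region}-active)")
```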
Factors Affecting the Degradation Rate
pH Value of the Reaction Media
The solution pH is an important variable in aqueous phase photocatalytic reactions. The pH of a solution influences the adsorption and dissociation of the substrate, the catalyst surface charge, the oxidation potential of the valence band and other physicochemical properties of the system (Shankar et al. 2004a,b). In accordance with Nernst's law, varying the solution pH shifts the energy of the valence and conduction band edges (Hoffmann et al., 1995). This results in the conduction band electrons becoming more effective and the valence band holes less effective at higher pH. The pH significantly affects not only the photocatalyst activity but also the pollutant structure. For example, phenol can be charged positively or negatively under different pH ranges. The interaction and affinity between the photocatalyst and phenol will vary with the solution pH. So, the pH of the aqueous solution is a key factor for the photocatalytic reaction and can affect the adsorption of pollutants on the photocatalyst surface, an important step for the photo-oxidation to take place (Naeem and Feng, 2009). The degradation rate of phenol decreased with the increase in pH. Moreover, the low degradation rate at higher pH is attributed to the fact that when the concentration of OH- ions is higher in the solution, it prevents the penetration of UV light to reach the catalyst surface (Qamar et al., 2006). Furthermore, high pH favours the formation of carbonate ions, which are effective scavengers of hydroxyl radicals and can reduce the degradation rate (Akbal and Onar, 2003).
Pollutant Concentration
As the concentration of the model pollutant increases, more molecules are adsorbed on the photocatalyst surface; thus the substrate concentration can influence the extent of adsorption and the rate of reaction at the surface of the photocatalyst, and it is an important parameter for the optimization between a high degradation rate and efficiency (Pecchi et al., 2001). Mahalakshmi et al. (2009) found an optimum concentration for the dye under investigation: the rate increases up to this point, but above this concentration the rate decreases due to an insufficient quantity of •OH radicals, as the formation of •OH radicals is constant for a given amount of the catalyst. Similarly, Zhu et al. (2000) reported that the photogeneration of holes or •OH radicals on the catalyst surface is reduced since the active sites are covered by dye ions. Another possible cause is the radiation screening effect at a high dye concentration, since a significant amount of radiation may be absorbed by the dye molecules rather than by the photocatalyst particles, which reduces the efficiency of the catalytic reaction.
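Concentration-time data of this kind are commonly analysed with an apparent pseudo-first-order (Langmuir-Hinshelwood) model, ln(C0/C) = k_app·t; the sketch below fits k_app to placeholder data, since no such fit is reported in the review itself.

```python
# Apparent pseudo-first-order fit, ln(C0/C) = k_app * t, as commonly applied to
# photocatalytic dye-degradation data. The concentration-time values are placeholders.
import numpy as np
from scipy.stats import linregress

t_min = np.array([0, 15, 30, 45, 60, 90, 120])            # irradiation time [min]
c_over_c0 = np.array([1.00, 0.78, 0.60, 0.47, 0.36, 0.22, 0.13])

fit = linregress(t_min, np.log(1.0 / c_over_c0))          # slope = k_app [1/min]
print(f"k_app = {fit.slope:.4f} 1/min, R^2 = {fit.rvalue**2:.3f}")
print(f"half-life = {np.log(2) / fit.slope:.1f} min")
```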
Light Intensity
Light irradiation plays a significantly important role in all of the photocatalytic reactions and determines the number of created electron hole pairs. Accordingly, increasing the incident photon rate would result in an increase in the photocatalytic reaction rate. The rate of oxidation of a particular compound is proportional to light intensity. This phenomenon indicates that high photon flux increases the probability of collision between photons and activated sites on the catalyst surface and enhances the rate of photocatalytic reaction. Furthermore, at sufficient high light intensity levels, the collision between photons and the activated sites approaches its limit, and further increase in the light intensity will have no effect on the reaction rate (Mahalakshmi et al., 2009).
Reaction Temperature
Temperature has nearly no effect within a certain range, but an increase in the photocatalytic reaction temperature above this range promotes electron-hole recombination, due to the decrease in dissolved oxygen, and disfavours the adsorption of organic compounds onto the catalyst surface (Palmer et al., 2002).
Photocatalyst Concentration
The initial rates of reaction are directly proportional to the mass (m) of catalyst. However, above a certain value of m, the reaction rate levels off and becomes independent of mass (Shankar et al., 2004a,b). The increase in the efficiency seems to be due to the increase in the total surface area (active sites) available for the photocatalytic reaction as the dosage of photocatalyst increased. However, when catalyst was overdosed, the number of active sites on the catalyst surface may become almost constant because of the decreased light penetration via shielding effect of the suspended particles and the loss in surface area caused by agglomeration (Sobczynski et al., 2004).
Electron Acceptors
Molecular oxygen has been employed as an effective electron acceptor in most photocatalysis applications. In heterogeneous photocatalytic reaction, molecular oxygen (air) has been used for this purpose as an electron acceptor for prevention of electron hole recombination. One approach used to prevent electron hole recombination is to add electron acceptors into the reaction media. The presence of H2O2 as electron acceptor can serve as electron scavengers to prevent the recombination and enhance photodegradation efficiency. H2O2 has several effects including: (a) avoid recombination of electron-hole by accepting the conduction band electron and (b) increase the concentrations of the hydroxyl radical. Electron scavenging and the consequent e --h + recombination suppression can also be achieved by the use of other inorganic oxidants such as KBrO 3 and (NH 4 ) 2 S 2 O 8 (Du et al. 2011).
CONCLUSION
MOFs have high thermal stability, excellent crystallinity, high surface area and larger pore volume. They have a promising photocatalytic activity for the removal of different organic dyes by photodegradation upon irradiation of lights with different wavelength depending on their band gap differences. Bulky organic dyes are more easily degraded than smaller ones due to the reverse shape/size selectivity of MOFs. Given the richness of different metal sites and/or metal-containing clusters and organic bridging linkers to construct diverse MOF materials and thus to tune their capacity to absorb solar energy and initiate photocatalytic properties, it is expected that new MOF photocatalysts for their wide applications will be emerging in the near future.
As a relatively new class of materials, porous MOFs will continue to draw interest and inquiry by both academia and industry. They are readily available using simple synthetic strategies that supply high surface area materials. In conclusion, the MOF research community has made great progress in the last decade, yet we may have just seen the start of the innovations in respect to the application potential of MOFs. Many new types of applications will emerge as the research topic becomes more and more popular. The future of the field is indeed very bright.
"Materials Science"
] |
Recent advances in the dearomative functionalisation of heteroarenes
Dearomatisation reactions of (hetero)arenes have been widely employed as efficient methods to obtain highly substituted saturated cyclic compounds for over a century. In recent years, research in this area has shifted towards effecting additional C–C bond formation during the overall dearomative process. Moving away from classical hydrogenation-based strategies a wide range of reagents were found to be capable of initiating dearomatisation through nucleophilic addition (typically a reduction) or photochemically induced radical addition. The dearomatisation process gives rise to reactive intermediates which can be intercepted in an intra- or intermolecular fashion to deliver products with significantly increased molecular complexity when compared to simple dearomatisation. In this Perspective recent examples and strategies for the dearomative functionalisation of heteroaromatic systems will be discussed.
Introduction
The chemistry of heterocyclic aromatic compounds (heteroarenes) has been investigated extensively for some considerable time. Over the last 30 years the motivation for organic chemists to find new ways to synthesize and derivatise heteroarenes has been fuelled by their widespread use in medicinal chemistry. 1 While the basic strategies for building up heterocyclic aromatic systems from simple building blocks have been thoroughly explored, 2 the synthesis of their dearomatized, saturated counterparts still offers a lot of possibilities to the creative chemist. The importance of these dearomatised counterparts in drug discovery and medicinal chemistry cannot be overstated as the field moves out of the era of "flatland" 3 and into an age where the focus shifts towards complex three-dimensional molecular architectures. In this regard, a general route to access functionalised, saturated heterocycles is to reduce and thereby dearomatise a pre-functionalised heteroarene (e.g. by catalytic hydrogenation).
In this Perspective we will focus on recently published reactions where a dearomatisation reaction on a substrate is enhanced by additional C-C (or related C-B/C-Si) bond formation, namely a dearomative functionalisation. 4 Common and established methods for accomplishing such a transformation rely on the arene acting as a nucleophile to initiate dearomatisation. However, in this review we will focus on the reverse scenario: dearomatisation initiated by the arene acting as an electrophile or a radical acceptor. The fundamental concept behind this type of transformation is depicted in Scheme 1 (illustrated with pyridine) and shows many intriguing possibilities.

Nicolas received his PhD from TU Wien, Austria, in 2019 working on the synthesis of steroidal doping metabolites. He then took on a postdoctoral position at the same institution developing a total synthesis of phytosiderophore natural products. In 2020 he moved to the University of Oxford as a FWF Erwin-Schrödinger fellow investigating stereoselective dearomative functionalisation of nitrogen-containing heterocycles.

Bruno Marinič received his MChem degree from the University of Oxford in 2018. He is currently a final year iCASE (AstraZeneca) DPhil student in the research group of Prof Tim Donohoe. His current research focuses on developing new reductive dearomative functionalisation reactions of N-heterocycles.
Generally, an initial dearomative step (here a nucleophilic addition) results in the unmasking of an aromatic substrate and reveals a reactive intermediate (I) which is qualified to engage in subsequent intra- or intermolecular bond formation. Importantly, this intermediate typically shows polarity and regiochemistry preferences that are opposite to those of the original arene. After C-C bond formation, the secondary intermediate II of these reactions can be diverted into several different reaction manifolds. For example, it can undergo rearomatisation 5 to provide a product of a formal C-H activation (III) or undergo a second (and even third) C-C bond formation reaction to provide the irreversibly dearomatised product IV.
To add to the myriad of productive reaction pathways, it should be noted that instead of engaging in C-C bond formation, intermediate I can also protonate to form a highly electrophilic iminium species V which can undergo trapping by a nucleophile. Finally, it should be added that apart from the stepwise processes discussed above, photoredox promoted cycloaddition reactions can also result in the formation of two new bonds (typically as a result of radical addition) and thus also fall under the umbrella of dearomative functionalisation. Therefore, this class of reaction has been included, especially given the intense current interest in this area.
Some of the benefits of developing a general dearomative functionalisation methodology are immediately obvious: the ability to accomplish rapid and efficient enhancement of molecular complexity through cascade processes; the diversification of readily accessible molecular scaffolds and the possibility for high regio- and stereocontrol in the obtained products. Therefore, we want this Perspective to illustrate the enormous potential of dearomative functionalisation in chemical synthesis.
Metal-catalysed
In this section, methods relying on an initial reductive dearomatisation caused by a metal hydride (itself generated by transfer hydrogenation rather than with molecular hydrogen) are listed. In 2019 we reported a reductive hydroxymethylation reaction (Scheme 2) of activated quinolines 1 and pyridines 2 employing formaldehyde as both a reductant (i.e. source of hydride) and an electrophile. 6 In this transformation the Ir(III) catalyst was transformed to an iridium hydride through the oxidation of formaldehyde methyl hemiacetal; this was then able to add hydride at C-4 of the activated heteroarene. The resulting enamine (see archetype I in Scheme 1) went on to trap the formaldehyde electrophile, thus generating an iminium ion which was finally reduced by more Ir-H catalyst. While quinoliniums reacted without the need for additional activation of the aromatic system, the pyridines required an electron withdrawing group (EWG) to be present at C-4. Subsequently we expanded the scope of the N-activating groups and diversified the C-4 substituents that are tolerated (see 4, Scheme 3). 7 The value of the methodology was also demonstrated when we applied it in a short and efficient synthesis of the anti-depressant pharmaceutical paroxetine. 8 The group of Zhang recently disclosed some remarkable cascade reactions under similar conditions, functionalising reactive iminium species formed in situ through intramolecular trapping reactions. In 2021 a protocol for the incorporation of substituted phenols onto heterocycles was published (see 6 / 12, Scheme 4). 9 In this work, the phenol partner 7 reacted with formaldehyde, followed by dehydrogenation in situ to give aldehyde 9. The enamine species formed from dearomatisation of the quinolinium then went on to capture this aldehyde. Following elimination of water, an extended α,β-unsaturated iminium 10 was formed, which was reduced to form a new enamine species. After reaction with another equivalent of formaldehyde, the "trapped" iminium 11 which was now unable to lose a proton was attacked by the phenol to deliver the corresponding cis-annulated products. An alternative mechanism leading to the same product can also be formulated by allowing the initial enamine to attack formaldehyde and form an extended iminium which in turn could be attacked by the phenol in a Friedel-Crafts-type reaction.
Subsequently, the scope of reaction partners was expanded 10 to include cyclic 1,3-dicarbonyls and aromatic derivatives thereof, giving rise to complex pentacyclic products (Scheme 5) in one step. This three-component annulation reaction possibly proceeds through formation of a good conjugate acceptor by reaction of 14 with formaldehyde (including elimination). The enamine formed after initial dearomatisation of the quinoline then attacks via a conjugate addition, and a subsequent second alkylation with formaldehyde gives rise to 15.
Zhang and co-workers also expanded this chemistry to include anilines under similar conditions, but in the absence of formaldehyde; here the resulting aminal functionality was not stable and this led to fragmentation of the quinoline core and formation of a new quinoline ring (Scheme 6). 11 Overall, this process could be regarded as a formal C-3 alkylation of quinolines. While this example does not strictly constitute a dearomatisation (as the product is aromatic), it was included in this review due to the similarity in mechanism and concept.
Finally, Zhang's efforts in this area culminated in the recently reported annulation of activated quinolines and isoquinolines in a Mannich/Friedel-Crafts cascade to give complex azaarene products (see 25 and 26, Scheme 7). 12 Mechanistically, this reaction was thought to involve attack of an initial enamine species onto formaldehyde, followed by loss of water and a conjugate reduction by a hydride equivalent to give methylated species 21. The Schiff base 22 formed from the condensation of formaldehyde with the aniline can then react either stepwise through enamine alkylation of the iminium or in a [4 + 2] cycloaddition giving rise to intermediates 23 or 24. The former can react to give 26 via a Friedel-Crafts type mechanism whereas the latter directly collapses to give the product through rearomatisation. This protocol proved to be high-yielding and broadly applicable to a range of substituted (iso)quinolines and anilines.
In earlier work Zhang was also able to exploit the reactivity of an iminium species formed after initial arene reduction and protonation. In 2018, the C-2 functionalisation of 1,8-naphthyridines 27 with substituted anilines 28 under acidic ruthenium catalysis (Scheme 8) was reported. 13 In this case acidic activation of the substrate with TsOH was sufficient to allow dearomatisation of the quinoline core.
An example of a related dearomatisation reaction that is coupled with C-C bond formation but then finishes with a rearomative process is depicted in Scheme 9. 14 In 2020 we developed the C-3 and C-5 methylation of pyridinium salts to give pyridines. Mechanistically, the intermediate 32, formed after dearomatisation by hydride at C-2 and formaldehyde alkylation at C-3, is able to lose water and form an extended iminium species which can accept a hydride at the exocyclic position, thus installing a methyl group after re-aromatisation in situ. This process can be repeated at C-5 and finally the activating group could be cleaved with CsF in a one-pot fashion to give functionalised pyridine 31 directly.

Scheme 3 Reductive hydroxymethylation of 4-aryl and 4-heteroaryl pyridinium salts. 7,8
Scheme 4 Catalytic reductive tandem functionalisation of quinolinium salts with phenols and formaldehyde. 9
Scheme 5 Catalytic annulation of isoquinolines with 1,3-dicarbonyl compounds and formaldehyde. 10
Metal-free dearomatisations
Relative to approaches using transition metal catalysts there have been a handful of recent reports on metal-free dearomative functionalisation. In 2019 we disclosed the C-4 hydroxymethylation of activated isoquinolines with paraformaldehyde being used as both an electrophile and reducing agent (Scheme 10). 15 Under strongly basic conditions a Cannizzaro-type mechanism was proposed whereby a hydride was transferred from formaldehyde. After hydride attack on the arene, the reaction then proceeds as expected with the final iminium being reduced through a second hydride transfer from formaldehyde. Interestingly, C-4 unsubstituted substrates underwent two sequential alkylation reactions resulting in methyl/hydroxymethyl substitution (R3 = Me in Scheme 10) at that position.
Subsequently both the Donohoe and Zhang groups have developed metal-free protocols for the functionalisation of (iso)quinolines by employing alternative reducing agents. Pleasingly, the movement away from formaldehyde has allowed for the trapping of a more diverse range of electrophiles by the reactive enamine intermediates.
Zhang reported the synthesis of 3-substituted quinolines by reductive deconstruction of activated isoquinolines (35, Scheme 11) using phenylsilane as the reductant under basic conditions. 16 Mechanistically this reaction proceeds through an initial hydride attack at C-1 of the isoquinolinium, followed by aldol-type condensation onto the aromatic aldehyde and loss of water. The resulting unsaturated iminium is intercepted by the ortho-aniline to form aminal 38 which collapses to rearomatise the newly formed quinoline core and form 37. A broad functional group tolerance for all three residues was observed and generally good yields of rearranged quinolines were obtained.
Finally, we reported the dearomative functionalisation of activated (iso)quinoline salts by using buffered formic acid as the reductant, together with a wide range of electrophiles (Scheme 12). 17 This work was heavily focused on the use of α,β-unsaturated ketones, which engaged in a Michael-type reaction with the in situ formed enamine at C-3 and C-4 respectively for quinolines and isoquinolines. The reaction was initially performed under rhodium catalysis with very low catalyst loadings (0.01 mol%) but was also found to operate in the absence of metal. This data suggests that the reductant complex (formic acid and triethylamine) is able to selectively reduce arenes and iminium ions in the presence of other electrophilic species, a finding that could likely be exploited in other transformations. Apart from ketone-based electrophiles, maleimides, 1,1-disubstituted olefins, nitrostyrene and aldehydes could also be employed. The formation of more complex annulated products was also observed with certain C-3 and C-4 substituted substrates; usually by formation of secondary reactive exocyclic enamine species which formed tricyclic products e.g. 43 in moderate yields as single diastereomers.

Scheme 8 Reductive functionalisation of 1,8-naphthyridines at C-2 under ruthenium catalysis. 13
Scheme 9 C-3 methylation of pyridines through transient dearomatisation of activated pyridiniums. 14
Scheme 10 Metal-free C-4 hydroxymethylation of isoquinolinium salts with paraformaldehyde. 15
Scheme 11 Metal-free deconstruction of isoquinolinium salts for the synthesis of C-3 arylquinolines. 16
Scheme 12 Evolution of the dearomative functionalisation of (iso)quinolinium salts under acidic conditions. 17
Metal-catalysed
Apart from the reductive functionalisation reactions discussed in the last section, dearomatisation reactions which are initiated by carbon or heteroatom nucleophiles are also known. The Xu group disclosed a copper-catalysed reductive C-2 silylation of C-3 substituted indoles, enabled by a chiral NHC ligand (Scheme 13). 18 Mechanistically the reaction proceeds via formation of a silylcopper species which engages with the substrate in an addition reaction to give a copper enolate. Subsequent protonation and epimerisation delivered the desired dearomatised products 45 in good enantiomeric purity and as single diastereomers.
In closely related work, Xu and co-workers accomplished a stereoselective C-2 borylation of indoles by employing B2pin2 as a source of boron and again using a chiral copper catalyst (Scheme 14). 19 Because C-3 esters were employed to activate the arene, the authors did not observe concomitant epimerisation at C-3 towards the thermodynamically more stable trans-isomers, and so the cis-products 47 were isolated with good diastereoselectivity. Additionally, the authors showed that the newly incorporated boryl and silyl functionality could be derivatised to form a variety of 2,3-substituted indoline products.
The Ito group was able to expand on these findings in reports from 2020 and 2021 whereby C-2 activated pyrroles and indoles underwent a similar dearomatisation sequence. In the first example shown (Scheme 15) addition at C-3 was enabled by the same mechanistic principles as discussed earlier. 20 Thus, by placing an electron-withdrawing group at C-2 the natural polarity profile of the pyrrole/indole nucleus was reversed and nucleophilic attack of the silyl species at C-3 was achieved in excellent yield and with good overall selectivity for the trans products 49.
The related enantioselective borylation of pyrroles under copper catalysis allowed for a rich follow-up chemistry by harnessing the reactivity of the initially formed allylic borane products as nucleophiles towards aldehydes. 21 In this case, the borylated intermediate 51 could be isolated but was usually carried through to the next step (this consisting of trapping an aldehyde directly to form three stereocenters, all with excellent diastereo- and enantioselectivity, Scheme 16). The 3,4-olefin remaining in the dihydropyrrole products 52 serves as an attractive handle for further functionalisation as demonstrated in the report (e.g. via dihydroxylation).
With regard to the related reductive functionalisation of quinolines, efforts have been more focused on the introduction of carbon nucleophiles onto the arene, for instance in work by Harutyunyan and co-workers in 2020 (Scheme 17). 22 An enantioselective addition-reduction sequence at C-4 of quinolines 53 was enabled through Lewis acid activation of the substrate and a chiral copper catalyst in combination with borane-THF as a reductant. As an alternative to full reduction, the nitrogen could be capped by reaction with acetyl chloride, giving rise to the respective 1,4-dihydroquinoline analogues.
In related work, Wang and co-workers demonstrated C-2 and C-4 dearomative functionalisation of pyridines and quinolines using the same principle of nucleophilic dearomatisation through attack of an organometallic reagent under concomitant Lewis acid activation (Scheme 18). 23,24 The crucial intermediate 56, formed after C-4 attack of the nucleophile, could be elaborated either by reduction with a hydride source as before or by addition of a second nucleophile (here indole) to obtain difunctionalised products (58) in good yields. Unfortunately, other arene nucleophiles did not participate in the trapping of the transient iminium species.
In subsequent work by Wang these findings were expanded to allow for a more general dearomative 2,4-functionalisation of quinolines. The crucial modification in procedure consisted of the utilization of TMSCN as a transient nucleophile, giving rise to a C-2 cyano species 60 which was not isolated but rather reacted in a subsequent operation with a Grignard reagent (Scheme 19). 25 Thus, a broad scope of residues could be introduced at C-2 (see 61) in moderate yield and with excellent diastereoselectivity.
In the area of pyridine dearomatisation the Yoo group disclosed a palladium-catalysed dearomative annulation that allows for the functionalisation of the C-2, C-3 and C-4 positions of 62 in a single step (Scheme 20). 26 In this reaction an initial lactone decarboxylation reaction gives rise to a palladium-allyl complex 62a bearing a negative charge which attacks the reactive pyridinium salt 62 at C-4. The resulting enamine 64 can then engage as an intramolecular nucleophile forming a 6-membered ring. Release of the Pd(0) species closes the catalytic cycle and the reactive iminium 62b is immediately intercepted by the N-tosyl anion, forming two new rings and C-C bonds in a single step.
By design, this method can only be applied to make a structurally narrow set of saturated heterocycles. Nevertheless, the efficiency and clever exploitation of the inherent substrate reactivity showcase the power of dearomatisation chemistry. In this light, Yuan, You and co-workers have demonstrated a similar C-2,3,4 one-pot functionalisation by using alkyne carbonates under copper catalysis (Scheme 21). 27 In this case the activating group on the pyridinium ring is slightly modified, bearing a nucleophilic thiolate. Mechanistically this transformation is proposed to proceed via C-2 attack of an alkyne cuprate and subsequent attack of the sulfur onto the alkyne to close the 6-membered ring. Elimination of the copper and decarboxylation of 67 then gives rise to allene 67a which reacts with the extended enamine to form the 5-membered ring at C-3. Finally, the oxygen or nitrogen of 67b closes another 5-membered ring at C-4 by capturing the extended iminium ion. This example shows that the concept of carefully engineering an appropriate nucleophile-electrophile-nucleophile sequence could pave the way for a more general approach to 2,3,4-functionalised heteroaromatics in one step.
Metal-free
In the domain of metal-free transformations, the same principles as discussed in the previous section have been recently applied to different 5- and 6-membered heterocycles. For instance, Wang disclosed the C-2,3,4 functionalisation of doubly activated pyridines 68 under very mild conditions (Scheme 22). 28 Double activation of the pyridine substrate via an N-substituent and an electron withdrawing group at C-3 was required to render the heterocycle reactive enough for the initial dearomatisation step (here the attack of a malonate anion at C-4). The salicylaldehyde-imine functionality then serves as an electrophile and is alkylated by the resulting enamine 70, and finally the resulting C-2 iminium is captured by the nucleophilic phenol. Because of the fully intramolecular nature of the reaction cascade, excellent diastereoselectivity was observed for the tetracyclic products that were obtained in one pot. Recently, Wang's group has applied the same methodology to activated quinolines. 29 Two modes of reactivity were discovered, resulting in the incorporation of either one or two equivalents of an external iminomalonate 72 (Scheme 23). For simple reaction to give a tricyclic product 73 an electron-deficient substituent on the aromatic ring was required, thus decreasing the nucleophilicity of the intermediate enamine. There are many possibilities for the related mechanism leading to heptacyclic products 74; possibly the initial product of [3 + 2] cycloaddition opens up and a second equivalent of 72 is alkylated through the deprotonated 1,3-diester moiety. An uncommon SN2 substitution of a secondary amine by the nucleophilic C-3 position was then invoked to close the 5-membered ring. The phenol of the second electrophile equivalent can then attack the electrophilic C-4 position to produce the product 74.
Together with the previous work on pyridines, this example demonstrates how virtually identical methods applied to these heterocyclic systems can lead to very different outcomes, depending on the specific electronic and steric biases of the reaction components.
Wang also experimented with other multi-nucleophile reagents as per their report in 2021 on the reaction of pyridinium salts with 1,5-diazapentadienes. 30 In this case, complex nitrogen-bridged systems were formed through sequential Michael and Mannich cascade reactions (Scheme 24). For pyridines, a formal [2 + 2] cycloaddition was invoked to arrive at the highly complex cage-like structures. For isoquinolines a relatively simple 1,3-difunctionalisation was achieved without incorporation of a second equivalent of isoquinoline or reagent. These results show one of the major difficulties in the development of this type of methodology: since both the reagent as well as the substrate can display nucleophilic or electrophilic reactivity, dimerization processes (see intermediate 84) can occur spontaneously. In the case of the isoquinolines the second equivalent of substrate could be removed through opening, elimination, protonation and ring closure by treatment with SiO2, but these pathways are dictated by the particular biases of the specific substrate and product and can be difficult to control.
Another example of triple functionalisation through a pincer-like reagent tethering multiple nucleophilic and electrophilic sites together was reported by Chen, Du and coworkers (Scheme 25). 31 In this work, C-3 substituted pyridines and quinolines 86 were reacted with cinnamoyl ketones together with a quinine-derived primary amine organocatalyst (C-1).
Double activation of the pyridine and the quinoline core was required to render the substrates reactive enough to undergo dearomatisation under very mild conditions at room temperature. The final iminium ion 89 was again intercepted at C-2 by a phenol or aniline on the aromatic ring, as per the work of Wang. Good to excellent yields and enantioselectivities were observed for both pyridines and quinolines.
A dearomative cyclopropanation reaction of activated quinolines was reported by Yoo in 2020 utilising sulfur ylide chemistry in combination with a tethered intramolecular nucleophile. 32 In this work, the ylide derived from trimethyl sulfonium iodide attacked at C-4 of a quinoline (Scheme 26). This was followed by displacement of DMSO through attack of the in situ formed enamine 90 to build up the cyclopropane ring. Finally, the intramolecular nucleophile, which was tethered to the quinoline nitrogen, intercepted the iminium to give tetracyclic products 91 in good yields. Through the use of more exotic ylide precursors substituted cyclopropanes were also made accessible. With the use of NaH as base the ylide concentration in the mixture was enhanced and a second equivalent of sulfur ylide was able to attack at C-2 to give 90b and finally close the 6-membered ring in 92 through DMS(O) displacement by the tosylamine.
An interesting example of a simple C-4 acylation of doubly activated pyridines was reported by Massi and co-workers in 2018 (Scheme 27). 33 A chiral NHC (C-2) was employed as a catalyst to effect an Umpolung addition of the alkyl aldehyde reagent and enable stereoselective attack at C-4 of the heterocyclic core.
The group of Nishigaichi disclosed an interesting example of the 1,3-difunctionalisation of isoquinolines by radical photochemistry (Scheme 28). 34 Thus, in situ activation of an isoquinoline with methyl chloroformate provided the corresponding isoquinolinium which accepted an electron in a SET process from a trifluoroborate reagent. Radical coupling then formed a new bond at C-1 of the isoquinoline. The resulting 1,2-dihydro species 99 was not stable as it could protonate from trace moisture to give a very reactive acyl iminium ion 100 which was attacked by the electron rich aromatic system in a Friedel-Crafts alkylation to form a second new C-C bond at C-3. Remarkably, the reaction could also be carried out under thermal conditions. This short report did not provide an extensive substrate scope as only very electron rich systems were investigated, but the utility of the method was demonstrated by reduction of the activating group (LAH, THF) to directly deliver the (racemic) natural products argemonine and eschscholtzidine.
Photoredox initiated dearomatisation
Finally, accomplishing dearomative functionalisation via (formal) cycloaddition transformations has become a more prominent approach in recent years, coinciding with the rise of photoredox catalysis as a powerful tool in the field of synthetic organic chemistry. The unique reactivity patterns of the high-energy intermediates generated via visible-light-induced excitation have opened new opportunities for the rapid construction of molecular complexity and topology which are not easily achieved by known ground-state transformations. The Glorius group has made significant contributions in this area, starting with their 2019 publication on the photoredox mediated dearomatisation of pyridines via an intramolecular [4 + 2] cycloaddition (Scheme 29). 35 Following initial excitation of the cinnamyl amide alkene 101 to a biradical intermediate, the resulting electrophilic α-carbonyl radical triggers a 5-exo-trig cyclisation onto an adjacent pyridine. After the resulting 1,6-biradical undergoes inter-system crossing (ISC) a simple radical recombination completes the [4 + 2] cycloaddition sequence to afford a wide range of isoquinuclidine analogues.
The Dixon group have developed an interrupted dearomative Minisci reaction of quinolines with imines (Scheme 30). 36 The photocatalytic construction of bridged 1,3-diazepanes proceeds via radical addition to the C-4 position of the 4-substituted quinoline substrates 103. Subsequently, a Hantzsch ester promoted reduction gives dihydropyridine intermediates which undergo a two-electron ring closure to form the bridged diazepane core 104. Good efficiency in the construction of sterically congested all-carbon quaternary centres was observed in this transformation alongside a generally wide scope of N-arylimine and quinoline derivatives.
In their subsequent work the Glorius group reported an alternative intermolecular dearomative cycloaddition of alkenes onto bicyclic azaarenes (Scheme 31). 37 The two sets of conditions developed utilise either the Brønsted acidity of the solvent (HFIP) or add a Lewis acid (BF3) to preactivate the respective (iso)quinoline 105 or 106 by lowering the triplet energy gap of the substrates. After energy transfer activation by the excited photosensitiser [Ir-F] a [4 + 2] cycloaddition on the added alkene proceeded with generally good regio- and diastereoselectivity. Functional groups and substitution patterns on both the alkene and azaarene components were well tolerated, resulting in an impressive array of over 80 bridged polycyclic products being formed.

Scheme 26 Cyclopropanation of activated quinolines to form tetracycles. 32
Scheme 27 Enantioselective dearomative C-4 acylation of 3-cyano pyridiniums using NHC catalysis. 33
Scheme 28 Radical-initiated formation of bridged tetrahydroisoquinolines by chloroformates and trifluoroborates. 34
Scheme 29 Dearomative photoredox catalysed intramolecular [4 + 2] cycloaddition of pyridines. 35
In 2018 the Meggers group reported the first example of catalytic asymmetric dearomatisation by visible-light activated [2 + 2] photocycloaddition with benzofurans (Scheme 32). 38 An N-acylpyrazole moiety at the 2-position of the benzofuran permits coordination of a visible-light-activated chiral-at-rhodium Lewis acid catalyst. The reaction begins with the blue light excitation of the reactant-catalyst complex. After ISC to reach the triplet state the reactant complex reacts with the alkene to generate a 1,4-biradical intermediate which then recombines to form the desired photocycloaddition product 112. The subsequent release of the photocatalyst completes the catalytic cycle. Almost perfect regioselectivity was observed with the formation of a single diastereomer and very high enantioselectivity in the products of 98-99% ee, thereby providing chiral tricyclic structures with up to four stereocentres.
Following on from their work on six-membered N-heterocycles, the Glorius group developed a lanthanide photocatalysed dearomative [2 + 2] cycloaddition-ring expansion sequence of indoles (Scheme 33). 39 Direct visible light excitation of a bidentate complex formed between the N-acylpyrazole group on the indole 113 and a simple commercially available gadolinium salt (Gd(OTf)3) delivered an excited state intermediate. This undergoes a stepwise [2 + 2] cycloaddition with an alkene to give a cyclobutene species; a spontaneous semi-pinacol rearrangement ring-expansion followed (see intermediate 116) and at this stage the reaction pathway diverged depending on the R2 substituent. Indoles lacking a substituent at the 3-position (R2 = H) underwent pyrazole elimination followed by a tautomerization to give the rearomatized product 114. On the other hand, when R2 = alkyl, migratory addition of the pyrazole moiety to the imine led to the formation of the dearomatized cyclopenta[b]indoline 115.
Dhar and co-workers have developed a method for conducting a dearomative intramolecular [2 + 2] cycloaddition by visible light photocatalysis (Scheme 34). 40 The photocycloaddition reaction is thought to commence with excitation of the iridium photosensitizer to its triplet excited state, followed by an intermolecular energy transfer to the substrate 117, exciting it from its ground state to a diradical triplet excited state. The diradical species then attacks the tethered olefin via a C-2 radical in a 5-exo-trig manner to form a 1,4-diradical intermediate which undergoes radical-radical combination to furnish the fused tetracyclic scaffold. Starting from achiral precursors this method enables a convenient synthesis of novel, functionalized tetracyclic scaffolds with at least three stereogenic centres that incorporate a fused azabicyclo[3.2.0]heptan-2-one motif. Recently, the You group have developed a visible-light-induced interrupted dearomative cycloaddition reaction of indoles tethered to vinyl cyclopropanes (Scheme 35). 41 Various types of cycloaddition could be achieved by simple engineering of the substrate structures (e.g. by placing a group either on the C-2 or C-3 position of the indole and tuning the reaction conditions). The divergent reaction pathways could proceed via 1,4- and 1,7-diradical intermediates to trigger either a [5 + 2] or a [2 + 2] dearomative cycloaddition respectively. In general, the reactions gave highly complex polycyclic products 120 or 121 in good yields with good chemo- and diastereoselectivity.
Additionally, in 2021, You reported an intramolecular double dearomative [4 + 2] cycloaddition of indoles bearing a pendant 1-naphthyl ring (Scheme 36). 42 Furthermore, a dearomative [2 + 2] cycloaddition reaction was facilitated when tethered heterocycles were introduced (e.g. 2/3-furyl, 2-benzofuryl and 3-indolyl). Similar to the examples described above, the reactions are likely to proceed through dearomative cycloadditions of triplet diradical species generated via a photocatalytic energy transfer mechanism. A wide range of architecturally complex polycyclic indoline derivatives 126 were produced in high yields and as single diastereoisomers.
Wang has reported an asymmetric neutral radical-engaged dearomatisation reaction of indoles with amines (Scheme 37). 43 SET oxidation of a tertiary amine additive by an excited photocatalyst gives a radical cation, which is deprotonated by the NaOAc to generate a nucleophilic radical. This species then adds to the arene, and the resulting α-carbonyl radical is subsequently reduced to give a carbanion. High diastereoselectivity of the initial radical addition was achieved by employing the Oppolzer camphorsultam chiral auxiliary (Xc, see also Scheme 38). While the initial protonation proceeds to give a cis-intermediate, equilibration under thermodynamic control results in trans-geometry in the products 128.
Additionally, a decarboxylative approach to generating reactive radicals for dearomatisation was explored by the Wang group to prepare a wide array of 2,3-disubstituted indolines 130 with high trans-stereoselectivity (Scheme 38). 44 The reaction could again be rendered highly stereoselective by incorporating Oppolzer's camphorsultam auxiliary at the C-3 acyl substituent.
Finally, the Masson group demonstrated a functionalisation at the C-2 position of indoles bearing a C-3 electron-withdrawing group by using visible light LEDs and tetra-N-butylammonium decatungstate (TBADT) as a photocatalyst (Scheme 39). 45 Hydroacylation at C-2 was observed by employing the respective aldehydes as reactants in the presence of base. The proposed mechanism consists of an acyl radical being formed by HAT from the aldehyde which adds across the indole forming a stabilised radical at C-3 (this is in turn reduced by a second HAT and then protonated to deliver the reduced, functionalised product 132). In addition to the broad functional group tolerance on both the indole and aldehyde substrates, benzofurans and thiophenes were also amenable to functionalisation under these conditions.

Scheme 39 Reductive C-2 acylation of substituted indoles via photoredox mediated activation of aldehydes. 45
Conclusions
In recent years the field of dearomative functionalisation has progressed significantly, as is evident from the large body of publications on this topic, enabling organic chemists to build up significant molecular complexity in a single step and expanding the chemical space of saturated heterocycles, all by starting from their aromatic counterparts. By encompassing metal-catalysed as well as metal-free conditions, a plethora of different species have been reported to successfully initiate a dearomatisation sequence; these range from simple hydride equivalents to radical species. Most approaches use the intrinsic reactivity profiles of the parent arene to perform the initiation; nucleophiles for example will react with pyridines at the C-2 or C-4 positions and thus electrophiles can then be incorporated at the C-3 position. Photoredox chemistry has emerged as a powerful tool via the generation of high-energy reactive intermediates to trigger processes that would have been difficult to achieve by other means.
Using dissolving metals to generate solvated electrons that act as initiators of heteroarene dearomatisation is well established in the literature, as is the trapping of the reactive intermediates with electrophiles. 46 In this regard, an emerging area is the use of electrochemical approaches in the reductive dearomatisation of arenes. 47 New advances in the field that allow for precise control of the intermediates generated by using the appropriate electric potentials offer great possibilities in terms of investigating reactivity pairings that are currently unexplored.
Exploiting the inherent reactivity of intermediates formed following an initial dearomatisation has been key to discovering new transformations in this area, with creative approaches that integrate multiple new functional groups and rings via annulation strategies. While a lot of the initial work in this area focused on intramolecular reactions, follow-up work has been successful in developing intermolecular variations enabling the construction of complex scaffolds from simple and readily available starting materials. Many impressive three-dimensional polycyclic frameworks have been prepared from simple, flat precursors; however, some of the structures still remain rather specialised. This is because a narrow window of reactivity regarding both the heterocyclic substrate (degree of activation, steric requirements etc.) as well as the "reagent" (multiple nucleophilic/electrophilic sites) needs to be targeted and optimised to arrive at an efficient transformation. Note that almost all of the reported approaches still rely on some form of activation of the heteroarene before the dearomatisation step. This is either done with pre-functionalisation such as N-quaternisation (e.g. pyridiniums, quinoliniums, etc.) or with the selective placement of specific electron-withdrawing groups on a given N-heterocycle. Alternatively, in situ activation using Brønsted and Lewis acids can be employed to avoid the need for pre-functionalisation. It is noteworthy that examples of dearomative reductive functionalisations as key disconnections in natural product synthesis still remain few and far between, and this is an area with great potential considering the frequency of saturated N-heterocycles in these structures.
On the other hand, the potential for late-stage functionalisation has been well documented with several groups reporting transformations on highly advanced drug-like or natural product-like arenes. Furthermore, initial steps towards enantioselective transformations have been made. This area is ripe for development as only a few strategies for enantio-induction in pyrroles and indoles have been properly explored. Most approaches have been focused on either chiral auxiliaries or chiral metal complexes to achieve good levels of enantioselectivity. Meanwhile, enantioselective reactions of 6-membered heteroaromatics remain difficult, with some metal catalysed approaches and a couple of examples using organocatalysts rounding off the more recent advances. We consider that the development of new enantioselective methods is of particular importance as it will garner more interest from the wider synthetic community.
In addition to further developments in the areas discussed above, new modes of dearomatisation still remain to be thoroughly explored, such as dearomative atom-insertions and dearomative atom-mutations, concepts introduced and recently published by Sarlah. 4c,48 Both of these offer many opportunities for new disconnections and would present a leap in the way we use dearomatisation reactions in general. So far, these approaches have focused on adding and building onto the existing heterocyclic frameworks rather than the possibility of completely rearranging them.
The progress so far suggests that, with careful planning and reactivity matching, broadly applicable processes can be developed, and the field offers many exciting opportunities for innovation to shape heterocyclic chemistry in the 21st century.
Author contributions
All authors contributed to the selection of publications and the writing of the manuscript.
Conflicts of interest
There are no conflicts to declare. | 8,367.8 | 2022-11-17T00:00:00.000 | [
"Chemistry",
"Biology"
] |
Simultaneous Sensor and Actuator Fault Reconstruction by Using a Sliding Mode Observer, Fuzzy Stability Analysis, and a Nonlinear Optimization Tool
This paper proposes a Takagi–Sugeno (TS) fuzzy sliding mode observer (SMO) for simultaneous actuator and sensor fault reconstruction in a class of nonlinear systems subjected to unknown disturbances. First, the nonlinear system is represented by a TS fuzzy model with immeasurable premise variables. By filtering the output of the TS fuzzy model, an augmented system whose actuator fault is a combination of the original actuator and sensor faults is constructed. An H∞ performance criterion is considered to minimize the effect of the disturbance on the state estimations. Then, by using two further transformation matrices, a non-quadratic Lyapunov function (NQLF), and fmincon in MATLAB as a nonlinear optimization tool, the gains of the SMO are designed through the stability analysis of the observer. The main advantages of the proposed approach in comparison to the existing methods are using nonlinear optimization tools instead of linear matrix inequalities (LMIs), utilizing an NQLF instead of simple quadratic Lyapunov functions (QLFs), choosing an SMO as the observer, which is robust to uncertainties, and assuming that the premise variables are immeasurable. Finally, a practical continuous stirred tank reactor (CSTR) is considered as the nonlinear dynamic system, and the numerical simulation results illustrate the superiority of the proposed approach compared to the existing methods.
Introduction
Over the past few decades, the reliability and safety of industrial systems have attracted considerable attention. As a consequence, fault-tolerant control (FTC) has been studied extensively in different fields [1,2]. There are different classifications for FTCs. In general, FTCs are classified into passive and active categories. Active fault-tolerant controllers compensate for the effects of the occurred faults by using early information obtained from fault detection and isolation (FDI) schemes, which leads to a more flexible dynamic [3]. Consequently, FDI is becoming an attractive topic in different research fields. Observer-based methods are one of the most popular model-based FDIs. The main idea of observer-based FDIs is to construct a residual based on the measured output of the system or to reconstruct the fault directly. The sliding mode observer (SMO) works based on the second approach, which detects the faults while determining the dynamic behavior [4,5]. SMOs are less sensitive to the unknown uncertainties occurring in the system compared to other observers like unknown input observers (UIOs) [6].
SMOs were first developed for linear dynamic systems; however, most actual physical systems are nonlinear. Currently, many SMO-based fault reconstruction methods have been developed for uncertain nonlinear systems. In ref. [7], by considering a filter of the measured output vector, the original system with sensor and actuator faults is transformed into an augmented system with just an actuator fault and unknown inputs.
Nevertheless, the classes of nonlinear systems considered in most of the papers are limited and cannot represent a general model for real systems [8,9].
Takagi-Sugeno (TS) fuzzy models can represent the behavior of nonlinear systems while keeping the simplicity of the linear models. A TS fuzzy representation is a convex nonlinear aggregation of several linear systems. Because the parameters of a TS fuzzy representation satisfy the convex sum, it is interesting to investigate the properties of the TS system based on its local linear vertices. With the advent of TS fuzzy systems, TS-based FDI techniques emerged to tackle a broader range of nonlinear systems [10]. By changing a nonlinear system to a TS system, some local linear systems are created, representing the behavior of the nonlinear system in a specific operating area. These local linear systems can be aggregated by using an interpolation mechanism. Thus, TS fuzzy models can represent the actual nonlinear behavior while maintaining the simplicity of linear models. Thus, an efficient FDI can be obtained by combining the SMO, which is robust to the uncertainties, and the TS fuzzy model, which causes simplicity in the design process. Recently, several researchers have utilized TS-based SMOs for fault detection and isolation in continuous-time and discrete-time systems [11,12]. However, in the methods developed in these articles, it is assumed that the premise variables are measurable, which reduces the applicability of these approaches. To deal with this problem, an FDI approach for stability analysis of the TS fuzzy systems with immeasurable premise variables was proposed in [13,14].
In [15], simultaneous actuator and sensor faults in a nonlinear system represented by a TS fuzzy model are reconstructed by using an SMO and considering H∞ performance criteria to reduce the effect of disturbance, whereas [16] follows the same procedure for the fault reconstructions and reconstructs both the exogenous disturbance and the system faults. However, in refs. [15,16] quadratic Lyapunov functions (QLFs) are used to design the observers. Using a QLF for TS fuzzy systems with a large number of fuzzy rules can cause undesired performance or infeasible solutions. Consequently, refs. [17,18] proposed using a non-quadratic Lyapunov function (NQLF) to design the TS-based SMO for FDI purposes. In all these papers, a linear optimization approach based on linear matrix inequalities (LMIs) is utilized, making the stability analysis more complex and requiring some approximations and lemmas to prove the stability conditions.
In this paper, a TS fuzzy-based SMO with immeasurable premise variables is designed to reconstruct simultaneous actuator and sensor faults in a nonlinear system exposed to an unknown disturbance. Then, the states and faults are estimated. The stability of the proposed observer is guaranteed by using the NQLF and fmincon as a nonlinear optimization tool in MATLAB. In addition, H ∞ performance criteria are considered to minimize the effect of disturbances and uncertainties on the estimation error and the fault estimations. By using the NQLF, a generalized eigenvalue problem is proposed, which maximizes the admissible Lipschitz constant and minimizes the disturbance effects on the estimation error through a nonlinear optimization problem.
The main advantages of the proposed approach over the existing methods can be summarized as follows:
•
Using nonlinear optimization tools instead of LMIs, which results in better accuracy.
•
Utilizing NQLF, which leads to less conservative optimization conditions than simple quadratic Lyapunov functions.
•
Assuming that the premise variables are immeasurable, which makes the proposed method applicable to a broader class of TS fuzzy systems.
This paper is organized as follows. Section 2 presents a TS fuzzy model with simultaneous actuator and sensor faults and disturbance and how to construct a fictitious system with just an actuator fault. In Section 3, the main results of this paper, including the sliding mode observer design and the sufficient conditions of stability of the estimation errors, are proposed and guarantee the H ∞ performance simultaneously. Section 4 discusses the procedure of the actuator and sensor fault reconstructions. In Section 5, simulation results are given, and comparisons are discussed. Finally, in Section 6, the concluding remarks are given.
Preliminaries
Assume that a continuous-time nonlinear system affected by actuator and sensor faults and a disturbance is given as in (1), where x(t), u(t), y(t), f_a(t), f_s(t), and d(x, u, t) are the state, input, output, unknown actuator fault, unknown sensor fault, and system uncertainty vectors, respectively, and f and g are nonlinear smooth functions. By using the sector nonlinearity transformation, the nonlinear model (1) can be replaced by the TS fuzzy model (2), where C and N are known full-rank matrices with appropriate dimensions, A_i, B_i, M_i, and D_i are real known matrices, r represents the number of fuzzy rules, and µ_i(ξ(t)) are the fuzzy membership functions, which depend on the unmeasurable premise-variable vector ξ(t) and satisfy the convex-sum property (3), i.e., µ_i(ξ(t)) ≥ 0 and ∑_{i=1}^{r} µ_i(ξ(t)) = 1. In the rest of the paper, (t) is dropped from the equations; d, µ_i, and µ̂_i denote d(x, u, t), µ_i(ξ(t)), and µ_i(ξ̂(t)), respectively; and the mark (*) denotes the transposed element in a symmetric matrix.
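To make the convex-sum aggregation concrete, the sketch below evaluates a two-rule TS model of the form ẋ = ∑_i µ_i (A_i x + B_i u). The two local (A_i, B_i) pairs are hypothetical toy matrices for illustration only, not the CSTR model used later in the paper.

```python
import numpy as np

# Two hypothetical local linear vertices of a TS fuzzy model (not from the paper).
A = [np.array([[-1.0, 0.5], [0.0, -2.0]]),
     np.array([[-1.5, 0.2], [0.1, -1.0]])]
B = [np.array([[1.0], [0.0]]),
     np.array([[0.8], [0.1]])]

def ts_dynamics(x, u, mu):
    """Convex aggregation x_dot = sum_i mu_i * (A_i x + B_i u)."""
    # Memberships must satisfy the convex-sum property: mu_i >= 0, sum mu_i = 1.
    assert np.isclose(sum(mu), 1.0) and all(m >= 0 for m in mu)
    return sum(m * (Ai @ x + Bi @ u) for m, Ai, Bi in zip(mu, A, B))

x = np.array([0.5, -0.2])
u = np.array([1.0])
print(ts_dynamics(x, u, mu=[0.3, 0.7]))
```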
To build a system with just an actuator fault, and then use the actuator fault reconstruction concepts, the output is passed through an orthogonal matrix T_r ∈ R^{p×p}, and an augmented TS system of order n + h can be obtained as in (4), where −A_f ∈ R^{h×h} is an arbitrary stable matrix, z ∈ R^h, and N_2 ∈ R^{h×h}. T_r can be obtained by QR reduction of the matrix N.
By defining the augmented variables (with x̂ the estimate of x), the TS system (4) can be rewritten as (7). Moreover, the nonlinear term φ is assumed to satisfy the Lipschitz condition (8). To design a sliding mode observer, some assumptions and lemmas are needed, as follows.
Lemma 1.
(a) If Assumptions 1 and 2 are satisfied, then there exist changes of coordinates T_i under which the system matrices take a structured form in which M̄_{0,i} ∈ R^{(q+h)×q} and N̄_0 ∈ R^{(q+h)×h} are nonsingular.
Assumption 4.
The unknown vectors f_a and f_s and the derivatives of the µ_i for i ∈ {1, . . . , r} are assumed to be norm-bounded by some known constants.
TS Fuzzy-Based Sliding Mode Observer Design
The proposed TS sliding mode observer for the nonlinear system (2) in the new coordinates (10) is given by (16), where G_{n,i} and G_{l,i} are design matrices of the observer that will be derived through Theorem 1, e_Y := Y − Ŷ represents the output estimation error, and ν_{a,i} and ν_s are the equivalent output error injections used to compensate the errors due to the actuator fault and sensor fault, respectively; their structure is given in (17)-(19), where η_{a,i} and η_s are two positive scalars and w_{a,i} and w_s are two arbitrary positive constants. The observer (16) guarantees that the state estimation error converges to a pre-designed sliding surface in finite time and then asymptotically to zero. Define the state estimation error as e := X − X̂. By subtracting the observer dynamics from the system dynamics (7) in the new coordinates (12), the state estimation error dynamics are obtained. By partitioning φ as φ = [φ_1^T φ_2^T]^T and applying a further change of coordinates, in which L_i ∈ R^{(n+h−p)×(p−q−h)} is a stabilizing gain matrix, it is straightforward to arrive at the error dynamics (22). The goal is to design the matrices L_i such that the asymptotic stability of (22) is assured while the specified H∞ performance (24) is guaranteed. The following theorem provides sufficient conditions to ensure asymptotic stability of the state estimation error (22) with maximized admissible Lipschitz constant γ in (8) and minimized H∞ performance gain ϑ in (24).
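A minimal numerical sketch of the discontinuous injection terms is given below. The exact P-weighted structure of (17)-(19) is not reproduced; a common boundary-layer (smoothed) approximation nu = eta * e_Y / (||e_Y|| + delta) is used purely for illustration, with the gain eta = 25 and smoothing constant delta = 0.01 borrowed from the simulation section and the output-error vector chosen arbitrarily.

```python
import numpy as np

def smoothed_injection(e_y, eta, delta):
    """Smoothed equivalent-output-error injection (boundary-layer approximation)."""
    return eta * e_y / (np.linalg.norm(e_y) + delta)

e_y = np.array([0.3, -0.1])                              # hypothetical output estimation error
nu_a = smoothed_injection(e_y, eta=25.0, delta=0.01)     # eta, delta taken from the simulation setup
print(nu_a)
```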
Theorem 1. If the nonlinear optimization problem (25)-(26) is feasible, where eig(·) represents the eigenvalues of a matrix, then the estimation error (22) is asymptotically stable with the maximized admissible Lipschitz constant γ* = max(γ) = ‖T_L^T T_L‖^{-1} √(εσ), and the derived L_i matrices can be used for the purpose of simultaneous fault reconstruction.
Proof. The proof of this theorem is carried out by using a positive NQLF constructed from the matrices P_j = diag(P_{1j}, P_{2j}), where P_{1j} ∈ R^{(n+h−p)×(n+h−p)} and P_{2j} ∈ R^{p×p} are symmetric positive definite. The time derivative of the candidate Lyapunov function along the trajectory (22) is given by (28). From (14), (17), (18), and (21), the bounds (29)-(31) are obtained; in particular, from (14), one has ∑_{k=1}^{r} µ̇_k P_k ≤ ∑_{k=1}^{r} ρ_{mi} P_k.
By considering the fact that 2P^T Q ≤ (1/ε) P^T P + ε Q^T Q with ε > 0 and using (8), one obtains 2 e^T P_j T_L φ ≤ (1/ε) e^T P_j P_j e + ε φ^T T_L^T T_L φ ≤ (1/ε) e^T P_j P_j e + ε α² ‖e‖², where α := ‖T_L^T T_L‖^{-1} γ. Substituting (29)-(31) into (28), and defining the parameter σ := (εα²)^{-1} and the cost function J := V̇(e) + e^T e − ϑ² d^T d, one obtains (33), where β := ϑ². By placing (23) in (33) and considering the diagonal structure of P_j, the inequality (33) leads to (35). Based on congruence [20], the inequality (35) is satisfied by (36). By utilizing Lemma 2, the summations and the fuzzy membership functions are removed from the inequalities (36). Finally, the results are used with the fmincon function, a nonlinear optimization tool in MATLAB that finds the minimum of a constrained problem: the matrix inequalities (36) are converted into one-dimensional inequalities, and the optimization problem is defined as in (25) and (26). In addition, from the α and σ found by the optimization problem, the maximum admissible Lipschitz constant γ* and the minimum H∞ gain ϑ can be calculated.
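Since the scalarised conditions are solved with MATLAB's fmincon, a rough Python analogue using scipy.optimize.minimize is sketched below. The cost and the placeholder constraint function are hypothetical stand-ins for the actual one-dimensional inequalities derived from (36); only the overall shape of the nonlinear program (decision variables σ, ε, β = ϑ²) is illustrated.

```python
import numpy as np
from scipy.optimize import minimize

def cost(z):
    # Trade off the H-infinity level (beta = vartheta^2) against sigma; illustrative only.
    sigma, eps, beta = z
    return beta + sigma

def ineq_constraints(z):
    # Placeholder inequalities g(z) >= 0 standing in for the scalarised stability conditions.
    sigma, eps, beta = z
    return np.array([sigma - 1e-3, eps - 1e-3, beta - 1e-3])

res = minimize(cost, x0=[1.0, 1.0, 1.0],
               constraints={"type": "ineq", "fun": ineq_constraints},
               method="SLSQP")
sigma, eps, beta = res.x
print("sigma =", sigma, "eps =", eps, "vartheta =", np.sqrt(beta))
```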
Simultaneous Fault Reconstruction
In Section 3, an H∞ sliding mode observer is designed in which two discontinuous terms (19) are used to reconstruct simultaneous faults in the presence of an unknown disturbance, based on the measured signals u and y. Along the sliding surface, e_Y = ė_Y = 0. Consequently, (22) restricted to the sliding surface reduces to (40), where ν_{eqa,i} and ν_{eqs} are smoothed approximations of the equivalent output error injection terms (17) required to maintain the sliding motion, in which δ_f and δ_d are small positive constants. On the other hand, using (8) and (24), it can be shown that the term A_{21,i} e_1 + φ_2 + D_{2,i} d is bounded. Therefore, for small values of ‖d‖, the actuator and sensor faults can be estimated as in (43) and (44), where † denotes the pseudo-inverse of a matrix.
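A minimal numerical sketch of this reconstruction step is shown below: once sliding is maintained, the recovered equivalent injections are mapped back to fault estimates through pseudo-inverses, mirroring the role of (43) and (44). The distribution blocks and injection values are hypothetical placeholders chosen only to show the mechanics; they are not the paper's matrices.

```python
import numpy as np

M0 = np.array([[1.0], [0.2]])      # hypothetical actuator-fault distribution block
N0 = np.array([[1.0]])             # hypothetical sensor-fault distribution block

nu_eq_a = np.array([0.5, 0.1])     # equivalent injection recovered on the sliding surface
nu_eq_s = np.array([0.05])

f_a_hat = np.linalg.pinv(M0) @ nu_eq_a   # actuator fault estimate (pseudo-inverse mapping)
f_s_hat = np.linalg.pinv(N0) @ nu_eq_s   # sensor fault estimate
print(f_a_hat, f_s_hat)
```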
Remark 1.
The numerical solution of Theorem 1 can be summarized as follows:
• Find the orthogonal transformation matrix T_r ∈ R^{p×p} by using the QR reduction of the matrix N and obtain the augmented TS system (4) (a minimal numerical sketch of this step follows this list).
• Find the changes of coordinates T_i and obtain the system matrices in the format (12) and (13).
• Compute the scalars σ, ε, and ϑ and also the matrices L_i using the fmincon function in MATLAB and solving the nonlinear optimization problem (25).
• Compute the maximized admissible Lipschitz constant γ*.
• Reconstruct the sensor and actuator faults using Equations (43) and (44).
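The first step above can be reproduced with a standard QR factorisation. The sketch below uses a hypothetical full-rank N and checks that the resulting T_r is orthogonal; the convention T_r = Q^T is an assumption here, since the paper only states that T_r comes from a QR reduction of N.

```python
import numpy as np

# Hypothetical p x h sensor-fault distribution matrix (p = 3 outputs, h = 2 faulty sensors).
N = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.5, 0.2]])

Q, R = np.linalg.qr(N, mode="complete")   # Q is p x p orthogonal, R is p x h upper triangular
T_r = Q.T                                 # assumed convention for the output transformation

print(np.allclose(T_r @ T_r.T, np.eye(3)))   # True: T_r is orthogonal
```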
Numerical Example
In this section, a three-state continuous stirred tank reactor (CSTR) system is utilized to show the effectiveness of the proposed sliding mode observer in reconstructing both actuator and sensor faults in the presence of an unknown disturbance. To show the performance improvement of the proposed approach, the obtained results are compared to the LMI approach presented in ref. [17].
Consider a well-mixed CSTR in which the multi-component chemical reaction A → B → C is carried out. The nonlinear dynamics of the CSTR are given by the model (45) of [21], where x = [x_1 x_2 x_3]^T and the states represent the concentrations of the species A, B, and C, respectively. To demonstrate the advantage of the proposed method, two faults and a disturbance are added to the dynamics (45). It is supposed that the concentration of B is dimensionless, which means that x_2 ∈ [−1, 1]. Consequently, by using the TS rules, two membership functions can be defined, and the local linear TS matrices can be determined accordingly. The TS fuzzy system matrices satisfy all the assumptions; therefore, the TS fuzzy sliding mode observer (16) can be designed.
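A hedged sketch of sector-nonlinearity membership functions for this example is given below. With the premise variable x_2 confined to [−1, 1], a common choice, assumed here since the explicit expressions are not reproduced above, is the pair of convex weights mu1 = (1 + x_2)/2 and mu2 = (1 − x_2)/2, which satisfy the convex-sum property (3).

```python
import numpy as np

def memberships(x2):
    """Assumed sector-nonlinearity weights for x2 in [-1, 1]; mu1 + mu2 = 1, 0 <= mu_i <= 1."""
    x2 = np.clip(x2, -1.0, 1.0)
    mu1 = (1.0 + x2) / 2.0
    mu2 = (1.0 - x2) / 2.0
    return mu1, mu2

for x2 in (-1.0, 0.0, 0.7):
    mu1, mu2 = memberships(x2)
    print(f"x2 = {x2:+.1f} -> mu1 = {mu1:.2f}, mu2 = {mu2:.2f}, sum = {mu1 + mu2:.2f}")
```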
For the simulation, the parameters and input signal are chosen as u = sin(t), A_f = 1, A_s = −5I, η_{d,i} = η_a = 25, η_s = 25, δ_a = 0.01, and δ_s = 0.01, and the initial conditions are chosen as X_0 = [1 1.2 1 0]^T and X̂_0 = [1.5 2.8 0.5 0]^T. Moreover, the disturbance is chosen as d = 0.1 sin(0.2t) x_3, and its shape is shown in Figure 1. It should be noted that the initial point for fmincon is chosen based on the results of the related published papers. Figure 2 shows the state estimation error, which converges to a neighborhood close to zero due to the unknown disturbance. The proposed approach is compared with another non-quadratic Lyapunov-based approach using linear optimization analysis based on LMIs [17]. Figure 5 describes the fault estimation errors using both approaches. As can be seen, the proposed nonlinear approach is less conservative and can estimate both actuator and sensor faults with smaller errors. In addition, the proposed approach has a lower computational burden. In Table 1, a quantitative comparison between the proposed approach and the LMI approach presented in ref. [17] is given. In this table, the Euclidean and infinity norms of the fault estimation errors are compared, and the improvements are calculated from these norms, where F_n and F_l represent the fault estimation error norms obtained using the LMI approach [17] and the proposed nonlinear approach, respectively. As can be seen in Table 1, the proposed approach improves the fault estimation accuracies by more than 30%.
Discussion
In this paper, a nonlinear optimization approach for simultaneous actuator and sensor fault reconstruction in nonlinear systems subject to unknown disturbances was proposed. First, an augmented system containing only an actuator fault was created. Then, using fuzzy Lyapunov stability analysis and two changes of coordinates, the parameters of a sliding mode observer were designed through a nonlinear optimization problem that maximizes the Lipschitz constant while minimizing the H∞ performance index. The optimization problem was solved with fmincon in MATLAB as a nonlinear optimization tool. Using the optimal points, both actuator and sensor faults were reconstructed properly. Finally, the simulation results showed a considerable increase in fault reconstruction accuracy while using constraints of smaller dimensions.
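The design above is solved with MATLAB's fmincon; as a purely illustrative analogue, the sketch below sets up a small constrained nonlinear program with SciPy's minimize (SLSQP). The objective, constraint, and decision variables are toy stand-ins, not the paper's actual Lipschitz/H∞ formulation.

```python
import numpy as np
from scipy.optimize import minimize

# Toy stand-in for a design problem of the form:
#   minimize an H-infinity-like index gamma while rewarding a large Lipschitz bound,
#   subject to a nonlinear feasibility constraint (a made-up surrogate for the
#   matrix inequalities of the observer design).
def objective(z):
    gamma, lip = z[0], z[1]
    return gamma - 0.1 * lip

def feasibility(z):
    gamma, lip, g1, g2 = z
    return gamma - 0.5 * lip * (g1**2 + g2**2)   # must be >= 0

cons = [{"type": "ineq", "fun": feasibility}]
bnds = [(1e-3, 10.0), (1e-3, 10.0), (-5.0, 5.0), (-5.0, 5.0)]
z0 = np.array([1.0, 0.5, 0.1, 0.1])              # initial point (cf. fmincon's x0)

res = minimize(objective, z0, method="SLSQP", bounds=bnds, constraints=cons)
print("optimal z:", res.x, "objective:", res.fun, "success:", res.success)
```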
"Engineering",
"Computer Science"
] |
Adoption of Electronic Cash Payment among Students: A Case Study of Siem Reap Build Bright University
ABSTRACT
INTRODUCTION
The use of electronic cash is growing around the world. In 2021, the number of mobile payment users worldwide reached 1.33 billion. This growth is driven by several factors, including the increasing popularity of smartphones, the growing availability of Internet access, and the convenience and security of mobile payments [1].
In Cambodia, the use of electronic cash is also growing. In 2020, the National Bank of Cambodia launched the Bakong system, a blockchain-based nationwide payment system. Bakong allows people to make payments using their smartphones, even if they do not have a bank account [2].
The value of payments made via electronic systems in Cambodia reached nearly 1,000 per cent of the country's gross domestic product (GDP) in 2022, according to a senior official of the National Bank of Cambodia (NBC) [3]. The pandemic has accelerated the adoption of digital payments among Cambodian consumers, who believe going cashless will make society more hygienic (43%), efficient (39%), and environmentally friendly (37%) [4].
Several studies have examined the use of electronic cash in other countries, such as China. One such study identified perceived security and cost of use as beneficial extensions of the traditional UTAUT model, and found that intention is a crucial antecedent of users' actual utilization of e-cash [5].
In Cambodia, however, there has been a lack of research on the growing use of electronic cash, particularly on students' attitudes toward electronic cash in Siem Reap province. This study addresses that gap by investigating the current level of awareness and adoption among students and analyzing the factors that shape their attitudes towards electronic cash, such as convenience, security, privacy, and technological appeal.
This research can inform policymakers, businesses, and other stakeholders about the potential barriers and opportunities for the broader adoption of electronic cash in the region. Additionally, it can contribute to the existing literature on electronic cash usage and serve as a basis for further studies. The current research addresses the gaps in previous studies through the two research objectives below.
(1) To identify the current awareness and adoption of electronic cash among students. (2) To analyze the factors influencing students' attitudes toward electronic cash, including convenience, security, privacy, and technological appeal.
THEORETICAL REVIEW
Six theories were employed to frame the research objectives and conceptualize the study: TAM, PMT, DIT, TPB, Trust Theory, and TRA. First, the Technology Acceptance Model (TAM) was applied to assess users' perceptions of the convenience and ease of using electronic cash. TAM suggests that users are more likely to adopt a technology if they perceive it as easy to use and beneficial [6].
Second, TAM was complemented by the Protection Motivation Theory (PMT), which was used to understand users' attitudes and behaviours related to security and privacy concerns. PMT suggests that individuals are motivated to protect themselves from potential threats, and their adoption of electronic cash may be influenced by their perception of the security measures in place [7].
Moreover, the Diffusion of Innovation Theory (DIT) was applied to analyze users' attitudes towards technological appeal and innovation. This theory suggests that individuals who are more open to adopting new technologies are more likely to embrace electronic cash because of its innovative features [8]. A familiarity factor was captured by adopting the Theory of Planned Behaviour (TPB), which suggests that individuals' familiarity with a behaviour can influence their attitude and perceived behavioural control [9].
Trust Theory and TRA guided the constructs of trust in the payment system and perceived benefits. Trust Theory highlights the importance of trust for users' adoption of technology, which can affect their willingness to use electronic cash [10]. The Theory of Reasoned Action (TRA) was used to understand users' attitudes towards the perceived benefits of electronic cash over traditional payment methods [11].
The literature review helps the researcher conceptualize the factors affecting students' use of electronic cash as below:
METHOD
Quantitative research was employed. Statistical methods are used to collect and analyze numerical data in quantitative research [12]; such research entails gathering data to quantify information and subjecting it to statistical analysis, thereby supporting or challenging alternative knowledge claims [13]. The research was conducted in Siem Reap province, Cambodia, targeting students from Build Bright University. A total of 211 students participated in the study, recruited through convenience sampling.
The data collection process involved questionnaires administered through Google Forms. The questionnaires were divided into three parts: demographic data, awareness and adoption of electronic cash, and factors influencing the adoption of electronic cash. The questionnaire design included different response formats, such as "1 = Yes, 2 = No" for categorical questions and a Likert scale ranging from 1 to 5, where a score of 1 indicated strong disagreement or no influence and a score of 5 indicated strong agreement or high influence.
Both primary and secondary data were used in this research. The primary data were collected through the questionnaires, while the secondary data were gathered from existing literature and resources related to electronic cash adoption.
After collection via Google Forms, the responses were cleaned and transferred to SPSS (Statistical Package for the Social Sciences) for analysis. Descriptive statistics were used to analyze the data and answer the research objectives.
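The analysis was run in SPSS; as a hedged illustration of the same kind of descriptive statistics, the sketch below tabulates frequencies and percentages for a categorical item and a 1-5 Likert item using pandas. The column names and responses are invented placeholders, not the study's dataset.

```python
import pandas as pd

# Hypothetical responses mimicking the questionnaire structure (not the real data)
df = pd.DataFrame({
    "uses_ecash": ["Yes", "Yes", "No", "Yes", "Yes", "Yes"],   # 1=Yes, 2=No item
    "convenience_influence": [4, 5, 3, 4, 2, 5],               # Likert 1-5 item
})

def freq_table(series):
    """Frequency and percentage distribution, as reported in the results tables."""
    counts = series.value_counts().sort_index()
    return pd.DataFrame({"Frequency": counts,
                         "Percent": (100 * counts / counts.sum()).round(1)})

print(freq_table(df["uses_ecash"]))
print(freq_table(df["convenience_influence"]))
print("Mean Likert score:", round(df["convenience_influence"].mean(), 2))
```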
Demographic Data
Sex of Respondents
The findings indicate that, of the 211 respondents, 46.9% identified as male and 52.6% as female, while one respondent (0.5%) chose not to disclose their sex. These results describe the sex distribution of the participants in the study.
Majors of Studies
The data show the distribution of majors among the respondents. The most popular major was Accounting, chosen by 33.6% of respondents, followed by Business (22.3%) and Engineering and Civil Construction (21.3%). Other majors, such as Law, Information and Technology, Agriculture, and Tourism Development, were also represented, although to a lesser extent. English had the lowest share, with only 0.5% of respondents choosing this major.
Education
The data reveal the degrees the respondents were pursuing. The majority, 89.1%, were studying for a bachelor's degree, while a smaller share, 10.9%, were studying for a master's degree.
Incomes
The data provide an overview of the respondents' monthly incomes. A sizeable portion, 29.4%, reported a monthly income of less than $100, suggesting that a considerable number of respondents have limited income resources. The largest share, 47.4%, fell within the range of $100 to $300, indicating a moderate income level for much of the sample. A much smaller proportion reported higher incomes, with only 0.5% earning more than $5,000 per month.
Knowledge and Uses of Electronic Cash
Knowledge of Electronic Cash
The data in Table 6 indicate that 82.9% of respondents know about electronic cash, while 17.1% do not. This suggests that most surveyed individuals are familiar with the concept and understand how electronic cash works, although a notable minority still lack knowledge in this area. Among the 175 participants who know about electronic cash, self-rated knowledge was distributed across all levels: 4% rated their knowledge as very low and 9.7% as low, indicating limited understanding; the largest group, 45.1%, rated their knowledge as moderate, suggesting a fair understanding with room for improvement; 31.4% rated their knowledge as high and 9.7% as very high, indicating good to extensive familiarity with electronic cash.
The data in Table 8 reveal that the primary sources of awareness about electronic cash are friends, family, media, advertisements, and other sources. Friends were cited by 17.7% of respondents, reflecting the influence of personal recommendations and discussions, and family members by 5.1%, underscoring the impact of close relationships. Media was the most significant source, with 52.6% of respondents gaining awareness through channels such as television, newspapers, and social media. Advertisements accounted for 8.0% of respondents, while 16.6% mentioned other sources.
Use of Electronic Cash
The findings show that the vast majority of respondents, 98.3%, reported using electronic cash to make payments, while only 1.7% reported not using it. This indicates high adoption of electronic cash as a payment method among the surveyed individuals and suggests that it has become a popular and widely accepted payment option, trusted for its convenience and well integrated into daily financial transactions.
These findings are consistent with previous studies that reported high adoption rates and positive perceptions of electronic cash among consumers. The studies [14], [15], and [16] support the present findings, highlighting the widespread acceptance of and trust in electronic cash as a payment method.
Frequency of Electronic Cash
The findings provide insight into how often respondents use electronic cash: 13.1% use it rarely (once a month or less), 33.7% occasionally (1-3 times a month), 18.3% regularly (1-3 times a week), and 34.9% frequently (more than three times a week). Many respondents therefore use electronic cash regularly or frequently, underscoring its convenience and popularity as a payment method.
Payment Method
a. Mobile App Payment
Respondents' preferences for mobile apps as a payment method varied. Only 2.3% rated mobile apps as their least preferred method and 2.9% as less preferred, while 30.8% were neutral. On the other hand, 35.5% rated mobile apps as more preferred and 28.5% as their most preferred payment method. A considerable share of respondents therefore favour mobile apps for transactions, highlighting their growing popularity and acceptance as a convenient payment option.
b. Payment through Cards
Preferences for cards (credit or debit) as a payment method also varied: 25% of respondents rated cards as their least preferred method, 22.1% as less preferred, 31.4% as moderately preferred, 15.1% as more preferred, and only 6.4% as their most preferred method. Overall, a significant share of respondents have a lower preference for cards than for other payment options.
c. Digital Currencies
Preferences for digital currencies as a payment method were as follows: 7% of respondents rated digital currencies as their least preferred method, 11% as less preferred, 41.3% as moderately preferred, 29.1% as more preferred, and 11.6% as their most preferred method. A significant share of respondents therefore prefer digital currencies over other payment options, indicating growing acceptance of digital currencies as a viable payment method.
Convenience and ease of use
Table 14 shows the frequency and percentage distribution of respondents' perceptions regarding the convenience and ease of use of electronic cash. Convenience and ease of use influence students' usage of electronic cash to varying degrees: 1.7% of students indicated no influence, 7.0% less influence, 32.6% moderate influence, 37.8% more influence, and 20.9% high influence. For many students, convenience and ease of use therefore play a significant role in the decision to use electronic cash. These findings indicate that students value the convenience and seamless experience of electronic cash, in line with their digital lifestyles and desire for quick, hassle-free transactions, and they underline the need for electronic cash providers to prioritize user-friendly interfaces and seamless functionality to encourage further adoption. The studies [17] and [18] likewise found that convenience and ease of use significantly influence consumers' adoption of and intention to use mobile payment services.
The study conducted by Sasongko et al. [19] in Indonesia found that perceived usefulness strongly influences the intention to continue using electronic money applications [20]. The current findings also support the studies by Anil et al. [21] and Michael and Wiese [22], which showed that perceived usefulness has a positive and significant impact on the continued intention to use electronic money applications [23], [24].
Qu et al. [25], in a study of consumers in pilot cities for e-cash in China, found that perceived ease of use of e-cash leads to a more positive attitude toward e-cash, and highlighted that attitude toward e-cash is an essential determinant of user intention for e-cash services [5]. Kim et al. [26] reported similar findings, indicating that perceived convenience positively affects the intention to use e-cash.
Kim et al. [26] also showed that perceived convenience is one of the most influential variables on the usage intention of payment-related FinTech services, emphasizing the importance of convenience in driving adoption [16], [26]. Podile and Rajesh [27] found that respondents' perception of convenience positively affects their intention to adopt cashless transactions in India.
Gao and Waechter [28] identified a positive relationship between perceived convenience and usage intention for mobile payments in Australia, highlighting the role of convenience in shaping users' intentions to adopt mobile payment solutions.
Pal et al. [29] demonstrated that perceived convenience positively affects individuals' intention to use mobile payments, pointing to conveniences such as offline transaction capabilities and the flexibility of digital wallets.
Widayat et al. [30] identified ease of use, efficient transaction time, faster payment, and the simplicity of the payment process as the main reasons customers adopt electronic money, again underlining the significance of convenience in driving e-cash adoption.
Security and privacy
Table 15 provides insight into security and privacy as factors affecting students' use of electronic cash. Security and privacy are significant considerations: 3.5% of students felt that security and privacy had little influence on their usage, 29.1% perceived a moderate influence, 43.6% reported more influence, and 23.8% reported high influence. For most students, security and privacy therefore play a significant role in the decision to use electronic cash, and for a notable portion of the sample they have a substantial impact.
This result is supported by other authors, who found that security and privacy concerns are significant factors influencing consumer adoption and usage of digital payment systems. The authors of [31] and [32] confirmed that security and privacy concerns significantly influence consumers' intentions to use mobile payment systems. These findings align with the current research and reinforce the importance of electronic cash providers addressing security and privacy concerns to build trust and encourage adoption among students.
The impacts of security and privacy on electronic cash usage have been a significant area of investigation in recent studies. Shin [33] emphasized the importance of perceived security and trust in mobile wallets, highlighting their influence on electronic cash adoption. This suggests that users' perceptions of the security measures and reliability of electronic cash systems can be crucial in shaping their willingness to adopt and use them.
Further supporting this finding, Khalilzadeh et al. [34] demonstrated substantial evidence of the effects of security and trust on customers' intentions to use mobile payment technology, highlighting the crucial role of these factors in driving electronic cash adoption: users need to feel secure and to trust electronic cash systems before they are confident enough to adopt and keep using them. Khalilzadeh et al. [34] focused specifically on perceived security, based primarily on consumer perceptions of reliability and privacy; users' confidence in the reliability of electronic cash systems is crucial for their willingness to engage in electronic transactions, and privacy concerns are paramount, as users expect their personal and financial information to be protected when using electronic cash services.
Technological appeal and innovation
Table 16 presents data on the perceived influence of technological appeal and innovation on students' usage of electronic cash. These factors matter to students: only 0.6% indicated no influence and 6.4% less influence, while 39.5% perceived a moderate influence, 36.6% more influence, and 16.9% high influence. For most students, technological appeal and innovation therefore play a significant role in the decision to use electronic cash, and for a notable portion of the sample they have a substantial impact.
Other studies have likewise found that technological factors shape consumers' adoption and usage of digital payment systems, although studies with MBA students enrolled at a regional university in Texas [18], [35] found that personal innovativeness in information technology did not directly affect the adoption of wireless mobile technology.
Familiarity with the payment method
Table 17 presents data on the perceived influence of familiarity with the payment method on students' usage of electronic cash. Familiarity is an important factor: 1.7% of students indicated no influence and 2.3% less influence, while 36.6% perceived a moderate influence, 41.3% more influence, and 18.0% high influence. For most students, familiarity with the payment method therefore plays a significant role in the decision to use electronic cash, and for a notable portion of the sample it has a substantial impact.
Other studies also support these findings, indicating that familiarity with a technology or payment method is significant in determining adoption and usage. Studies [36], [17] found that familiarity positively influences consumers' intentions to use and adopt technology-based systems, including mobile payment systems.
Trust in the payment system/provider
Table 18 provides insight into the perceived influence of trust in the payment system or provider on students' usage of electronic cash. Trust is an essential factor: only 0.6% of students indicated no influence and 2.3% less influence, while 37.2% perceived a moderate influence, 43.6% more influence, and 16.3% high influence. Trust therefore plays a significant role in most students' use of electronic cash and has a substantial impact for a notable portion of the sample.
This finding is aligned with other research indicating that trust is a significant factor in consumer adoption and usage of digital payment systems. The studies [17] and [32] found that trust significantly influences consumers' adoption of and intention to use mobile payment systems, and a further study [31] found that trust significantly influences consumers' acceptance and usage of technology-based services.
Research conducted in Indonesia has likewise shed light on the significant influence of trust on the continued intention to use electronic money applications [20], underscoring trust's crucial role in shaping users' attitudes and behaviours towards electronic money.
Perceived benefits
Table 19 shows the perceived benefits of electronic cash compared to traditional payment methods. Students perceive several benefits of electronic cash over traditional payment methods, indicating a potential motivation for usage: 6.4% of students reported that perceived benefits did not influence their usage, 15.1% reported less influence, 47.7% a moderate influence, 22.1% more influence, and 8.7% high influence. For most students, the perceived benefits therefore play a role in the decision to use electronic cash, and for a notable portion of the sample they have a substantial impact.
These findings are supported by several studies indicating that perceived benefits are essential factors in consumer adoption and usage of digital payment systems. Studies [18] and [17] found that perceived benefits significantly influence consumers' intentions to use and adopt mobile payment systems.
CONCLUSION
In conclusion, the research objectives of assessing respondents' knowledge and usage of electronic cash and identifying factors influencing students' usage have provided valuable insights into the understanding, adoption, and preferences related to electronic cash as a payment method.
The findings indicate that most respondents know about electronic cash and understand how it works, reflecting a positive level of familiarity. However, some respondents still lack knowledge in this area, pointing to the need for continued education and awareness campaigns to promote understanding and adoption among the general population.
The research also reveals that convenience and ease of use significantly influence students' usage of electronic cash. Security and privacy are likewise crucial, as students prioritize the protection of their personal and financial information. Technological appeal and innovation play an important role in shaping students' preferences, and familiarity with the payment method also influences usage, suggesting that students are more likely to adopt electronic cash if they are already familiar with it.
Trust in the payment system and provider is another significant factor: students require confidence in the security, reliability, and transparency of both the system and the provider. Perceived benefits, such as rewards and discounts, also shape students' preferences, as students value the advantages electronic cash offers over traditional payment methods.
It is recommended to focus on education and awareness campaigns to improve students' knowledge and understanding of electronic cash. Additionally, designing user-friendly interfaces, implementing robust security measures, and staying up to date with technological trends will enhance the user experience and build student trust.
Conceptual framework — Factors: ease of use; security; technology and innovation; trust; perceived benefit. Outcomes: knowledge of electronic cash; use of electronic cash.
Table 1. Sex of Respondents
Table 2. Age of Respondents
Table 3. Majors of Studies
Table 4. Education Degree of Respondents
Table 6. Knowledge of electronic cash
Table 7. Rated knowledge about electronic cash
Table 8. Sources of awareness about electronic cash
Table 9. Use of electronic cash for making payments
Table 10. Frequency of electronic cash usage
Table 11. Preference on payment method: Mobile App
Table 12. Preference on payment method: Cards (Credit/Debit)
Table 13. Preference on payment method: Digital currencies
Table 14. Convenience and ease of use
Table 15. Security and privacy
Table 16. Technological appeal and innovation
Table 17. Familiarity with the payment method
Table 18. Trust in the payment system/provider
Table 19. Perceived benefits over traditional payment methods
"Business",
"Economics",
"Education"
] |
Stochastic fluorescence switching of nucleic acids under visible light illumination
We report detailed characterizations of stochastic fluorescence switching of unmodified nucleic acids under visible light illumination. Although fluorescence emission from nucleic acids under visible light illumination has long been overlooked because of their apparently low absorption cross section, our quantitative characterizations reveal a high quantum yield and high photon count in individual fluorescence emission events of nucleic acids at physiological concentrations. Owing to these characteristics, the stochastic fluorescence switching of nucleic acids could be comparable to that of some of the most potent exogenous fluorescence probes for localization-based super-resolution imaging. Therefore, utilizing the principle of single-molecule photon-localization microscopy, native nucleic acids could be ideal candidates for optical label-free super-resolution imaging.
Introduction
Since being discovered, nucleic acids have been a central focus of studies in biological, physical, and chemical sciences. In biological systems, nucleic acids form highly complex and intricate structures that house, maintain, and regulate access to the genetic information critical for life. Increasingly, it has become apparent that the nanoscale topology of these structures has a prominent role in the regulation of essential cellular functions [1], such as gene transcription and replication. As such, directly visualizing these complex cellular systems will help expand our understanding of biological interactions, providing insight on gene regulation and cellular behavior. Recently, super-resolution fluorescence microscopy techniques, including stimulated emission depletion microscopy (STED), structured illumination microscopy (SIM), and photon localization microscopy (PLM), such as photoactivated localization microscopy (PALM) and stochastic optical reconstruction microscopy (STORM), have extended the ultimate resolving power of optical microscopy far beyond the diffraction limit [1][2][3][4][5][6][7], facilitating access to the nanoscale organization of chromatin [8][9][10][11]. However, the majority of strategies used to image structures formed by nucleic acids require methods that label DNA-associated proteins instead of DNA itself or utilize small molecule dyes that may alter the structure and function of the native structures and affect cell viability [12,13].
Owing to these limitations, developing label-free optical super-resolution imaging methods to visualize DNA topology under native, non-perturbing conditions becomes attractive. It is well known that nucleotides fluoresce under UV illumination and have significantly weaker absorption in the visible range. However, we have observed visible light-excited fluorescence of unmodified nucleic acids when their concentration approached that of physiological conditions [14]. This intrinsic fluorescence has likely been overlooked previously because most photochemical studies of nucleotides were performed in dilute solutions with concentrations between 10 and 100 µM [15][16][17][18], which is significantly lower than that in nuclei and chromosomes (0.1-1 M) [19][20][21]. More importantly, we further observed the stochastic fluorescence switching of nucleotides under visible illumination, which lays the foundation for using unmodified nucleic acids as endogenous contrast agents for nanoscopic imaging with photon localization microscopy (PLM) [14]. This phenomenon sets the stage for developing new label-free super-resolution optical imaging methods to resolve macromolecular structures with nucleotide topologies.
Here, we present the detailed photochemical characteristics of nucleotides at physiological concentrations under visible light illumination. To explain the mechanism of the observed stochastic fluorescence switching, we examined the fluorescence recovery of nucleotides under varying depletion conditions and demonstrated that these results fit well with the theory of ground state depletion (GSD) [6,7]. Furthermore, due to the relatively high quantum yield and low intersystem crossing probability of nucleotides, the photon count and blinking duration of individual emission events were found to be comparable to those of some of the most potent exogenous dyes used in PLM, making DNA molecules themselves ideal candidates as imaging contrasts in PLM.
Chemicals and materials
All chemicals used in the photochemical measurements are commercially available for independent reproduction. Mononucleotide samples used in the measurements are HPLC grade (G8377, A1752, C1006, T7004) and were used as purchased from Sigma Aldrich. 2-and 4-base oligonucleotides were synthesized by Midland Science while 8-, 16-and 20-base oligonucleotides were synthesized by IDT. All synthetic oligonucleotides were used as purchased without further purification.
All samples used were derived both by extraction from natural synthesis (yeast, salmon) and through chemical synthesis (oligonucleotides from IDT and Midland). There is not a common denominator for their manufacture, other than their nucleic acid origin. Critically, these samples are for use as purity standards in analytical studies. For instance, the mononucleotides purchased from Sigma Aldrich are isolated from yeast and are used as the analytical standard for assessment of mononucleotide synthesis by mass spectrometry. In comparison, 20-base oligonucleotides are synthetically produced by phosphoramidite synthesis. The use of these synthetic oligonucleotides is for polymerase chain reactions. As a result, all of these molecules are of very high purity and produced or isolated by independent means for use as scientifically accepted standards.
For absorbance and fluorescence measurements, samples were dissolved in molecular biology grade nuclease-free distilled water (AM9938, Ambion, Invitrogen). Photochemistry measurements, including absorbance and fluorescence, were performed with commercial instruments in a 1-cm quartz cuvette (Z600717, Sigma). Baselines have been carefully measured with pure nuclease-free distilled water in the same cuvette following exactly the same preparation. Between each measurement, the cuvette was washed by acetone and methanol sequentially, triple rinsed with water, and then air dried. Finally, to account for any incidental contamination during the aliquoting of samples, all pipette tips [200 µL (37001-528, VWR) and 2,000 µL (83007-378, VWR)] used were from the same box without reuse.
LC-TOF analysis of mononucleotide samples
Mononucleotide samples used in the measurements are HPLC grade (the highest grade commercially available) and were used as purchased from Sigma Aldrich. While these materials have a certified purity greater than 99.9%, we understand the concerns that the observed fluorescent properties could be from organic impurities. To further exclude this possibility, we have performed liquid chromatography time-of-flight (LC-TOF) mass spectrometry on these mononucleotide samples. The mass spectra of the mononucleotides were measured using a high resolution electrospray ionization (HR-ESI) Agilent 6210 LC-TOF mass spectrometer with Agilent 1200 HPLC introduction. The results from these measurements show no indication of impurities compared to blank injections according to both 280nm UV absorbance and mass analysis for all nucleotides.
A similar, independent analysis was performed for thymidine monophosphate (T7004) by the Scripps Center for Metabolomics and can be found online at the METLIN metabolite database (https://metlin.scripps.edu/metabo_info.php?molid=3451). These results demonstrate that the mononucleotides have a purity exceeding 99.9%. Chemical analysis by mass spectrometry indicates that the remaining <0.1% of molecules are nucleotide metabolites (the nucleotide fragments PO4(3-) and the pentose monosaccharide ribose) that unavoidably appear in any nucleotide sample because of dissociation.
Size exclusion HPLC analysis of polynucleotide samples
The main impurities present in the synthetic oligonucleic acids are incompletely elongated strands (shorter sequences) and trace quantities of organic salts that are removed by desalting. According to HPLC, the nucleic acids account for 99.06% of the sample. The purity of these samples is measured by IDT and was confirmed using size exclusion high-performance liquid chromatography (HPLC) performed independently by the Northwestern University Keck Biophysics core. The size exclusion HPLC was coupled to a photo-diode array detector (DAD) measuring absorbance at 230 nm and 532 nm. HPLC grade water (ThermoFisher) and molecular or HPLC grade methanol (>99.9%), ethanol (>99.45%), and acetic acid (>99.85%) were used during sample preparation as needed. Measurements were performed at high concentrations (4-5 mg/mL) to match our photochemistry experiments.
Absorbance spectra
To satisfy the sample quantity needed for fluorescence spectrophotometry, we chose commercially available mononucleotides (Sigma), including adenine (A), guanine (G), cytosine (C), and thymine (T), which were dissolved in nuclease-free distilled water at a concentration of 0.1 M. Absorbance (A = −log10 T, where T is transmittance) was measured in 1-cm quartz cuvettes using a dual-beam UV-VIS spectrophotometer (UV-1800, Shimadzu). A control (nuclease-free distilled water, Ambion, Invitrogen) prepared by exactly the same procedure was placed in the reference beam to compensate for any environmental variation during the measurement. According to the instrument specification, the measurement accuracy is ±0.001 absorbance units (A.U.) at 0.5 A.U.
Owing to the difficulty of acquiring large quantities of polynucleotide samples, the absorbance spectrum of 100-µM poly-G nucleotide was measured in a 45-µL cuvette with a path length of 3 mm (Z802336, Sigma) using the same protocol as for the mononucleotide solutions.
Fluorescence spectra and lifetimes
Fluorescence spectra of polynucleotides were measured using a spectrofluorimeter (Nanolog, Horiba Jobin-Yvon) equipped with an iHR 320 spectrometer with a 100 g/mm grating at 600 nm, a 150-W xenon arc lamp, and a picosecond photon-detection module (PPD-850). Owing to the difficulty of acquiring large quantities of polynucleotide samples, fluorescence spectra of polynucleotides were measured in a 45-µL cuvette with a path length of 3 mm. Fluorescence lifetimes of mononucleotides were measured by the time-correlated single photon counting method, as previously reported in [22]. Mononucleotides illuminated at 532 nm have exceptionally long fluorescence decays, with respective lifetimes of 1.98±0.08 ns, 2.00±0.03 ns, 2.48±0.05 ns, and 2.56±0.07 ns for A, G, C, and T.
Absolute quantum yield measurement
To measure the absolute quantum yields (QYs) of mononucleotides, we used a Horiba Quanta-Phi F-3029 integrating sphere mounted in the sample compartment of the Nanolog (Horiba Jobin-Yvon) spectrofluorimeter. The measurements were performed at an illumination wavelength of 532 nm at room temperature. Data was processed by software supplied by Horiba-Jobin-Yvon, from which G and C have QYs of 0.0559±0.0072 and 0.1057±0.0107, respectively; A and T have somewhat lower QYs of 0.0194±0.0024 and 0.0199±0.0025, respectively.
Experimental validation of GSD mechanism
10 µL of polynucleotide solution (100 µM, IDT) was dropped onto plasma-cleaned coverslips (#1.5, Fisher Scientific) and dried at 20 °C overnight to form hydrogel thin films. We used an inverted microscope (Nikon, Eclipse Ti-U) with an objective lens (Nikon, TIRF 100×, 1.49 NA) and a 532-nm diode-pumped solid-state laser with 1-W maximum output for illumination. We determined the fraction of residual singlet-state molecules using a pump-probe mode with a constant probe (0.3 kW cm⁻²) and pump pulses of varying intensity (100 ms, 1-25 kW cm⁻²) for shelving the molecules into dark states. The fluorescence recovery was monitored, and the recovery lifetime was calculated by exponential fitting.
Single-molecule photon localization microscopy
To examine the properties of nucleic acid blinking, we built a single molecule optical imaging system based on an inverted microscope. As shown in Fig. 1, a 532 nm diode-pumped solid-state laser with 300-mW maximum output was passed through the microscope body (Nikon, Eclipse Ti-U) and was focused at the back focal plane of an objective lens (Nikon, TIRF 100×, 1.49 NA). The intensity of the illumination beam fluence was adjusted by an achromatic half-wave plate (HWP, AHWP05M-600, Thorlabs) and a Glan-Taylor polarizer (GTP, GT10, Thorlabs). The illumination beam size was controlled by a dual lens assembly. A long-pass filter (BLP01-532R-25, Semrock) was used to reject the reflected laser beam. The fluorescence image was collected through a 550-nm long-pass filter before video acquisition by an EMCCD (Andor, iXon 897 Ultra). We performed single molecule imaging of 20-base Poly-G DNA with I e x =7.14 kWcm −2 by acquiring movies consisting of 1,000 frames at exposure times of 10-ms per frame.
Control experiments on glass coverslips
In order to verify that the fluorescence blinking events we observed from the polynucleotides were not introduced by contamination on the glass coverslip, we performed control experiments using cleaned and water-coated microscope coverslips. Cleaned coverslips (Fisherfinest Premium Cover Glasses, Fisher Scientific) were treated with a plasma cleaner (SBT PC-2000) to remove organic contaminants. Water-coated coverslips were prepared by spin-coating nuclease-free distilled water at 3,000 rpm on cleaned coverslips and then air drying. We recorded blinking events from these samples over 50 seconds under 532-nm illumination (7.14 kW cm⁻²). As shown in Fig. 2, the cleaned and water-coated coverslips produced far fewer (two orders of magnitude fewer) fluorescence blinking events than polynucleotide-coated coverslips, which demonstrates that the blinking does not originate from contamination.
Photochemical characteristics of nucleotides at physiological concentration
The physiological concentration of DNA in interphase nuclei is ∼0.10-0.40 g/mL and even higher in metaphase chromosomes [19][20][21], corresponding to concentrations of about 0.26-1.04 M in solution. These physiological concentrations are significantly higher than the concentrations (10-100 µM) used in most photochemical studies of nucleic acids [15][16][17][18]. To establish the photochemistry of DNA under conditions approaching those found in chromatin, we examined ultra-pure nucleotide solutions at various concentrations. At these higher concentrations, it is critical to first demonstrate that all observations arise intrinsically from the nucleic acids rather than from trace contaminants or impurities. Therefore, throughout our experiments we used the highest-grade reagents available (molecular or HPLC grade) and performed rigorous negative-control measurements to rule out the possibility of introduced impurities (see Materials and methods). To satisfy the sample quantity and quality needed for fluorescence spectrophotometry, we chose commercially available, HPLC-grade (>99.9% pure for all samples and 100% pure for adenine) mononucleotides (Sigma-Aldrich), including adenine (A), guanine (G), cytosine (C), and thymine (T), dissolved in nuclease-free distilled water. Further, we verified the purity of the mononucleotides using LC-TOF mass spectrometry, confirming that all of the samples are analytically pure, with the remaining components consisting of nucleic acid metabolites (e.g. the naturally dissociated ribose and phosphate groups). It is established that the fluorescence spectrum of DNA extends into the visible range even under UV excitation in dilute conditions; this has been frequently observed and reported elsewhere [15,17], but has long been overlooked because of DNA's weak absorption coefficient in the visible. One important observation is that mononucleotides illuminated at 532 nm have exceptionally long fluorescence decays, with lifetimes of ∼2 ns, which are significantly longer than the reported fluorescence lifetimes of mononucleotides (∼1 ps) under 267-nm illumination [22] and comparable to the fluorescence lifetimes of high-quantum-yield fluorophores. This is also consistent with the relatively high quantum yields (QYs) of these nucleotides measured under visible illumination (QY ∼ 0.02-0.11, see Materials and methods), which are two to three orders of magnitude higher than the reported QY measured under 267-nm illumination (3×10⁻⁴) [22]. Particularly significant is the observation that these properties are reproducibly distinct for each mononucleotide.
Furthermore, visible fluorescence has also been observed previously in ex vivo nucleic acid studies [23]. Incidentally, this autofluorescence within the visible range (473-632 nm) was observed in a study designed to eliminate autofluorescence from nucleic acid samples deposited on glass for microarray measurements. While the authors report similar visible fluorescence in nucleic acids deposited on microarray glass slides, they did not pursue the observed phenomenon. Critically, this observation confirms that visible-light fluorescence has been measured in unmodified nucleic acids in earlier, independent studies. Additionally, recent work has shown changes in the absorption properties of organic molecules as a function of concentration [24]. In that work, the absorption properties of nonanoic acid, a molecule widely considered to be 'dark', were shown to change with concentration: while at low concentrations nonanoic acid has one primary absorbance band at 200 nm, increasing its concentration both red-shifts the absorption spectrum and creates a secondary absorption peak centered at 270 nm. Moreover, a concentration-dependent fluorescence red-shift has previously been identified in carboxylic acids and esters, but has not been studied extensively in more complex organic molecules [25]. The phenomena we observe in nucleic acids may be part of a similar, but under-explored, behaviour inherent to organic molecules.
Photochemical characteristics of polynucleotides with different lengths
We examined the photochemical properties of nucleic acids with different sequence lengths, choosing poly-G nucleotides as model nucleic acid polymer sequences. We first confirmed their purity with size exclusion HPLC equipped with DAD (see experimental details in Materials and methods). Figure 4 shows the normalized HPLC DAD absorption spectra of the 20-base polynucleotide (IDT), in which the composition of the elution bands was verified by UV absorbance at 230 nm as well as by the time of elution. The first elution band at 20 min consists of the isolated 20-base polynucleotides and has appreciable absorbance at 532 nm, which further confirms the visible-light absorption of DNA. In addition to the main polynucleotide band, there is an elution band consistent with small molecules at 65 minutes, which has no measurable absorbance at 532 nm above the baseline of HPLC-grade water. Since the small-molecule band has no measurable visible absorbance relative to the nucleic acids and these molecules exist at such low concentrations relative to the polynucleotides (<1%), it is unlikely that they contribute appreciably to the experimentally observed fluorescence.
We measured the absorbance spectra of 100-µM mononucleotide and 20-base poly-G nucleotide solutions in the visible range (Fig. 5). The measured molar extinction coefficient E is 1,200±200 M⁻¹cm⁻¹ at 532 nm for the 20-base poly-G nucleotide, indicating an order-of-magnitude increase relative to the mononucleotide. Critically, a strong increase in the fluorescence intensity is noted as a function of increasing base length [Fig. 6(a)]. As shown in Fig. 6(b), the fluorescence intensity per nucleotide (FL/N) increases by nearly an order of magnitude from mononucleotides to the 20-base polynucleotide, consistent with the increase of the molar extinction coefficient per nucleotide (E/N) shown in Fig. 6(c). For sequences longer than 4 bases, the fluorescence intensities per nucleotide show no significant variation beyond the experimental error, in agreement with the previously measured delocalization length of 3.3±0.5 bp for UV light excitation [26].
Ground state depletion of nucleotides
While the photochemical measurements demonstrate that unmodified nucleic acids have absorption and intrinsic fluorescence under visible light illumination, super-resolution imaging based on single-molecule photon localization requires the ability to detect blinking single-molecule emissions. Given the relatively high QY and considerable fluorescence emission observed under visible light illumination, we hypothesized that super-resolution imaging could be accomplished by leveraging GSD with dark-state shelving and stochastic return. GSD has previously been explored for super-resolution imaging using a typical three-level molecular system [7]. When excited by light with intensity I_ex [W cm⁻²], a molecule transits from its ground state (S0) to an excited state (S1) with the average rate k_ex = I_ex σ/(hν), where σ [cm²] is the absorption cross section, h [J·s] is the Planck constant (6.626×10⁻³⁴ J·s), and ν [s⁻¹] = c/λ is the frequency of the transition. From the excited state, the molecule can relax non-radiatively, emit a fluorescence photon with a probability equal to QY, or transit to a dark (e.g. triplet) state (T) via intersystem crossing (ISC) with a probability Φ, a key characteristic of the intersystem crossing from the singlet to the triplet state. If Φ ≪ 1, the dark states have a lifetime τ much longer than that of fluorescence (i.e. τ ∼ several hundreds of milliseconds ≫ τ_fl ∼ nanoseconds). Therefore, molecules increasingly shelve in a long-lived dark state under illumination and no longer fluoresce. However, the shelved molecules may return to their ground state with the average rate k = 1/τ, after which they again become excitable. This process creates the required "on" and "off" periods, or blinking, for PLM. It can be described by a system of three differential equations:
dn0/dt = −k_ex n0 + k_fl n1 + k n2,
dn1/dt = +k_ex n0 − k_fl n1 − k_isc n1,
dn2/dt = +k_isc n1 − k n2,
where n0, n1, n2 [dimensionless] are the population probabilities of the molecule and Σᵢ nᵢ = 1. In this model, the parameters governing the behavior of the system are k_fl and k_isc, the rate constants of fluorescence and intersystem crossing, which follow the relationship k_fl/k_isc = 1/Φ − 1. They can be further defined as k_fl = 1/τ_fl and k_isc = 1/τ_isc, where τ_fl and τ_isc are the fluorescence lifetime and the intersystem crossing lifetime, respectively. To estimate k_ex, the absorption cross section was calculated from the molar extinction coefficient E using σ = ln(10)·E·10⁻²⁰/6.022 [cm²]. For experimental validation of the proposed GSD mechanism, we used polynucleotides (20-base poly-A, G, C, and T, IDT) as model systems. We deposited polynucleotide solution (100 µM, IDT) on a plasma-cleaned coverslip surface and dried it at 20 °C overnight to form hydrogel thin films (with the concentration approaching the physiological range). At steady state, the population probabilities nᵢ do not change, dnᵢ/dt = 0, giving
n0 = [1 + k_ex/(k_fl + k_isc) + k_ex k_isc/(k (k_fl + k_isc))]⁻¹.
Since k_fl + k_isc = k_isc/Φ and k_isc ≫ k,
n0 ≈ (1 + Φ k_ex/k)⁻¹.
The measured fluorescence intensity F at the steady state is proportional to the population probability of the ground state, F ∼ a n0, where a is a proportionality constant depending on the quantum yield and the detection efficiency. If we define ε as the ratio of fluorescence intensities after and before GSD, ε is reduced by increasing k_ex as
ε = F/F0 ≈ (1 + Φ k_ex/k)⁻¹,
where F0 is the fluorescence intensity before GSD (when k_ex → 0).
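As a numerical sanity check of the three-level model and the steady-state result above (a sketch, not the authors' analysis code), the snippet below solves the balance equations for a few excitation rates and confirms that 1/ε grows linearly as 1 + Φ k_ex/k. The values of τ_fl, τ, and Φ follow those reported in this work; the k_ex values are placeholders, since σ and I_ex are not evaluated here.

```python
import numpy as np

# Illustrative parameters: tau_fl ~ 2 ns, tau ~ 220 ms, Phi ~ 1.7e-4 (from this work)
tau_fl, tau, Phi = 2e-9, 0.22, 1.7e-4
k_fl, k = 1.0 / tau_fl, 1.0 / tau
k_isc = k_fl * Phi / (1.0 - Phi)          # from k_fl / k_isc = 1/Phi - 1

def steady_state_n0(k_ex):
    """Ground-state population of the three-level GSD model at steady state."""
    # Two balance equations (dn0/dt = dn1/dt = 0) plus the normalization sum(n_i) = 1.
    A = np.array([[-k_ex,            k_fl,  k  ],
                  [ k_ex, -(k_fl + k_isc),  0.0],
                  [  1.0,             1.0,  1.0]])
    b = np.array([0.0, 0.0, 1.0])
    return np.linalg.solve(A, b)[0]

for k_ex in [1e2, 1e3, 1e4]:              # placeholder excitation rates, 1/s
    n0 = steady_state_n0(k_ex)
    print(f"k_ex = {k_ex:8.0f} 1/s   1/eps (exact) = {1/n0:6.3f}   "
          f"1 + Phi*k_ex/k = {1 + Phi * k_ex / k:6.3f}")
```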
We measured the steady-state fluorescence intensities of nucleotides under various illumination intensities and estimated ε by normalizing them with respect to the fluorescence intensity under moderate illumination (∼0.3 kW cm⁻²). As expected, 1/ε was linearly related to I_ex, as shown in Fig. 7, in agreement with the GSD model. Notably, while all polynucleotides were somewhat distinct, polynucleotides containing purines (A and G) and pyrimidines (C and T) share similar features, likely due to the similarity of their molecular structures. Furthermore, the theory of GSD also predicts that once GSD has been induced by a strong pump illumination (I_pump applied for t_d = 100 ms), the fluorescence induced by a weaker probe beam (I_probe) will follow the exponential time course of the repopulation of the ground state [Eq. (5)], with rate constants k_pump,probe = ln(10)·E·10⁻²⁰·I_pump,probe/(6.022 hν). To validate the role of the long-lived dark state in the observed stochastic emission of nucleic acids, we performed pump-probe measurements for all four types of polynucleotides and chose poly-G DNA to compare against the predictions of the GSD model [Eq. (5)]. As shown in Fig. 8, the experimental recovery data under pump illuminations up to 24 kW cm⁻² fit accurately to the GSD model with the measured E of 1,200 M⁻¹cm⁻¹ for the 20-base poly-G DNA. The fitting gives a recovery time τ of 220 ms, which is comparable to the triplet-state lifetimes of most exogenous dyes, and Φ ∼ 1.67×10⁻⁴. This long recovery time τ and low value of Φ facilitate the detection of the single-molecule signal and, along with a high quantum yield, produce a strong photon count during a blinking event, as discussed below. The recovery lifetimes of all four types of polynucleotides are summarized in Table 1. Notably, although different polynucleotides have distinct τ, similar values were observed for purines and pyrimidines. In addition, we tested the influence of an additional triplet-specific quencher, β-mercaptoethanol, on the rate of fluorescence recovery. As expected for a GSD system, adding β-mercaptoethanol reduced τ by 36%, confirming the shelving of excited electrons in a dark, and most likely triplet, state.
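A minimal sketch (using synthetic data, not the measured traces) of the exponential-recovery fit used to extract the ground-state repopulation time τ from the pump-probe experiment; scipy.optimize.curve_fit here plays the role of the exponential fitting mentioned in Materials and methods.

```python
import numpy as np
from scipy.optimize import curve_fit

def recovery(t, f_inf, df, tau):
    """Exponential repopulation of the ground state after a depleting pump pulse."""
    return f_inf - df * np.exp(-t / tau)

# Synthetic probe-fluorescence trace with tau = 0.22 s (illustrative only)
rng = np.random.default_rng(0)
t = np.linspace(0, 1.5, 150)
f = recovery(t, 1.0, 0.6, 0.22) + 0.02 * rng.normal(size=t.size)

popt, pcov = curve_fit(recovery, t, f, p0=[1.0, 0.5, 0.1])
print("fitted recovery time tau = %.3f s (expected ~0.22 s)" % popt[2])
```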
Fluorescence switching of polynucleotides
To examine the properties of nucleic acid blinking, we performed PLM single-molecule imaging of 20-base poly-G DNA with I_ex = 7.14 kW cm⁻² (see details in Materials and methods). We acquired images consisting of 1,000 frames at an exposure time of 10 ms per frame, as illustrated in Fig. 9(a). Because the lifetime τ of the triplet state is much longer than that of fluorescence, the majority of molecules are 'shelved' in their long-lived triplet states under this illumination. As previously described for a GSD system, only a few molecules may return to their ground state at any given time, with the average rate k = 1/τ, where they can then be repeatedly excited to the fluorescent state [Fig. 9(b)]. This creates the "on" and "off" periods, or blinking, described above, yielding the stochastic activity required for precisely locating molecules with PLM. We measured the detected photon counts of each blinking event, N_B, and the corresponding duration of the "on" time, τ_B. Figure 9(c) is a scatter plot of N_B versus τ_B, which indicates that N_B is linearly proportional to τ_B. This demonstrates that the fluorescence photon arrival rate during the "on" period, Γ = N_B/τ_B, is nearly identical across blinking events, and thus Γ can be fitted from the plot. Figures 9(d) and 9(e) show histograms of N_B and τ_B. Critically, N_B and τ_B follow exponential distributions, as predicted by the theory involving quantum jumps between weak and strong transitions [27,28].
To further examine the blinking dynamics with respect to illumination intensity, we performed single-molecule imaging at steady state with varying I_ex up to 11.0 kW cm⁻². Prior to characterizing the blinking properties of the nucleic acids, baseline measurements of the plasma-cleaned glass coverslip and of water were performed to account for the influence of trace organic residues introduced during sample preparation. These control measurements confirm that the observed blinking events originate from the nucleic acids. Based on fundamental photophysics, Γ is given by Γ = QY·k_ex = QY·σ·I_ex/(hν) (6). As shown in Fig. 10(a), we found that the experimentally observed Γ is linearly proportional to the illumination power, in strong agreement with Eq. (6).
Likewise, by assuming that during the "on" period the transition to the triplet state takes place at a constant rate, the average "on" time τ_B as a function of I_ex is given by [28] τ_B = 1/(Φ·k_ex) = hν/(Φ·σ·I_ex) (7). Figure 10(b) shows that, in the steady state, the experimentally measured blinking times as a function of I_ex agree reasonably well with the theoretical model. The deviation at low I_ex is likely due to the low signal-to-background ratio of the recorded blinking, which results in relatively smaller τ_B values.
As N_B is given by N_B = Γ·τ_B = QY/Φ, it is independent of the absorption cross section and is in principle identical under various illumination intensities. As shown in Fig. 10(c), the measured N_B grew asymptotically toward a constant value with increasing I_ex. Notably, N_B is lower than the model predicts at low I_ex. This is likely due to excessive background subtraction when the signal-to-background ratio of the recorded blinking is low.
Significantly, although bulk nucleotides absorb weakly in the visible range, the number of photons emitted in each blinking event is large enough for single-molecule-localization-based imaging. This is due to the high QY of their visible-light fluorescence, which is much greater than that of the UV transitions and thereby compensates for the relatively weak absorption. Furthermore, the total photon count during an "on" period is independent of the absorption cross section and depends only on QY/Φ. A high QY (e.g., QY = 0.0559 for poly-G DNA) and a small Φ for the transitions in the visible spectral range result in a high photon count, which in turn translates into a high spatial resolution in PLM imaging, given by s/√N_B, where s is the full width at half maximum of the diffraction-limited point spread function [29]. As shown in Table 1, although the four polynucleotides generate different numbers of collected photons in individual emission events, they are all capable of providing sub-20-nm localization precision in PLM.
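The following short sketch simply turns the QY/Φ argument into numbers: it estimates the photon count per blinking event and the corresponding localization precision s/√N_B, ignoring detection efficiency. The PSF width is an assumed, typical value rather than one taken from the paper.

```python
import math

def localization_precision(QY, Phi, psf_fwhm_nm):
    """Estimate localization precision from photons collected per blink.

    N_B ~ QY / Phi (total photons per "on" period), and the localization
    precision scales as s / sqrt(N_B), where s is the FWHM of the
    diffraction-limited point spread function.
    """
    N_B = QY / Phi
    return psf_fwhm_nm / math.sqrt(N_B), N_B

# Illustrative numbers loosely based on the poly-G values quoted in the text;
# psf_fwhm_nm = 250 nm is an assumption, not a value from the paper
precision, N_B = localization_precision(QY=0.0559, Phi=1.67e-4, psf_fwhm_nm=250.0)
print(f"N_B ~ {N_B:.0f} photons per blink, precision ~ {precision:.1f} nm")
```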
Conclusion
Generally, unlabeled molecules have much lower extinction coefficients and, likewise, much weaker fluorescence emission than fluorescent labels. As a result, they are normally neglected when mixed with fluorescent targets, and the anomalous signals observed from such samples are frequently classified as impurities or contamination without further analysis. In fact, numerous methods have been developed and employed to suppress or eliminate signals that emanate from endogenous molecules. These methods include chemical means for suppressing endogenous emission and mathematical analyses that eliminate events deemed unlikely to originate from the fluorescent tag [30][31][32]. The ubiquitous use of these methods has created a widely accepted notion that endogenous molecules lack the capacity for usable fluorescence emission. In contrast, we discovered the fluorescence emission of nucleotides in the visible range and, more importantly, we explored its blinking mechanism, which enables super-resolution imaging using the principle of single-molecule localization microscopy.
In conclusion, we characterized the fluorescence of nucleic acids under visible light illumination, as well as their fluorescence depletion and recovery, which accurately match a GSD model. In the photoswitching process, the experimentally measured Γ, τ_B and N_B as functions of I_ex all satisfy the fundamental photophysics of single-molecule fluorescence blinking and, together with the retrieved parameter Φ, are consistent with the values obtained from the GSD model. Remarkably, the fluorescence characteristics of nucleic acids under visible light illumination make them ideal candidates for use as blinking fluorophores for super-resolution imaging in biological systems.
(1) They exhibit a long shelving lifetime τ in the range of hundreds of milliseconds, which is ideal for efficient depletion based on the GSD model. Consequently, only a small number of molecules fluoresce at the same time upon GSD, which is an essential requirement for beating the diffraction limit in PLM. (2) Although nucleic acids fluoresce weakly under visible illumination due to their low bulk absorption, the photon counts of individual emission events are comparable to those of some of the most potent exogenous dyes used in PLM, owing to their high QY and low intersystem-crossing probability Φ. (3) Finally, the fluorescence properties of nucleic acids allow detection at lower illumination intensities (<10 kW cm⁻²), which is highly advantageous compared to the cell-damaging high light intensities typically used in some other super-resolution methods (e.g., up to 10⁵ kW cm⁻² in STED). In all, this discovery paves a new way to realize label-free optical super-resolution imaging of nucleic acids [14], which may provide an ideal technique to visualize the spatial organization of single nucleosomes or groups of nucleosomes and to quantitatively estimate the nucleosome occupancy level of DNA in unstained chromosomes and nuclei.
Conflicts of interest
HFZ and CS have financial interests in Opticent Health. All other authors declare no competing financial interests. mass spectrometry measurements. We thank Dr. Arabela Grigorescu and Theint Aung from the Northwestern University Keck Biophysics Core Facility for their assistance in size exclusion chromatography of polynucleotides. | 7,210 | 2017-04-03T00:00:00.000 | [
"Physics",
"Chemistry",
"Biology"
] |
Depth-First Net Unfoldings and Equivalent Reduction
In Petri net unfolding, whether a breadth-first or a depth-first strategy is used, the biggest problem lies in the potential explosion of the state space. Unfolding generates either reachability trees or branching processes. Reducing equivalent markings or cutting redundant branches proves to be an effective approach to mitigating state space expansion. In this paper, we propose three reduction rules based on similarity equivalence, conduct state space reduction, present three theorems supported by a case study, and propose a new algorithm for the unfolding process. In both the case study and the experiments, completeness and optimality are preserved, while memory and time consumption are reduced by about 60%.
Introduction
Exploring the paths of a concurrent system [1][2][3][4] may lead to a state space explosion. Partial order semantics [2,[5][6][7][8][9][10][11] and prime event structures have proven to be efficacious techniques for alleviating state space explosions during traversal. The concurrency of the Petri net makes it a valuable tool for concurrent system modeling. Petri net unfolding technology effectively combines partial order semantics and prime event structures. However, it can still produce state space explosions during the unfolding process, approaching the factorial (n!) level when traversing paths, with n representing the number of transitions. Finding ways to reduce the state space, decrease the storage requirement, and simplify the computational complexity is the tireless pursuit of scholars in this field. This paper analyzes state space reduction and the similarity of transitions to induce a substantial reduction in the state space. Exploring a non-exhaustive but representative subset of the conceivable paths of a system, together with equivalent path reduction, constitutes a viable solution to the problem. Some scholars have made contributions in terms of state space reduction and similarity equivalence. For instance, Jensen [12] applied symmetry to reduce the state space of the CPN (colored Petri net) under limited conditions, considering only fully symmetric properties. Chiola [13] made a similar reduction, also based on symmetry. Schmidt [14] studied symmetry reduction in the context of Petri nets. Junttila [15] studied the complexity of the problem. However, how to reduce the state space during the unfolding process remains a persisting problem that has yet to be solved.
Petri net unfolding [12][13][14][15][16][17][18][19][20] is a concurrency technique based on the semantics proposed by McMillan. This theory has been extended to support nets with read arcs and has been applied in the context of both model checking and path traversal in concurrent systems. Prime event structures are utilized to analyze the partial ordering relationships between events, thereby facilitating efficient model checking or path analysis. The starting point of this work is devising strategies to reduce state space expansion and conserve resources in the process.
This study explores a transition system (in the context of a bounded net), which unfolds into a tree exhibiting the same behavior. The unfolding of the net can be equivalently conceptualized in terms of true concurrency. The unfolding process involves non-deterministically selecting an event, or marking it as a cut-off event when the same transition rule is encountered, until no further transitions are possible. The unfolding follows a partial order, and as each new transition may be added, the order of selection changes, leading to a different maximal expansion.
To determine whether a specific place p in the net N can be marked, the unfolding process can be extended. This involves adding a marking Mt to each new transition t as it is unfolded. When triggering transition t, the process checks whether Mt has already occurred. If it has, t is considered a cut-off node and is not executed. The enabled transitions are repeatedly triggered in a cyclic manner until no further transitions are possible in the net, indicating successful and complete unfolding.
Otherwise, the algorithm examines whether an earlier added transition t′ fulfils Mt′ = Mt; in such an instance, it designates t as a cut-off node, signifying that this search branch will no longer be explored. The search concludes without success when no additional transition can be incorporated. In the process of unfolding, it is crucial to implement effective measures for state space reduction. Similarity equivalence can effectively reduce the state space while lowering the computational complexity and memory consumption.
As has been widely acknowledged in the literature, the correctness of the search, i.e., the assurance that the search will always terminate with the correct result, is highly dependent on the strategy employed [6]. Several papers [6,7,12] have proposed breadth-first strategies that have been shown to be correct.
Various strategies can be used to conduct the search, with breadth-first and depth-first strategies being the most commonly employed. Figure 1 depicts the paths of the unfolded net using depth-first and breadth-first strategies, respectively. This paper employs the improved instance from [1] to showcase the correctness of a depth-first search in net unfolding, while conducting an equivalence reduction on the unfolded net. This article is structured as follows: In Section 2, the study defines the transition system and related concepts, alongside a presentation of classical algorithms for unfolding. Section 3 introduces the depth-first unfolding algorithm of the net, followed by a case study of the unfolded net, thereby establishing the correctness of the depth-first search algorithm. In Section 4, three reduction strategies are proposed and implemented to reduce the unfolded net, resulting in the final reduced equivalent net. Finally, Section 5 provides a detailed discussion of the results.
Among them, ∀x ∈ P ∪ T: •x = {y ∈ P ∪ T | (y, x) ∈ F} denotes the preset of x, and x• = {y ∈ P ∪ T | (x, y) ∈ F} denotes the postset of x. Definition 2. (Reachability) Let ∑ = (P, T, F, M_0) be a Petri net: (1) if there is a transition t such that the markings M and M′ satisfy M[t⟩M′, then the marking M′ is directly reachable from M; (2) if there is a transition sequence σ such that M[σ⟩M′, then the marking M′ is reachable from M. The set of all markings reachable from the marking M is denoted as R(M).
In a net N, for nodes x, y ∈ P ∪ T: (1) x and y are in a causal relationship, denoted x ≤ y, if and only if there is a path from x to y in N; if in addition x ≠ y, this is written x < y; (2) x and y are in a conflict relationship, denoted x # y, if and only if ∃t_1, t_2 ∈ T: t_1 ≠ t_2, •t_1 ∩ •t_2 ≠ ∅, t_1 ≤ x and t_2 ≤ y; (3) x and y are in a concurrent relationship, denoted x co y, if and only if ¬(x < y ∨ y < x ∨ x # y), that is, x and y are neither causal nor conflicting.
Definition 9. (Cut-off event) Let β be a prefix of the net unfolding, with e_1 and e_2 being two of its events. If Marking([e_1]) = Marking([e_2]) and |[e_1]| < |[e_2]|, then e_2 is called a cut-off event.
Net Unfolding
In this section, we provide an informal but precise definition of unfolding. For formal definitions, readers may refer to [7].
Depth-First Unfolding
Upon net expansion, the process commences with a place for each element of MI, originating from the initial state. For the Petri net N depicted in Figure 2, MI = {a, b, c, d}.
We proceed to generalize the concept of potential extension: if the current labeled net permits reaching a marking m, labeled by a marking M of the original net, and M enables a transition t culminating in a marking M′, then the unfolding is augmented with a new event labeled by t, and for each output place p of t in N, a new place labeled by p is added.
11. if a similar instance (new instance) does not exist in π, then for the new place instances the superscript j++;
12. else the superscript j := 0 (0 means there is no superscript) // counting of newly generated place superscripts starts from 0
13. if !C(e, π, cut_off) then cut_off := cut_off ∪ {e} // becomes a new node and is cut the next time it appears
14. else pe := pe + PoTEXT(π, cut_off) // becomes a cutting event (if it already exists, it is a cutting node)
15. end while
In this unfolding algorithm, π represents the complete prefix of the expansion. Line 2 starts from a place with a token, and line 5 calculates the possible transitions. When the set in line 6 is not empty, one of the transitions fires, using a stack data structure. Line 9 triggers the enabled event, line 10 adds a new marking, and line 11 determines whether a new marking has been added; markings that are deemed equivalent in terms of similarity are considered identical, otherwise line 12 creates a cut-off node. Line 13 adds a new cut-off node, and line 14 continues to judge the transition events that can be enabled under the new marking, until line 15, where no event remains to be enabled. When unfolding, it is necessary to realize state space reduction, and this can be accomplished through the state similarity equivalence method. We apply the state similarity equivalence method to reduce the number of traversed paths. To illustrate the concept of similarity equivalence, as shown in Figure 3, we treat the two markings M1 and M2 as equivalent in the algorithm.
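To make the loop structure concrete, the sketch below implements a highly simplified depth-first exploration with marking-based cut-offs in Python. It is not the authors' Algorithm 2: it works on reachable markings of a toy 1-safe net rather than on a full branching process, and the net, place and transition names are illustrative.

```python
# Toy 1-safe Petri net: each transition maps a set of input places to output places.
# The net is illustrative and is not the example net of Figure 2.
TRANSITIONS = {
    "A": ({"a", "b"}, {"i", "k"}),
    "B": ({"c", "d"}, {"l", "j"}),
    "T": ({"i", "k", "l", "j"}, {"p"}),
}

def enabled(marking):
    """Transitions whose preset is contained in the current marking."""
    return [t for t, (pre, _) in TRANSITIONS.items() if pre <= marking]

def fire(marking, t):
    """Fire transition t: consume its preset, produce its postset."""
    pre, post = TRANSITIONS[t]
    return frozenset((marking - pre) | post)

def depth_first_unfold(initial):
    """Depth-first exploration with marking-based cut-offs.

    An event becomes a cut-off when the marking it produces has already been
    generated (plain equality here; a similarity relation could be substituted
    to merge markings that are only equivalent, not identical).
    """
    seen = {frozenset(initial)}
    stack = [(frozenset(initial), [])]          # (marking, firing sequence)
    events, cut_offs = [], []
    while stack:                                # depth-first via LIFO stack
        marking, history = stack.pop()
        for t in enabled(marking):
            m_new = fire(marking, t)
            events.append((t, history + [t], m_new))
            if m_new in seen:
                cut_offs.append((t, m_new))     # cut-off: branch not extended
            else:
                seen.add(m_new)
                stack.append((m_new, history + [t]))
    return events, cut_offs

events, cut_offs = depth_first_unfold({"a", "b", "c", "d"})
print(f"{len(events)} events generated, {len(cut_offs)} cut-offs")
```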
The complexity analysis of Algorithm 2 (the depth-first unfolding algorithm) yields polynomial complexity: with n nodes and m places, the complexity is at most on the order of n × m. The unfolding of the Petri net in Figure 2 using the depth-first unfolding algorithm is detailed in Table 1.
Figure 4 gives a graphical representation of the unfolding process of SecN; such nets are referred to as unfolding prefixes. The initial marking, labeled {a, b, c, d}, enables transitions A and B, resulting in two feasible extensions. We opt to incorporate event e1 first, labeled by A. The prefix now possesses a new reachable marking, corresponding to marking {c, d, i, k} in SecN, which paves the way for a new potential extension labeled C. Let us assume event e2, labeled by B, is added subsequently. The prefix now comprises two additional reachable markings: {a, b, j, l} and {i, j, k, l}. These markings facilitate extensions labeled with D and T, respectively. After event e3, labeled by C, a new marking labeled by {e, h, i} emerges, enabling possible extensions with labels E and H, etc.
These algorithms construct progressively larger prefixes of the unfolding of SecN. Similar to transition systems, they examine one event at a time, with some events designated as cut-offs, beyond which successors are no longer explored.
The challenge lies in generalizing the definition of cut-off events for Petri nets. McMillan [12,13] proposed a solution. The crux is to associate each event e with an appropriate reachable marking Me of the original Petri net SecN, accomplished in three steps, exemplified by event e3 in Figure 4:
- Determine the set [e] comprising all predecessors of e, i.e., the set of all events e′ such that the unfolding includes a path from e′ to e. In this case, [e3] = {e1, e3};
- Select any occurrence sequence σ containing each element of [e] exactly once (which is guaranteed to exist) and let it occur. Here, σ = e1e3;
- Let m denote the marking of the unfolding reached by firing σ (which can be proven to be independent of the choice of σ) and define Me as the label of m. In this instance, m = {i, e, h} and Me3 = {i, e, h}.
For each new event e, the algorithms compute and store the marking Me. It is worth noting that these are the sole markings of SecN known to the algorithms to be reachable.
Conditions (1) and (2) can now be effortlessly generalized. An event e is designated as a cut-off if the marking Me satisfies one of these conditions: (a) Me(pT) = 1; the algorithm concludes with the result "reachable"; (b) Me is already known to be reachable: either Me = MI or Me = Me′ for some other event e′. In such a case, e′ is referred to as the corresponding event of e.
Equivalence Reduction of Unfolded Nets
Table 1, presented above, has not been reduced, and following unfolding it remains excessively large. In Table 1, distinct colors represent different states, and this article employs various colors for differentiation. For instance, the initial states a, b, c, and d are represented in black, while the newly generated states c, d, i, and k are depicted in green. The reduction of a net utilizing equivalence [31][32][33] is explored below.
For this unfolded net, analysis shows that it remains overly extensive, so this paper employs equivalence-based reduction. The reduction proceeds as follows. In the initial reduction, four rectangles of different colors are reduced based on the same transition (e.g., G, G′; H, H′; E, E′; F, F′ represent identical transitions).
Reduction rule: if the same marking M undergoes the same transition and yields the same marking M′, the later event is considered a cut-off node. After one reduction, the results are shown in Figure 5.
The reduced graph is shown in Figure 6. For the second reduction, when only a single transition remains (i.e., there is no branching path), we implement the reduction. After the two transitions have fired, the transitions that can occur next are all t_j. For instance, with transitions t_i and t_j enabled, regardless of whether t_i or t_j occurs first, the transition that can be triggered afterwards is t_k. In lines 9 and 10 of Table 2, both of which fire transition T, the resulting states are the same; hence, reduction can be performed.
The third reduction rule: if the same marking M undergoes corresponding transitions and produces an equivalent marking M′, the later event is considered a cut-off node. As illustrated in Figure 7, distinct transitions yield different markings, but analogous markings are generated. The unfolding after the final reduction is displayed in Figure 8.
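A small sketch of the similarity test underlying this third rule is given below, under the simplifying assumption that two markings are treated as approximately equivalent whenever they enable the same transition labels; the helper names and the toy net are hypothetical and do not come from the paper's implementation.

```python
def enabled_labels(marking, transitions):
    """Set of transition labels enabled under a marking."""
    return {t for t, (pre, _) in transitions.items() if pre <= marking}

def approximately_equivalent(m1, m2, transitions):
    """Rule-3 style similarity: two markings are treated as equivalent when
    the same transition labels can fire from both of them."""
    return enabled_labels(m1, transitions) == enabled_labels(m2, transitions)

def is_cut_off(new_marking, seen_markings, transitions):
    """An event becomes a cut-off node when its marking is (approximately)
    equivalent to a marking that has already been generated."""
    return any(approximately_equivalent(new_marking, m, transitions)
               for m in seen_markings)

# Tiny illustration with hypothetical places and transitions
transitions = {"E": ({"e", "i"}, {"a", "c"}), "H": ({"h"}, {"b", "d"})}
m1 = frozenset({"e", "i", "h"})
m2 = frozenset({"e", "i", "h", "x"})      # same enabled transitions, extra token
print(approximately_equivalent(m1, m2, transitions))            # True
print(is_cut_off(m2, [m1], transitions))                        # True
```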
Case Study and Experimental Results
This paper employs a case study to scrutinize the process of net reduction through unfolding, based on the three reduction rules proposed herein. Figure 9a illustrates a Petri net, while Figure 9b portrays the unfolded net. The three reduction rules are then applied to the unfolded-net reduction procedure. Table 3 shows the result of the reduced unfolding.
The first step of reduction is premised on rule (1): an identical marking M, following the same transition, yields the same marking M′, and the later event is regarded as the cut node or, more precisely, the cut-off node.
The second step of reduction is based on rule (2): a similar transition from marking M engenders a marking M′ with an identical structure, which functions as a cut-off node, as demonstrated in Figure 10a.
The third step of reduction adheres to the following criterion, rule (3): an identical marking M, upon corresponding transitions, generates an equivalent marking M′, which is deemed a cut-off node. The final equivalent unfolded net is exhibited in Figure 11.
Definition 4.
(Branching process) Let ∑ = (P, T, F, M_0) be a Petri net. (O, h) is a branching process of ∑, where the occurrence net O = (B, E, G) and the homomorphism h: B ∪ E → P ∪ T simultaneously satisfy: (1) h(B) ⊆ P ∧ h(E) ⊆ T; (2) for any event e ∈ E, the function h acting on •e (resp., e•) is a bijection from •e to •h(e) (resp., from e• to h(e)•); (3) the function h restricted to Min(O) is a bijection between Min(O) and M_0; (4) for any events e_1, e_2 ∈ E, if •e_1 = •e_2 and h(e_1) = h(e_2), then e_1 = e_2. Definition 5. (Configuration) A configuration C of a branching process represents a possible set of events that the Petri net may execute, which requires the following conditions to be met: (1) causal closure: e ∈ C ⇒ ∀e′ ≤ e: e′ ∈ C; (2) no conflict: ∀e, e′ ∈ C: ¬(e # e′). Definition 6. (Possible extensions) For a configuration C and an event set E, C ⊕ E represents an extension of C if and only if C ∪ E is a configuration and C ∩ E = ∅. Definition 7. (Completeness) A branching process β of a Petri net is complete if, for any reachable marking M and any transition t enabled at it (i.e., M[t⟩), there exists a configuration C ∪ {e} such that event e satisfies Marking(C) = M ∧ e ∉ C ∧ t = h(e).
Marking | Enabled | Fired transition/event | New marking | Status
a, b, c, d | A, B | A/e1 (a, b → i, k) | c, d, i, k | new
a, b, c, d | B | B/e2 (c, d → l, j) | a, b, l, j | new
c, d, i, k | C, B | C/e3 (k, c, d → e, h) | i, e, h | new
c, d, i, k | B | B′/e4 (c, d → l, j) | i, k, l′, j′ | new
a, b, l, j | A, D | A′/e5 (a, b → i, k) | i′, k′, l, j | new
a, b, l, j | D | D/e6 (a, b, l → f, g) | f, g, j | new
i, e, h | E, H | E/e7 (e, i → a, c) | h, a′, c′ | new
i, e, h | H | H/e8 (h → b, d) | i, e, b′, d′ | new
i, k, l′, j′ | T | T/e9 (i, k, l, j → p) | P | end
i′, k′, l, j | T | T′/e10 (i, k, l, j → p) | P | end
f, g, j | F, G | F/e11 (j → a, c) | a′, c′, g, j | new
f, g, j | G | G/e12 (g → b, d) | f, b′, d′ | new
h, a′, c′ | H | H′/e13 (h → b, d) | a, b′, c, d | Cut-off
i, e, b′, d′ | E | E′/e14 (e, i → a, c) | a′, b′, c′, d′ | Cut-off
a′, c′, g, j | G | G′/e15 (g → b, d) | a′, b′, c′, d′ | Cut-off
f, b′, d′ | F | F′/e16 (j → a, c) | a′, b′, c′, d′ | Cut-off
Figure 4.
Figure 4. A prefix of the unfolding of the net in Figure 2.
Figure 5.
Figure 5. The same transition reduction guarantee.
Figure 6.
Figure 6. Reduction result of the same transition and next guarantee.
Figure 7.
Figure 7. Second reduction result and the third Reduction Rule.
2 .
If marking A reaches marking C after transition t1, and marking B reaches marking D after transition t2, and marking C is equivalent to marking D, then t1 is equivalent to t2.
Figure 8 .
Figure 8. Unfolding paragraph after the final reduction.
Figure 10 .
Figure 10.(a) First step of reduction and reduction rule 2. (b) Second step of reduction and reduction rule 3.
Table 1 .
Net unfolding process and results of the Petri net in Figure 2.
exhibits a reachable marking, corresponding to {a, c, b, d}. As this is equivalent to Me1, such an extension would constitute a cut-off.
Table 2 .
Results of the first reduction in the expansion of the net in Figure 2.
If marking A reaches marking C after transition t_1, and marking B reaches marking D after transition t_2, and marking C is equivalent to marking D, then t_1 is equivalent to t_2 (namely, h(t_1)• = h(t_2)•). Marking C is approximately equivalent to marking D; that is, the transitions that can occur under marking C are equal to the transitions that can occur under marking D.
Table 3 .
Results of Net unfolding and Reduction in Figure 2.
"Computer Science"
] |
Multigroup cross section library for GFR2400
In this paper the development and optimization of the SBJ_E71 multigroup cross section library for GFR2400 applications is discussed. A cross section processing scheme, merging Monte Carlo and deterministic codes, was developed. Several fine and coarse group structures and two weighting flux options were analysed through 18 benchmark experiments selected from the handbook of ICSBEP and based on performed similarity assessments. The performance of the collapsed version of the SBJ_E71 library was compared with MCNP5 CE ENDF/B VII.1 and the Korean KAFAX-E70 library. The comparison was made based on integral parameters of calculations performed on full core homogenous models.
Introduction
The progress of computer technology in the 21st century gives strong support to the development of modern Monte Carlo codes. Unfortunately, their results are burdened with statistical errors. Moreover, due to continuous-energy (CE) cross section libraries and complex geometry structures, Monte Carlo simulations are computationally costly. For these reasons certain reactor applications require effective deterministic approaches, which imply the development of multi-group cross section (XS) libraries. Several multi-group XS libraries are available for fast reactor calculations; however, each of them carries the unique fingerprint of the system for which it was developed and optimized. The best way to optimize an XS library is to use as much experimental data as possible; this is, however, impossible for systems that have never been built, like the GEN IV Gas-cooled Fast Reactor [1]. The analysts of this reactor face a difficult task: to design a reactor without an experimental background. One possible way is to find a balance between deterministic and stochastic calculation tools and to utilize experimental data from similar fast systems. The first necessary step is to develop and optimize a multigroup XS library appropriate for deterministic GFR2400 analyses.
Description of the GFR2400 core
The Gas-cooled Fast Reactor is one of the GEN IV nuclear reactors selected by the GIF for further development. GFR2400 is a large scale power unit with a thermal power of 2400 MWth. The cross-sectional view of the reactor core is shown in Fig. 1. The reactor core consists of two zones, the inner fuel core (IF) and the outer fuel core (OF). The inner and outer fuel cores consist of 264 and 252 (U,Pu,Am)C fuel assemblies with SiC-SiC fiber cladding and a W14Re-Re refractory liner. The volumetric content of Pu isotopes in heavy metal in the IF and OF fuel assemblies reaches 14.2% and 17.6%, respectively. The core fuel region is surrounded by five rings of Zr3Si2 reflector assemblies in the radial direction and by a 1 m high axial reflector below and above the fission gas plena. The reactivity of the GFR2400 core is controlled through two systems of control rods, the CSD and DSD assemblies. Both systems accommodate B4C absorbers with a 90% weight content of the 10B isotope [1]. The fuel loading pattern of the GFR2400 reactor is shown in Fig. 2.
The cross section processing scheme
As it was mentioned, deterministic calculations of the GFR2400 reactor require multigroup XS libraries optimized for a given purpose. The XS preparation scheme of our SBJ E71 XS library, developed for GFR2400, is shown in Fig. 3.
The scheme was created based on the previous experience of the authors published in [2] and merges evaluated data with deterministic and stochastic calculation tools. It starts with the processing of ENDF/B-VII.1 [3] evaluated data (1). For demonstration purposes, only one evaluated data source was used, but the XS preparation scheme is not limited to ENDF/B-VII.1; it is applicable to any ENDF-6 format nuclear data. The processing of evaluated data is based on the selected list of nuclides (2) and temperatures (3) in the NJOY99 [4] code (4). The produced CE XS library (5) and the material composition (6) are used in the MCNP5 [5] code (7) to calculate the core-averaged neutron spectrum and to transform it into a multigroup NJOY99 weighting flux. The weighting flux (8), the ENDF/B-VII.1 evaluated data and the background XSs (9) are then processed in NJOY99 (10) to produce fine group MATXS cross-section libraries (11). These cross sections are transformed to effective region-wise macroscopic XS data using TRANSX [6] (12) and stored in the ISOTXS library (13). In order to accelerate the deterministic full core calculations, group collapsing can be performed in TRANSX (17), based on the RZFLUX (15) region-wise neutron flux obtained from RZ transport calculations.

The precision of XS processing based on the Bondarenko method is mainly influenced by the background cross-section data, the weighting flux and the energy group structure of the library. The background cross sections are isotope and energy dependent parameters. Since research on these data falls outside our scope, in our approach the background XSs were adopted from the Korean KAFAX-E70 [9] XS library. To determine the influence of energy group structures on the quality of multigroup XS data, 4 energy group structures (LANL 80 g, LANL 187 g, SAND-II 500 g and SAND-II 620 g) were investigated. The 80 g, 187 g and 620 g are standard NJOY99 energy structures. The 500 g library is a modified version of the SAND-II 620 g structure, where the thermal energy range has been merged into one energy group. For each group structure, two weighting functions were used. In the presented analysis, the GFR2400 neutron spectrum weighting function was compared with the standard NJOY99 "IWT8" option, which represents a typical LMFR neutron spectrum. The IWT8 weighting flux is shown in Fig. 4 and the GFR2400 average neutron spectrum in Fig. 5.
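The group collapsing step mentioned above can be illustrated with a minimal flux-weighted collapse, sketched below. This is only a generic Bondarenko-style collapsing formula, σ_G = Σ_g σ_g φ_g / Σ_g φ_g, not the actual TRANSX procedure, and all numbers are made up for the example.

```python
import numpy as np

def collapse(xs_fine, flux_fine, group_map):
    """Flux-weighted collapse of fine-group cross sections to coarse groups.

    xs_fine, flux_fine : arrays of length G_fine
    group_map          : array of length G_fine giving the coarse-group index
                         of each fine group
    """
    xs_fine = np.asarray(xs_fine, dtype=float)
    flux_fine = np.asarray(flux_fine, dtype=float)
    group_map = np.asarray(group_map)
    n_coarse = group_map.max() + 1
    xs_coarse = np.zeros(n_coarse)
    for G in range(n_coarse):
        sel = group_map == G
        xs_coarse[G] = np.sum(xs_fine[sel] * flux_fine[sel]) / np.sum(flux_fine[sel])
    return xs_coarse

# Toy example: 6 fine groups collapsed into 2 coarse groups
xs = [1.2, 1.0, 0.9, 2.5, 3.0, 3.4]        # barns, illustrative
flux = [0.1, 0.3, 0.6, 0.5, 0.2, 0.05]     # arbitrary units, illustrative
print(collapse(xs, flux, [0, 0, 0, 1, 1, 1]))
```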
Benchmarking
The bias of the created multigroup XS libraries was evaluated through benchmark analyses. Based on the 2015 edition of the handbook of the ICSBEP project [10], recommendations of WPEC SG33 [11] and the results of similarity assessments performed in SCALE6, 18 benchmarks were selected and prepared in the form of PARTISN input files. For each benchmark, 4 energy group structures and 2 weighting options were investigated. The calculations were performed with the fine group versions of the SBJ E71 XS libraries, and the results were compared between calculation cases as well as with deterministic calculations using the 150 g KAFAX-E70 library and with CE MCNP5 results, using the C/E-1 parameter. The benchmark results presented in Fig. 6 show that the use of slightly different weighting fluxes has only a minor effect (1.5-29 pcm) on the keff results. The influence of the group structure was also not the main source of keff deviation. The bias of the SBJ E71 library with respect to the experiments depends mainly on the complexity of the geometry models and the simplifications required for deterministic codes. The comparison of the SBJ E71 library with KAFAX-E70 and MCNP5 CE results is shown in Fig. 7.
Application for GFR2400
Since the benchmark cases showed promising results, it was justified to use our XS library for GFR2400 core calculations. To complete the calculation scheme, two commonly used coarse group structures were adopted, the 25 g and 33 g structures. These structures were used for collapsing the IWT8 and GFR versions of the 80, 187, 500 and 620 group libraries. The 25 g and 33 g libraries were used for HEX-Z full core calculations in DIF3D. The results were compared with 150 g KAFAX-E70 and MCNP5 CE calculations performed on the same homogeneous model. The results (see Fig. 8) show significant deviations between the 80 g and the remaining group structures. The 80 to 25 and 80 to 33 structures overestimated the excess reactivity of the system by 301-310 pcm, while the remaining structures underestimated it, compared to MCNP5. An interesting finding is that the reactivity deviation between the 620 to 33 and 187 to 33 libraries was only 50 pcm, which allows us to use the 187 g structure for fine group core calculations. It was also found that the average difference between the 25 g and 33 g coarse group structures is only 8 pcm.
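For reference, the deviations quoted in pcm follow from the usual reactivity definition ρ = (k_eff − 1)/k_eff. The short sketch below shows the conversion, with purely illustrative k_eff values that are not results from this work.

```python
def reactivity_pcm(k_eff):
    """Reactivity rho = (k - 1)/k, expressed in pcm (1 pcm = 1e-5)."""
    return (k_eff - 1.0) / k_eff * 1e5

def reactivity_difference_pcm(k_a, k_b):
    """Reactivity difference between two calculations, in pcm."""
    return reactivity_pcm(k_a) - reactivity_pcm(k_b)

# Illustrative values only (not results from the paper)
k_mcnp, k_det = 1.01500, 1.01810
print(f"delta rho = {reactivity_difference_pcm(k_det, k_mcnp):.0f} pcm")
```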
Conclusion
Our optimized SBJ E71 multigroup XS library was prepared based on ENDF/B-VII.1 evaluated data and KAFAX-E70 background XSs. It was prepared in various fine and coarse group structures, and two weighting flux options were used for each version. The fine group versions were tested through 18 benchmark experiments. The benchmarks showed promising C/E-1 keff results in comparison with the KAFAX-E70 library and the MCNP5 CE calculations. In the case of simple systems (PU-MET-FAST cases) the reactivity deviation was less than 300 pcm. In the case of complex systems, such as JOYO and ZPPR9, all multigroup XS libraries were characterized by an approximately 1000 pcm bias, caused by the homogenization effect. This effect can also be seen in the results of MCNP5 calculations performed on the simplified benchmark models. The core calculations performed on the 3D HEX-Z model of GFR2400 in DIF3D pointed out that, after group collapsing, the effect of the fine group weighting flux option in NJOY99 becomes negligible. However, this weighting flux plays an important role in the generation of the RZFLUX file, which is used for group collapsing. Except for the 80 g structure, the remaining 3 energy structures could be used for fine group calculations with very similar computational bias; however, the lower calculation time makes the 187 g library more suitable. The performed calculations showed that the precision of the SBJ E71 library is comparable with KAFAX-E70, but SBJ E71 is more suitable for GFR2400 calculations. As a result of the presented optimization study, the 187 g fine group structure, the 33 g coarse group structure and the GFR2400 weighting flux were selected as the basic processing options for future uses of the SBJ E71 library. The most important finding of this analysis is that, using the SBJ E71 library in the 25 g or 33 g structure, reliable results can be obtained in approximately 30 s of calculation time, while the same CE MCNP5 analysis requires 24 hours of execution on a cluster system. In order to better assess the precision and usability of the SBJ E71 library, more benchmark experiments will have to be evaluated, and the GFR2400 calculations will have to
"Computer Science"
] |
NucDiff: in-depth characterization and annotation of differences between two sets of DNA sequences
Background Comparing sets of sequences is a situation frequently encountered in bioinformatics, examples being comparing an assembly to a reference genome, or two genomes to each other. The purpose of the comparison is usually to find where the two sets differ, e.g. to find where a subsequence is repeated or deleted, or where insertions have been introduced. Such comparisons can be done using whole-genome alignments. Several tools for making such alignments exist, but none of them 1) provides detailed information about the types and locations of all differences between the two sets of sequences, 2) enables visualisation of alignment results at different levels of detail, and 3) carefully takes genomic repeats into consideration. Results We here present NucDiff, a tool aimed at locating and categorizing differences between two sets of closely related DNA sequences. NucDiff is able to deal with very fragmented genomes, repeated sequences, and various local differences and structural rearrangements. NucDiff determines differences by a rigorous analysis of alignment results obtained by the NUCmer, delta-filter and show-snps programs in the MUMmer sequence alignment package. All differences found are categorized according to a carefully defined classification scheme covering all possible differences between two sequences. Information about the differences is made available as GFF3 files, thus enabling visualisation using genome browsers as well as usage of the results as a component in an analysis pipeline. NucDiff was tested with varying parameters for the alignment step and compared with existing alternatives, called QUAST and dnadiff. Conclusions We have developed a whole genome alignment difference classification scheme together with the program NucDiff for finding such differences. The proposed classification scheme is comprehensive and can be used by other tools. NucDiff performs comparably to QUAST and dnadiff but gives much more detailed results that can easily be visualized. NucDiff is freely available on https://github.com/uio-cels/NucDiff under the MPL license. Electronic supplementary material The online version of this article (doi:10.1186/s12859-017-1748-z) contains supplementary material, which is available to authorized users.
Effect of different MUMmer parameters: result comparison approach
In general, a simulated difference is assumed to be correctly detected if its location intersects with the location of a NucDiff-detected difference of the same type. However, some exceptions had to be made in order to allow a fair comparison in cases where there are identical bases nearby just by coincidence. First, for all types of deletions and for simple relocations and translocations, the detected difference may be located not more than 3 bases before or after the simulated difference. Second, some differences are allowed to have several corresponding types, i.e. simulated simple relocations and translocations may be detected by NucDiff as simple relocations and translocations or as relocations and translocations with overlap. Irrespective of the chosen NUCmer and delta-filter parameter values, we defined that repeat-related differences may be detected as non-repeat-related if they are shorter than 30 bases. If a fragment was relocated to another place in the query sequence, we defined that it may be detected as a simple insertion if it is shorter than 30 bases.
These limits are tool independent and were introduced to avoid detection of random duplications and fragment relocations.
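A minimal sketch of this matching rule is given below, assuming that differences are represented as simple start/end intervals with a type label. The function and field names are hypothetical and do not correspond to NucDiff's actual code.

```python
def intervals_overlap(start1, end1, start2, end2, tolerance=0):
    """True if two genomic intervals intersect, allowing a +/- tolerance."""
    return start1 <= end2 + tolerance and start2 <= end1 + tolerance

def is_detected(simulated, detected_list, allowed_types, tolerance=0):
    """A simulated difference counts as detected if a reported difference of an
    allowed type overlaps its location (optionally within a small tolerance)."""
    for d in detected_list:
        if d["type"] in allowed_types and intervals_overlap(
                simulated["start"], simulated["end"],
                d["start"], d["end"], tolerance):
            return True
    return False

# Hypothetical example: a simulated deletion and a NucDiff-style report
simulated = {"type": "deletion", "start": 1200, "end": 1200}
detected = [{"type": "deletion", "start": 1203, "end": 1203}]
print(is_detected(simulated, detected, {"deletion"}, tolerance=3))   # True
```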
Supplementary figures and tables
Figure S1: Reference fragment placement order depending on query fragment orientations during the detection of local differences. Case a) shows the placement order of A* and B* when A and B have the same orientation as A* and B*; case b) shows their placement order when A and B have the opposite orientation. The placement relation between A and B, and between A* and B*, may differ from what is shown here. In Table S1, case 1 may appear only after merging the fragments in the nested fragment cases; it will never be met in the NUCmer output. In all cases with overlaps between query or reference fragments (cases 5, 6, 7, and 8), the lengths of corresponding differences (
In the simulated modifications the following lengths of regions were used:
1. Len(A/C) = 500 bases in all described cases, except the reshuffling case
2. Len(x/y) = 10500 bases in all described cases
3. Distance between each manipulation case = 10500
4. Len(B) = {5, 20, 50, 65, 85, 88, 100, 150, 200, 250, 300, 350, 400} bases in the deletion (1,2,3,5), insertion (1,2,5), relocation (1,5,6), translocation (2,3) and inversion (1)(2) cases. B3 contains one nucleotide difference with each of B1 and B2. The differences are located in the reference sequence at the same positions where B1 and B2 have two differences of one nucleotide length.
11. Len(correct/completely wrong seq) = {5, 20, 50, 65, 85, 88, 100, 150, 200, 250, 300, 350, 400} bases in all unaligned sequence cases.
All other NUCmer parameters, except --maxmatch, have the NUCmer default values and remained fixed in our tests. The --maxmatch parameter, which tells NUCmer to use all anchor matches regardless of their uniqueness, is not used by default in NUCmer, but is required for NucDiff and thus is used in all tests.
As for the delta-filter filtering parameters, -q parameter (query alignment using length*identity weighted LIS [longest increasing subset]) is required for NucDiff to get the output results needed for the analysis and is present in all tests performed.
In the QUAST-like tests, we ran a test with the same parameter values used by QUAST, except for the -q parameter, which is not used by QUAST but is required for NucDiff.
Correspondence between NucDiff difference types and QUAST categories (Table 1 and Table 3):
- relocation, relocation with overlap, relocation with insertion, relocation with insertion and inserted gap → relocation
- relocation with inserted gap → relocation, fake: scaffold gap size wrong estimation
- unaligned sequence → unaligned
- all translocation types → translocation
- reshuffling → local misassembly
"Biology",
"Computer Science"
] |
Transverse single-shot cross-correlation scheme for laser pulse temporal measurement via planar second harmonic generation
We present a novel single-shot cross-correlation technique based on the analysis of the transversally emitted second harmonic generation in crystals with random distribution and size of anti-parallel nonlinear domains. We implement it to the measurement of ultrashort laser pulses with unknown temporal duration and shape. We optimize the error of the pulse measurement by controlling the incident angle and beam width. As novelty and unlike the other well-known cross correlation schemes, this technique can be implemented for the temporal characterization of pulses over a very wide dynamic range (30 fs–1ps) and wavelengths (800–2200 nm), using the same crystal and without critical angular or temperature alignment. © 2016 Optical Society of America OCIS codes: (190.0190) Nonlinear optics; (190.2620) Harmonic generation and mixing, (190.4420) Nonlinear optics, transverse effects in; 230.4320 Nonlinear optical devices. References and links 1. J.-C. Diels and W. Rudolph, Ultrashort Laser Pulse Phenomena, 2nd Ed. (Academic Press, 2006). 2. K. Oba, P. C. Sun, Y. T. Mazurenko, and Y. Fainman, “Femtosecond single-shot correlation system: a time-domain approach,” Appl. Opt. 38(17), 3810–3817 (1999). 3. I. Walmsley and C. Dorrer, “Characterization of ultrashort electromagnetic pulses,” Adv. Opt. Photonics 1(2), 308–437 (2009). 4. J. C. Diels, J. J. Fontaine, I. C. McMichael, and F. Simoni, “Control and measurement of ultrashort pulse shapes (in amplitude and phase) with femtosecond accuracy,” Appl. Opt. 24(9), 1270–1282 (1985). 5. R. Trebino, K. W. DeLong, D. N. Fittinghoff, J. N. Sweetser, M. A. Krumbügel, and D. J. Kane, “Measuring ultrashort laser pulses in the time-frequency domain using frequency-resolved optical gating,” Rev. Sci. Instrum. 68(9), 3277 (1997). 6. B. Alonso, I. J. Sola, Ó. Varela, J. Toro, C. Mendez, J. San Román, A. Zaïr, and L. Roso, “Spatiotemporal amplitude-and-phase reconstruction by Fourier-transform of interference spectra of high-complex-beams,” J. Opt. Soc. Am. B 27(5), 933–940 (2010). 7. B. Yellampalle, R. D. Averitt, and A. J. Taylor, “Unambiguous chirp characterization using modifiedspectrum auto-interferometric correlation and pulse spectrum,” Opt. Express 14(19), 8890–8899 (2006). 8. G. Szabó, Z. Bor, and A. Müller, “Phase-sensitive single-pulse autocorrelator for ultrashort laser pulses,” Opt. Lett. 13(9), 746–748 (1988). 9. J. Paye, M. Ramaswamy, J. G. Fujimoto, and E. P. Ippen, “Measurement of the amplitude and phase of ultrashort light pulses from spectrally resolved autocorrelation,” Opt. Lett. 18(22), 1946–1948 (1993). 10. C. Yan and J. C. Diels, “Amplitude and phase recording of ultrashort pulses,” J. Opt. Soc. Am. B 8(6), 1259–1263 (1991). 11. F. Salin, P. Georges, G. Roger, and A. Brun, “Single-shot measurement of a 52-fs pulse,” Appl. Opt. 26(21), 4528–4531 (1987). 12. R. Trebino, “FROG. The measurement of ultrashort laser pulses,” (Springer, 2000). 13. M. Beck, M. G. Raymer, I. A. Walmsley, and V. Wong, “Chronocyclic tomography for measuring the amplitude and phase structure of optical pulses,” Opt. Lett. 18(23), 2041–2043 (1993). 14. L. Gallmann, D. H. Sutter, N. Matuschek, G. Steinmeyer, U. Keller, C. Iaconis, and I. A. Walmsley, “Characterization of sub-6-fs optical pulses with spectral phase interferometry for direct electric-field reconstruction,” Opt. Lett. 24(18), 1314–1316 (1999). 15. G. Berden, S. P. Jamison, A. M. MacLeod, W. A. Gillespie, B. Redlich, and A. F. 
van der Meer, “Electrooptic technique with improved time resolution for real-time, nondestructive, single-shot measurements of femtosecond electron bunch profiles,” Phys. Rev. Lett. 93(11), 114802 (2004).
Introduction
Lasers delivering ultrashort pulses nowadays play an increasingly important role in many research and technological fields. Material processing, high-resolution imaging and detection, investigation of complex molecular systems' dynamics, biomedical science and medicine are only a few examples where the interaction of femtosecond light pulses with different media is the main tool. These applications are meaningful only if one is able to perfectly characterize the laser pulses used in the experiment. Most of the parameters, such as the peak power, spectral bandwidth, chirp content or M2 factor, strongly depend on the temporal duration and shape of the pulse [1]. The femtosecond time scale is beyond the reach of standard electronic display instruments. Many different techniques, most of them based on second harmonic generation (SHG) correlations, have been extensively adopted for a partial or complete temporal characterization of ultrashort laser pulses [2][3][4][5][6][7][8][9][10][11]. Among them, techniques based on frequency-resolved optical gating (FROG and XFROG) or on spectral interferometry (SPIDER) allow for a detailed reconstruction of the pulse characteristics, both in amplitude and phase [12][13][14]. Beside these complex and expensive techniques, classical auto-correlation is still a valuable tool in many situations where only partial information, such as the pulse duration, is required. When the temporal pulse shape also has to be retrieved, cross-correlation techniques between the unknown pulse and a selected reference are usually implemented [15][16][17][18]. Since these methods rely on the detection of second harmonic (SH) generation, the requirement of critical phase-matching alignment, temperature tuning and the need to use very thin crystals make them far from user-friendly.
A few years ago, a novel auto-correlation scheme based on the detection of the transverse second harmonic signal generated by the overlap of two non-collinear laser beams in a nonlinear crystal with a random-sized domain distribution was proposed as a simple and effective single-shot technique for the partial temporal characterization of Gaussian pulses with durations from 30 fs up to a few hundred fs [19][20][21]. The particular two-dimensional (2D) distribution of the needle-like, oppositely oriented nonlinear domains in specific ferroelectric crystals creates a continuous set of reciprocal lattice vectors with different moduli and orientations within the plane perpendicular to the optical axis, providing phase matching for nonlinear frequency conversion over a very broad wavelength and angular range [22]. The SH is generated in a whole plane perpendicular to the input pulse propagation direction, including the transverse one, making possible the detection of the transversally generated second harmonic (TSHG) signal. This transverse autocorrelation (TAC) technique removes the demand for thin nonlinear crystals or critical angular and/or temperature tuning needed in traditional AC setups. Moreover, recording the auto-correlation trace from the top of the crystal (transverse direction) allows following the evolution of the AC trace all along the propagation distance within the crystal.
In this work we implement, for the first time to our knowledge, the transverse single-shot cross-correlation technique (TSCC) for the measurement of the duration and shape of ultrashort non-Gaussian laser pulses. This method combines the capability of typical cross-correlation methods, where the spatially resolved nonlinear signal generated by the overlap between a reference and an unknown pulse provides information on the temporal cross-correlation signal, with the advantages of the transverse detection of the cross-correlation trace provided by our TAC technique. Finally, we study and optimize the limitations imposed by the pulse duration.
Experimental configuration, results and discussions
Our experimental set-up is schematically shown in Fig. 1. The pulse to be measured (P) (from now on called the "unknown" pulse) is combined with a reference pulse (R) in a characteristic single-shot cross-correlation scheme (Fig. 1(a)). Both pulses overlap with an external angle of 2α within a Strontium Barium Niobate (SBN) crystal. The crystal shows as-grown random-sized nonlinear domains with an inverted sign of the 2nd order nonlinearity, oriented along the optical axis (z-axis in Fig. 1). The non-collinear interaction between the two pulses generates a SH signal which, due to the characteristics of the SBN crystal, is emitted in a whole plane (xy) perpendicular to the one containing both propagating beams and the optical axis z. The TSHG is schematically shown in Fig. 1(b). A CCD camera placed above the crystal records the spatially resolved TSH signal generated by the two pulses along the whole propagation distance within the crystal. A typical TSH trace recorded by the CCD camera is shown in the inset of Fig. 1(b). This signal represents the spatial intensity cross-correlation trace, I_CC(z), directly related to the temporal intensity cross-correlation I_CC(t). To retrieve the temporal cross-correlation trace from the corresponding spatially dependent one, a calibration factor, depending on the geometry of the setup, must be applied. This calibration can be properly obtained by measuring the auto-correlation of the reference pulse, as detailed in [22]. The cross-correlation signal intensity I_CC(t) between the reference pulse of intensity I_R(t) and the unknown pulse I_P(t) can be mathematically represented by their convolution, so the unknown pulse I_P(t) can be retrieved, assuming the reference temporal profile is known, using the relation:

I_P(t) = F⁻¹[ F{I_CC(t)} / F{I_R(t)} ],   (1)

where F and F⁻¹ denote the Fourier transform and inverse Fourier transform operators. In our experiment we have used pulses at 800 nm provided by a Ti:Sapphire laser working at 76 MHz, with a duration of 180 fs (at FWHM) and energies of 20 nJ/pulse. The pulse was divided into two identical replicas using a beam splitter. One of the replicas served as the reference pulse (R), while the second one was used to generate the "unknown" pulse. In this work, since we want to explore and prove the capabilities and resolution of our technique for temporal profile measurement, we have used a controllable "unknown" pulse. For this purpose we built a double-Gaussian pulse generator, based on a Michelson-type configuration, generating a controllable pulse profile from the overlap of two temporally delayed Gaussian pulses (A and B in Fig. 1(a)). The delay between the two pulses (T_sep) was carefully controlled by moving the mirror M1 with a high-precision motorized linear stage over a distance D_sep, with 0.1 μm step resolution.
The corresponding temporal delay is T_sep = 2 D_sep / c, where c is the speed of light. The reference and the double-peak unknown pulse were then overlapped inside a 5 mm long SBN crystal with an external intersection angle α = 25°. The TSHG signal at 400 nm was recorded using a Spiricon SP620U CCD camera with 230 px/mm resolution, placed above the crystal. Two half-wavelength plates were used to control the polarization of the beams incident onto the SBN to maximize the efficiency of the SH process (ee-e interaction). As a first step we have to calibrate the "unknown" pulse. Zero-delay calibration was obtained by scanning mirror M_1 until the CCD image of the TAC trace formed by the reference signal R and signal A overlapped completely with the CCD image of the TAC trace formed by signal R and signal B. This measurement served to characterize the reference beam and provided the calibration factor between the spatial and temporal domains by using relation (2), written here as the ratio T_AC/Δz_AC (as detailed in [21]), where Δz_AC is the FWHM width of the TAC trace sequence recorded by the CCD, while T_AC is the FWHM duration of the TAC temporal signal. The obtained space-time decoding factor was 19.4 fs/pixel (1 pixel equal to 19.4 fs). This value sets the resolution of our measurements in this particular configuration. Experimental measurements of the TAC trace corresponding to the reference pulse are shown in Fig. 2(a), where the TAC signal at position x = 2 mm (Fig. 2(b)) shows a reference pulse FWHM duration in intensity of T = 178 fs. The recorded images correspond to that shown in the inset of Fig. 1(b) and have been zoomed in the transverse direction in Figs. 2 and 3. Note that the apparent tilting of the traces is due to the fact that we used different scales in the x and z directions in the plots in order to better show the transverse dependence of the I_CC traces. The experimental cross-correlation trace between the R pulse and the P pulse for T_sep = 300 fs is shown in Fig. 2(c). The cross-correlation profile at the position x = 2 mm is plotted in Fig. 2(d), where the two-peaked asymmetric cross-correlation trace of the unknown pulse can be clearly seen. Using the same calibration factor obtained in the TAC measurement, one can directly retrieve the temporal cross-correlation profile, I_CC(t), from the spatially resolved measurement I_CC(z). Note that this technique makes it possible to observe the evolution of the cross-correlation trace (transverse profile in the z direction) along the crystal propagation distance (x direction). Since the incident pulses are affected by material dispersion during propagation inside the crystal, a dynamical evolution of the pulses can be detected and used to determine the pulse chirp as shown in [20]. This fact becomes relevant for pulses shorter than 100 fs, but it is not a limiting factor in our experiment where dispersion lengths are of the order of L_D = 25 mm or larger.
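As a rough numerical check of the calibration step, the sketch below (ours, not the authors') computes the Michelson delay T_sep = 2 D_sep/c and a space-time decoding factor from the TAC widths. It assumes our reading of relation (2) as the ratio T_AC/Δz_AC; the trace width of 13 pixels is not given in the text and is chosen here only so that the example reproduces the quoted 19.4 fs/pixel.

```python
C_UM_PER_FS = 0.2998   # speed of light, micrometres per femtosecond

def delay_from_mirror(d_sep_um):
    """Michelson arm: the extra path is travelled twice, so T_sep = 2*D_sep/c (fs)."""
    return 2.0 * d_sep_um / C_UM_PER_FS

def decoding_factor_fs_per_px(t_ac_fs, dz_ac_px):
    """Our reading of relation (2): ratio of the TAC temporal FWHM to the
    FWHM of the TAC trace recorded on the CCD (in pixels)."""
    return t_ac_fs / dz_ac_px

print(delay_from_mirror(45.0))                 # ~300 fs for a 45 um mirror displacement
# For a 178 fs Gaussian reference, the intensity autocorrelation FWHM is
# sqrt(2)*178 ~ 252 fs; a ~13 px wide recorded trace then gives ~19.4 fs/px.
print(decoding_factor_fs_per_px(252.0, 13.0))  # ~19.4
```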
The unknown temporal profile can be retrieved from the deconvolution between the I_CC(t) experimental data and the reference pulse, using Eq. (1). We have observed that direct use of the discrete experimental profile data leads to the appearance of oscillations in the retrieved signal profile. We have used the following procedure to obtain a smooth temporal profile: i) obtain a fit of the experimental temporal cross-correlation to get a smooth function I_CC(t); ii) obtain a fit of the reference pulse; and iii) use these fitting functions to retrieve the temporal pulse profile using Eq. (1).
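The deconvolution of Eq. (1) lends itself to a short numerical illustration. The Python/NumPy sketch below is not the authors' code; it simply applies the Fourier-domain division to synthetic data on a periodic time grid, with a small regularization term (our addition) to stabilize the division where the reference spectrum is weak.

```python
import numpy as np

def retrieve_pulse(i_cc, i_r_centered, eps=1e-6):
    """Fourier deconvolution of Eq. (1): I_P = F^-1[ F(I_CC) / F(I_R) ].
    A small eps (our addition) regularizes the division where the reference
    spectrum is close to zero."""
    R = np.fft.fft(i_r_centered)
    return np.real(np.fft.ifft(np.fft.fft(i_cc) * np.conj(R) / (np.abs(R) ** 2 + eps)))

# Synthetic example: 178 fs Gaussian reference, double-peak "unknown" pulse.
t = np.arange(-2000.0, 2000.0, 19.4)                 # fs grid, roughly one CCD pixel per sample
gauss = lambda tt, T: np.exp(-4.0 * np.log(2.0) * tt**2 / T**2)
i_r = gauss(t, 178.0)
i_p_true = gauss(t - 150.0, 175.0) + 0.8 * gauss(t + 150.0, 175.0)

# Build the cross-correlation as a circular convolution on the periodic grid,
# using the reference shifted so that its peak sits at index 0.
i_r_centered = np.fft.ifftshift(i_r)
i_cc = np.real(np.fft.ifft(np.fft.fft(i_p_true) * np.fft.fft(i_r_centered)))

i_p_ret = retrieve_pulse(i_cc, i_r_centered)
print(np.max(np.abs(i_p_ret - i_p_true)))            # small (~0 up to regularization): pulse recovered
```

On noisy experimental data the direct division amplifies high-frequency noise, which is why the text instead fits analytic functions before deconvolving.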
In principle, any fitting algorithm could be used to obtain a smoothed version of the experimental data. Here we use fitting functions for the cross-correlation data corresponding to a superposition of two Gaussian functions, of the form I_CC(t) = A_1 exp[−4 ln 2 (t − T_sepcc/2)²/T_cc²] + A_2 exp[−4 ln 2 (t + T_sepcc/2)²/T_cc²] (3), where T_sepcc is the pulse separation and T_cc the individual pulse duration at FWHM in intensity.
Figure 2(d) shows the fitted curve in red, where T_cc = 250 fs and the temporal separation T_sepcc = 299 fs. The reference pulse was fitted using a Gaussian function, I_R(t) = exp[−4 ln 2 t²/T_R²] (4), with T_R = 178 fs. Using these two functions and applying the deconvolution procedure, we have obtained the retrieved unknown temporal pulse shown in Fig. 2(e) (blue dots). From the corresponding fit (red line), using the same functional form as Eq. (3), we can directly determine the unknown pulse peak separation T_sepP = 299 fs and the temporal duration of each individual Gaussian pulse, T_P = 175 fs. These results are in very close agreement with the real values used in our experiment for the generation of the unknown pulse, T_sep and T.
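Because both the fitted cross-correlation peaks and the reference are Gaussian, the deconvolution has a closed form: convolved Gaussian FWHMs add in quadrature. The following check is ours rather than the paper's; it confirms that the fitted T_cc = 250 fs and T_R = 178 fs are consistent with the retrieved T_P = 175 fs.

```python
import math

T_cc = 250.0   # fitted cross-correlation peak FWHM (fs)
T_R = 178.0    # reference pulse FWHM (fs)

# Convolution of two Gaussians adds FWHMs in quadrature, so the deconvolved
# width is obtained by subtracting them in quadrature.
T_P = math.sqrt(T_cc**2 - T_R**2)
print(round(T_P, 1))   # 175.5 fs, consistent with the retrieved 175 fs
```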
An estimated error ε between the retrieved values and the values set for the generation of the unknown pulse was calculated using expression (5), giving, in this particular case, a value of 1.9%. We have generated several different double-peak pulses, setting T_sep to 200, 267, 367 and 1333 fs; the corresponding retrieved pulses are shown in Figs. 3(c1)-3(c4). The errors obtained for each case using Eq. (5) are 4.4%, 1.4%, 0.6% and 9.9% respectively, showing that the longer the pulse the larger the error, with all values remaining under 10%.
Once we have described the capabilities of the method to measure the transverse cross-correlation traces, in order to check our experimental results and to identify limitations of the technique, we numerically solved the nonlinear interaction (in the phase-matching regime) and propagation of the reference and unknown pulses through the SBN crystal to obtain the cross-correlation trace. For short pulses, the material dispersion acting during propagation along the crystal leads to a pulse lengthening, which affects the trace profile during propagation. This effect can be used to determine the chirp parameter of the pulse incident on the crystal, as has been demonstrated in [19]. However, for the purpose of this work, the pulses used were so long that no significant dispersion effects were expected. In the case of long pulses, the factor that cannot be overlooked in order to obtain a valid measurement is the effect of the beam size on the recorded trace.
To check the relevance of this effect in our particular configuration, we should consider the beam size relative to the pulse duration. In a previous work [21] we discussed two extreme situations: when the condition T_R/R_0 << tan(α)/c holds (R_0 is the spatial FWHM width in intensity; for our laser beam R_0 = 1 mm), I_CC provides a direct mapping of the temporal pulse shape, with no limitation imposed by the finite beam size. In the case T_R/R_0 >> tan(α)/c, the CCD-recorded trace sequence I_CC does not give a proper TAC because the beam size limits the overlapping region. Due to the duration of our pulses we need to consider these effects in our actual setup. We extended the TAC trace simulation method in [21] to obtain the TSCC trace recorded in our experiment. The reference and unknown pulses are written as Gaussian envelopes in the coordinate systems (x_1, z_1) and (x_2, z_2) oriented along the propagation direction of each one of the beams, where T_R and T_P are the FWHM durations in intensity measured for the reference and unknown pulses, T_sepP is the experimentally retrieved unknown pulse peak separation and u is the velocity of the pulse in the SBN crystal. By considering that the ee-e interaction is phase-matched in our crystal by the random domains, and after performing a change of variables to a common reference system (x, z), the recorded TSCC trace set can be simulated by Eq. (8). When T_P/R_0 << tan α/u, the spatial part can be safely approximated by 1 and I_CC is determined solely by I_temporal, which is the direct mapping of the temporal pulse shape. When T_P/R_0 >> tan α/u, the TSCC trace sequence I_CC is strongly affected by I_spatial. As can be seen from this expression, the influence of this term depends strongly on the incident angle of the overlapping beams.
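To see how close the present configuration is to the regime where I_spatial can be ignored, one can put numbers into the condition T_P/R_0 << tan α/u. The sketch below does this under two assumptions that are ours, not the paper's: a refractive index of about 2.3 for SBN near 800 nm, and the use of the internal (refracted) half-angle in the criterion.

```python
import math

n_sbn = 2.3                      # assumed refractive index of SBN near 800 nm
c = 3.0e8                        # m/s
alpha_ext = math.radians(25.0)   # external half-angle used in the experiment
alpha_int = math.asin(math.sin(alpha_ext) / n_sbn)   # refracted half-angle inside the crystal
u = c / n_sbn                    # pulse velocity approximated here by c/n

T_P = 180e-15                    # s, unknown-pulse FWHM
R_0 = 1e-3                       # m, beam FWHM

lhs = T_P / R_0                  # ~1.8e-10 s/m
rhs = math.tan(alpha_int) / u    # ~1.4e-9 s/m
print(lhs / rhs)                 # ~0.13: the "<<" condition holds only marginally,
                                 # so the spatial factor I_spatial cannot be neglected
```

The marginal value is consistent with the increase of the retrieval error for the longest pulses discussed below.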
In order to further analyze the effect of the incident angle and beam diameter on the TSCC trace, we simulated a particular case whose parameters are given together with the description of Fig. 4 below. The last column in Fig. 4 shows the trace at the position indicated by the dashed white line, of I_temporal (red line) and I_CC (blue dots), for each one of the situations. As can be seen in plot (a_4), a too small angle limits the capability to record the TSCC trace sequence properly, so the blue dots do not match the temporal CC signal given by the red line. For the case of plot (b_4), a too small beam radius leads to a narrow I_spatial component, which also leads to a failure in the TSCC reconstruction. As the pulses to be measured become longer, one should increase the incident angle and/or expand the beam diameter to get an error-free TSCC trace set. The plot in (c_4) shows that if the conditions of beam radius and incidence angle are adequate, one can properly record the correct CC trace.
The experimental values of beam radius and incidence angle were selected according to these considerations. The simulated TSCC traces obtained using the retrieved values of pulse duration, T_P, and peak separation of the unknown pulses, T_sepP, after the deconvolution process (with the values of incidence angle and beam radius used in the experiment) are shown in Fig. 3(d1-d4) respectively, showing very good agreement with the experimentally recorded TSCC traces of Fig. 3(a). The increased error with unknown pulse duration mentioned in a previous paragraph could be due to the fact that, by increasing the pulse duration, we are approaching the limit imposed by the spatial part of I_CC.
Conclusions
In conclusion, we have demonstrated that, using a nonlinear SBN crystal and the corresponding TSHG, transverse cross-correlation single-shot measurements can be performed to characterize the duration and temporal shape of ultrashort laser pulses with complex temporal profiles. Using this procedure, one can measure TSCC traces for pulses ranging from 30 fs up to 1 picosecond. We have measured several two-peaked Gaussian pulses over a broad temporal window and studied the role played by factors such as the incident angle or beam radius as error sources for the final resolution of this pulse measurement technique. We have shown that, if measurements are to be performed in the long pulse duration range, an increase of the incident angle and/or an expansion of the beam diameter is required. In this technique, the vertical emission of TSHG removes the requirement of a thin nonlinear crystal and enables one to measure the undistorted pulse at the entrance of the crystal. As an important characteristic, the property of automatic phase matching without angular alignment or temperature control makes the technique applicable to a very broad wavelength range and greatly simplifies operation.
Fig. 1. (a) Schematic representation of the TSCC setup; (b) the unknown pulse is overlapped with the reference pulse and focused onto the SBN crystal with the intersection angle 2α; the vertically emitted TSHG signal from the top of the SBN crystal is detected by a CCD; the TSCC trace sequences at different x positions (x0, x1, …, xn) constitute the TSCC trace set.
Fig. 2. (a) The CCD-recorded TAC trace set along the 5 mm SBN crystal; (b) the TAC trace sequence at x = 2 mm; (c) the CCD-recorded TSCC trace set along the 5 mm SBN crystal; (d) I_CC(t), the TSCC trace sequence at x = 2 mm; (e) I_P(t), the retrieved unknown pulse sequence (blue dots), the Gaussian analytical fit (continuous red line), and the original "unknown" pulse (pink dots) plotted using the experimental values of T_sep and the pulse duration for each single pulse measured in Fig. 2(b).
Figures 3(a1)-3(a4) show the CCD-recorded TSCC trace sets along the SBN crystal for each pulse. The corresponding cross-correlation signals, selected at the x = 2 mm position, are shown in Figs. 3(b1)-3(b4) together with the fitted curves (red line). The retrieved unknown temporal pulses are shown in Figs. 3(c1)-3(c4).
I_CC as represented by Eq. (8) can be written as the product of two functions, one related to the spatial characteristics, I_spatial, and one given by the time integral of the temporal component of I_CC, I_temporal. The simulation used the parameters T_R = 180 fs, T_P = 180 fs, T_sepP = 300 fs and I_C/I_D = 0.8. The simulated I_temporal, I_spatial and I_CC traces for different incident angles and beam diameters are shown in Fig. 4. The first column, (a_1), (b_1) and (c_1), shows the temporal cross-correlation of the pulse with no spatial contribution for two different incidence angles, 12° (a_1) and 30° (b_1-c_1), evidencing the role played by the incidence angle. Temporal broadening is not observed during propagation due to the long pulses used in the simulation (notice that for such pulse durations the group velocity dispersion length is L_D = 190 mm for the cross-correlation signal). The second column, (a_2), (b_2) and (c_2), shows the effect of the spatial part (I_spatial) for three different situations: (a_2) α = 12°, R_0 = 0.8 mm; (b_2) α = 30°, R_0 = 0.3 mm; and (c_2) α = 30°, R_0 = 0.8 mm. The finite size of the beam leads to a strong reduction of the overlapping region, so the TSCC trace remains unchanged only in a particular region where I_spatial is approximately 1. The effect of the spatial part depends strongly on the beam diameter and incidence angle. The third column, (a_3), (b_3) and (c_3), shows the complete TSCC trace set, I_CC, given by Eq. (10). | 5,383 | 2016-09-19T00:00:00.000 | [
"Physics"
] |
Sequestration of Martian CO2 by mineral carbonation
Carbonation is the water-mediated replacement of silicate minerals, such as olivine, by carbonate, and is commonplace in the Earth’s crust. This reaction can remove significant quantities of CO2 from the atmosphere and store it over geological timescales. Here we present the first direct evidence for CO2 sequestration and storage on Mars by mineral carbonation. Electron beam imaging and analysis show that olivine and a plagioclase feldspar-rich mesostasis in the Lafayette meteorite have been replaced by carbonate. The susceptibility of olivine to replacement was enhanced by the presence of smectite veins along which CO2-rich fluids gained access to grain interiors. Lafayette was partially carbonated during the Amazonian, when liquid water was available intermittently and atmospheric CO2 concentrations were close to their present-day values. Earlier in Mars’ history, when the planet had a much thicker atmosphere and an active hydrosphere, carbonation is likely to have been an effective mechanism for sequestration of CO2.
The possible reasons for the depletion of Mars' early dense and likely CO2-rich atmosphere remain contentious [1][2][3][4][5][6]. On Earth, the replacement of olivine by carbonate, termed carbonation, is an effective way to sequester and store atmospheric CO2. For example, the Samail Peridotite in Oman annually binds 4 × 10^7 kg of CO2 via carbonation 7. The replacement of olivine by carbonate is exothermic, and hence once the activation energy barrier is overcome, and while CO2-rich fluids and olivine are freely available, the carbonation reaction can be self-perpetuating 8. On Mars, anhydrous silicate minerals, including olivine, are abundant throughout the crust 9. Secondary mineral assemblages consisting of carbonates and phyllosilicates have also been observed, exposed at the planet's surface, by orbiters and rovers [10][11][12][13][14][15], and studied directly where they occur in Martian meteorites (for example, the nakhlites) 16. Their presence is consistent with the interaction of liquid water with the crust, at least sporadically. During the first billion years of Mars' history, the atmosphere is believed to have been thicker than at present and CO2-rich, with up to 5 bars of pressure [1][2][3][4][5][6], and dry river valley networks and outflow channels [17][18][19] demonstrate the former presence of surface waters. As all the reactants for carbonation (that is, CO2, liquid water and olivine) were present on Mars, it has been suggested as a viable mechanism by which the planet lost its early CO2-rich atmosphere 20,21.
Here we seek evidence for carbonation by examination of Lafayette, a Martian meteorite that crystallized c. 1,300 Ma during the Amazonian epoch. Lafayette is an olivine clinopyroxenite that contains carbonate, which formed by aqueous activity within the outermost c. 30 m of the planet's crust 22. K-bearing phyllosilicates that are intergrown with the carbonate have been dated to 633 ± 23 Ma (ref. 23), thus temporally constraining the water-rock interaction. Using scanning electron microscopy (SEM), electron probe microanalysis (EPMA) and electron backscatter diffraction (EBSD), we studied a thin section of Lafayette (USNM 1505-5) and grains that had been mechanically separated from bulk samples of the meteorite (NHM 1959 755). The secondary minerals in Lafayette have been identified previously as ferroan saponite, Fe-rich smectite and siderite 24 (Supplementary Table S1). All three occur within veins that cross-cut olivine grains. Petrographic relationships demonstrate that the siderite formed by isovolumetric replacement of (001)-parallel olivine vein walls, and was itself later replaced by Fe-rich smectite. The carbonate has also partly replaced a plagioclase feldspar and apatite-rich mesostasis. Mass balance calculations show that these reactions required only the introduction of liquid water and CO2 into the region of the Martian crust from which Lafayette was derived, and that carbonation in one part of the crust may have been coupled with crystallization of the Fe-rich smectite in another. The small volume of siderite in Lafayette indicates that carbonation was limited during the Amazonian, but this reaction is likely to have been far more widespread within crustal rocks that were exposed to groundwater charged with CO2 from the thicker Noachian atmosphere.
Results
Mineralogy and petrography. Lafayette belongs to the nakhlite group of meteorites that were ejected from Mars at c. 10 Ma. It is composed mainly of augite (73.5 ± 6.7 vol%) and olivine (16.7 ± 5.7 vol%) with an interstitial groundmass (mesostasis) (9.8 ± 1.2 vol%) that is dominated by plagioclase feldspar with lesser apatite, titanomagnetite and Si-rich glass 24,26,27 (Supplementary Table S1). This nakhlite contains a suite of secondary minerals, principally siderite and smectite 24,27, that are identified as Martian in origin 24 (Supplementary Table S1). The siderite and smectite occur within olivine-hosted veins and form patches between augite and olivine grains. Lafayette has the highest abundance of secondary minerals among the nakhlites, occupying c. 1 vol% (ref. 28).
Olivine-hosted veins. EBSD mapping of olivine grains demonstrates that the axes of most of the veins lie parallel to (001)ol (Fig. 1a). Two vein types can be recognized by differences in their size and mineralogy: the narrow veins are 1-2 μm wide, have planar or finely serrated walls and contain a compact and very finely crystalline Mg-Fe silicate that has been identified as ferrous saponite 24 (Table 1; Supplementary Table S1). Many of the narrow veins extend only part way into olivine grains from intergranular boundaries or from intragranular fractures (Fig. 1a). These veins pass into lines of ferrous saponite inclusions whose faceted shape is defined by {111}ol (Fig. 1a). In contrast, the larger veins are up to 40 μm wide, cross-cut entire olivine grains and have coarsely serrated walls (Fig. 1a). Some veins originate from intragranular fractures, whereas others cross-cut the fractures and so clearly post-date them (Fig. 1a,b). These veins contain a 1-2 μm-wide axial strip of ferrous saponite that is flanked by bands of a fibrous Mg-Fe phyllosilicate up to 2 μm wide (Fig. 1b,c). This phyllosilicate has been previously interpreted to be a Fe-rich smectite 24 (or a smectite intergrown with serpentine 28) (Table 1). Siderite (Table 1) occurs only within those parts of veins that are wider than 4 μm, which correspond to the deepest notches (Fig. 1b). The walls of these notches lie parallel to the traces of {102}ol or {111}ol (Fig. 1b). Siderite has an irregular interface with the Fe-rich smectite (Fig. 1c), is also cross-cut by narrow smectite veins and occasionally also contains smectite 'sprays' and 'rosettes' 24,28. Where veins bifurcate, a wedge composed of olivine, siderite, Fe-rich smectite or any combination of these minerals occurs between them (Fig. 1c).
Mesostasis patches. Patches containing siderite, Fe-rich smectite and titanomagnetite are 100-150 μm in size (Fig. 1d) and are restricted in their occurrence to discrete millimetre-sized regions of Lafayette. These patches are comparable in size, shape and petrographic context to pristine areas of mesostasis (Fig. 1d). The mesostasis siderite is depleted in Mn and enriched in Ca relative to that in the olivine-hosted veins (Table 1). In all occurrences, the mesostasis siderite is enclosed and cross-cut by Fe-rich smectite (Fig. 1d).
Discussion
The olivine-hosted veins of ferrous saponite are interpreted to be the first products of water-rock interaction. The grain boundaries and intragranular fractures from which the veins have propagated must have served as conduits for the aqueous solutions. As the fine-scale serrations on vein walls are comparable in morphology to etch pits in olivine grains from the Nakhla (Martian) meteorite 29 and in naturally weathered terrestrial rocks 30, the veins are inferred to have formed by dissolution of olivine and concomitant precipitation of ferrous saponite (Fig. 2a,b; Supplementary Note 1; Supplementary Table S2). The veins formed by coalescence of lines of ferrous saponite inclusions beyond their tips, and they have probably exploited defects parallel to (001)ol, such as subgrain boundaries. This mechanism of vein formation is equivalent to 'centripetal' replacement of terrestrial olivine, and serrated vein walls are also a characteristic of this reaction 31. The presence of Na, Al, P, K and Ca in the ferrous saponite 24 (Supplementary Note 1) indicates that the cations were not sourced solely from the olivine, and are likely to have been derived from dissolution of mesostasis feldspar and apatite.
The formation of ferrous saponite veins was an important driver of subsequent olivine carbonation for two reasons. First, the vein walls served as conduits for CO2-rich fluids to gain access to grain interiors, and it has been demonstrated experimentally that partial serpentinisation of terrestrial olivine increases its susceptibility to carbonation 32. Secondly, the absence of siderite on grain boundaries that lie parallel to (010)ol and (100)ol (Fig. 1a) shows that replacement was crystallographically controlled and most effective on surfaces parallel to (001)ol (that is, the vein walls). This control on siderite formation by the crystal structure of olivine demonstrates that the carbonate has formed by replacement. Additional evidence for replacement is that siderite cross-cuts pre-existing fractures (Fig. 1a,b), and is intergrown with wedges of olivine between closely spaced veins (Fig. 1c). Siderite grew most rapidly parallel to [001]ol and the dissolution-reprecipitation front was guided by the olivine crystal structure to make the {102}/{111} notches (Fig. 2c). Such coarsely serrated olivine-carbonate interfaces are also diagnostic of terrestrial carbonation 7,32. The olivine-hosted siderite was subsequently replaced by the Fe-rich smectite 28 on a volume-for-volume basis. The dissolution-reprecipitation front extended uniformly inwards from the ferrous saponite-siderite interface so that only carbonate in the deepest notches remains (Figs 1b and 2d).
The patches of siderite and Fe-rich smectite between augite and olivine grains are also interpreted to have formed by isovolumetric replacement, first of the mesostasis minerals and second of siderite by Fe-rich smectite. The presence of mesostasis-derived elements in the ferrous saponite indicates that the apatite and plagioclase feldspar had undergone dissolution during early stages of water-rock interaction. However, two lines of evidence demonstrate that the siderite formed predominantly by replacement rather than by filling pores resulting from the congruent dissolution of the mesostasis: (i) nowhere in Lafayette has siderite been observed to cement fractures, despite the evidence that they were present before carbonation (Fig. 1a,b), and (ii) the mesostasis siderite is enriched in Ca and depleted in Mn relative to olivine-hosted siderite, which mirrors the compositions of the precursors (that is, olivine is the main source of Mn in Lafayette and mesostasis is the main source of Ca; see below). The mesostasis siderite was subsequently replaced by Fe-rich smectite (Fig. 1d), which is compositionally comparable to Fe-rich smectite in the olivine-hosted veins ( Table 1). Differences within Lafayette in the degree of replacement of mesostasis may reflect contrasts in original mineralogy that rendered some regions especially susceptible to carbonation (for example, greater volume of Si-rich glass or apatite). However, in the absence of evidence for significant millimetre-scale heterogeneities in the mineralogy of the mesostasis, it is more likely that the carbonating aqueous solutions were in contact with some parts of Lafayette for long periods of time. This is most likely because of the presence of localized regions of elevated permeability, for example, resulting from the partial dissolution of the mesostasis before or during ferrous saponite formation.
The conclusion that siderite and Fe-rich smectite both formed by isovolumetric replacement can be tested by calculating the exchange of elements during these reactions. These calculations assume that only water and CO2 were sourced from outside of Lafayette. Carbonation of olivine required import to the reaction site of Ca, Mn and CO2 if cations common to both minerals had been conserved (Supplementary Note 1 and Supplementary Table S3), the siderite being enriched in these elements relative to the precursor silicate (Table 1). However, crystallization of Ca-carbonates is often kinetically favoured over Mg-carbonates during replacement of terrestrial olivine owing to the weaker hydration of the Ca over the Mg ions 32. As carbonation of olivine would have enriched the parent fluid with respect to Mg and Fe, it may have been coupled with replacement of the mesostasis by siderite (Supplementary Note 1 and Supplementary Tables S3 and S4). These reactions do not balance precisely owing to a small deficit of Ca and Mn. The Mn may have been sourced from earlier replacement of olivine by ferrous saponite (Supplementary Note 1 and Supplementary Table S2), and the Ca could have come from congruent dissolution of the mesostasis. As siderite obtained a maximum of 53% (by mass) of its cations from olivine and 22% (by mass) from the mesostasis (Supplementary Tables S3 and S4), supply of elements from the dissolving primary minerals may have been insufficient to have supersaturated the interfacial solutions with respect to siderite. The more significant driver for carbonation is likely to have been an increase in the pH (from acidic to alkaline) and bicarbonate activity of the fluid films accompanying dissolution of olivine and the mesostasis. The carbonation reactions and subsequent replacement of the siderite by Fe-rich smectite may also have been coupled. All of the ions required for the Fe-rich smectite entered the bulk solution during carbonation of olivine and mesostasis, or were acquired from the siderite during its replacement (Supplementary Note 1 and Supplementary Tables S3-S6). Linked crystallization of siderite and Fe-rich smectite in Lafayette is expected because these complementary reactions are commonplace during the experimental and natural carbonation of olivine 7,33. Replacement of siderite by Fe-rich smectite would have enriched bulk solutions in Ca, Mn, Fe and CO2 that could have been available for further carbonation (Supplementary Note 1 and Supplementary Tables S5 and S6). Therefore, depending on the scale and interconnectivity of the aqueous system, siderite and Fe-rich smectite could have been crystallizing simultaneously within different parts of the nakhlite parent rock, with isovolumetric carbonation in one region stimulating crystallization of Fe-rich smectite in another. This suggestion is consistent with previous calculations of element exchange during aqueous alteration of Lafayette 24, which indicated that water/rock ratios were low and that most of the secondary mineral cations were derived locally from olivine and the mesostasis.
A previous model for formation of secondary minerals within the nakhlite meteorites 28 hypothesized that they were aqueously altered within a post-impact hydrothermal system, with the siderite cementing serrated fractures that had been opened by shock. The present study has demonstrated the coarsely serrated veins in Lafayette formed by crystallographically controlled carbonation of olivine, so there is no evidence (or requirement) for a genetic link between aqueous alteration of Lafayette and an impact event.
On Mars, carbonate minerals are potentially important sinks for CO2 with the ability to store the gas over geological timescales. The absence of pore-filling siderite in Lafayette shows that within the region of the Martian crust that this meteorite has sampled, carbon has been mineralized by replacement. Secondary minerals occupy 9 vol% of each Lafayette olivine grain 28, and as siderite once occupied two thirds of each vein (that is, the current volume of siderite plus Fe-rich smectite), Lafayette originally contained 1 vol% of olivine-hosted siderite, corresponding to storage of 15.88 kg of CO2 m⁻³.
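The quoted storage figure can be checked with simple stoichiometry. The sketch below is ours and assumes pure FeCO3 with a density of about 3,960 kg m⁻³; the measured siderite contains Ca, Mg and Mn, so the paper's 15.88 kg m⁻³ differs slightly from this pure end-member estimate.

```python
MM_FECO3 = 115.85      # g/mol, siderite (FeCO3)
MM_CO2 = 44.01         # g/mol
RHO_SIDERITE = 3960.0  # kg/m3, assumed density of the pure end-member

def co2_storage_kg_per_m3(carbonate_vol_fraction,
                          rho=RHO_SIDERITE, mm_carbonate=MM_FECO3):
    """kg of CO2 mineralised per m3 of rock for a given carbonate volume fraction."""
    mass_carbonate = carbonate_vol_fraction * rho   # kg of carbonate per m3 of rock
    return mass_carbonate * MM_CO2 / mm_carbonate

# 1 vol% siderite gives ~15 kg CO2 per m3 of rock, the same order as the
# 15.88 kg m-3 quoted for Lafayette's measured (Ca-Mg-Mn-bearing) siderite.
print(round(co2_storage_kg_per_m3(0.01), 1))
```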
Given the evidence for CO2 drawdown in the Amazonian, we hypothesize that carbonation was a major CO2 sink during the Noachian, with prevailing environmental conditions enhancing the effectiveness of this reaction (that is, higher atmospheric CO2 concentrations 1-6 and greater availability of liquid water [17][18][19]; Fig. 3). For example, complete carbonation of olivine and mesostasis in a Noachian Lafayette-type crust would store 312-513 kg CO2 m⁻³ (175-356 kg CO2 m⁻³ in olivine and 137-175 kg CO2 m⁻³ in the mesostasis). However, the ALH 84001 orthopyroxenite meteorite is currently our only physical sample of Noachian crust and, relative to Lafayette, its equivalent carbonation potential is much lower (that is, it has only minor quantities of olivine 34).
Methods
The thin section and the grains mechanically separated from bulk samples of the meteorite were embedded in resin and polished. Following carbon coating, backscattered electron (BSE) images were obtained using a Zeiss Sigma field-emission SEM operated at 20 kV/1 nA. The crystallographic orientation of secondary mineral veins within olivine grains was determined by EBSD using a FEI Quanta 200F field-emission SEM equipped with a TSL-EDAX EBSD system. EBSD mapping was undertaken following removal of the carbon coat and with the microscope operated at low vacuum (~50 Pa) and 20 kV. Maps were acquired at a rate of ~20 Kikuchi patterns per second and with a step size of ~0.1 μm. The orientations of poles to various planes are plotted in upper hemisphere stereographic pole figures. Kikuchi patterns could not be obtained from the siderite. | 3,843 | 2013-10-22T00:00:00.000 | [
"Environmental Science",
"Geology",
"Physics"
] |
The North Atlantic Coast Comprehensive Study and the US Army Corps of Engineers Sandy Recovery Program
The Disaster Relief Appropriation Act of 2013 (P.L. 113-2) recognized the need to comprehensively evaluate the existing and planned measures to reduce the flooding risk from tidally-influenced storm surges as well as other alternatives for areas at risk to future storm damages. The legislation directed the US Army Corps of Engineers to undertake a Comprehensive Study of the Sandy impacted areas in the North Atlantic Division (Maine to Virginia). This paper reviews the findings and outcomes of the NACCS and their application across the USACE’s Sandy Recovery Program.
Introduction
On January 29, 2013, the Disaster Relief Appropriations Act, 2013, Public Law 113-2 [1], was enacted to assist in the recovery in the aftermath of the hybrid cyclone-nor'easter known as Hurricane Sandy.The Act directed the Secretary of the Army to "…conduct a comprehensive study to address the flood risks of vulnerable coastal populations in areas that were affected by Hurricane Sandy within the boundaries of the North Atlantic Division of the Corps of Engineers…" (the region extending from Maine to Virginia).The study area included the 10 northeast States and the District of Columbia and focused on locations that were greatly impacted by Hurricane Sandy.
In responding to the legislated mandate, the purpose of the "North Atlantic Coast Comprehensive Study: Resilient Adaptation to Increasing Risk" [2] (NACCS) was to develop strategies accessible to all stakeholders that would facilitate preparations for future storms, climate change, and sea level change.This paper summarizes the findings and outcomes of the study and discusses how they are being implemented in the Sandy Recovery Program.
Hurricane Sandy
Hurricane Sandy was an extraordinary storm, resulting in significant damages in the coastal areas extending from Cape May, New Jersey to Montauk Point, New York and concentrated in the New York-New Jersey Harbor.Peak water levels indicate that Hurricane Sandy was at least greater than a 200 year event, greatly exceeding project design levels.This resulted in damages throughout the New York City metropolitan area.Beyond the New York Bight, including New Jersey, along the north shore of Long Island, NY, Connecticut, Rhode Island, southern Massachusetts, and the Atlantic coasts of Delaware and Maryland, storm tides, although still significant, were considerably lower, typically a 20 to 30 year event.Farther away, in Massachusetts north of Cape Cod, New Hampshire, and Maine to the north and the Chesapeake Bay coastline of Maryland and Virginia to the south, Hurricane Sandy was less than a 10 year event [3].
The Congressional response to the devastation in the wake of Hurricane Sandy represented an effort to address the needs of the regional system and vulnerable populations at risk in coastal regions in the U.S. Army Corps of Engineers (USACE) North Atlantic Division. The series of high-magnitude, devastating storm events (Hurricanes Katrina and Rita in 2005, Hurricane Irene in 2011, and Hurricane Sandy in 2012), as well as the trend toward sea level change as a probable future condition, underscored the need to comprehensively evaluate the existing and planned measures to reduce the flooding risk from tidally influenced storm surges as well as other alternatives for areas at risk to future storm damages.
Foundation of the NACCS
The Comprehensive Study is based on the "Infrastructure Systems Rebuilding Principles" advanced by the National Oceanic and Atmospheric Administration (NOAA) and the USACE [4]. The purpose of the Rebuilding Principles was to improve the long-term performance of coastal rebuilding and restoration actions undertaken through the Infrastructure Systems Recovery Support Functions under the National Disaster Recovery Framework following Hurricane Sandy by implementing Executive Order 11988 and a consistent set of principles on a regional scale that anticipate a changing environment; integrate economic, social, and environmental resiliency and sustainability; and promote long-term community protection. The three Principles are: 1) work together in a collaborative manner across multiple scales of governance (i.e., local, State, Tribal, and Federal) and with relevant entities outside the government to develop long-term strategies that promote public safety, protect and restore natural resources and functions of the coast, and enhance coastal resilience; 2) improve coastal resilience by pursuing a systems approach that incorporates natural, social, and built systems as a whole; and 3) promote increased recognition and awareness of risks and consequences among decision makers, stakeholders, and the public. These Principles built on lessons learned from Hurricane Katrina and other major storm events, including Sandy.
The Hurricane Sandy Performance Evaluation Study [6], another requirement of the Sandy legislation, provided specific recommendations which were also foundational to the NACCS. The report assessed the performance of constructed coastal storm risk reduction projects during Sandy to determine if they had reduced damages as intended. The Evaluation Study concluded that delivery of more comprehensive protection to affected coastal areas would require a broader approach to the investigation and planning of flood and coastal storm damage reduction projects that includes consideration of potential flooding of back-bay reaches of barrier islands, among other concerns. It also found that communities can differ in their valuation of coastal environments and that reconciling those differences can be challenging. The Evaluation Study recommended that the efficacy of natural and engineered dunes in reducing risks of coastal storm damages be evaluated. Finally, the report recommended that a broader range of project benefits, including resilience and recovery, be considered to more accurately evaluate the impacts of extreme storm events. The NACCS Study area was defined by the very high and high impact areas. Following Sandy, Federal, State, and local government agencies and NGOs initiated a major response and recovery effort to repair, replace, and restore homes, industry, and critical infrastructure under the National Disaster Recovery Framework. This effort, which culminated in the Hurricane Sandy Rebuilding Strategy [6], has changed the physical and cultural landscape of the impacted areas and has heightened social and political awareness of the potential impacts from future storms.
To more clearly articulate the universe of measures and how USACE would use them to manage coastal storm risk, the USACE published "Coastal Risk Reduction and Resilience: Using the Full Array of Measures" [7].This report introduced the term "Natural and Nature-Based Features" (NNBF) to refer to the universe of natural features, created and evolving over time through the actions of physical, biological, geologic, and chemical processes operating in nature (Figure 1).Nature-based features are those that may mimic characteristics of natural features but are created by human design, engineering, and construction to provide specific services such as coastal risk reduction.Scientific research to better understand the role of natural landscapes nature-based features and natural processes in the context of coastal and fluvial flood risk has and continues to be undertaken internationally [8][9][10][11].The USACE paper advocated an integrated approach to risk reduction through the incorporation of natural and nature-based features in addition to nonstructural and structural measures that also improve social, economic, and ecosystem resilience.
The North Atlantic Coast Comprehensive Study
The goals of the Comprehensive Study were to (1) provide risk reduction strategies to vulnerable coastal populations, and (2) promote coastal resilient communities to ensure a sustainable and robust coastal landscape system, considering future sea level rise and climate change scenarios, to reduce risk to vulnerable populations, property, ecosystems, and infrastructure. The study was to evaluate flood risks and identify areas warranting additional analysis, as well as the institutional and other barriers to providing protection. The final report of the Comprehensive Study (NACCS) was submitted by the Assistant Secretary of the Army for Civil Works to the US Congress on 28 January 2015.
Rising sea levels and climate change are expected to have a profound effect on the coastal region in the study area.Impacts will likely include shoreline retreat from erosion and inundation, increased frequency and magnitude of storm-related flooding, temperature changes, and saltwater intrusion into the estuaries and aquifers.Relative sea level rise will not only inundate larger coastal areas, but will also be a driver of change in habitat and species distribution, as will other effects of climate changes such as increased sea surface temperatures.Additionally, the presence of developed shorelines behind many of these habitats will prevent natural barrier island overwash and migration landward in response to relative sea level rise.Habitat changes may be structural or functional; species that depend on coastal habitats for feeding, nesting, spawning, protection, and other activities could be severely impacted if this critical habitat is converted or lost.Additional services provided by coastal habitats would also be affected.
The NACCS addresses sea level change in accordance with an internal guidance document on Sea Level Change with applicable to all coastlines within the United States [12].In the case of the NACCS, relative sea levels are rising throughout the entire study area.USACE guidance specifies a method for developing relative sea level change (RSLC) scenarios to be used in developing the range of plausible future conditions in the planning process.In addition, NOAA recently recommended its own set of sea level change scenarios in a report entitled Global Sea Level Rise Scenarios for the US National Climate Assessment [13].The NACCS considered scenarios from both documents.USACE guidance also specifies a risk-based framework for evaluation of RSLC impacts to projects in the presence of other forces (in this case erosion, storm surge, riverine flooding events, etc.).
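For illustration only, the eustatic portion of the USACE sea level change scenarios is commonly described as a modified NRC quadratic curve; the coefficients in the sketch below are widely cited values included here as an assumption, not quoted from the NACCS itself, and should be checked against the current guidance before any real application.

```python
def eustatic_slc_m(years_since_1992, b):
    """Modified NRC curve of the form E(t) = 0.0017*t + b*t**2 (metres),
    with t in years after 1992; b selects the scenario."""
    t = years_since_1992
    return 0.0017 * t + b * t**2

B_INTERMEDIATE = 2.71e-5   # illustrative value for the intermediate (NRC Curve I) scenario
B_HIGH = 1.13e-4           # illustrative value for the high (NRC Curve III) scenario

def relative_slc_m(years_since_1992, b, local_rate_m_per_yr=0.0):
    """Relative SLC adds the local contribution (e.g. land subsidence expressed as an
    equivalent rate of relative rise) to the eustatic curve."""
    return eustatic_slc_m(years_since_1992, b) + local_rate_m_per_yr * years_since_1992

# Example: projected rise from 1992 to 2100, with ~1.5 mm/yr of local subsidence.
t = 2100 - 1992
print(relative_slc_m(t, B_INTERMEDIATE, 0.0015))   # ~0.66 m
print(relative_slc_m(t, B_HIGH, 0.0015))           # ~1.66 m
```

The spread between scenarios illustrates why the guidance requires projects to be evaluated across a range of plausible futures rather than a single projection.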
NACCS Findings and Outcomes
The Comprehensive Study identified and evaluated coastal risks and conditions of ten states, from New Hampshire to Virginia, and the District of Columbia.Across this region, and in many other coastal settings, communities face tough choices as they prepare for changing conditions, including potentially devastating coastal storms.A central NACCS finding is that a more comprehensive protection can only be realized when individuals and government agencies at non-federal and Federal levels collectively recognize, understand, and act to manage and effectively reduce risks attributed to threats posed by flooding and coastal storms.Managing coastal storm risk as a shared responsibility by all levels of government and individual property owners, requires that diverse perspectives be addressed and balanced.Adapting to risk and considering combinations of solutions across agencies and partners is key to being ready for the next big storm.
Another major finding is that risk management and resilience are enhanced when the full array of coastal storm risk management measures is evaluated as part of an integrated plan. Figure 3 illustrates the measures discussed in the NACCS, which include:
• Structural and NNBF
• Non-structural
• Policy and programmatic elements, and
• Blended solutions, which are particularly key for resilience and adaptation planning over time.
The Coastal Storm Risk Management Framework
One of the major outcomes of the NACCS is the Coastal Storm Risk Management Framework (Figure 4). The Framework looks at vulnerability across the Study reaches and identifies measures that could be used to manage risk. The Study does not make specific project recommendations, but illustrates a systems approach and how it can be applied using the Framework. The Framework was developed to provide regional partners with a methodology that they can adjust to meet their needs and values within their specific communities. The NACCS Framework offers a common, science-based decision framework for the integration of coastal investments and wise coastal zone planning.
It is scalable and customizable for any coastal watershed. Managing coastal flood risk is complex. There are economic, social, and environmental factors to consider, layers of governments involved, and dozens of ways to reduce risk, from using man-made features like levees and seawalls to using natural features like salt marshes and maritime forests. Because every location is different, there is no one fixed solution set. Having a methodology that public and private interests can follow together to assess risk and identify solutions is offered as a primary tool in achieving the integration of all levels of government and partners. The Framework is being used in the studies that followed the NACCS in the Northeast and other US coastal regions.
The Coastal Storm Risk Management Framework includes evaluations of strategies in response to increased risk from coastal storms and sea level rise.Subsequent analyses at a community specific scale can be undertaken to incorporate climate change adaptation and projected future vulnerabilities.
Complex interactions between alluvial and tidally influenced tributaries will change. The combination of extreme water levels and sea level change (some areas of the NACCS study area will likely experience variations in the effects of sea level change due to relative effects of land and tidal processes) will vary across the study area. Furthermore, the coastal landscape responses will vary across the study area because of the myriad of geomorphological and land use characteristics. Flood frequency, erosion and sedimentation, and environmental responses will depend on site and regional characteristics. Thus, subsequent analyses at a community-specific scale must consider the various components of long-term climate change adaptation and the various strategies and corresponding measures for projected vulnerabilities. This approach will allow solutions to be tailored to local conditions. Supporting the Framework are other technical products and tools, including storm suite modeling, coastal GIS analysis, and economic depth-damage related evaluations for the affected coastlines. The Framework and tools stemming from the Comprehensive Study are portable and can be adapted for use in other coastal regions.
The USACE Sandy Recovery Program
The USACE Sandy Recovery Program has made significant progress in restoring the coastal risk reduction projects that were damaged by Sandy. The Flood Control and Coastal Emergencies (FCCE) program has restored 25 projects that were constructed at the time of Sandy and performed as designed; restoration to their initial design templates was needed because considerable beach nourishment material was lost during the storm, causing significant shoaling in the region's navigation system. The Sandy Recovery Program has also restored 86 channels, providing safe navigation to deep-water ports, intra-coastal waterways, and harbors throughout the northeastern states.
At the time that Sandy occurred there were 19 projects that were authorized for construction, but which, for a variety of reasons, had not been fully completed.The largest and most complex of these, Fire Island to Montauk Point, NY (83 miles of coastal Long Island) and Rockaway-Jamaica Bay, NY (in New York City) are being reformulated using the findings and outcomes of the NACCS, as are the 16 studies that were underway when Sandy made landfall.
The NACCS identified 9 highly vulnerable coastal areas, termed Focus Areas, that warranted additional research as they had neither projects nor studies underway when Sandy occurred. These Focus Areas are shown in Figure 5.
Figure 1.Natural and Nature-Based Features
Figure 3. The Full Array of Coastal Storm Risk Management Measures
The Focus Areas fall into two groups: large urban centers (New York-New Jersey Harbor and Tributaries, New York and New Jersey; City of Baltimore, Maryland; Metropolitan Washington, District of Columbia; and City of Norfolk, Virginia) and embayment areas west of the Atlantic coast (Coastal Rhode Island; Coastal Connecticut; Nassau County Back Bays, New York; New Jersey Back Bays, New Jersey; and Delaware Inland Bays and Delaware Bay Coast, Delaware). As of May 2016, studies have been initiated for the City of Norfolk and New Jersey Back Bays, and initial planning steps are being taken to develop comprehensive, resilient strategies for these vulnerable locations.
Figure 5.The North Atlantic Coast Focus Areas | 3,448.2 | 2016-01-01T00:00:00.000 | [
"Environmental Science",
"Engineering"
] |
COMPETITIVE ADVANTAGE IN MEDIATING THE EFFECT OF FINANCIAL FLEXIBILITY ON FINANCIAL PERFORMANCE: INDONESIAN SHARIA STOCK INDEX
Resources are considered a key driver of organizational performance. However, the means to improve performance through resources remains an issue that is yet to be conclusively addressed in both theoretical and practical terms. This study aims to examine the roles of competitive advantage and Islamic Label (IL) in the relationship between financial flexibility and the performance of companies listed on the Indonesian Sharia Stock Index (ISSI). Using a longitudinal approach covering the period 2012–2021 and observing 88 companies, a total of 880 observations were obtained for this study. The statistical technique used for the analysis was variance-based structural equation modeling utilizing partial least squares with the statistical tool of WarpPLS. The study revealed that competitive advantage is able to partially mediate the impact of financial flexibility on firm performance. Proxies of competitive advantage, such as receivable turnover and financial leverage, were found to be significant to all performance proxies, namely ROA, ROE, and Tobin's Q. However, IL did not significantly enhance the link between financial flexibility and performance. The mediating impact of competitive advantage on the relationship between financial flexibility and performance indicates that improving performance through financial flexibility is indirect. Thus, contingency factors should be given due consideration in enhancing resource-based performance. For Islamic label companies, competitive advantage can be built to enhance performance by optimizing resources.
INTRODUCTION
Companies that possess sufficient financial flexibility have been shown to be able to respond to opportunities, expand investment, overcome unexpected events in the future (Cherkasova and Kuzmin, 2018;Ma & Jin, 2016), withstand cash flow shocks from negative external influences (Gamba & Triantis, 2008;Bancel & Mittoo, 2011), and their side effects.Financial flexibility can be a source of gaining a competitive advantage.Companies are considered to have a competitive advantage if they can generate excess returns from their resources (Gjerde et al. 2010).In general, competitive advantage can be measured using two approaches: the resources-based (traditional) and non-resources (market/industry-based) approaches, or an expanded approach (Dickinson and Sommers, 2011;Porter, 2012).The resources-based view approach includes proxies for economies of scale, product differentiation, innovation, and capital requirements, among others, for companies to achieve good performance.
For companies that operate based on Islamic principles, obtaining an Islamic label can be beneficial in increasing their performance in their market segment. The Islamic label requires the company's operations and finances to comply with a set of Islamic rules and principles (Guizani, 2019). One of these principles is the prohibition of interest (usury), which applies to all types of systems, whether fixed or floating, simple or compounding, or nominal or excessive (Naz et al. 2017). As such, Islamic finance prohibits the attraction of excessive amounts of debt (El-Alaoui et al. 2018) and instead advocates for low debt levels to avoid excessive uncertainty or gharar (El Alaoui et al. 2017). Previous research suggests that debt levels should range between 30-40% of total equity (Derigs & Marzban, 2008). Alternatively, other literature suggests that total outstanding debt may not exceed one-third of market capitalization (Elgari, 2002; Hussein & Omran, 2005) or of total assets (El-Alaoui et al. 2018). In Indonesia, the Fatwa of the National Sharia Council No. 40/DSN-MUI/X/2003 stipulates that the maximum total debt to total assets ratio is 45%.
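The leverage screen described above can be expressed as a simple check. The sketch below encodes only the 45% debt-to-assets threshold from the DSN-MUI fatwa cited in the text; actual ISSI screening also involves business-activity and income-purity criteria that are not modelled here, and the threshold is left as a parameter so the alternative bases mentioned above (for example, one-third of market capitalisation) could be substituted.

```python
def passes_issi_leverage_screen(total_debt, total_assets, max_ratio=0.45):
    """Leverage screen following the DSN-MUI Fatwa No. 40/DSN-MUI/X/2003 threshold:
    total debt must not exceed 45% of total assets. The threshold and base are
    parameters so other screening rules could be swapped in."""
    return total_debt / total_assets <= max_ratio

# Example: a firm with 400 of debt against 1,000 of total assets passes the 45% screen.
print(passes_issi_leverage_screen(400.0, 1000.0))   # True
```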
Enforcement of leverage ratio requirements on IL has implications for several things.First, the capital structure of IL differs from non-IL companies (Yildirim et al. 2018).Specifically, IL's capital structure tends to be lower or more conservative than non-IL companies (Alnori and Alqahtani, 2019;Kutan et al. 2018;Cheong, 2021).According to the financial flexibility hypothesis, a conservative capital structure can help companies become financially flexible (Marchica and Mura, 2010) by maintaining unused borrowing power (Modigliani & Miller, 1963).Therefore, unused debt capacity is closely related to and seen as the main source of financial flexibility (Hess & Immenkötter, 2014;DeAngelo et al. 2011;Denis & McKeon, 2012), and companies with a conservative capital structure tend to have high financial flexibility (Xie and Zhao, 2020).Second, since IL companies rely on liquid assets, such as cash reserves, for funding to meet financial and operational needs (Alnori and Alqahtani, 2019), they tend to accumulate cash (Alnori and Alqahtani, 2019;Bugshan et al. 2021).This is supported by empirical studies showing that IL companies have a higher level of cash holdings than non-IL companies (Akguc & Al Rahahleh, 2018;Alnori and Alqahtani, 2019;Bugshan et al. 2021;Guizani & Abdalkrim, 2021).Therefore, it can be concluded that companies with a high level of religiosity tend to have significant cash (Chen et al. 2016).
Previous research has shown inconclusive findings regarding the effects of financial flexibility on performance. Several studies have found evidence of a positive impact on performance (Rapp et al. 2014; Ma & Jin, 2016; Al-Slehat, 2019; Chang & Ma, 2019; Embaye & Haile, 2019; Teng et al. 2021). In contrast, other studies have confirmed a negative influence (Agha & Faff, 2014; Dong and Mao, 2016) and a U-shaped relationship (Kusnadi, 2011; Arslan-Ayaydin et al. 2014), indicating an interval effect on performance. Most research on financial flexibility and performance has focused on conventional companies, both globally and in developing countries. Few studies have analyzed the relationship between financial flexibility and the performance of Islamic Label (IL) companies in developing countries, especially Indonesia. In general, IL companies operate within a very strict regulatory framework for financial ratios, one of which is the leverage ratio (Musse et al. 2021). Such conditions result in limited or reduced external funding access for IL (Akinsomi et al. 2015; Alnori and Alqahtani, 2019), leading to a more conservative capital structure compared to non-IL companies (Kutan et al. 2018; Cheong, 2021). Previous empirical research has shown that companies with high financial flexibility tend to have a conservative capital structure (Xie and Zhao, 2020). Given the conservatism of IL capital structures and the conservative financial implications of financial flexibility, exploring IL is therefore worthwhile, in line with the postulates of RBV theory, which states that the profitability of a company is determined by its competitive advantage (Grant, 1991). Consistent with this, there is a positive and significant relationship between competitive advantage and performance (Newbert, 2008; López-Gamero et al. 2009; Zhou et al. 2009; Sungyuan and Ussahawanitchakit, 2015).
Previous literature shows that the importance of being a financially flexible company comes from the notion that financial flexibility is a dimension of intangible assets (Kuo et al. 2006).Financial flexibility, as an intangible resource, provides companies with the ability to cope with unexpected events in the future (Denis and McKeon, 2009;Arslan-Ayaydin et al. 2014;Ma & Jin, 2016;Cherkasova and Kuzmin, 2018), potentially leading to the development of competitive advantages (Yi, 2020).These empirical findings are consistent with previous studies that demonstrate financial flexibility's potential to create competitive advantages (Chegini & Bashiri, 2017).
Previous empirical research has indicated that the usury principle in an IL capital structure differentiates it from that of non-IL structures (Yildirim et al. 2018) and affects its balance sheet (Adamsson et al. 2014). In comparison to non-IL structures (Kutan et al. 2018; Cheong, 2021), the capital structure of IL is more conservative or relatively lower (Alnori and Alqahtani, 2019). Financially conservative companies have been shown to be more profitable and to perform better than non-conservative companies in previous research (Machokoto et al. 2021). Therefore, this study predicts that the conservatism in the IL capital structure would result in higher performance compared to non-IL structures. Based on the available literature, the following hypotheses have been developed:
H1: Financial flexibility has a significant impact on performance.
H2: Financial flexibility has a significant impact on competitive advantage.
H3: Competitive advantage has a significant effect on performance.
H4: Competitive advantage mediates the relationship between financial flexibility and performance.
H5: Islamic Label moderates the effect of financial flexibility on performance.
The variables in this study include financial flexibility as an exogenous variable, performance as an endogenous variable, competitive advantage as a mediator, and the Islamic Label as a moderator. Studying Islamic companies, especially in Indonesia, can broaden our understanding of organizational capabilities in improving performance.
Most of the studies analyzed the impact of financial flexibility on financial constraints. However, a limited number of studies (Kuo et al. 2006) have investigated the effect of financial flexibility from the perspective of the resource-based view (RBV) and organizational behavioral theory. To obtain more valid results on the indirect relationship between financial flexibility and performance, this study also investigates whether the Islamic label (IL) of the sample companies can contribute to the relationship model. Therefore, the aims of this study are to (1) explore the direct relationship between financial flexibility and performance, (2) investigate the mediating effect of competitive advantage on the relationship between financial flexibility and performance, and (3) analyze the interaction effect of the Islamic label (IL) on the direct relationship between financial flexibility and performance, thus filling the identified research gap.
METHODS
The sample for this research included all companies continuously listed on the Indonesian Sharia Stock Index (ISSI) during the period 2012 to 2021. Data sources were the Indonesian Capital Market Directory (ICMD), financial and annual reports, company performance summaries, and the IDX Facts Book. Out of 624 companies, 110 were continuously listed on the ISSI. After eliminating incomplete data and financial reports denominated in foreign currency, 88 companies met the research sample criteria. Multiplying the number of companies by the 10-year observation period gives a total of 880 observations.
Existing literature suggests that the financial flexibility of a company is one of the most important organizational capabilities or internal capacities (Guo et al. 2020). It has been confirmed that financial flexibility has a positive impact on performance (Rapp et al. 2014; Ma & Jin, 2016; Al-Slehat, 2019; Chang & Ma, 2019; Embaye & Haile, 2019; Teng et al. 2021). These empirical results are consistent with the RBV perspective, which supports the relationship between internal resources and performance (Miller & Shamsie, 1996), and with the view that competitive advantage links internal resources to performance. The analysis technique used was variance-based structural equation modeling using partial least squares, with WarpPLS as the statistical tool.
RESULTS
The descriptive statistics of the exogenous and endogenous variables are presented in Table 1.
The measurement specifications for each variable are as follows. Financial flexibility was measured using three proxies adopted from Teng et al. (2021) and Zhang et al. (2022), namely Unused Debt Capacity (UDC), Debt Flexibility + Cash Flexibility (DCF), and the Retained Earnings to Total Assets ratio (R/E to TA); the indicators are therefore not unidimensional. Performance was proxied by three indicators, namely ROA (Teng et al. 2021), ROE (Teng et al. 2021), and Tobin's Q (Arslan-Ayaydin et al. 2014). Competitive advantage was measured by two proxies from the non-resources approach adopted from Dickinson and Sommers (2012), namely Inverse Receivable Turnover (Inverse RTO) and Financial Leverage. The Islamic Label (IL) variable was measured following Machokoto et al. (2021), whose study demonstrated IL's financial characteristics, including financial conservatism captured by Non-Positive Net Debt (NPND). Lastly, control variables were also examined; following previous research, they are proxied by size (Zhang et al. 2022) and sales growth (Teng et al. 2021; Zhang et al. 2022).
The testing for validity was based on the loading factor and Average Variance Extracted (AVE) for each construct. The loading factor value was used as the basis for the validity of each construct and was set at >0.6, while the threshold for AVE was set at >0.5. Table 2 shows that the constructs or variables measured meet the required criteria based on the loading factor and AVE. Therefore, the correlation between indicators and their constructs, or latent variables, confirms convergent validity.
Additionally, internal consistency reliability was evaluated using the composite reliability coefficient for each construct. Table 4 shows that the composite reliability coefficient of each variable or construct meets the internal consistency reliability criterion set at >0.7. Therefore, all constructs can be declared reliable. Based on the validity and reliability testing, the overall indicators and constructs of this study are valid and reliable.
Moreover, discriminant validity was tested using the Fornell-Larcker criterion. Table 3 shows that the square root of the AVE for each construct is higher than the correlation value between that construct and the other constructs, confirming discriminant validity. To examine the hypotheses, the next step is to test the significance of the influence of the exogenous variables on the endogenous variables, as well as the strength of the relationships among the variables. The results are shown in Figure 2 and Table 5.
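As a side illustration of the checks described above, the sketch below computes AVE, composite reliability and a Fornell-Larcker comparison from a set of standardized loadings. All loading and correlation values in it are invented for demonstration; they are not the values reported in Tables 2-4, and WarpPLS computes these quantities internally.

```python
# Measurement-model checks: AVE, composite reliability (CR), Fornell-Larcker.
# Loadings and correlations below are hypothetical, for illustration only.
import numpy as np

def ave(loadings):
    """Average Variance Extracted: mean of squared standardized loadings."""
    lam = np.asarray(loadings, dtype=float)
    return np.mean(lam ** 2)

def composite_reliability(loadings):
    """CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances)."""
    lam = np.asarray(loadings, dtype=float)
    return lam.sum() ** 2 / (lam.sum() ** 2 + np.sum(1.0 - lam ** 2))

# Example: three FF indicators (UDC, DCF, R/E to TA) with assumed loadings.
ff_loadings = [0.78, 0.82, 0.71]
print("AVE(FF) =", round(ave(ff_loadings), 3))                      # should exceed 0.5
print("CR(FF)  =", round(composite_reliability(ff_loadings), 3))    # should exceed 0.7

# Fornell-Larcker check: sqrt(AVE) of each construct must exceed its
# correlations with the other constructs (assumed correlation matrix).
constructs = ["FF", "CA", "Perf"]
ave_values = np.array([ave(ff_loadings), 0.61, 0.66])   # CA and Perf AVEs assumed
corr = np.array([[1.00, -0.29, 0.18],
                 [-0.29, 1.00, -0.21],
                 [0.18, -0.21, 1.00]])
sqrt_ave = np.sqrt(ave_values)
for i, name in enumerate(constructs):
    others = np.delete(np.abs(corr[i]), i)
    print(name, "discriminant validity:", bool(sqrt_ave[i] > others.max()))
```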
The findings of this study indicate that financial flexibility (FF) positively affects performance (Perf), with a path coefficient value of 0.179, and is significant with a p-value of 0.000 < 0.05. Therefore, the first hypothesis is accepted, demonstrating that the FF possessed by the sample companies can enhance performance. The results support earlier empirical studies that suggest FF has a significant impact on performance (Al-Slehat, 2019; Chang & Ma, 2019; Embaye & Haile, 2019; Teng et al. 2021).
Theoretically, the outcomes align with Resource-based View (RBV) theory, which posits that resources play a significant role in performance (Kweh et al. 2013). Additionally, the results correspond with the FF hypothesis, which states that low leverage attributes, or capital structure conservatism, can contribute to a company's financial flexibility (Marchica and Mura, 2010).
The research findings indicate that financial flexibility (FF) has a significant and negative impact on competitive advantage (CA), with a path coefficient value of -0.286, a p-value of 0.001, and explained variability (R²) of 8.2 percent. Consequently, the second hypothesis was supported. The study further reveals that all proxies of financial flexibility (UDC, DCF, and the Retained Earnings to Total Assets ratio) have a negative impact on competitive advantage (CA), which is represented by the inverse of receivables turnover and financial leverage. The negative influence of FF on the inverse of receivables turnover implies that higher FF corresponds to higher accounts receivable turnover, that is, shorter credit terms granted to clients. For that reason, the favorable effect of FF on accounts receivable turnover indicates that the FF possessed by the sample companies can be utilized to establish competitive advantage, visible in a rise of bargaining power towards buyers. The higher level of accounts receivable turnover shortens the cash conversion cycle, enhancing the efficiency of cash flow management and leading to increased profitability, consistent with the theoretical perspective of the cash conversion cycle (Deloof, 2003; Eljelly, 2004). The results corroborate the view that financial flexibility constitutes a component of intangible assets (Kuo et al. 2006), which can be utilized to establish competitive advantage (Yi, 2020). The findings of this study support previous empirical research (Chegini & Bashiri, 2017; Yi, 2020) and comport with behavioral organizational theory and RBV theory, which suggest that there is a link between resources and capabilities and competitive advantage (Barney, 1991; Grant, 1991), or that competitive advantage is a function of resources and capabilities (Wernerfelt, 1984; Conner, 1991).
The results also showed a significant effect of competitive advantage (CA) on performance (Perf), with a path coefficient value of -0.179 and a p-value of 0.000 < 0.05. Thus, the third hypothesis was accepted. The negative coefficient indicates that Inverse Receivable Turnover (Inverse RTO) and Financial Leverage, the proxies for CA among the sample companies, are able to increase performance. This empirical evidence supports agency theory, which states that companies operating in a risky business environment, such as one of high competition, tend to reduce the use of debt because risks increase, making debt expensive and its outcome uncertain (Jensen and Meckling, 1976; Botosan and Plumlee, 2005). Thus, when the intensity of competition is high, the effect of debt on performance becomes negative (Jermias, 2008), and debt does not provide any real benefits. On the other hand, a company will be credible in facing competition if it has sufficient financial flexibility or borrowing capacity (Dickinson and Sommers, 2012). Financial leverage is the inverse of borrowing capacity from the perspective of financial flexibility (Dickinson and Sommers, 2012). Therefore, the negative relationship between competitive advantage and performance indicates that competitive advantage (proxied by financial leverage) can improve performance (ROA, ROE, Tobin's Q).
Statistical analysis also revealed that competitive advantage (CA) mediates the relationship between financial flexibility (FF) and performance (PERF), with a p-value of 0.016 and a path coefficient of 0.051. Hence, CA functions as a mechanism, providing an additional 22.24% contribution to the FF-PERF relationship model. Consequently, the fourth hypothesis is validated. These results indicate that the impact of financial flexibility on performance is largely indirect, via competitive advantage. This finding confirms earlier empirical evidence (Yi, 2020) and extends prior research that did not investigate the role of competitive advantage in the FF-PERF relationship.
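The mediation result above is estimated by WarpPLS within the full PLS model; the sketch below only illustrates the general logic of testing an indirect effect (the product of the FF to CA and CA to Perf paths) with a percentile bootstrap. The data are synthetic and the data-generating coefficients are assumptions chosen to echo the reported path values, not the study's data.

```python
# Percentile-bootstrap test of an indirect (mediated) effect on synthetic data.
import numpy as np

rng = np.random.default_rng(0)
n = 880                                    # matches the number of observations above
ff = rng.normal(size=n)
ca = -0.286 * ff + rng.normal(scale=0.95, size=n)           # assumed generating values
perf = 0.179 * ff - 0.179 * ca + rng.normal(scale=0.90, size=n)

def slopes(y, *xs):
    """OLS coefficients of y on the given regressors (intercept included)."""
    X = np.column_stack([np.ones(len(y)), *xs])
    return np.linalg.lstsq(X, y, rcond=None)[0][1:]

boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)                  # resample observations
    a = slopes(ca[idx], ff[idx])[0]              # FF -> CA path
    b = slopes(perf[idx], ca[idx], ff[idx])[0]   # CA -> Perf path, controlling for FF
    boot.append(a * b)

lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"bootstrap 95% CI for the indirect effect: [{lo:.3f}, {hi:.3f}]")
# The indirect effect is judged significant when the interval excludes zero.
```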
The theory of the firm perspective provides a view that financial flexibility in the form of cash is a versatile resource that can be used in various activities to generate competitive advantage.This implies that financial flexibility, also known as financial slack, is seen as excess financial resources that have a role in creating competitive advantage.From the perspective of behavioral organizational theory, slack acts as a buffer between the organization and external contingencies, facilitating the company's adaptation to environmental changes.
In addition to improving long-term performance, slack also has a positive impact on risk-taking, facilitating innovation and change and helping to create a competitive advantage. Therefore, it is crucial to manage internal resources as the main drivers of competitive advantage. Lastly, these findings show that the financial constraints arising from the requirements of Islamic law principles are not an obstacle to building and producing competitive advantage.
Conclusions
The study's results indicate that financial flexibility has a significant impact on performance and a significant effect on competitive advantage. Empirical testing further demonstrated a significant influence of competitive advantage on performance.
Competitive advantage was proven to be a mediating variable, mediating the relationship between financial flexibility and performance. However, the study found no moderating effect of the Islamic label on this relationship.
Theoretically, the findings highlight some implications. First, they contribute to RBV theory and agency theory and add evidence to previous research. Second, the empirical evidence regarding the effect of Islamic Label (IL) moderation on the relationship between financial flexibility and competitive advantage contributes to the role of Islamic finance in RBV and agency theory, and it also expands the corporate governance literature and Islamic finance studies.
For firms in the sample, these results imply that enhancing performance through financial flexibility can be achieved by developing competitive advantage.
The results showed that the Islamic Label (IL) does not significantly moderate the relationship between FF and PERF, with a p-value of 0.117 and a path coefficient value of -0.040. The findings indicate that capital structure conservatism as an Islamic label, as measured by NPND (Non-Positive Net Debt), is unable to moderate the effect of financial flexibility on performance. Thus, the fifth hypothesis is rejected. One justification for this finding relates to the relatively strict regulations on IL financial ratios, including the leverage ratio (Musse et al. 2021). These conditions result in limited or reduced access to external funding for IL firms (Akinsomi et al. 2015; Alnori and Alqahtani, 2019). From the perspective of the financial constraint hypothesis, limited access to funding leads to fewer opportunities to accumulate cash and debt capacity. Companies facing financial constraints tend to be less profitable as a result (Bessler et al. 2013).
Lastly, in testing the effects of the control variables, proxied by size and sales growth, the results showed that these variables have significant effects on performance, with a path coefficient value of 0.236 and a p-value of 0.001 < 0.05. These findings are consistent with previous empirical research (Ma et al. 2015; Mahmood et al. 2018; Song et al. 2021; Teng et al. 2021; Zhang et al. 2022). However, the ability of the exogenous variables to explain the variation of the endogenous variables is very limited (12.5%). Therefore, this study recommends using various proxies for financial flexibility, considering that there is no widely accepted proxy for measuring financial flexibility (Teng et al. 2021; Zhang et al. 2022).
Managerial Implications
This research produces several managerial implications for companies listed on the Indonesian Sharia Stock Index (ISSI). The results regarding the effect of financial flexibility on competitive advantage, proxied by financial leverage, are in line with the role of financial flexibility in minimizing fixed costs. This has direct managerial consequences in terms of lowering the cost structure. A lower cost structure means it is not necessary to achieve a high sales volume to reach the breakeven point. Conversely, a higher portion of fixed costs in the cost structure leads to a greater intensity of competition, increasing the required breakeven level. The effect of financial flexibility on performance therefore depends on a cost-benefit analysis of the resources owned by the company. Managerially, this implies that if the returns generated by the company's resources are greater than the costs, the company has a competitive advantage. Financial flexibility is one of the factors that affect company performance.
Practically, the findings emphasize IL as a potential driver of quality information: Islamic companies indexed in the ISSI tend to provide reliable and relevant information, which indicates a high degree of accountability enforcement. The presentation of reliable information through transparency, fairness, accountability, and ethical behavior is a major part of good governance. To ensure good governance, the sample companies need to strengthen governance practices to reduce agency risk. They must ensure that business activities are carried out correctly and in an Islamic ethical manner. In practice, this also has consequences for establishing sustainable growth and long-term corporate value.
Recommendations
For companies listed on the ISSI, Islamic principles should be adhered to in their financial management, including business ethics. From the perspective of business organizations, IL companies must operate in a moral, ethical, and socially responsible manner, rather than focusing solely on profit maximization. Based on such Islamic laws and principles, IL should simultaneously fulfil the economic and social functions of company operation. The application of these dual functions makes IL beneficial for improving sustainability practices and long-term performance compared to non-IL practices.
Figure 1 .
Figure 1. Research Model (FF = Financial Flexibility; CA = Competitive Advantage; Perf = Performance; Inverse RTO = Inverse Receivable Turnover; FL = Financial Leverage; UDC = Unused Debt Capacity; DCF = Debt Flexibility + Cash Flexibility; R/E to TA = Retained Earnings Ratio to Total Assets; ROA = Return on Assets; ROE = Return on Equity; NPND = Non-Positive Net Debt)
Figure 2 .
Figure 2. Path Coefficient and p-value
Table 1 .
Descriptive statistics of variables
Table 2 .
Loading factor and average variance extracted
| 5,110 | 2023-09-30T00:00:00.000 | [ "Business", "Economics" ] |
Efficiency of the SQUID Ratchet Driven by External Current
We study theoretically the efficiency of an asymmetric superconducting quantum interference device (SQUID) which is constructed as a loop with three capacitively and resistively shunted Josephson junctions. Two junctions are placed in series in one arm and the remaining one is located in the other arm. The SQUID is threaded by an external magnetic flux and driven by an external current of both constant (dc) and time periodic (ac) components. This system acts as a nonequilibrium ratchet for the dc voltage across the SQUID with the external current as a source of energy. We analyze the power delivered by the external current and find that it strongly depends on thermal noise and the external magnetic flux. We explore a space of the system parameters to reveal a set for which the SQUID efficiency is globally maximal. We detect the intriguing feature of the thermal noise enhanced efficiency and show how the efficiency of the device can be tuned by tailoring the external magnetic flux.
Introduction
The SQUID is the most sensitive instrument which is capable of detecting and measuring even extremely small magnetic fields. It has been used successfully not only for magnetometry but also for voltage and current measurements. Its applications go far beyond the research laboratories often into commercial apparatus exploited in metrology, geophysics and medicine, see the reviews [1,2]. The SQUID has been the topic of various extensive theoretical and experimental studies. Yet, a number of open problems of this setup still remain to be resolved. A prominent example may be the efficiency of the SQUID as a thermodynamical machine converting the input energy into its other forms. It is the subject of this paper.
We study an asymmetric SQUID driven by an external current and analyze the charge transport and voltage induced across the device. The asymmetric SQUID is modeled as a ratchet far from equilibrium, i.e. as a classical Brownian particle moving in a spatially periodic potential with broken reflection symmetry and driven by a time-dependent force. In this mechanical analogy, the voltage across the SQUID corresponds to the particle velocity. The most basic measure for characterizing the motion of the Brownian particle is its long-time average velocity ⟨v⟩. However, alone it does not give any information on the quality of the transport process. Is it effective or ineffective? To answer this question, we need to consider its other attributes. One of them is the fluctuation of the velocity around its average value, which in the long-time regime is represented by the variance σ²_v = ⟨v²⟩ − ⟨v⟩². Typically the instantaneous velocity v(t) takes values within the interval of one standard deviation, v(t) ∈ [⟨v⟩ − σ_v, ⟨v⟩ + σ_v]. Note that if fluctuations are large, i.e. if σ_v > |⟨v⟩|, then the particle may move for some time in the direction opposite to its average velocity ⟨v⟩; the spread of velocities is large and the overall transport is not effective. The next feature which is important in answering the question about the quality of the transport phenomenon is related to the ratio of the energy input into the system and its energetic output. How much of the energy input is converted into directed motion of the particle and how much of it is wasted by spreading out into the environment and dissipated as heat? A proper quantifier to characterize this aspect of transport is the efficiency of the system. By using the correspondence between the SQUID and the mechanical ratchet system, we study three measures for the evaluation of transport quality: the average voltage, its fluctuations and the efficiency of the SQUID. In the previous paper [3] we analyzed the average voltage in this setup for wide parameter regimes: covering the overdamped and moderate damping regimes up to the fully underdamped regime. We found the intriguing features of a negative absolute and differential conductance, repeated voltage reversals, noise-induced voltage reversals and solely thermal noise-induced ratchet voltage. We identified a set of parameters for which the ratchet effect is most pronounced and showed how the direction of transport can be controlled by tailoring the external magnetic flux. The main emphasis of that work lay on formulating and exploring conditions that are necessary for the generation and control of transport [4,5], its direction, magnitude as well as its dependence on system parameters. However, apart from these well investigated questions other important features concerning the quality of transport [6,7,8] have remained unanswered. Therefore in this paper we concentrate on this topic and the connection between the directed transport expressed in terms of the dc voltage, its fluctuation characteristics and the energetics of the SQUID.
Figure 1. Schematic asymmetric SQUID composed of three Josephson junctions and the equivalent circuit composed of two junctions. The Josephson phase difference is ϕ1 = ϕu + ϕd, the externally applied current is I, and the current through the left and right arms is I1 and I2, respectively. The external magnetic flux is Φe and the instantaneous voltage across the SQUID is V = V(t). The long-time average voltage ⟨V⟩ across the SQUID is expressed by the relation ⟨V⟩ = (ℏ/2e)⟨ϕ̇1⟩ = (ℏ/2e)⟨ϕ̇2⟩.
Theoretical aspects considered in the paper concern not only our specific SQUID ratchet but a much wider class of systems and problems. There are many experiments on a number of ratchet systems [9], in particular superconducting ratchets [10], a part of which can be controlled by an external magnetic field [11,12,13,14] as well as theoretical studies of such systems driven by harmonic and biharmonic external currents [15,16,17,18,19]. However, the efficiency of transport has not been analyzed in the above-cited papers.
The structure of the paper is as follows. In Sec. II, we recall the model of a SQUID rocking ratchet which is composed of three resistively and capacitively shunted Josephson junctions. In Sec. III we define mean values of arbitrary state functions in the long time regime. Then in Secs. IV-VI, the quantities characterizing the quality of the transport, such as the voltage fluctuations, the energy balance and the (Stokes) efficiency, are introduced, respectively. In Sec. VII we elaborate on key aspects of transport efficiency in the system: starting from the power delivered by the externally applied current, covering the tailoring of the Stokes efficiency of the device, up to the presentation of the regime for which thermal noise enhances the efficiency and a discussion of the impact of variation of the external magnetic flux on the efficiency of the SQUID. Finally, the last section provides a summary.
Model of the SQUID ratchet
The asymmetric SQUID [10,21,22,23,24] is presented in Fig. 1. It is a loop with two resistively and capacitively shunted Josephson junctions [25] in the left arm and one in the right arm. The crosses denote the junctions and ϕk ≡ ϕk(t) (k = u, d, 1, 2) are the phase differences across them. Each junction is characterized by its capacitance Ck, resistance Rk and critical Josephson supercurrent Jk, respectively. The SQUID is threaded by an external magnetic flux Φe and driven by an external current I = I(t) which is composed of the static dc current I0 and an ac component of amplitude A and angular frequency Ω, namely I(t) = I0 + A cos(Ωt). To reduce the number of parameters of the model, we consider the special case when the two junctions in the left arm are identical, i.e. Ju = Jd ≡ J1, Cu = Cd and Ru = Rd.
In some regimes [3], two junctions in series can be considered as one for which the supercurrent-phase relation takes the form J1 sin(ϕ1/2), where ϕ1 = ϕu + ϕd. This result is also derived in Ref. [26] for an effective double-well structure described in terms of a double-barrier potential (cf. Eq. (23) therein). The total magnetic flux Φ piercing the loop is a sum of the external flux Φe and the flux due to the flow of currents, Φ = Φe + Li, where L is the loop inductance and i ≡ i(t) is the circulating current which tends to screen the magnetic flux. In the "dispersive" operating mode of the SQUID [27], i.e. when the condition |Li| ≪ Φ0 holds true (Φ0 = h/2e is the flux quantum), the phase ϕ ≡ ϕ1 obeys a Stewart-McCumber type equation of the form [3]
(ℏ/2e) C ϕ̈ + (ℏ/2e) (1/R) ϕ̇ + J(ϕ) = I(t) + (2kBT/R)^{1/2} ξ(t),     (3)
where the effective supercurrent J(ϕ) reads J(ϕ) = J1 sin(ϕ/2) + J2 sin(ϕ + Φ̃e). The parameters are: kB is the Boltzmann constant, T is the temperature of the system and Φ̃e = 2πΦe/Φ0 is the dimensionless external magnetic flux. Thermal fluctuations are modeled by δ-correlated Gaussian white noise ξ(t) of zero mean and unit intensity, ⟨ξ(t)⟩ = 0 and ⟨ξ(t)ξ(s)⟩ = δ(t − s). The Stewart-McCumber equation (3) has the form of a Langevin equation and describes a non-Markovian stochastic process for the phase ϕ. In the extended space {ϕ, ϕ̇} it has the Markovian property and all well known methods can be applied to analyze it. Eq. (3) can be interpreted in the framework of a model of a classical Brownian particle, which helps develop the intuition and interpretation. In the one-to-one correspondence, the particle position x translates to the phase ϕ, the particle velocity v = ẋ to the voltage V ∝ ϕ̇, the conservative force to the supercurrent J(ϕ), the external force to the current I(t), the mass m to the capacitance, m ∝ C, and the friction coefficient γ to the normal conductance, γ ∝ G = 1/R. It is important to note that the friction γ is not proportional to the normal resistance R (as one could expect in the case of electrical circuits) but to the inverse of R. The reason is that plasma oscillations of the junction are more damped if more normal electrons couple to the oscillating condensate (i.e. when G is greater). The voltage accelerates normal electrons and their kinetic energy is dissipated into heat. Thus the plasma oscillations convert into heat with a rate proportional to the conductance G [28].
Asymptotic mean values
The main characteristics of the system are the current-voltage curves in the long-time regime. It can be shown that for the external current (1) they can be extracted from the relation for the averaged voltage ⟨V⟩ developed across the SQUID [3], where dW(t) = ξ(t)dt is the differential of the Wiener process of zero mean and second moment ⟨dW(t)dW(t)⟩ = dt. The pair {ϕ, V} forms a Markovian process and its probability density P = P(ϕ, V, t) obeys a Fokker-Planck equation with the initial condition P(ϕ, V, 0) = p(ϕ, V), where the probability density p(ϕ, V) describes the initial distribution of the phase ϕ(0) and the voltage V(0).
For any state function f(ϕ, V), its mean value ⟨f(ϕ, V)⟩_t at time t is calculated from the relation ⟨f(ϕ, V)⟩_t = ∫∫ f(ϕ, V) P(ϕ, V, t) dϕ dV. Because the system is driven by the time-periodic current I(t), for long times the probability density P(ϕ, V, t) approaches the asymptotic periodic form P_as(ϕ, V, t), which can be expanded into the Fourier series P_as(ϕ, V, t) = Σ_n W_n(ϕ, V) e^{inΩt} [29,30], where the Fourier coefficients W_n(ϕ, V) are solutions of the differential equations obtained from the Fokker-Planck Eq. (9). The time-dependent asymptotic mean value is then also a periodic function of time. If we are interested in its time-independent form, time averaging over one period T = 2π/Ω of the ac current has to be performed. In the particular case f(ϕ, V) = V^k, we obtain the stationary statistical moments of the voltage ⟨V^k⟩.
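The double averaging described above (an ensemble average at fixed time followed by an average over one period of the drive) can be mimicked numerically as in the short sketch below; the `samples` array is a placeholder for trajectories produced by a Langevin simulation such as the one sketched later in this section.

```python
# Cycle-averaged asymptotic mean of an observable sampled along many trajectories.
import numpy as np

def cycle_averaged_mean(samples, dt, period, transient_periods=100):
    """samples: array (n_trajectories, n_steps) of an observable, e.g. the voltage."""
    ens_mean = samples.mean(axis=0)                  # ensemble average at each time
    steps = int(round(period / dt))                  # steps per driving period
    tail = ens_mean[transient_periods * steps:]      # discard the transient part
    n_full = (len(tail) // steps) * steps            # keep whole periods only
    return tail[:n_full].mean()                      # average over the remaining cycles
```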
Fluctuations of voltage
The asymptotic average voltage ⟨V⟩ calculated according to the prescription (13) is the most important transport characteristic of the system. The magnitude of the instantaneous voltage V(t) can be much larger than its mean value. Moreover, the fluctuations of the voltage in the long-time regime can also be large. They are described by the voltage variance σ²_V = ⟨V²⟩ − ⟨V⟩². The voltage typically ranges within an interval of several standard deviations around its mean value. If the standard deviation σ_V is large, i.e. when σ_V > |⟨V⟩|, the voltage V(t) can spread far from its average value and even assume the opposite sign. This is the case for protein motors in biological cells, where the instantaneous velocity changes direction very rapidly and its absolute value is several orders of magnitude larger than the average velocity [31].
Energetics of the SQUID
The SQUID is a device which converts input energy into other forms. The energy is provided by the external current I(t) and the energy flow is determined by the equation of motion (3). In the mechanical interpretation, the kinetic energy of the particle corresponds to the energy stored in the capacitance C, namely E_C(V) = CV²/2. The particle potential energy translates to the Josephson energy E_J(ϕ) accumulated in the junctions when the supercurrent flows through them. The sum E = E_C(V) + E_J(ϕ) is the total energy of the system. Its balance can be obtained from Eqs. (7)-(8). For this purpose we apply the Ito differential calculus to both functions E_C(V) and E_J(ϕ) and then calculate the mean values of both sides of the resulting equations. Exploiting the Ito martingale property we find that the average value of the term ⟨V dW(t)⟩_t = 0 and obtain the energy balance equation (21) for d⟨E⟩_t/dt [6,32], where ⟨·⟩_t denotes a mean value at time t according to the prescription (10). On the right hand side of this equation there are three components, each of them related to a separate process responsible for the energy change. Let us point out that the first term in Eq. (21) is always negative whereas the third is positive. The former describes the rate of energy loss due to dissipation and the latter refers to the energy provided by thermal equilibrium fluctuations. According to the equipartition theorem, in thermodynamical equilibrium, when I(t) = 0, the relation ⟨CV²/2⟩_eq = kBT/2 holds true; it is utilized in Eq. (21) to get (22). The second term in (21) corresponds to the energy delivered to the system by the external current I(t). As a consequence, in the stationary regime the mean power P_in delivered to the system by the external current I(t) over the period T can be expressed explicitly [32]. From this relation it follows that the amount of energy input to the SQUID from the external driving I(t) depends not only on the current itself (i.e. on I0, A, Ω) but also on the properties and parameters of the device: its temperature T, the resistance R and the capacitance C. In contrast, the energy supplied by thermal fluctuations does not depend on the external current but only on T, R and C.
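The closed-form expression for P_in is not reproduced here; as a purely numerical stand-in, one can estimate the delivered power directly as the long-time average of the instantaneous product I(t)V(t) over an integer number of driving periods, as in the sketch below (function and parameter names are illustrative).

```python
# Numerical estimator of the mean power delivered by the external current:
# the long-time average of I(t)*V(t) over whole driving periods.
import numpy as np

def input_power(current, voltage, dt, period, transient_periods=100):
    """current: callable I(t); voltage: sampled V(t) trace with time step dt."""
    t = np.arange(len(voltage)) * dt
    p = current(t) * voltage                       # instantaneous delivered power
    steps = int(round(period / dt))
    start = transient_periods * steps
    n_full = ((len(p) - start) // steps) * steps   # integer number of periods
    return p[start:start + n_full].mean()

# Example with assumed parameters, I(t) = I0 + A*cos(Omega*t):
# power = input_power(lambda t: 0.1 + 1.5 * np.cos(0.5 * t), V_trace, dt, 2 * np.pi / 0.5)
```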
Efficiency of the SQUID
A generic definition of the efficiency of a device converting energy is the ratio between the output (work, power) and the input (power) energy. Depending on the choice of input and output, different definitions of the efficiency characterize various aspects of energy conversion in the device. To explain the problem, we use the mechanical interpretation of (3). Then the average voltage ⟨V⟩ corresponds to the velocity of the Brownian particle or the Brownian motor. The thermodynamic efficiency is defined as a ratio of the work done by the motor to the energy input. If the particle is working against a constant force (load) i0, then in the stationary state the efficiency is defined as in Eq. (25). In the case considered here there is no load and the Brownian motor does not transport external objects. It works against the friction "force" proportional to the average velocity. (By the way, this "force" does not have the unit of newtons, but if it is multiplied by the factor ℏ/2e it acquires the correct physical unit.) When the external force I(t) is switched off, the velocity of the motor is damped to zero and the system tends to thermodynamical equilibrium. Because the motor works against the friction force, we can utilize its mean value to get another definition of efficiency. This quantity is called the Stokes efficiency [33,31,34]. Let us note that it depends explicitly on the mass C of the Brownian particle and only implicitly on the friction coefficient R via the Langevin equation (3).
It should be mentioned that (27) is not the rate of the work done by the motor on its surroundings (the viscous medium). Moreover, it is not the mean power P_R needed to overcome the friction force, whose correct form contains ⟨V²⟩ rather than ⟨V⟩². However, that expression cannot be put in the numerator of the definition of the efficiency, because there are regimes where the mean velocity is extremely small (numerically zero), ⟨V⟩ ≈ 0, while ⟨V²⟩ ≠ 0, and the efficiency could then be large even though the particle does not move on average in one direction. This is the main reason why the Stokes efficiency is more adequate in such cases as considered in this paper. Another possible definition of the efficiency is based on the remark that what we observe in the long-time regime is the average velocity. Therefore we can introduce a "kinetic power" P_k as the "kinetic energy" of the particle per period T. One should note that this is not exactly a proper definition of the kinetic power, as it should be proportional to ⟨V²⟩ instead of ⟨V⟩²; however, we replace the former with the latter for the reason explained above. Nevertheless, it is still a measure of the performance of the motor: if the average velocity increases, then P_k also grows. We can insert it as the numerator in (25) and then we get the kinetic efficiency. This quantifier can be used only when the time-periodic force is switched on. Qualitatively, it is then similar to the Stokes efficiency; however, the dependence on the mass C, the friction coefficient R and the period T is different. Both the Stokes efficiency and the kinetic efficiency are consistent with our intuition: a decrease of the fluctuations σ²_V leads to a smaller input power and hence to an increase of the efficiency. Consequently, the transport is optimized in regimes that maximize the directed velocity and minimize its fluctuations. Because the kinetic efficiency is proportional to the Stokes efficiency, below we analyze only the latter.
Dimensionless model
There are several dimensionless forms of Eq. (3), depending on the choice of the time scale. In this system there are four characteristic frequencies: the plasma frequency ω_p² = 2eJ1/(ℏC), the characteristic frequency of the junction ω_c = 2eRJ1/ℏ, the frequency ω_r = 1/RC related to the relaxation time, and the frequency Ω of the ac current. There are three independent characteristic time scales related to these frequencies (note that ω_p² = ω_c ω_r). Here, we follow [20] and introduce the rescaled phase x and the dimensionless time s. Then (3) takes the dimensionless form
C̃ ẍ(s) + ẋ(s) = −U′(x(s)) + F + a cos(ωs) + (2D)^{1/2} ξ̂(s),     (32)
where the dot and the prime denote differentiation over the dimensionless time s and the phase x, respectively. We introduced a spatially periodic potential U(x) of period 2π of the following form [20]: U(x) = − sin(x) − (j/2) sin(2x + Φ̃e − π/2).
This potential is reflection-symmetric if there exists x0 such that U(x0 + x) = U(x0 − x) for any x. If j ≠ 0, it is generally asymmetric and its reflection symmetry is broken, see Fig. 2. We classify such a potential as a ratchet potential. However, even for j ≠ 0 there are certain values of the external flux Φ̃e for which it is still symmetric. The dimensionless capacitance C̃ is the ratio of two characteristic time scales, C̃ = τ_r/τ_c, where the relaxation time is τ_r = RC. The other rescaled parameters are j = J2/J1, F = I0/J1, a = A/J1 and ω = Ωτ_c. The rescaled zero-mean Gaussian white noise ξ̂(s) has the auto-correlation function ⟨ξ̂(s)ξ̂(u)⟩ = δ(s − u) and its intensity D = ekBT/(ℏJ1) is the quotient of the thermal energy and the Josephson coupling energy. The dimensionless voltage is v(t) = ẋ(s) = V(t)/RJ1 and therefore the physical average voltage is ⟨V⟩ = RJ1⟨v⟩. After such a scaling procedure the dimensionless input power P_in and, consequently, the Stokes efficiency η_S can also be expressed in terms of the rescaled quantities. The key feature for the occurrence of the directed transport, ⟨v⟩ ≠ 0, is symmetry breaking. This is the case when either the dc current F ≠ 0 or the reflection symmetry of the potential U(x) is broken. The system described by (32) becomes deterministic when the thermal noise intensity D is set to zero. Even in this case it exhibits complex dynamics including chaotic regimes [35,36]. The application of noise generally smooths out its characteristic response function. There are two classes of states of the driven system dynamics: the locked states, in which the phase stays inside a finite number of potential wells, and the running states, for which it runs over the potential barriers. The latter are crucial for the occurrence of transport. They can be either chaotic (diffusive) or regular.
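A quick way to see the role of j and Φ̃e is to evaluate the potential numerically and probe its reflection symmetry by a crude grid search for a symmetry point x0, as in the sketch below; it assumes the reconstructed form of U(x) given above and illustrative parameter values.

```python
# Evaluate the ratchet potential U(x) = -sin(x) - (j/2) sin(2x + phi_e - pi/2)
# and check reflection symmetry: U is symmetric if some x0 exists with
# U(x0 + x) = U(x0 - x) for all x. The grid search is a crude numerical probe.
import numpy as np

def U(x, j, phi_e):
    return -np.sin(x) - 0.5 * j * np.sin(2.0 * x + phi_e - np.pi / 2.0)

def asymmetry(j, phi_e, n=2048):
    x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    best = np.inf
    for x0 in x[::8]:                      # candidate symmetry points
        best = min(best, np.max(np.abs(U(x0 + x, j, phi_e) - U(x0 - x, j, phi_e))))
    return best                            # ~0 means reflection-symmetric

for phi_e in (0.0, np.pi / 2.0):           # symmetric vs. ratchet case (j = 0.5 assumed)
    print(f"phi_e = {phi_e:.2f}: asymmetry measure = {asymmetry(0.5, phi_e):.3f}")
```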
Details of simulations
The Fokker-Planck equation (9) corresponding to the Langevin equation (3) cannot be solved in closed analytical form. Therefore, in order to obtain the relevant transport characteristics we have to resort to comprehensive numerical simulations of the driven Langevin dynamics. We have integrated (32) by employing a weak version of the stochastic second-order predictor-corrector algorithm [37] with a time step typically set to about 10^-3 · 2π/ω. Since (32) is a second-order differential equation, we have to specify two initial conditions, x(0) and ẋ(0). Moreover, because in some regimes the system may be non-ergodic, in order to avoid a dependence of the presented results on the specific selection of initial conditions we have chosen the phases x(0) and the dimensionless voltages ẋ(0) to be uniformly distributed over the intervals [0, 2π] and [−2, 2], respectively. All quantities of interest were ensemble-averaged over 10^3 - 10^4 different trajectories, each of which evolved over 10^3 - 10^4 periods of the external ac driving. The numerical calculations were performed in a CUDA environment implemented on a modern desktop GPU. This scheme allowed for a speed-up by a factor of the order of 10^3 as compared to a common present-day CPU method [38,39]. Part of the results obtained in this way is presented next.
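For orientation, the sketch below integrates the assumed dimensionless dynamics with a plain Euler-Maruyama scheme (a simpler stand-in for the weak second-order predictor-corrector of Ref. [37]) and ensemble-averages the dimensionless voltage. It returns ⟨v⟩, σ_v, a numerical estimate of the input power and a Stokes-type efficiency ⟨v⟩²/P_in; all parameter values are illustrative and are not the optimal set reported below.

```python
# Euler-Maruyama sketch of the assumed dimensionless dynamics
# C*x'' + x' = -U'(x) + F + a*cos(w*s) + sqrt(2D)*xi(s), with random initial conditions.
import numpy as np

def dU(x, j, phi_e):
    """Derivative of the reconstructed ratchet potential U(x)."""
    return -np.cos(x) - j * np.cos(2.0 * x + phi_e - np.pi / 2.0)

def simulate(C=0.5, a=1.5, w=0.4, F=0.0, j=0.5, phi_e=np.pi / 2.0, D=1e-3,
             n_traj=256, n_periods=200, steps_per_period=400, seed=1):
    rng = np.random.default_rng(seed)
    dt = 2.0 * np.pi / (w * steps_per_period)
    x = rng.uniform(0.0, 2.0 * np.pi, n_traj)        # random initial phases
    v = rng.uniform(-2.0, 2.0, n_traj)               # random initial voltages
    sum_v = sum_v2 = sum_p = 0.0
    n_avg = 0
    skip = (n_periods // 2) * steps_per_period       # discard the first half as transient
    for k in range(n_periods * steps_per_period):
        drive = F + a * np.cos(w * k * dt)
        acc = (-v - dU(x, j, phi_e) + drive) / C     # C*v' = -v - U'(x) + drive (+ noise)
        noise = np.sqrt(2.0 * D * dt) * rng.standard_normal(n_traj)
        x = x + v * dt
        v = v + acc * dt + noise / C
        if k >= skip:
            sum_v += v.mean()
            sum_v2 += (v ** 2).mean()
            sum_p += (drive * v).mean()              # instantaneous delivered power
            n_avg += 1
    v_mean = sum_v / n_avg
    sigma_v = np.sqrt(max(sum_v2 / n_avg - v_mean ** 2, 0.0))
    p_in = sum_p / n_avg
    eta = v_mean ** 2 / p_in if p_in > 0 else float("nan")
    return v_mean, sigma_v, p_in, eta

print(simulate())                                    # (<v>, sigma_v, P_in, Stokes-type eta)
```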
Power delivered by external current
Let us begin the analysis of the SQUID efficiency by looking at the power P_in delivered by the external current I(t). Notably, it depends implicitly not only on the parameters of the applied external current (F, a, ω) but also on the quantities characterizing the device, such as the capacitance C̃. We have found that generally the input power (35) tends to increase for larger values of the dc current F and the ac driving amplitude a. The dependence on the frequency ω is more complex; however, in most cases P_in is relatively large when ω is small. This is because very fast oscillations of the driving current can induce neither the average voltage ⟨v⟩ nor ⟨v²⟩. In panel (a) of Fig. 3 we show the representative dependence of the input power P_in on the dimensionless capacitance C̃ of the SQUID. One can observe that P_in is maximal in the overdamped or close to damped regime and decreases when C̃ grows. Since in the mechanical framework the capacitance C̃ translates to the mass of the Brownian particle, it is intuitively clear that when the inertial term becomes large the device needs more power to respond in the same way. Perhaps the most surprising is the fact that P_in depends explicitly on the thermal noise intensity D, i.e. on the temperature of the system. Typically, it decreases with increasing D. However, there are also regimes for which P_in is enhanced by thermal noise. In panel (b) of Fig. 3 we exemplify this situation: for a wide interval of temperature the input power is an almost monotonically increasing function of the noise intensity D. Finally, the influence of the constant external magnetic flux Φ̃e on P_in is depicted in the last panel. It is remarkable that one can tune the input power P_in by changing the external magnetic flux. The reader should note that for the presented regime it is maximal when Φ̃e = 0, i.e. when the potential U(x) is reflection-symmetric. In such a case there is no average voltage drop, ⟨v⟩ = 0, across the device when additionally the dc current F vanishes. It follows that a large input power P_in does not necessarily translate into efficient directed transport.
Tailoring Stokes efficiency
The system described by Eq. (32) has a 7-dimensional parameter space {C̃, a, ω, F, j, Φ̃e, D}. We set F = 0 and check how the efficiency depends on the remaining system parameters. We limit our considerations to positive a because the system (32) is symmetric under a change of sign of a. Depending on the magnitude of the dimensionless capacitance C̃ of the device, it can operate in three distinct regimes: overdamped (C̃ → 0), damped (moderate C̃) and underdamped (C̃ → ∞). We note that the conditions that are necessary for the generation and control of the direction of transport have been extensively studied in these regimes in our recent work [3]. Since very fast oscillations of the driving current cannot induce the average voltage ⟨v⟩, it is sufficient to limit our considerations to low and moderate ac driving frequencies ω. We have performed scans of the following area of the parameter space, C̃ × a × ω ∈ [0.1; 10] × [0; 10] × [0.1; 1], at a resolution of 200 points per dimension to determine the general behavior of the system. The results are depicted in Fig. 4.
We can see that regardless of the regime in which the device operates, its Stokes efficiency η_S is zero or negligibly small for a < 1 and for high frequencies ω. This is due to the fact that the rocking mechanism is either too weak or too fast to induce a finite average voltage ⟨v⟩. The areas of non-zero efficiency η_S have a striped structure. For a given amplitude a, the ratchet behavior generally tends to disappear as the frequency ω grows. On the other hand, for a given frequency, there is an optimum amplitude a that maximizes the Stokes efficiency. An increase of the capacitance C̃ causes blurring of the regions for which the efficiency is non-zero. Moreover, this tendency is often accompanied by its reduction. Consequently, the studied device operates best in the overdamped or close to damped regimes.
Optimal regime
We have explored the parameter space of the system (32) and have been able to detect a regime for which the efficiency η_S is globally maximal; it lies in the vicinity of a single point in the parameter space, whose coordinates are used in Fig. 5. Let us begin with the dependence of the transport characteristics on the dc current F. It is depicted in Fig. 5(a)-(c) for small values of F ∈ (−0.5, 0.5). Panel (a) presents the current-voltage curve. In the low temperature limit (D = 10^-5) the average voltage is almost quantized at the values nω, n = 0, ±1, ... For a symmetric potential, these plateaus correspond to standard Shapiro steps [27]. However, in our case steps at half-integer multiples of ω can also be observed. This is due to the deviation of U(x) from the simple sin x form, which is the sole case for which steps lie only at integer values of nω [20]. However, in both the symmetric and the asymmetric case a proper amount of noise is sufficient to wipe out their evident structure [40]. This Shapiro-like current-voltage curve is characteristic for the device operating in the low temperature limit of the overdamped or damped regimes. Panel (b) of the same Fig. 5 presents the dependence of the voltage fluctuations σ_v on the dc current F. It is a rather complicated, non-linear and non-monotonic function of F without any immediately obvious relation to the average voltage of panel (a). However, the most important observation is that the voltage fluctuations are minimal for F = 0. This fact is of fundamental importance for the influence of F on η_S. In Fig. 5(c) we can see that η_S is locally maximal for F = 0. For large values of F (not shown here) the mean voltage is an almost linear function of F and the efficiency approaches the value 1. This supports the statement in Ref. [31] that when the Stokes efficiency is close to 1, the driving resembles a constant force.
The role of the frequency ω of the ac current is illustrated in Fig. 5(d)-(f). Panel (d) presents the dependence of the average voltage ⟨v⟩ on ω. In the adiabatic limit ω → 0 it undergoes rapid oscillations. This behavior is reflected in the influence of the frequency on the voltage fluctuations σ_v (see Fig. 5(e)). In line with the previous statement, very fast oscillations of the driving current cannot induce a non-zero average voltage. Therefore for sufficiently high frequency there is no transport and as a consequence the Stokes efficiency η_S is zero. However, a strong peak of the efficiency is observed for the moderate value ω = 0.406. It is associated with the fact that for this frequency the average voltage ⟨v⟩ is maximal and simultaneously its fluctuations σ_v are minimal.
The impact of the amplitude a of the ac current is shown in Fig. 5(g)-(i). In particular, a resonance-like behavior is observed in the dependence of the average voltage ⟨v⟩ on the amplitude a (see Fig. 5(g)). Apart from two clearly visible peaks, the directed transport is almost imperceptibly small. This fact has a critical impact on the functional dependence of the efficiency η_S: it is proportional to ⟨v⟩², so it vanishes when the device response is zero. The influence of the variation of the amplitude a on the voltage fluctuations σ_v is depicted in panel (h). It is an almost linearly increasing function of a. Only one evident deviation from this trend can be noted, i.e. a local minimum around a = 1.55 corresponding to the first high peak in Fig. 5(g). It should be stressed that there is no contradiction with the dependence of the average voltage, since ⟨v⟩ = 0 does not necessarily mean ⟨v²⟩ = 0 and therefore σ_v can at the same time assume a non-zero value.
The dependence of all relevant transport characteristics on the capacitance C̃ is very complicated, as shown in Fig. 5(j)-(l). The first panel of this group shows the average voltage versus the capacitance C̃. We note the important feature of the voltage reversal [35,41]: starting from zero, the voltage changes its sign from positive to negative and back again as C̃ grows. Therefore the capacitance can serve as a parameter to manipulate the direction of the transport process. The efficiency η_S is maximal close to the border between the overdamped and damped regimes. It is (almost) zero in the underdamped limit, which corresponds to a large capacitance C̃ → ∞. This is a consequence of the fact that in this regime the average voltage ⟨v⟩ vanishes or is negligibly small. The last three panels of Fig. 5 depict the influence of the thermal noise intensity D on all previously studied quantities. An increase of the noise intensity D leads to both a monotonic decrease of the induced average voltage ⟨v⟩ and an increase of its fluctuations σ_v. Consequently, the efficiency is best in the low temperature regime, where the deterministic dynamics of the system (32) plays a crucial role.
Noise enhanced Stokes efficiency
We have also found the opposite scenario, in which thermal noise enhances the efficiency. This perhaps surprising effect is exemplified in Fig. 6. Panel (c) presents the dependence of the efficiency η_S on D. Evidently, in some intervals of D, an increase of temperature causes an increase of η_S. There is also an optimal value of the temperature, or equivalently of the thermal noise intensity, D ≈ 0.0004, for which the efficiency takes its maximum. Moreover, in this case the ratchet mechanism is activated solely by thermal equilibrium fluctuations, since for low noise intensity no rectification can be observed. This statement is confirmed by the functional dependence of the average voltage ⟨v⟩ presented in panel (a). It is also remarkable that in this regime an increase of the thermal noise intensity D leads to a decrease of the voltage fluctuations σ_v.
Impact of external magnetic flux
As was shown before, the efficiency can be tuned in several ways. However, it seems that from the experimental point of view the simplest method is to vary the external constant magnetic flux Φ̃e. The dependence of the average voltage ⟨v⟩, its fluctuations σ_v and the efficiency on the external magnetic flux Φ̃e in the previously presented regime, for which thermal noise induces the ratchet effect (cf. Fig. 6), is depicted in Fig. 7. From the symmetry considerations of (32) it follows that the transport characteristics are invariant under the shift Φ̃e → Φ̃e + 2πn for an arbitrary integer number n, and that the average voltage is antisymmetric around Φ̃e = 0, as can be seen in Fig. 7. This is not the case for the voltage fluctuations σ_v, which are symmetric around Φ̃e = 0. A careful inspection of panel (b) reveals that one can reduce the magnitude of σ_v by nearly a factor of two simply by a suitable adjustment of the external magnetic flux. This fact has further consequences for the dependence of the efficiency, which is depicted in panel (c). It can be slightly tuned by a small variation of the external magnetic flux.
In Fig. 8 we present how the Stokes efficiency behaves in the parameter plane {Φ̃e, j} that specifies the form of the spatially periodic potential U(x). For both sufficiently small and sufficiently large j it vanishes completely. One can observe that for a given external magnetic flux Φ̃e the Stokes efficiency generally tends to increase as the parameter j grows. On the contrary, for a given j there is an optimal value of the external magnetic flux Φ̃e for which the Stokes efficiency is maximal. We note that for the two presented regimes the structure of the non-zero efficiency region in the parameter plane {Φ̃e, j} is radically different: the left panel looks like butterfly wings and the right one resembles a rocking horse.
Summary
In this paper, we comprehensively studied the Stokes efficiency of the asymmetric SQUID in the case of non-zero capacitance of all Josephson junctions and in the presence of thermal noise. This allowed us to analyze the transport properties of the system over the entire range of regimes: starting from the overdamped regime, through the damped one, and finally to the underdamped regime. We focused on the connection between the directed transport, characterized by the voltage across the SQUID, and its efficiency. In particular, we examined the voltage fluctuations and the energetic performance of the device. We derived the expression for the power delivered by the externally applied current and discussed its dependence on the system parameters. Apart from the expected influence of the current parameters I0, A and Ω, it also depends on the thermal noise intensity D, i.e. on the temperature of the system.
We have found that regions of low efficiency of the SQUID dominate the parameter space. However, we have identified remarkable and distinct regimes of high efficiency, η_S ≈ 0.65. It turns out that the device operates best in the overdamped or close to damped regimes. Moreover, with the help of the computational power of modern GPU supercomputers we have identified the tailored set of parameters for which the efficiency η_S is globally maximal, and for this regime we discussed the impact of the variation of almost all system parameters on the relevant transport quantities. In particular, it follows that thermal fluctuations often have a destructive influence on the energetic performance of the device. Moreover, we were also able to detect a regime for which thermal noise enhances the efficiency by inducing a large average voltage and minimizing its variance. Last but not least, we discussed in detail the impact of the external magnetic flux Φ̃e on the performance and effectiveness of the SQUID.
Our results can readily be experimentally verified with an accessible setup consisting of three resistively and capacitively shunted Josephson junctions formed in an asymmetric SQUID device. Some partial transport characteristics, such as the voltage, have been experimentally studied in the overdamped regime [10,24]. However, the underdamped regime has not been tested and the efficiency has not been measured, which makes our study a challenge for experimentalists.
| 8,558.2 | 2015-02-18T00:00:00.000 | [ "Physics" ] |
An innovative rainwater system as an effective alternative for cubature retention facilities
The paper focuses on the possibilities of rainwater flow control in an innovative rainwater system equipped with retention canals. The sewage retention canal is a modern solution that provides effective retention of excess rainwater by using the capacity of sewer pipes and manholes. The retention is made possible by special damming partitions with flow openings. The hydraulic performance of the traditional rainwater system and the innovative rainwater system were compared with each other. The analysis was based on the results of simulations using hydrodynamic modeling. The maximum possible values of rainwater outflow intensity from the outlet nodes of the traditional rainwater system and of the innovative rainwater system are discussed. On the basis of the analysis it was shown that the innovative rainwater system outperforms the classic one: it performs two functions, transporting and simultaneously retaining excess rainwater in the canals.
Introduction
Nowadays, rainwater retention is one of the most serious problems of water and sewage management. Proper management of rainwater is an extremely difficult task because of changing climatic conditions. The purpose of rainwater management is to provide an effective way to manage excess rainwater based on principles of sustainable development and with the least possible interference with the environment [9,15,16].
The dynamic development of urban areas and progressive urbanization in recent years have contributed to the reduction of green areas and have caused an increase of paved surfaces [12,17,40]. These phenomena disturb the balance between precipitation and the runoff, infiltration and transpiration of rainwater [3]. Together with the intensification of development in recent years, a negative impact of climate change is observed, which results in more frequent extreme rainfall [10,16,13,38]. According to hydrological forecasts, the frequency of extreme precipitation will increase in the coming years [20,31,32]. These phenomena cause an increase of rainwater surface runoff, which negatively affects not only sewerage systems but also receiving waters [15,33]. A lack of a sufficient hydraulic reserve in the existing sewage system leads to increasingly frequent local flooding and the overflow of rainwater from the sewerage system onto the land surface [9,4,12,13,14]. All of these phenomena force us to look for an effective water management method in order to reduce the risk of flooding in urban areas and prevent failures in the operation of the sewage system.
The existing sewerage systems, because of overloading, require extension or the construction of additional retention facilities. In the case of newly designed sewerage systems, the main problem is the cost of constructing canals of large cross-sections and cubature facilities for rainwater retention. Additionally, a sufficient area of land is needed for the construction of retention facilities, which in the case of urban areas is often impossible to provide.
As shown in many works [8,9,23,37], the upper parts of canals remain empty and are not fully used even during maximum rainfall. The sewage retention canal [7] is a modern solution which allows practical use of this space and includes it in the usable retention capacity of the sewer system. In this solution, vertical damming partitions are installed in manholes at certain distances.
Rainwater flows in sewage systems are most often rapid: a very large volume of rainwater is transported in a short time through the system of canals to the receiver. Such a situation causes numerous technical difficulties and a number of negative environmental consequences, such as a rapid inflow of rainwater to the receiver, an increase in the flow velocity in the river, floods, intensification of erosion phenomena, movement of river sediments, and disturbances in the functioning of aquatic ecosystems.
Traditional retention facilities usually occupy large areas, which in cities are valuable for residential, commercial and service development. In addition, they are expensive investments, yet necessary for regulating the outflow of excess rainwater. All activities supporting this process and reducing the cost of its implementation are therefore expected and valuable.
In this paper the role of retention in drainage systems is discussed. A hydraulic model of an innovative rainwater system equipped with retention canals is presented, the model subcatchment is characterized, and the research methodology is described. The results are obtained from simulations using hydrodynamic modelling. Additionally, the hydraulic functioning of the traditional stormwater drainage system and the innovative stormwater drainage system equipped with retention canals is compared. On the basis of the analysis, numerous advantages of the innovative sewage system over the classic one are shown.
The role of retention in drainage systems
In recent years, there has been rapid urbanization which, according to forecasts, will continue to grow [30,34]. The replacement of natural permeable areas with paved surfaces brings an increase in surface runoff and more rainwater discharged through the sewer system [17]. Additionally, extreme weather phenomena such as heavy rains have been observed more frequently in recent years [13,32,35]. These cause a number of negative effects, for example hydraulic overloading of the rainwater system and treatment plants, local flooding, and overload and pollution of the rainwater receiver [15,29]. As a result, an increasing part of the costs is spent on repairing the consequences of floods. Therefore, it is necessary to improve the methods used to design sewage systems and to search for new effective ways of retaining and controlling rainwater flow in sewage systems [28,17,19,35]. First, the rainwater flow in stormwater systems should be reduced and delayed using infiltration and retention devices at the place of rainfall generation [27,10]. These solutions cannot always be used, so a careful analysis of their advisability should be conducted [9]. A retention tank (fig. 1) and an additional transit canal (fig. 2) have so far been the best-known design solutions for reducing hydraulic overloading in the sewer system [28,27]. The use of retention tanks has both economic and environmental advantages. Retention tanks solve the problem of hydraulic overload of the sewage system and of the objects working with it, and protect the stormwater receivers against excessive volume flow and pollutants. Additionally, they allow the use of smaller sewer pipe cross-sections and prevent overflowing of the sewer system during heavy rains [26].
If the underground infrastructure is limited, the additional transit canal can be laid alongside the existing sewer system. Another possibility is to route the additional transit canal outside the urbanized area when the underground infrastructure and surface development are dense. However, such a location of the sewer system would interfere with other investments and generate high investment costs [27].
The solutions mentioned have a limited scope of application despite their many advantages. The rapid development of buildings and underground infrastructure leaves no space for the construction of such objects, and the investment costs are high. These are the basic disadvantages of current retention facilities. The inability to use them and the growing problems of rainwater management make it necessary to look for modern solutions for rainwater retention [23,25].
One of them is the sewage retention canal (fig. 3). This solution can be applied both to designed and to already existing sewer systems. The innovative retention sewage canals can replace a retention tank or reduce its required volume, which lowers the investment costs. It is an effective solution compared to traditional ones, and it does not require an additional area on which to build special retention facilities [8,9]. The main advantage of this solution is maximizing the retention capacity of the sewage system. This, in turn, allows hydraulic relief of the sewer system, gives an opportunity to connect new subcatchments to the existing sewage system, and reduces the cost of constructing new sewage systems. The use of innovative retention canals equipped with damming partitions does not require even simple control systems or an energy supply [28]. Such a sewerage system can be a successful alternative to Low Impact Development facilities and traditional retention reservoirs, or can cooperate with them in order to maximize the efficiency of the whole sewerage system [9,23]. This solution minimizes the risk of urban flooding, does not interfere with the natural environment, protects rainwater receivers and complies with the principles of sustainable development. The hydraulic model of the innovative rainwater network is discussed in sec. 3.
Hydraulic model of an innovative rainwater system
The retention sewage canal is a patented solution, RP no. 217405 [7]. Its primary advantage is the ability to utilize the capacity of the sewer system, including pipes and manholes, which had not previously been utilized in full. It enables retention of excess rainwater. In many cases, this solution allows the sewer system to function without any additional retention facilities, especially retention tanks [4].
This solution consists of equipping the sewer network with a system of retention canals with special damming baffles. The damming elements are installed in inspection manholes, perpendicular to the flowing wastewater (fig. 4). Damming partitions enable damming of rainwater throughout the sewage system [28]. There is a flow opening at the bottom of each baffle and an overflow edge at the top, which serves as the leading discharge overflow [8,24]. The damming baffles are mounted to the inside walls of the canals.
The principle of operation of the innovative rainwater sewage system is shown in Figure 5; the damming partitions allow for effective use of the drainage system capacity [23]. Damming partitions mounted in the canals create rainwater retention chambers. It is recommended that filling starts from the highest chamber, which has the smallest opening; the lower chambers have progressively larger flow openings [28].
The rainwater inflow to the accumulation chamber located below depends on the rainwater outflow from the chamber located above and on the surface runoff entering the sewage system. The efficiency of the innovative sewer system is determined by the critical value of the stormwater outflow from the damming baffle Qo Imax. The slope and diameters of the retention canals and the geometry of the damming partition have a significant influence on the outflow Qo Imax. A designer should not only select slopes and diameters of canals but also correctly design the dimensions of the damming baffles, including their height and the size and shape of the flow openings. This is a basic task to be performed by a designer. The establishment of the critical rainfall is necessary to design the damming partitions. A full utilization of the space in the canal ensures the lowest rainwater outflow Qo Imax from the outlet node. It is quantified by the value of the rainwater flow reduction coefficient β KR [28].
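Since the outflow Qo Imax is governed by the geometry of the flow opening in the damming baffle, an order-of-magnitude feel for this outflow can be obtained from a standard sharp-edged orifice relation. The sketch below is illustrative only: the relation Q = μ·A·sqrt(2·g·h) and all numerical values are assumptions, not the design procedure of the patented retention canal.

```r
# Illustrative outflow through the bottom opening of a damming baffle, assuming
# a standard sharp-edged orifice relation Q = mu * A * sqrt(2 * g * h).
# mu (discharge coefficient), the opening area and the damming head below are
# hypothetical values chosen only to show the order of magnitude.
orifice_outflow <- function(head_m, area_m2, mu = 0.6, g = 9.81) {
  mu * area_m2 * sqrt(2 * g * head_m)   # outflow in m^3/s
}

orifice_outflow(head_m = 1.5, area_m2 = 0.20)   # 0.20 m^2 opening under 1.5 m of head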
Model catchment with innovative rainwater system
The model catchment consists of 80 sub-catchments, with a total drained catchment area of F = 80 ha. The same hydrological parameters were assumed for each sub-catchment. Three concepts of the rainwater sewer system were considered, varying in canal slope.
Concept I - canal bottom slope i k = 1 ‰.
Concept II - canal bottom slope i k = 2 ‰.
Concept III - canal bottom slope i k = 3 ‰.
It was assumed that the examined sewage system has a linear layout in each design concept. It consists of 80 pipes of equal length (Fig. 6).
Hydrodynamic modelling with the SWMM 5.1 program was used for the analysis. The surface runoff coefficient Ψ = 0.5, the slope of the drainage area i ż = 10 ‰, the catchment roughness coefficient n z = 0.015 s/m^(1/3) and the canal roughness coefficient n k = 0.010 s/m^(1/3) were assumed.
At the first stage, three concepts of a traditional rainwater sewer system were considered. For each of them, the maximum value of the rainwater outflow from the sewer at the outlet node Qo Tmax and the calculative rainfall time for rainwater sewage system dimensioning t m were determined. At the next stage, each of the sewers was equipped with damming partitions. Three different spacings between the damming baffles L KR were assumed, so that nine variants of the innovative sewer system equipped with the retention canal system were obtained. For these models of the sewer system, the maximum value of the rainwater outflow from the outlet node in the innovative system Qo Imax and the calculative critical rainfall time for dimensioning the innovative rainwater canals t M were determined.
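For clarity, the nine analyzed variants correspond to all combinations of the three canal bottom slopes and the three damming baffle spacings; a minimal sketch of enumerating them is given below (R is used here only for illustration; the simulations themselves were run in SWMM 5.1).

```r
# The nine variants of the innovative sewer system: three canal bottom slopes
# combined with three damming-baffle spacings.
variants <- expand.grid(i_k_permille = c(1, 2, 3),
                        L_KR_m       = c(200, 300, 400))
variants   # 9 rows, one per simulated design variant
```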
Precipitation model and research methodology
Precipitation models are used in the design of rainwater and combined sewer systems and of the facilities working with them. They allow determining the relationship between the intensity of the critical rainfall, the rainfall duration and the probability of its occurrence. Knowledge of the critical rainfall is needed during hydrodynamic modelling [11,19].
The data on the functioning of the innovative rainwater system come from hydrodynamic modelling using the Bogdanowicz and Stachy rainfall model. It was developed on the basis of rainfall measurements from 20 meteorological stations of the Institute of Meteorology and Water Management in the years 1960-1990 in Poland [8,22]. It is a probabilistic model of maximum rainfall heights which considers the rainfall duration and the probability of occurrence [9]. It is described in publication [21] by formula (1), in which h is the maximum rainfall height (mm), t is the rainfall duration (min), p is the probability of rainfall occurrence, p ∈ (0,1], and α is a parameter depending on the region of Poland R and on the time t. The parameter α depends on the region of Poland and on the rainfall duration [22,21]. The precipitation model can be used for the whole of Poland except for mountainous regions. The rainfall model of Bogdanowicz and Stachy is recommended for rainfall frequencies C = 2, 5, 10 years [10]. The simulation of the phenomena in the sewer system was performed using hydrodynamic modelling with the Storm Water Management Model program (SWMM 5.1). The probability of rainfall occurrence was assumed to be p = 50%. The phenomena were simulated using the dynamic wave model, which can faithfully reflect the functioning of the sewer during water flows changing in time, the occurrence of backwater and the retention of rainwater in the sewer [39]. The flow along the sewer system is gravitational. The hydrodynamic models obtained reflect different conditions of sewer system operation, considering three different canal bottom slopes and three different spacings between damming baffles.
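As a hedged illustration, the Bogdanowicz and Stachy model is commonly cited in the form h = 1.42·t^0.33 + α(R, t)·(−ln p)^0.584; this form and the numerical value of α used below are assumptions for illustration only, and the parameterization actually applied in the simulations should be taken from [21,22].

```r
# Sketch of the Bogdanowicz-Stachy maximum rainfall height model in its commonly
# cited form; the regional parameter alpha depends on the region of Poland and
# on the rainfall duration, and the value used below is hypothetical.
rainfall_height_mm <- function(t_min, p, alpha) {
  1.42 * t_min^0.33 + alpha * (-log(p))^0.584
}

# Example: a 30-minute rainfall with probability of occurrence p = 0.5 (C = 2 years)
rainfall_height_mm(t_min = 30, p = 0.5, alpha = 4.7)
```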
Results and discussion
The analysis of the functioning of the innovative rainwater system in relation to the classical rainwater system was based on the results from Tables 1 and 2. Table 1 presents the data from the simulation for the traditional stormwater system with three variants of canal bottom slopes, namely i k = 1 ‰, i k = 2 ‰ and i k = 3 ‰. The drained catchment is F = 80 ha. For those three concepts, the values of the maximum rainwater outflow at the outlet node from the catchment Qo Tmax and the calculative times for sewage system dimensioning t m were determined. Table 2 shows the values of the parameters after equipping the classic stormwater system with damming baffles. Each concept of the innovative rainwater sewage system takes into account three different distances between damming partitions, namely L KR1 = 200 m, L KR2 = 300 m and L KR3 = 400 m. In that way, 9 different variants of innovative rainwater systems with retention canals were analyzed. The rainwater flow reduction coefficient β KR of the outflow from the innovative rainwater system was determined along with the parameters Qo Imax and t M. The cross-sections of conduits and the slopes of the canal bottoms were identical in each concept.
The data presented in Tables 1 and 2 are the results of simulations from hydrodynamic modelling [1]. They show that the value of the maximum rainwater outflow from the outlet node in the innovative system Qo Imax is lower than Qo Tmax in the classic stormwater sewer system in each case.
The results of the research presented in Tables 1 and 2 confirm that with an increase of the canal bottom slope i k, the critical time for dimensioning the traditional sewerage system t m decreases. At the same time, the rainwater outflow at the outlet node Qo Tmax increases. The canal slope i k directly affects the retention capacity of the innovative system: the rainwater retention effects of the system increase as the slope decreases. This is because the higher the canal slope i k, the higher the flow velocity of rainwater in the sewage ν TK, so that the rainwater is transported faster towards the outlet node and the flow time t p in the sewer decreases. Changes of the canal slope i k affect the value of the maximum rainwater outflow both in the classic sewage system Qo Tmax and in the innovative sewage system Qo Imax. This relation is shown in Fig. 7, which takes into account different spacings of the damming baffles L KR.
In the case of the traditional sewerage system, for a canal slope of i k = 1‰ the maximum rainwater outflow from the sewage system is Qo Tmax = 2887.7 dm^3/s. For a slope of i k = 2‰ the outflow is Qo Tmax = 3692.8 dm^3/s, and for a slope of i k = 3‰ the outflow intensity increases almost 1.5 times, to the value Qo Tmax = 4175.9 dm^3/s. For example, considering the innovative system for the bottom slope i k = 1 ‰ and a damming baffle spacing L KR1 = 200 m, it is possible to obtain an almost threefold reduction of flow, from Qo Tmax = 2887.7 dm^3/s to Qo Imax = 981.6 dm^3/s. For L KR2 = 300 m, the flow is reduced by 2.7 times (the outflow rate is 1063.4 dm^3/s). For L KR3 = 400 m, the outflow is reduced by almost 2.5 times (Qo Imax = 1159.6 dm^3/s) compared to the traditional storm sewer.
The results of the simulation showed that the spacing of the damming baffles measurably affects the reduction of the rainwater flow at the outflow from the drained catchment. The flow reduction effects increase with decreasing canal slope i k and with decreasing damming baffle spacing L KR. Table 3 presents the multiplicity of rainwater outflow reduction in the innovative rainwater system depending on the spacing of the damming partitions, for distances L KR1 = 200 m, L KR2 = 300 m and L KR3 = 400 m, and bottom slopes i k = 1 ‰, i k = 2 ‰ and i k = 3 ‰.
The above results show that after applying damming partitions in a traditional sewerage system with the slope i k = 1 ‰, the flow from the outlet decreases by more than two times in almost all cases and by almost three times in some cases. In the sewerage system with the slope i k = 2 ‰, there is a smaller reduction of the flow, from about 2.2 to 1.6 times. In the sewerage system with the slope i k = 3 ‰, the flow reduction was less than 2 times in every case.
Hydrographs are often used to reflect the variability of rainwater flow in the canal over time [2]. A comparison of the rainwater outflow variability from a traditional system and from an innovative system with a retention canal system over time t was based on the hydrograph shown in Figure 8.
In the case of classical stormwater systems, the hydrograph has an unfavorable pointed shape. The use of damming partitions causes the peak rainwater outflow intensity at the outlet node to be significantly reduced and the shape of the hydrograph to flatten. For a smaller distance between the damming baffles, the hydrograph flattens more. The studies [1] have confirmed that the spacing of the damming partitions L KR impacts the value of the parameter Qo Imax independently of the considered time t. The smaller the damming spacing, the greater the reduction of the outflow Qo Imax from the innovative system. Figure 9 shows the relationship between the critical time for rainwater sewage system dimensioning t m and the critical time for innovative rainwater sewage system dimensioning t M. Different damming baffle spacings L KR were also considered for the innovative sewer system. Establishing the correct value of the rain duration gives the basis for its dimensioning. However, in practice this turns out to be a very difficult task to solve because of the complexity of the studied phenomena [33].
As shown by the curves in Figure 9, the change of the canal slope i k directly affects the determined value of the critical time for dimensioning both the traditional and the innovative system. In both systems, an increase of the canal slope i k results in a decrease of the critical time for dimensioning the sewage system. As the spacing of the damming baffles L KR shortens, the value of the critical time t M increases. Many factors, especially the parameters characterizing the drainage catchment, impact the value of the critical rain duration. The biggest differences between the critical time for rainwater sewage system dimensioning t m and the critical time for innovative rainwater sewage system dimensioning t M occur at the slope i k = 1‰. Table 4 summarizes the values of the critical rain duration times t m and t M considering different slopes of the sewer canals i k and the baffle spacings L KR. For instance, in a traditional sewerage system with a slope i k = 1 ‰, the critical rainfall duration is t m = 32 min. After equipping this sewage system with damming partitions at L KR1 = 200 m, the critical time t M reaches 88 min. This gives a difference of ΔT = 56 min and is the highest recorded ΔT difference among all the considered design variants. For the L KR2 = 300 m spacing, the value is ΔT = 52 min, and for the L KR3 = 400 m spacing, the value is ΔT = 46 min. This leads to an important conclusion related to the damming partition spacing L KR: an increase of the spacing between the baffles reduces the difference ΔT between the critical times. The dependence between the critical time for the dimensioning of the traditional rainwater system t m and the critical time for the dimensioning of the retention canal system t M is described by the coefficient of the critical times γ TM [1], given by the following formula (2): γ TM = t M / t m (2)
where: t M -duration of the maximal (critical) rainfall determined for innovative rainwater sewage system dimensioning, min; t m -duration of the maximal (critical) rainfall determined for traditional rainwater sewage system dimensioning, min. The relationship between the coefficient γ TM and the sewer slope i k and the spacing of damming baffles L KR is shown in Figure 10. The results confirm the rule that the value of the coefficient γ TM is always larger than 1. This proves that the critical time for the dimensioning of retention sewer systems t M is always larger than the critical time for the dimensioning of traditional rainwater sewage system t m .
The value of coefficient γ TM decreases with increasing canal slopes i k and increasing distance between damming partitions L KR . The biggest values of the coefficient γ TM were determined for the canals slopes i k = 1 ‰ and they are γ TM > 2.
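As a quick check of formula (2) against the reported values, the variant i k = 1 ‰ with L KR1 = 200 m (t m = 32 min and t M = 88 min, as summarized in Table 4) gives γ TM = 2.75, consistent with the statement that γ TM > 2 for i k = 1 ‰; a minimal sketch follows.

```r
# Coefficient of the critical times, gamma_TM = t_M / t_m (formula (2)),
# for the reported variant i_k = 1 per mille, L_KR1 = 200 m.
t_m <- 32   # critical rainfall duration for the traditional system, min
t_M <- 88   # critical rainfall duration for the innovative system, min
gamma_TM <- t_M / t_m
gamma_TM    # 2.75, i.e. gamma_TM > 2 as reported for i_k = 1 per mille
```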
The rainwater flow reduction coefficient is another important parameter which characterizes the operation of the innovative rainwater system. This coefficient plays a key role in determining the usable capacity of a retention tank [2,5,6]. Its value depends on the inflow and outflow rates: the larger the volume of rainwater that must be retained, the smaller the value of the coefficient β. The value of the coefficient β is greater than zero and less than unity for a classical sewage system working with a retention reservoir [2,5,18]. In order to determine the reduction of the rainwater flow in the innovative rainwater sewage system equipped with the retention canal system, the rainwater flow reduction coefficient β KR was introduced. The value of the coefficient β KR is the ratio of the critical rainwater outflow intensity Qo Imax to Qo Tmax. It is determined by the following relation (3): β KR = Qo Imax / Qo Tmax (3), where Qo Tmax is the maximum value of the rainwater outflow from the traditional rainwater sewer at the outlet node (dm^3/s), and Qo Imax is the maximum value of the rainwater outflow from the innovative rainwater sewer at the outlet node (dm^3/s). By determining the value of the rainwater flow reduction coefficient β KR in the innovative sewer system, it can be confirmed that the use of retention canals provides the expected flow reduction. The smaller the value of the coefficient β KR, the greater the effects of rainwater retention in the sewage system with the retention canal system. When a classic stormwater sewage system is designed, the value of the coefficient β is determined at the initial stage; this value affects the required volume of retention tanks [2,5]. However, in the case of the innovative rainwater system, the value of the rainwater flow reduction coefficient β KR is not determined at the design stage but is a resulting value. It is calculated, on the basis of the formulated procedure, at the final stage of the simulation calculations conducted as part of hydrodynamic modelling.
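Using the outflows reported above for the variant i k = 1 ‰ and L KR1 = 200 m, formula (3) reproduces the reduction coefficient reported later in the text; a minimal sketch:

```r
# Rainwater flow reduction coefficient beta_KR = Qo_Imax / Qo_Tmax (formula (3)),
# computed from the reported outflows for i_k = 1 per mille, L_KR1 = 200 m.
Qo_Tmax <- 2887.7   # dm^3/s, traditional system
Qo_Imax <- 981.6    # dm^3/s, innovative system with retention canals
beta_KR <- Qo_Imax / Qo_Tmax
round(beta_KR, 2)   # 0.34, matching the value reported for this variant
```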
The effect of the canal bottom slope i k and the damming partition spacing L KR on the value of the reduction coefficient β KR is shown in Fig. 11. The value of this coefficient increases with a growth of the sewer slope i k. This is because the velocity of the rainwater flow through the sewage system increases and the rainwater is retained in the sewage system for a shorter time, so the retention capacity of the sewage system decreases. Another parameter that affects the value of the reduction coefficient β KR is the damming partition spacing L KR: the larger the damming baffle spacing L KR, the smaller the retention capacity of the sewage system and the larger the value of the β KR coefficient. On the basis of Figure 11 it can be ascertained that the largest differences between the values of the β KR coefficient are obtained when the slope of the canal bottom i k is changed. Considering a constant damming baffle spacing L KR and taking into account the change of the canal slope i k, the largest differences in the β KR coefficient were recorded at the spacing L KR = 400 m. For instance, by reducing the canal slope from i k = 3‰ (for which the reduction coefficient is β KR = 0.75) to i k = 1‰ (β KR = 0.40), it was possible to lower the coefficient β KR by as much as 0.35. For a change of the canal slope from i k = 3 ‰ to i k = 2 ‰ (β KR = 0.64), the difference between the reduction coefficients was 0.11. On the other hand, by decreasing the canal slope from i k = 2 ‰ to i k = 1 ‰, a reduction coefficient lower by as much as 0.24 was achieved. These results indicate that even with a large spacing of the damming baffles L KR, the use of retention rainwater canals in a sewage system with a low canal bottom slope i k is fully justified.
In the case of decreasing the distance between the damming partitions L KR while keeping the canal slope i k constant, the desired decrease of the value of the flow reduction coefficient β KR can be obtained. By reducing the distance between the damming baffles from L KR3 = 400 m to L KR1 = 200 m for the sewage system with the slope i k = 1‰, a β KR lower by 0.06 was obtained. For the sewage system with the slope i k = 2 ‰, this difference is 0.16, and in the case of the sewage system with the slope i k = 3‰, a value of the reduction factor β KR lower by 0.16 was obtained.
The key issue is the choice of an optimal solution [6]. Therefore, it is necessary to consider whether the use of a smaller damming partition spacing is justified and provides the expected effect of reducing the rainwater flow. The studies have confirmed that each solution should be analyzed individually, both economically and ecologically. For example, for the variant with the canal slope i k = 1 ‰ and the damming partitions located at L KR3 = 400 m, the flow reduction coefficient was β KR = 0.40. When reducing the spacing by 100 m, the flow reduction coefficient was β KR = 0.37, and when reducing the spacing by 200 m, the value of the β KR coefficient was 0.34. In this case, increasing the density of the damming partition spacing causes only a slight additional flow reduction. Considering the variant with the slope i k = 2 ‰ and L KR3 = 400 m, for which β KR = 0.64, decreasing the spacing by 100 m causes a decline of the value of this coefficient by 0.07, and decreasing the spacing by 200 m makes the coefficient β KR smaller by 0.16. In this situation, changing the spacing of the baffles affects the value of the outflow reduction coefficient more than in the previous variants. As the results for some variants show, the effect of rainwater flow reduction obtained with a close partition spacing is the same as, or only slightly better than, that obtained with a larger spacing. Therefore, it is recommended to consider the economic costs of the implementation and subsequent exploitation of the innovative system.
The study [2] demonstrates that there is a close relationship between the rainwater flow reduction coefficient β and the critical time for multi-chamber tank dimensioning T MW in the traditional rainwater sewage system. The work [1] showed that there is a close relationship between the rainwater flow reduction coefficient β KR in the innovative rainwater system and the critical time for innovative rainwater sewage system dimensioning t M. This phenomenon is shown by the curve in Figure 12.
This relationship was formulated based on the pairs of results for all variants presented earlier in Table 2, including the calculative time for the innovative rainwater sewage system dimensioning t M and the corresponding rainwater flow reduction coefficient β KR. Based on these, trend lines were created and the equation describing this relationship was determined. In conclusion, there is a close relationship between the time t M and the reduction coefficient β KR. The results are well fitted to the curve, as evidenced by the high value of the coefficient of determination R^2 = 0.9836. The studies [1] have shown that for specific design conditions, it is possible to establish an unambiguous curve of the relationship between the critical time t M and the reduction factor β KR.
Summary and final conclusions
The paper presents the possibilities of rainwater outflow control in an innovative rainwater system using canal retention. The hydraulic functioning of the traditional rainwater sewage system and of the innovative rainwater sewage system after equipping it with a retention canal system were compared. A total of 9 different functioning variants of the innovative rainwater system were analyzed. Each variant of the innovative system with damming baffles showed more favorable hydraulic conditions than an identical traditional system.
On the basis of the simulation studies and the analysis carried out on the model urban catchment, a number of important conclusions of cognitive and application significance can be formulated.
1. The value of the maximum rainwater outflow at the outlet node from the innovative rainwater system Qo Imax is always lower than the value of the maximum rainwater outflow from the identical traditional sewer system Qo Tmax.
2. Equipping the innovative system with damming partitions enables effective use of the sewage system capacity. This, in turn, reduces the rainwater outflow Qo Imax at the outlet node.
3. The slope of the canal bottom i k and the damming partition spacing L KR influence the value of the maximum rainwater outflow Qo Imax at the outlet node of the innovative rainwater system.
4. An increase of the rainwater outflow from the sewage outlet Qo Imax with a growth of the canal slope i k occurs regardless of the established damming partition spacing L KR; by analogy, a decrease of the rainwater outflow Qo Imax with a decrease of the canal slope i k also occurs regardless of the established damming partition spacing L KR.
5. Equipping the sewerage system with a system of retention canals allows a beneficial flattening of the rainwater outflow hydrograph. This affects the reduction of the required capacity of retention reservoirs cooperating with the sewerage system.
6. The critical rainfall time for dimensioning the retention canals operating in the innovative system t M is always greater than the critical rainfall time for dimensioning the traditional sewer system t m.
7. The value of the coefficient of the critical times γ TM is always greater than 1.0. This confirms the occurrence of the rainwater retention phenomenon in the retention canals of the innovative rainwater system.
On the basis of the analysis carried out, it was concluded that the key parameters in the innovative rainwater system are the slope of the canal bottom i k and the spacing between the damming partitions L KR. A properly designed sewage system with damming partitions allows full utilization of the sewage system capacity and flow reduction, replacing cubature objects. This solution can be successfully applied in new and existing sewerage systems instead of assigning new land for the construction of, for example, retention reservoirs. An innovative rainwater system provides efficient rainwater management and prevention of urban flooding. It can become a breakthrough, as well as a simple and effective solution to the problems associated with rainwater management.
"Environmental Science",
"Engineering"
] |
A method for statistical analysis of repeated residential movements to link human mobility and HIV acquisition
We propose a method for analyzing repeated residential movements based on graphical loglinear models. This method allows an explicit representation of residential presence and absence patterns from several areas without defining mobility measures. We make use of our method to analyze data from one of the most comprehensive demographic surveillance sites in Africa that is characterized by high adult HIV prevalence, high levels of poverty and unemployment and frequent residential changes. Between 2004 and 2016, residential changes were recorded for 8,857 men over 35,500.01 person-years, and for 12,158 women over 57,945.35 person-years. These individuals were HIV negative at baseline. Over the study duration, there were a total of 806 HIV seroconversions in men, and 2,458 HIV seroconversions in women. Our method indicates that establishing a residence outside the rural study area is a strong predictor of HIV seroconversion in men (OR = 2.003, 95% CI = [1.718,2.332]), but not in women. Residing inside the rural study area in a single or in multiple locations is a less significant risk factor for HIV acquisition in both men and women compared to moving outside the rural study area.
Introduction
This paper is concerned with modeling repeated residential movements of a group of individuals over a certain period of time, and with the assessment of the predictive associations between these multivariate patterns of residential changes and health outcomes of interest such as HIV acquisition. To a good extent, the statistical literature on human mobility has focused on the estimation of migration flows [1][2][3]. Migration flows are represented as origin-destination migration flow tables. These are square tables in which the rows and columns correspond with areas of interest. The (i, j) cell contains a count of the number of individuals that left area A i and moved to area A j over the course of a specified time frame. The inclusion of other categorical variables leads to higher-dimensional migration flow tables. However, migration flow tables cannot capture the movement of those individuals that resided in three or more areas during the time frame of observation. For example, an individual that left area A 1 to move to area A 2, and then moved again to area A 3, would contribute a count of 1 to each of the (1,2) and (2,3) cells of the resulting migration flow table, but the link between the residential movements associated with the same individual would be lost.
Other important classes of statistical models for human mobility are Lévy flight models [4] and multiplicative latent factor models [5]. Lévy flight models make use of a power law to represent the probability that an individual changes their residence over a certain distance. Under this model, moving over a shorter distance is more likely than moving over longer distances, but residential movements over longer distances can still take place even if they occur less often. Multiplicative latent factor models improve on the Lévy flight models through their ability to quantify the desirability of residing in certain areas over other areas. Both the Lévy flight models and the multiplicative latent factor models are based on the crude assumption that human travel can be seen as a Markov process in which the probability of residing in an area depends only on the area in which the previous residence was located, and does not depend on the locations of earlier residences. However, it is possible that individuals move repeatedly across multiple areas over longer time periods of several years. Markov process models break residential trajectories that involve multiple residential locations into pairs of consecutive locations of residency, and, by doing so, lose key dependencies that are induced by multiple locations of residency of the same individuals in a reference time frame.
Information about residential locations has also been used in statistical models through the construction of mobility measures; see, for example, [6] and the references therein. These measures are summaries of distances between consecutive residencies, or of the time spent in certain locations. While mobility measures can be successfully used as independent variables in a wide range of statistical models, the connection between these measures and the areas in which individuals have resided is lost. The method for analyzing repeated residential movements we follow in this paper allows an explicit representation of residential presence and absence patterns from several areas without defining mobility measures. As such, this method offers a new perspective on what can be learned from this important type of human mobility data.
We assume that the residential locations of N individuals belong to K areas denoted by {A 1 , A 2 , . . ., A K }. For each individual, we know which areas they resided in. These data can be represented as an N × K mobility matrix M = (m nk ), where m nk = 1 if individual n resided in area k, and m nk = 0 if individual n did not reside in area k. Our framework does not impose any constraints on the number of individuals N, or on the number of areas K. Other categorical variables of interest can be recorded as additional columns in the mobility matrix M. By counting the number of times the same combination of levels of the categorical variables in M appears as rows of this matrix, a multi-dimensional contingency table is formed [7]. We propose representing the multivariate patterns of associations in this contingency table with graphical loglinear models, which are a special type of hierarchical loglinear models [8,9]. These models are determined by graphs that have vertices associated with each area. They characterize the multivariate dependency structure (e.g., independence or conditional independence) among random variables using graphs [9]. The complete subgraphs of these graphs define interaction terms of joint presence and absence patterns from two, three or several areas. A missing edge between two areas means that, conditional on presence or absence in the rest of the areas, the presence or absence of a random individual in the first area is independent of the presence or absence of the same individual in the second area.
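To make the construction concrete, the following minimal sketch (in R, with a hypothetical toy example of five individuals and three areas) builds a mobility matrix M and collapses it into the induced contingency table of presence/absence patterns.

```r
# Toy mobility matrix M: rows are individuals, columns are areas A1-A3,
# m_nk = 1 if individual n resided in area k and 0 otherwise.
M <- matrix(c(1, 0, 1,
              1, 0, 0,
              1, 0, 1,
              0, 1, 0,
              1, 1, 1),
            ncol = 3, byrow = TRUE,
            dimnames = list(NULL, c("A1", "A2", "A3")))

# Count how many individuals share each presence/absence pattern: this collapses
# M into a 2 x 2 x 2 contingency table (shown here in long format).
as.data.frame(table(as.data.frame(M)))
```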
A key step in data analysis with graphical loglinear models consists of the estimation of the underlying graph. This is called the structural learning problem [10,11], and it becomes a very difficult computational problem when many random variables are involved [12,13]. Bayesian methods provide a flexible framework for incorporating uncertainty of the graph structure: inference and estimation are based on averages of the posterior distributions of quantities of interest, weighted by the corresponding posterior probabilities of graphs [14]. Here we follow a Bayesian approach for solving the structural learning problem.
The goal of our statistical analysis is to identify graphs that have vertices associated with each area in the corresponding graphical loglinear models. Based on this approach, we examine the predictive value of residential locations as a driver of HIV transmission risk in a comprehensive population-based demographic surveillance site in the KwaZulu-Natal Province, South Africa: the Africa Centre, now the Africa Health Research Institute (AHRI) [15]. Specifically, we analyze mobility patterns of 21,015 individuals who were HIV negative at baseline and were registered in the AHRI demographic surveillance system. Their mobility patterns are defined by residential histories over the study period. The AHRI site is characterized by high adult HIV prevalence (24% in adults aged 15 years and older in 2011), and high levels of poverty and unemployment (in 2010, 67% of adults over the age of 18 in the rural study area were unemployed) [16]. The geographical location of this demographic surveillance area is ideal for our aim.
Background
Historically, human mobility has been one of the key drivers in the spread of HIV at a global scale [17][18][19][20][21][22][23][24][25][26][27][28][29]. Many studies have provided significant evidence linking increased population mobility with multiple sexual partners, reduced condom use, increased risky behavior (e.g., encounters with commercial sex workers, engaging in transactional sex) [30][31][32], increased sexual behavior [20,[33][34][35][36][37][38], and increased likelihood of HIV acquisition [6,28,39]. Mining settlements, transport corridors, or poor urban or periurban communities exacerbate the effect of the risk factors of HIV acquisition [28,40,41]. It has been empirically demonstrated that an individual's risk of acquisition of HIV is strongly driven by community-level HIV prevalence [16], community-level migration intensity [42], mean number of sexual partners in the surrounding local community [43], as well as ART coverage and population viral load in the local community, respectively [16,44]. These community-level risk factors confer substantial additional risk of new HIV infection after controlling for a suite of well-established individual-level risk factors.
In South Africa, which is the focus of this study, the risk of HIV infection has been shown to be increased by human mobility [19,45,46]. South Africa is one of the countries with the highest burden of HIV, and has a long history of internal labor migration of men that periodically leave their areas of permanent residence to seek temporary employment in mines and factories due to the scarcity of local employment [47]. During the apartheid era, which imposed travel restrictions for Blacks, women were typically left behind to take care of families, while men sent remittances back to their households. Because of economic conditions, this way of life continues to exist in poor rural regions of South Africa, including this rural study community. However, as opposed to the apartheid era, in the last decade both men and women frequently establish residencies for various periods of time, to work or for many other reasons, in locations within the KwaZulu-Natal Province (e.g., Richards Bay or Durban), or in other more distant locations in South Africa (e.g., Johannesburg, Pretoria or Cape Town) [6].
The rapid increase in the adult HIV prevalence in South Africa, from 0.7% in 1990 to 13% in 2000 [48,49], is broadly consistent with ongoing patterns of circular labor migration within the country and increased in-migration from neighboring countries after the collapse of Apartheid [50][51][52]. For example, a phylogenetic study from the KwaZulu-Natal province reveals that external introductions in the early 1990s, via human movement from neighboring countries, played a vital role in driving the early HIV epidemic [53]. Historically, patterns of circular migration in South Africa were shaped by the migrant labor policies of the Apartheid system. From the 1950s until the democratic transition in 1994, Apartheid authorities sought to consolidate white rule by developing urban centers and resettling black Africans into rural and undeveloped homelands. Racial segregation and resettlement were seen as a more rational distribution of African labor between white cities, industries, mines, and farms [54]. Men had to migrate from their homeland residencies to their workplaces for long periods of time, without the possibility of their families joining them [55]. Because of these separate spheres of living, migrant men took other partners and formed second families at the places where they worked [56,57], thus increasing the risk of HIV infection and the probability of transmission upon returning home. Apartheid policies had a profound effect on the stability of the family system, a demographic reality that drove the spread of HIV in the 1990s and thereafter.
Efforts to contain the HIV epidemic after 2000 were stalled by the South African government's refusal to make ART available at public health-care facilities nationwide [58,59]. This refusal was motivated by AIDS denialism among government officials, who claimed that HIV was not the cause of AIDS, that ART was toxic, and that the spread of HIV was being over-sensationalized [60,61]. During this time, the adult HIV prevalence increased to 15.2% [49] and was as high as 29.5% among pregnant women attending antenatal clinics [48]. Following public pressure from AIDS activists and civil society organizations, the South African government made ART available with a CD4+ T-cell count eligibility criterion of <200 cells/μL in 2004 [62]. In 2010, treatment eligibility was extended to pregnant women with CD4+ T-cell counts <350 cells/μL and to patients with active tuberculosis [62]. By 2012, the HIV prevalence among 15-49 year-olds was 18.8% [63], and it reached 20.6% in 2017 [64].
Study setting
The study was conducted in the Africa Health Research Institute (AHRI) Population Intervention Platform Study Area (PIPSA), formerly the Africa Centre Demographic Information System (ACDIS), in uMkhanyakude District, KwaZulu-Natal Province. PIPSA was commissioned in 2000 by the Wellcome Trust as a platform for longitudinal population-based studies of epidemiology and intervention research. This rural study area covers 438 km^2, and comprises approximately 11,000 households with 100,000 individuals. This community is characterized by high HIV prevalence, frequent migration, low marital rates, late marriage especially for men, polygamous marriages and multiple sexual partnerships, as well as by poor knowledge and disclosure of HIV status [15,39,56,65]. Incidence peaked at 6.6 per 100 person-years in women aged 24 years, and at 4.1 per 100 person-years in men aged 29 years over the same period [39].
For over 15 years, PIPSA has continuously collected longitudinal surveillance data on a range of health care and social intervention exposures, as well as health, socio-economic and behavioral outcomes [15]. During the household data collection cycle, households are visited every 6 months by fieldworkers and information is supplied by a single key informant.
Population-based HIV surveillance and sexual behavior surveys take place annually. Since 2003, annual HIV testing became part of household surveillance.
Study eligibility
Starting in 2007, all adults and adolescents aged 15-17 residing in the rural study area who were able to provide written consent were eligible to participate in the study. From 2003 to 2006, eligibility was restricted to women aged 15-49 years and men aged 15-54 years. We note that, although individuals under 18 are legal minors, under South African law they can consent independently to medical treatment from the age of 14. Minors can legally consent independently to an HIV test from the age of 12, when it is in their best interest, and below the age of 12 if they can understand the benefits, risks and social implications of an HIV test [66].
Ethics statement
Informed written consent was obtained from all eligible individuals. After signing the consent form, eligible participants are interviewed in private by trained fieldworkers, who also extract blood from consenting individuals by finger-prick for HIV testing and prepare dried blood spots for HIV testing according to the Joint United Nations Programme on HIV/AIDS (UNAIDS) and World Health Organization (WHO) Guidelines for Using HIV Testing Technologies in Surveillance [15]. Ethics approval for data collection and use was obtained from the Biomedical Research and Ethics Committee (BREC) of the University of KwaZulu-Natal (Durban, South Africa), BREC approval number BE290/16. The BREC was aware that some of the study participants were legal minors, and approved the age range of participation and the specific consent procedure for minors.
Cohort description
From the entire population under surveillance in PIPSA between January 1, 2004 and December 31, 2016, we selected those individuals who consented to test at least twice for HIV after the age of 15, and whose first test was negative. Although the annual participation rates in HIV testing are not high (see Table 1), a total of 8,857 men and 12,158 women satisfied these inclusion criteria. Participants seldom test every year, and, in this cohort, the median time between the last HIV-negative and the first HIV-positive tests in men was 3.34 years (IQR = 4.64), and in women 2.58 years (IQR = 3.69). The date of HIV seroconversion was assumed to occur uniformly at random between the date of the last negative and the date of the first positive HIV test [67]. Here seroconversion refers to the transition from infection with the HIV virus to the detectable presence of HIV antibodies in the blood. Fig A from S1 Supporting Information gives the crude annual consent rates, while Fig B from S1 Supporting Information shows the consent rates by age group and gender. Although the overall consent rate changes over time, there does not seem to be any relevant difference in consent by sex and age.
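A minimal sketch of the uniform imputation of the seroconversion date between the last negative and the first positive test is given below; the two test dates are hypothetical.

```r
# Draw a seroconversion date uniformly between the last HIV-negative and the
# first HIV-positive test dates [67]; the dates below are hypothetical.
set.seed(1)
last_negative  <- as.Date("2009-06-15")
first_positive <- as.Date("2012-02-01")
candidate_days <- seq(last_negative, first_positive, by = "day")
imputed_seroconversion <- sample(candidate_days, 1)
imputed_seroconversion
```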
PIPSA collects data about all the individuals that are members of a family unit or a household in the rural study area, irrespective of their current residency status. It collects longitudinal residential information about the exact periods of time each study participant spent living in each location. Fieldworkers record changes in residency as the origin place of residence, the destination place of residence and the date of the move. Residencies can be located inside or outside the rural study area. The residential locations inside the rural study area have been comprehensively geolocated to an accuracy of <2 m [68]. Repeat-testers can change their place of residence multiple times: they can move between two residencies located inside the rural study area, between two residencies located outside the rural study area, or between a residency inside the rural study area and another residency outside the rural study area. The relevance of examining whether repeat-testers have resided outside the rural study area comes from the findings of Dobra et al. [6]. Their results indicate that, for the same rural study area, the risk of HIV acquisition is significantly increased for both men and women when they spend more time outside the rural study area, or when they change their residencies over longer distances.
For the purpose of this study, the geolocations of the homesteads have been mapped into 45 non-overlapping communities that cover the rural study area; see Figs E and F in S1 Supporting Information. The division of the rural study area into communities is motivated by the results of Tanser et al. [69]. Their study identified a significant geographical variation in HIV incidence in the same rural study area. Specifically, they identified three large irregularly-shaped clusters of new HIV infections. Although these clusters cover only 6.8% of the rural study area, about 25% of the seroconversions that occurred over this study's period are associated with residencies in them. This suggests the existence of clear corridors of HIV transmission inside the rural study area. Together, the results of Dobra et al. [6] and Tanser et al. [69] indicate that men and women who reside outside the rural study area, or occupy residencies located in the corridors of HIV transmission inside the rural study area, are at an increased risk of acquiring HIV.
We note that the exposure period for a repeat-tester starts at the time of their first HIV test, and ends at their HIV seroconversion date for seroconverters, or at the time of their last HIV-negative test for those that did not seroconvert. The residential locations occupied before seroconversion could have contributed to changes in sexual behavior that led to HIV acquisition, while residential locations occupied after seroconversion could be associated with repeat-testers seeking family support or health care, or moving away to avoid social stigma [22,38]. For this reason, the residential locations occupied by seroconverters after they acquired HIV were discarded.
Statistical analyses
We determined in which of the 45 communities each of the 8,857 men and 12,158 women lived during the study period. This information was recorded as binary variables C1, C2, . . ., C45 with levels "yes" or "no" in two mobility matrices, one for men and one for women. We also determined whether a repeat-tester moved outside the rural study area. This information was recorded as a binary variable Outside with levels "yes" or "no". Furthermore, we determined whether a repeat-tester has seroconverted, and whether a repeat-tester was younger than 30 years at the start of their observation period. This information was recorded as two additional binary variables Seroconverted and Young with levels "yes" or "no". For example, a repeat-tester that lived in communities C1 and C2, moved outside the rural study area, was older than 30 years at baseline, and has seroconverted, would have C1 = C2 = Outside = Seroconverted = yes and C3 = . . . = C45 = Young = no. The data in the resulting mobility matrices involve 48 binary variables. The mobility matrix for men is available in S1 Data, and the mobility matrix for women is available in S2 Data. They define two dichotomous contingency tables with 2^48 cells, one table for men and another table for women. These tables, which we call mobility tables, are hyper-sparse: most of their counts are zero. The mobility table for men has only 598 positive counts (see Table E in S1 Supporting Information). Among these counts, there are 292 (48.83%) counts of 1, 48 (8.03%) counts of 2, 30 (5.02%) counts of 3, 28 (4.68%) counts of 4, and 13 (2.17%) counts of 5. The top five largest counts are 192, 186, 180, 177 and 168, respectively. They correspond with men that were less than 30 years old at the start of their observation period, did not seroconvert by the end of their observation period, never moved outside the rural study area, and lived in exactly one of these communities: C7, C37, C40, C39 and C22. The mobility table for women has only 939 positive counts (see Table F in S1 Supporting Information).
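As an illustration of how a single row of the mobility matrix is coded, the sketch below reconstructs the example repeat-tester described above (resided in C1 and C2, moved outside the rural study area, older than 30 years at baseline, seroconverted).

```r
# One row of the 48-variable binary mobility matrix for the example repeat-tester.
vars <- c(paste0("C", 1:45), "Outside", "Seroconverted", "Young")
record <- setNames(rep("no", length(vars)), vars)
record[c("C1", "C2", "Outside", "Seroconverted")] <- "yes"
record[c("C1", "C2", "C3", "Outside", "Seroconverted", "Young")]
```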
Statistical modeling framework
In this paper we make use of a Bayesian framework for solving the structural learning problem that is suitable for the analysis of hyper-sparse contingency tables with p = 48 variables. This framework [70] determines graphical loglinear models that are a special type of hierarchical loglinear models [8,9]. A graphical model for a random vector X = (X 1 , X 2 , . . ., X p ) is specified by an undirected graph G = (V, E), where V = {1, . . ., p} is the set of vertices or nodes, and E ⊆ V × V is the set of edges or links [9]. A vertex i ∈ V of G corresponds with variable X i . The absence of an edge between vertices i and j in G means that X i and X j are conditionally independent given the remaining variables X V\{i,j} . The graph G also has a predictive interpretation. Denote by nbd G (i) = {j ∈ V: (i, j) ∈ E} the neighbors of vertex i in G. Then X i is conditionally independent of X V\(nbd G (i) ∪ {i}) given X nbd G (i) , which implies that, given G, a mean squared optimal prediction of X i can be made from the neighboring variables X nbd G (i) . The structural learning problem estimates the structure of G (i.e., which edges are present or absent in E) from the available data x = (x (1) , . . ., x (n) ) by sampling from the posterior distribution of G conditional on the data x, i.e., Pr(G | x) ∝ Pr(x | G) Pr(G), (1)
where Pr(G) is a prior distribution on the graph space G p with p variables, and Pr(x | G) is the marginal likelihood of the data conditional on G [10]. We use a prior on the space of graphs G p that encourages sparsity by penalizing the inclusion of additional edges in the graph G = (V, E) [10]: Pr(G) ∝ β^|E| (1 − β)^(p(p−1)/2 − |E|), (2) where β ∈ (0, 1) is set to a small value, e.g. β = 1/C(48,2) = 1/1128 ≈ 0.00089. Under this prior, the expected number of edges of a graph is 1. This means that sparser graphs with few edges receive larger prior probabilities compared with denser graphs in which most edges are present.
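A minimal sketch of the edge-penalizing prior (2), assuming the independent-edge form implied by the text, shows how sparser graphs receive larger prior probability:

```r
# Log prior probability (up to a constant) of a graph with n_edges edges among
# p = 48 vertices, assuming Pr(G) proportional to
# beta^|E| * (1 - beta)^(choose(p, 2) - |E|); with beta = 1 / choose(48, 2),
# the expected number of edges is 1.
p <- 48
n_pairs <- choose(p, 2)      # 1128 possible edges
beta <- 1 / n_pairs          # ~0.00089

log_graph_prior <- function(n_edges) {
  n_edges * log(beta) + (n_pairs - n_edges) * log(1 - beta)
}

log_graph_prior(1) > log_graph_prior(10)   # TRUE: sparser graphs are favored
```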
Determining the graphs with the highest posterior probabilities (1) is a complex problem since the number of possible undirected graphs, 2^(p(p−1)/2), becomes large very fast as p increases. For example, our two mobility tables involve p = 48 variables, and the number of possible undirected graphs in G 48 is approximately 10^325. This motivated the development of computationally efficient search algorithms for exploring large spaces of graphs that have the ability to move quickly towards high posterior probability regions by taking advantage of local computation. Among them, the birth-death Markov chain Monte Carlo (BDMCMC) algorithm [70] determines graphical loglinear models. BDMCMC is a trans-dimensional MCMC algorithm that is based on a continuous time birth-death Markov process [71]. Its underlying sampling scheme traverses G p by adding and removing edges corresponding to birth and death events. This algorithm is implemented in the package BDgraph [72,73] for R [74]. By employing the BDgraph package, we ran the BDMCMC algorithm for 250,000 iterations to sample graphs from the posterior distribution (1) on G 48 for the mobility tables for men and women. Figs C and D in S1 Supporting Information give the estimated posterior inclusion probabilities of the C(48,2) = 1128 possible edges across iterations. We see that, after about 50,000 iterations, the subsequent posterior edge inclusion estimates stabilize. For this reason, the first 50,000 sampled graphs were discarded as burn-in, and the remaining 200,000 sampled graphs were used to estimate posterior edge inclusion probabilities.
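A hedged sketch of running the BDMCMC sampler with the BDgraph package is given below. The function and argument names reflect the package documentation as we recall it and should be verified against the installed version; the choice method = "gcgm" is an assumption (the discrete loglinear setting of the paper may require a different method), and mobility_men stands for the 8,857 × 48 binary mobility matrix, which is not constructed here.

```r
# Hedged sketch: structural learning on the binary mobility matrix with BDgraph.
# `mobility_men` is assumed to be a 0/1 matrix with 48 columns; the method,
# argument names and defaults should be checked against the BDgraph documentation.
library(BDgraph)

fit <- bdgraph(data = mobility_men, method = "gcgm",
               algorithm = "bdmcmc", iter = 250000, burnin = 50000)

# Posterior edge-inclusion probabilities for all choose(48, 2) = 1128 vertex pairs
edge_probs <- plinks(fit)
```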
Limitations
Representing residential locations data as mobility matrices leads to information loss, as follows: (i) the order in which an individual resides in two or more areas is no longer accounted for; (ii) residential movements that occur within the same area are missed; (iii) the amount of time an individual maintains a residence in the same area is overlooked; and (iv) the number of times an individual establishes a residence in the same area is lost. Although this loss of information can be seen as significant, the major advantage of our proposed methodology for analyzing repeated residential movements is its ability to capture repeated presence and absence patterns from several areas. For this purpose, mobility matrices suffice.
Another limitation is related to the graphs identified by structural learning in graphical loglinear models. The prior on the graph space (2) gives the same probability of existence of an edge between any two areas irrespective of the actual spatial distance between them. In this application, the use of this prior is justified: there is no reason to assume that more distant areas are less likely to be connected than areas that are closer to each other. In fact, as we will see in the Results section, the repeat-testers were more likely to make residential movements between more distant locations (e.g., a location inside the rural study area and another location outside the rural study area) than between less distant locations (e.g., two locations inside the rural study area). As such, while specifying a prior on the graph space that takes actual physical distances between areas into account is mathematically possible [75], the use of this type of spatial prior in this study was not necessary.
A third limitation of our study is related to mapping the locations inside the rural study area into 45 communities (spatial units), and of all the locations outside the rural study area into an additional spatial unit. These specific choices could induce biases related to the modifiable areal unit problem (MAUP) [76,77]. MAUP identifies the inevitable statistical bias that occurs due to scale (i.e., different sized spatial units) and zoning (i.e., different definitions of boundaries used to define spatial units). Due to MAUP, altering the choices of spatial units employed in a statistical analysis could potentially affect the results reported in a significant manner. However, in our application, the spatial units employed were not arbitrary: the 45 communities have not been defined for the purpose of this study alone. Instead, these communities were employed in several studies conducted in AHRI/PIPSA-see, for example, [69]. These communities have specific social, economic and demographic relevance for the rural study area. For this reason, reporting results based on spatial units constructed with respect to these 45 communities is meaningful.
Descriptive summaries
We recorded residency changes for 8,857 men over 35,500.01 person-years, and for 12,158 women over 57,945.35 person-years. The median observation period for men was 3.72 years (IQR = 4.00), while the median observation period for women was 4.41 years (IQR = 5.47). Tables 2, 3, 4 and 5 give cumulative durations of exposure of the repeat-testers stratified by age, calendar year, marital status and education level. The calculation of person-years is based on a random imputation of the seroconversion date between the date of the last negative and first positive test for HIV sero-converters [67], and on the date of the last negative test for those who are censored. We see that longer exposure periods are recorded for younger study participants between 15 and 24 years old. The length of exposure over calendar years remains relatively unchanged between 2005 and 2011, but has a slight tendency to decrease.

Table 6 gives seroconversion rates stratified by gender, age (younger or older than 30 years at baseline), and residency outside the rural study area. The largest seroconversion rate, 22.47% (95% CI: 21.49-23.45), is for young women who resided in the rural study area for their entire exposure period. The seroconversion rate for young women who resided outside the rural study area is slightly lower: 19.20% (95% CI: 17.88-20.52). The largest seroconversion rate for men is 13.24% (95% CI: 11.76-14.72), and corresponds to the young group that moved outside the rural study area. The seroconversion rate for young men who did not move outside the rural study area is significantly lower: 7.56% (95% CI: 6.88-8.24). Table 6 also shows that the seroconversion rates for both men and women in the older age group are higher for the repeat-testers that moved outside the rural study area as compared to the repeat-testers that did not move outside the rural study area.

We determined the number of repeat-testers that moved their residence between any two communities, or between a community and a location outside the rural study area. The resulting mobility flow diagrams are shown in Figs 1 and 2. We see that, while men and women move between the 45 communities, substantially larger flows are associated with changes of residencies to and from locations outside the rural study area.

Table 7 gives a summary of the frequency of residential movements inside the rural study area, and also between a location outside the rural study area and another location inside or outside the rural study area, by age group and gender. Women in the 20-24 age group move outside the rural study area more often than men in the same age group (26.56% vs. 23.31%). Residential movements outside the rural study area become less frequent for women in the 25-29 age group, but are comparable in frequency with residential movements of men in the 25-29 age group. Men in the 30-34 age group move to and from locations outside the rural study area more frequently than women in the 30-34 age group. Residential movements outside the rural study area of women become significantly less frequent in the age groups 35-39, 40-44 and older than 45 as compared to residential movements of men in the same age groups. Residential movements inside the rural study area of both men and women are substantially less frequent than residential movements to and from a location outside the rural study area in any age group. However, inside the rural study area, women tend to be more mobile than men in the younger age groups.
We remark that residential movements inside the rural study area occur over much smaller distances (mean = 10.44 km, IQR = 9.14 km) compared to residential movements that involve locations outside the rural study area (mean = 128.50 km, IQR = 178.33 km). In Table 6, the repeat-testers are cross-classified by whether they moved outside the rural study area (Outside: Yes/No) and whether they were less than 30 years old at the start of the study (Young: Yes/No).
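The exposure calculation described above (random imputation of the seroconversion date for sero-converters, censoring at the last negative test otherwise) can be sketched as follows; the dates and the helper name are hypothetical, and the sketch ignores the cohort entry and exit details handled in the actual analysis.

```python
from datetime import date, timedelta
import numpy as np

rng = np.random.default_rng(42)

def exposure_years(first_negative, last_negative, first_positive=None):
    """Person-years of exposure for one repeat-tester.

    Sero-converters: the unobserved seroconversion date is imputed uniformly at
    random between the last negative and the first positive test.
    Censored individuals: exposure ends at the last negative test.
    """
    if first_positive is not None:
        window_days = (first_positive - last_negative).days
        end = last_negative + timedelta(days=int(rng.integers(0, window_days + 1)))
    else:
        end = last_negative
    return (end - first_negative).days / 365.25

# A sero-converter first tested in 2005, last negative mid-2007, first positive early 2009.
print(round(exposure_years(date(2005, 3, 1), date(2007, 6, 1), date(2009, 2, 1)), 2))
```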
Graphical loglinear models for mobility tables
As our estimate of the conditional independence graph, we take the median graph, which includes the edges with estimated posterior inclusion probabilities greater than 0.5. The median graph for men's mobility table has 995 edges, while the median graph for women's mobility table has 1,022 edges. We refer to these two graphs as men's and women's mobility graphs.
The overall structure of the two mobility graphs is remarkably similar. In the men's mobility graph, the vertex associated with the variable Outside is connected with the vertices associated with 33 out of the 45 communities; see the map in Fig E in S1 Supporting Information. The subgraph that involves vertices associated with the 45 communities is dense: it has 961 edges (97.07% of the 990 possible edges). In the women's mobility graph, the vertex Outside is connected with vertices associated with 39 out of 45 communities; see the map in Fig F in S1 Supporting Information. The subgraph associated with the 45 communities is also dense: it has 981 edges (99.09% of the 990 possible edges). In both graphs, there is no edge between the vertices associated with variables Seroconverted and Young, and the community vertices. This implies that, conditional on the variable Outside, the variables Seroconverted and Young are independent of the community variables C1, . . ., C45 for both men and women. The most relevant differences between the two mobility graphs are related to the edges that link the variables Outside, Seroconverted and Young; see Figs 4 and 5. For men, vertex Outside is connected with vertex Seroconverted, but the edges between vertices Outside and Young, and between vertices Seroconverted and Young are missing. For women, the situation is reversed: the edges between vertices Outside and Young, and between vertices Seroconverted and Young are present, but the edge between vertices Outside and Seroconverted is missing. This has the following implications: (a) for men, variable Young is independent of variables Outside and Seroconverted; (b) for men, only variable Outside is predictive of variable Seroconverted; (c) for women, variable Young is predictive of variable Seroconverted; and (d) for women, given variable Young, variable Seroconverted is independent of variable Outside.
The presence of an edge between Outside and Seroconverted in the subgraph for men means that whether a man moved outside the rural study area is predictive of whether he seroconverts (unadjusted OR = 2.003, 95% CI = [1.718, 2.332]). The absence of an edge between Young and Seroconverted in the same subgraph means that age has less predictive power for the HIV seroconversion of a man given that we know whether this man had a residence outside the rural study area. We point out that this does not imply that age is not a risk factor for HIV acquisition in men. For women, the relative predictive importance of moving outside the rural study area and age is reversed: the edge between Outside and Seroconverted is missing, while the edge between Young and Seroconverted is present. Whether a woman is younger than 30 years is predictive of whether she seroconverts (unadjusted OR = 3.091, 95% CI = [2.693, 3.561]). However, given that we know the age of a woman, knowing whether she moved outside the rural study area has less predictive power for HIV seroconversion. As such, residential location seems to matter less for women as a risk factor for HIV acquisition in the presence of age. As an aside, we mention that the presence of an edge that links vertices Outside and Young in the women's mobility graph makes sense: women younger than 30 years are more likely to move outside the rural study area (unadjusted OR = 3.176, 95% CI = [2.787, 3.633]). This edge is missing in the men's mobility graph because the relationship between variables Young and Outside is weaker (unadjusted OR = 1.306, 95% CI = [1.124, 1.523]).
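The unadjusted odds ratios quoted above are standard 2x2-table quantities; a minimal sketch of how such an estimate and its Wald 95% confidence interval are computed is given below. The counts used here are hypothetical; the actual counts are those underlying the mobility tables.

```python
import numpy as np
from scipy.stats import norm

def unadjusted_or(a, b, c, d):
    """Odds ratio and Wald 95% CI from a 2x2 table:
    a = exposed & seroconverted,   b = exposed & not seroconverted,
    c = unexposed & seroconverted, d = unexposed & not seroconverted."""
    or_hat = (a * d) / (b * c)
    se_log = np.sqrt(1 / a + 1 / b + 1 / c + 1 / d)      # standard error of log(OR)
    z = norm.ppf(0.975)
    ci = (np.exp(np.log(or_hat) - z * se_log), np.exp(np.log(or_hat) + z * se_log))
    return or_hat, ci

# Hypothetical counts for men: moved outside vs. seroconverted.
print(unadjusted_or(a=320, b=1800, c=410, d=4600))
```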
Since the structure of interactions among variables Outside, Seroconverted and Young is essential for our understanding of the mobility tables, we performed a second statistical analysis of the three-way tables cross-classifying these variables; see Tables A and B in S1 Supporting Information. This time we followed a classical approach to hierarchical loglinear model determination [7,78] that also solves the structural learning problem, but is conceptually different from the Bayesian approach implemented in the BDMCMC algorithm. We note that this classical approach is suitable for analyzing these two tables because they involve only three variables and they do not contain any counts of 0. However, this approach is not feasible for analyzing the 48-dimensional mobility tables for men and women due to sparsity and the number of variables involved. Specifically, we fitted the eight hierarchical loglinear models that contain main effects for variables Outside, Seroconverted and Young, and also one, two or all three of the pairwise interactions between these variables. The results are presented in Tables C and D in S1 Supporting Information.

Table 7. Percentages of repeat-testers stratified by gender who changed residences between a location outside the rural study area and another location inside or outside the rural study area (outside residency changes, upper panel), or between two locations inside the rural study area (inside residency changes, lower panel).
For men, the loglinear model that contains interactions between variables Seroconverted and Outside, and between variables Outside and Young, and the loglinear model that contains all three pairwise interactions do not fit the data well: the p-values for the likelihood ratio test against the saturated loglinear model are 0.348 and 0.215, respectively. The other six hierarchical models fit the data well at the significance level α = 0.05. To select the most relevant model among the remaining six models, we calculated their AIC and BIC. The smallest values for both AIC and BIC are realized for the model that contains the interaction between Seroconverted and Outside, and no interaction involving variable Young. This is precisely the graphical loglinear model we determined before using the BDMCMC algorithm; see Fig 4. For women, the loglinear model that contains all three pairwise interactions does not fit the data well (p-value = 0.264). The other seven hierarchical models fit the data well at the significance level α = 0.05. Among these seven models, the model that has the minimum value for both AIC and BIC contains interactions between variables Outside and Young, and between variables Seroconverted and Young. As for men, we found the same graphical loglinear model as we did before using the BDMCMC algorithm for the women's mobility table; see Fig 5.
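As an illustration of the classical analysis, hierarchical loglinear models for a three-way table can be fitted as Poisson generalized linear models; the sketch below shows one candidate model, its likelihood-ratio (deviance) test against the saturated model, and its AIC/BIC. The cell counts are hypothetical stand-ins for Tables A and B in S1 Supporting Information, and the BIC reported by statsmodels may be defined slightly differently from the one used here.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from scipy.stats import chi2

# Hypothetical 2x2x2 table for Outside (O), Seroconverted (S), Young (Y).
df = pd.DataFrame({
    "O":     [0, 0, 0, 0, 1, 1, 1, 1],
    "S":     [0, 0, 1, 1, 0, 0, 1, 1],
    "Y":     [0, 1, 0, 1, 0, 1, 0, 1],
    "count": [2100, 1800, 150, 260, 900, 1200, 130, 310],
})

# Candidate hierarchical loglinear model: O*S interaction plus a main effect of Y.
model = smf.glm("count ~ C(O) * C(S) + C(Y)", data=df,
                family=sm.families.Poisson()).fit()

# Likelihood-ratio test against the saturated model: the residual deviance is the
# test statistic, with degrees of freedom equal to the residual df.
p_value = chi2.sf(model.deviance, df=model.df_resid)
print(f"deviance={model.deviance:.2f}, df={model.df_resid:.0f}, p={p_value:.3f}")
print(f"AIC={model.aic:.1f}, BIC={model.bic:.1f}")
```

Fitting the eight candidate models amounts to looping over the corresponding formulas and retaining, among the adequately fitting models, the one with the smallest AIC and BIC.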
Discussion
We proposed a framework for statistical analysis of repeated residential movements. In the first step, residential histories are converted into a mobility matrix that gives the presence and absence patterns from the areas in which study participants have lived. After the inclusion of additional categorical variables of interest, the resulting matrix is converted into a multidimensional contingency table called a mobility table. The multivariate associations in this table are modeled with graphical loglinear models. The structure of the graphs that characterize these models induces independence or conditional independence relationships among the residential areas and the other categorical variables. This framework is able to explicitly account for individuals that moved across several areas. Existing models for human mobility are able to represent only the movement of an individual from one area to another area without consideration of the areas in which the individual has resided in the past. Our framework also goes beyond those approaches that involve the determination of mobility measures of different kinds; such measures lose an explicit connection with the areas in which residences were located.
We used this framework to link human mobility and the risk of HIV acquisition based on data from a population-based cohort in a hyper-endemic, rural sub-Saharan African context. The residential locations occupied by every study participant were classified as outside or inside the rural study area. The residential locations inside the rural study area were further classified as belonging to one of 45 non-overlapping communities that fully cover the rural study area. We also included age (younger or older than 30 years at the start of the exposure period) as an additional risk factor for HIV acquisition.
We found that, for both men and women, the majority of residential moves involved a destination outside the rural study area, rather than a destination within the rural study area. Thus, households in the rural study area are typical net-senders of mobile individuals to destinations in the KwaZulu-Natal province, or to other, more distant places throughout South Africa [6]. This circular migration stream effectively links a poor, rural community with more affluent urban centers where many employment opportunities are usually available, and also with other rural areas that offer more specialized types of employment (e.g., mining).
Multivariate predictive relationships are revealed in the mobility graphs for men and women we identified. In both graphs, in order to reach any of the community vertices C1, C2, . . ., C45 from the vertex Seroconverted by following paths of adjacent edges, we must first pass through the vertex Outside. Therefore, once we know whether a man or a woman moved outside the rural study area, knowing which communities inside the rural study area they lived in becomes less relevant for the purpose of predicting whether they seroconverted. For this reason, the communities in which an individual resides seem to play a lesser role as risk factors for HIV seroconversion as compared with having a residence outside the rural study area. This finding is surprising because this rural study area has three large irregularly-shaped clusters of new HIV infections near a national road and in a rural node bordering a recent coal mine development [69]. These spatial areas are characterized by HIV incidence rates higher than those of the other surrounding regions. We expected at least some of the communities spanned by these three clusters to be linked by an edge with vertex Seroconverted. However, none of these edges are present in the two mobility graphs. Consequently, while the places of residency inside the rural study area certainly play a role in predicting HIV acquisition risk given the significant clustering of HIV infections in this rural community, their predictive power vanishes when taking into account whether a study participant moved outside the rural study area. While this is true for both men and women, the predictive importance of having a residence outside the rural study area differs for men as compared to women. These differences are evidenced in the subgraphs of the two mobility graphs associated with variables Outside, Young and Seroconverted; see Figs 4 and 5.
Our results indicate that, even if the frequency, duration and distance traveled associated with residential moves are similar for men and women who live in this rural study area [6], there must exist key differences between the behavioral processes that lead to HIV seroconversion of mobile men and women. In order to formulate gender-specific combination HIV prevention strategies for high-risk mobile individuals, particularly in the light of attaining the UNAIDS 90-90-90 treatment targets [79], it is of paramount importance to understand these differences with respect to the complex network of structural, biological and socio-demographic factors that characterize places of residency outside the rural study area, and significantly alter the social context of mobile individuals [42]. Tanser. | 9,931.6 | 2019-06-05T00:00:00.000 | [
"Economics"
] |
The Role of Mastering Musical Instrument Playing Skills Combined with Student Behavior Data Mining and Analysis in the Digital Campus Environment to Improve Students' Comprehensive Quality
Music is closely related to people's lives, and it has a certain impact on people's lives. In school teaching activities, mastering the skills of playing musical instruments can effectively improve students' music appreciation ability and level and enhance students' comprehensive quality through subtle influence. Based on the analysis of students' behavior data, this paper analyzes the role of mastering musical instrument playing skills in improving students' comprehensive quality and puts forward research ideas and schemes. It focuses on students' group behavior in the digital campus environment, integrates multisource data in the digital campus, quantitatively calculates students' multidimensional behaviors, studies the behavior rules of students with different academic performance levels, and uses machine learning algorithms to build a multifeature integrated model of students' comprehensive quality, providing personalized feedback for the improvement of students' comprehensive quality. The results show that the effect of mastering musical instrument playing skills combined with data mining analysis of students' behavior is generally 30% higher than that of previous research. Compared with a single model, the fused model can fully consider each algorithm to observe data from different data spaces and structures and give full play to the advantages of different algorithms. The training of a single model will fall into a local minimum, which may lead to relatively poor generalization performance of the model. However, the weighted fusion of multiple basic learners can effectively reduce the probability of falling into a local minimum.
Introduction
Combined with the analysis of students' behavior data mining, the role of mastering musical instrument playing skills in improving students' comprehensive quality refers to the internal and relatively stable main characteristics and qualities that are formed or developed in the learning and practice of students in the education stage and have positive significance for students' sustainable development [1]. The higher the comprehensive quality, the stronger the ability of young students to understand and transform the objective world. Young students are in the golden period of life development. At this stage, they not only need to learn rich cultural knowledge and professional skills but also need to cultivate and develop their comprehensive quality [2]. This is not only the objective need of personal growth and success but also the inevitable requirement of China's economic and social development for outstanding talents in the new era. However, for a long time, influenced by various subjective and objective factors, there is still a problem of ignoring comprehensive quality education in school education in China. Even though some schools have also carried out comprehensive quality education, it is more superficial, and its educational effect is not ideal [3]. The lack of comprehensive quality education for young students easily leads to a series of problems in their learning attitude, learning ability, political literacy, moral quality, values, and so on, which greatly restricts the healthy development of young students. For young students, comprehensive quality education is an important way to cultivate their personality and develop their various abilities [4]. Young students occupy a fundamental and strategic position in the construction of socialism with Chinese characteristics. The improvement and development of their comprehensive quality are the only way to strengthen and revitalize the country. School educators should be based on the new era and fully understand the urgency and necessity of comprehensive quality education for young students from the height of national strategy [5].
Musical instrument playing covers all aspects of life, which expands and extends the value and significance of mastering musical instrument playing skills. In the process of musical instrument playing, students gain not only music knowledge and enjoyment of music art but also knowledge of history, humanities, customs, geography, and other aspects contained in the music being played. For example, the invention, spread, evolution, and modern development of musical instruments are all part of musical instrument knowledge. Many classical musical instruments are artistic interpretations of historical events. Feeling historical events through music appreciation can help students better interpret historical events [6]. All over the world, the music of every nation has its unique musical instrument playing expression, which is also a concentrated expression of local conditions and national customs. Students can experience the customs of regions all over the world by learning musical instruments. In a word, the process of mastering musical instrument playing skills contains rich elements of cultural knowledge, which can promote the improvement of students' comprehensive quality. This paper focuses on students' group behavior in the digital campus environment, aiming to integrate multisource data of the digital campus, quantitatively calculate students' multidimensional behaviors, study the behavior rules of students with different academic performance levels, and use machine learning algorithms to build a model of students' comprehensive quality with multifeature fusion, so as to provide personalized feedback for the improvement of students' comprehensive learning quality. Practice has proved that combining students' behavior data has profound significance in exploring whether mastering musical instrument playing skills can improve students' comprehensive quality.
The weight coefficients of the students' comprehensive quality evaluation indexes are basic information reflecting students' comprehensive quality [7]. They reflect the evaluators' judgment on the importance of each index. Among the various index evaluation methods, index data and index weights are the two major factors that directly affect the final result of an evaluation [8]. The evaluation of students' comprehensive quality is no exception, so whether the weight coefficients of the comprehensive quality indexes are designed scientifically will directly affect the scientificity and rationality of the evaluation results of students' comprehensive quality. Based on the prediction and research of students' behavior analysis, this paper adopts a pairing-and-sorting computer method to predict students' behavior.
Compared with the traditional prediction of students' comprehensive quality or GPA, this method pays more attention to students' individual performance and changing trends within the whole group.
This method can help educators continuously observe and intervene in students' academic performance in actual education and teaching work, and at the same time, it can better capture the characteristics of different groups of students, so as to optimize the behavior research of students of different majors. Its innovation lies in the following: (1) drawing an objective and thorough "student portrait" can help students develop their self-awareness and provide direction for their self-development, while also enhancing the school's capacity to recognise students' learning growth and daily behavior based on the analysis of the behavior characteristics of their personal big data; (2) based on the analysis of students' group behavior characteristics, the role of mastering musical instrument playing skills in improving students' comprehensive quality is explored; (3) based on students' behavior data, the essential personality characteristics of students are extracted from the perspective of the Big Five personality traits, and the network system of students' comprehensive quality evaluation is clearly modeled.
Related Work
Supported by extensive data accumulation and strong backing from the Ministry of Education, research on educational data mining technology in China has reached a relatively advanced stage, with considerable depth and breadth. To promote the research and application of data mining technology in the field of education, this line of work focuses not only on the improvement and optimization of data mining algorithms but also on the development of numerous mature application systems, and it has produced promising theoretical and practical outcomes.
Lu put forward an association rule algorithm and conducted data mining on the participation patterns of musical instruments [9]. Wang et al. put forward a classification algorithm and also developed a large number of relatively mature data mining software tools [10]. Peng put forward a new model of blended learning and introduced its learning process and learning environment [11]. Wang et al. used educational data mining methods to identify students' situations and participation patterns in distance learning [12]. Lust et al., considering the generality of personality and group students, put forward a collaborative filtering recommendation method for students' mastery of knowledge points by using cognitive diagnosis technology [13]. Brooks achieved satisfactory results by mining educational data and analyzing the performance of feature selection algorithms [14]. Shanabrook et al. analyzed the data of students' consumption and used data mining and statistical analysis to find the hidden laws of students' consumption behavior [15]. Rusby et al. used data mining technology to analyze students' consumption and borrowing behaviors and built a system model of student behavior analysis. By inputting various behavior characteristics into the model, they judged whether a student's learning mode was reasonable, and the analysis results helped students find their own effective learning methods [16]. Gu and Huang, by analyzing the data of students' campus cards, measured students' campus life behavior using an entropy-based measure and reached the conclusion that there is a great correlation between the regularity of campus life and comprehensive academic achievement [17]. Zhu et al. can learn about students' learning, consumption, and work and rest behaviors through deep and systematic research on students' consumption behaviors and give early warning tips for abnormal situations [18]. The pairing-and-sorting computer method used in this paper can find frequent transaction item sets from massive data, so as to infer the correlations between transactions. Association rule mining was first applied to the analysis of supermarket shopping baskets. Since then, many researchers have become interested in association rules. The research on the theory and application of association rules has become more and more in-depth, and many improved association rule algorithms have been published [19]. The most classic algorithms are the Apriori algorithm and the FP-growth algorithm, but these two algorithms need to scan the database many times to produce frequent patterns and will also produce a large number of frequent item sets. Therefore, the time and space complexities are relatively large, and the operation efficiency may be low in the process of data mining. However, the Eclat algorithm adopts a vertical data representation, and it can quickly calculate the support of item sets by scanning the data records only once, so as to improve the mining quality.
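To make the difference concrete, a minimal sketch of the Eclat idea is given below: a single pass builds vertical tidsets (item to set of transaction ids), and the support of any larger itemset is then obtained by intersecting tidsets instead of rescanning the database. The toy records and the function name are hypothetical.

```python
from collections import defaultdict
from itertools import combinations

def eclat(transactions, min_support):
    """Frequent itemsets via vertical tidsets; min_support is an absolute count."""
    tidsets = defaultdict(set)
    for tid, items in enumerate(transactions):          # the single scan of the records
        for item in items:
            tidsets[item].add(tid)

    frequent = {frozenset([i]): t for i, t in tidsets.items() if len(t) >= min_support}
    k_sets = frequent
    while k_sets:
        size = len(next(iter(k_sets))) + 1
        next_sets = {}
        for a, b in combinations(k_sets, 2):
            candidate = a | b
            if len(candidate) == size and candidate not in next_sets:
                tids = k_sets[a] & k_sets[b]             # support via tidset intersection
                if len(tids) >= min_support:
                    next_sets[candidate] = tids
        frequent.update(next_sets)
        k_sets = next_sets
    return {tuple(sorted(s)): len(t) for s, t in frequent.items()}

# Toy behavior records: which campus facilities a student used on a given day.
records = [{"library", "canteen"}, {"library", "gym"},
           {"library", "canteen", "gym"}, {"canteen"}]
print(eclat(records, min_support=2))
```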
Behavior Regularity.
Behavior regularity is another very important feature of the sense of responsibility in the Big Five personality, which represents students' ability of self-discipline. Compared with students with a chaotic life rhythm, students with strong self-discipline usually have strong willpower and the ability to control their own lives and can properly arrange their own life and study. Such personality characteristics can positively affect students' comprehensive quality performance, so behavioral regularity is the personality characteristic that this study focuses on and discusses. This paper will study the quantitative method of behavior regularity from two parts: behavior change and behavior complexity, that is, quantify the regularity of students' behavior from linear and nonlinear angles. The quantization process is shown in Figure 1.
People with a strong sense of responsibility usually have high ambition, pursuit, and a strong self-driving force to strive for goals. The superposition of these factors usually leads to higher comprehensive achievement. The internal reason is that input and output are usually in direct proportion, and spending more time and energy will usually bring better comprehensive results [20]. However, the comprehensive quality here is an abstract concept, which is not convenient to evaluate directly in teaching. How to quantify or project the comprehensive quality value onto the detailed behavior of students is a problem that needs to be discussed. We need to dig deep into the time series data of students' behaviors extracted from the original one-card and WIFI data and evaluate the comprehensive quality of students according to the corresponding index characteristics. The quantification process is shown in Figure 2.
Comprehensive quality behaviors mainly include library access control, borrowing books, and time series data of WIFI staying in comprehensive areas. The frequency of comprehensive behaviors can be directly counted to define students' diligence [21]. Frequency, that is, the number of times behavior occurs in a period of time, is a common statistical indicator. The higher the frequency, the more frequent the students' comprehensive behaviors occur in a fixed time, which is the most direct index quantification method of comprehensive diligence. For WIFI time series data, we can separately count the network connection frequency of students in different areas. From the processed data, we can observe that the most important activity places of students are the comprehensive area and the rest area, and these two areas can best reflect students' comprehensive behavior. Specifically, the online data in the comprehensive area can directly reflect the students' comprehensive state, while the online data in the rest area is the supplementary data reflecting their comprehensive behavior. This is because compared with the comprehensive area, students use the internet more frequently when relaxing in the rest area, so the amount of data in this part is far greater than that in the comprehensive area, and the time spent in the rest area is usually inversely proportional to students' diligence, which can be used to quantify this indicator in reverse.
Behavioral Complexity.
The loose and free campus environment makes students' behavior random and diverse. In this case, simple mathematical statistics cannot fully study its behavioral complexity, and more effective quantitative indicators need to be discovered and used. Information entropy can effectively measure the orderliness of students' behavior. The calculation formula of information entropy is as follows:

H = −Σ_{i=1}^{n} p_i · log2(p_i), (1)

where n is the total number of different characters in the data, i indexes a single character, and p_i is the relative frequency of character i. Information entropy, the most fundamental index for measuring information uncertainty, can be used to quantify information uncertainty in informatics. Information entropy, however, has a significant flaw in assessing the complexity of behavior. The calculation method of formula (1) ignores the relationships between elements and instead concentrates on the frequency of occurrence of a single element. That is to say, even if the internal ordering of the data changes, the results obtained before and after the change will be the same as long as the symbol frequencies do not change.
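A minimal sketch of formula (1) and of the limitation just described is given below (the behavior symbols are hypothetical): two sequences with the same symbol frequencies receive the same entropy even though their internal ordering differs.

```python
import math
from collections import Counter

def shannon_entropy(sequence):
    """Information entropy of a behavior sequence, H = -sum_i p_i * log2(p_i),
    where p_i is the relative frequency of symbol i."""
    counts = Counter(sequence)
    total = len(sequence)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# L = library, R = rest area, C = classroom. Same frequencies, different ordering,
# same entropy -- the structural information is not captured.
print(shannon_entropy("LLCCRR"), shannon_entropy("LRCLRC"))
```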
The regularity and complexity of data in nonlinear time series are frequently examined using approximate entropy. Approximate entropy reflects the probability of new data patterns appearing in a time series and can be used to detect data changes in complex systems; the higher the approximate entropy, the more complex the time series. The approximate entropy is calculated as

ApEn(m, r) = Φ_m(r) − Φ_{m+1}(r), (2)

where the first term is the average similarity rate of all subsegments with a period of m, while the second term is the average similarity rate of all subsegments with a period of m + 1. The approximate entropy formula shows that this index represents the likelihood that new patterns will appear in the time series when the dimension changes. This technique is useful for assessing the structural complexity of time series.

The log refers to the debugging information generated after users connect to the campus network, in which each data line contains different response information, and different response information is distinguished by code segments. Therefore, it is necessary to analyze the corresponding information of different codes in order to process the log data accurately. Due to the huge amount of WIFI data, there is a lot of redundant data that is repeated and not needed, so extracting the required space-time information from this data is a complicated task. The server captures WIFI data once, counts students' online information at 1-minute intervals, divides the area by the functions of buildings on campus, and replaces the physical location of the AP end with functional areas such as study areas and rest areas. The processed space-time data is shown in Table 1.
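A standard implementation of approximate entropy with the Chebyshev distance is sketched below; the embedding dimension m and tolerance r are illustrative choices, not necessarily the settings used in this study. A regular behavior sequence yields a lower value than an irregular one.

```python
import numpy as np

def approximate_entropy(series, m=2, r=0.2):
    """ApEn(m, r) = Phi_m(r) - Phi_{m+1}(r) for a one-dimensional series."""
    x = np.asarray(series, dtype=float)
    n = len(x)

    def phi(m):
        segs = np.array([x[i:i + m] for i in range(n - m + 1)])   # all subsegments of length m
        dists = np.max(np.abs(segs[:, None, :] - segs[None, :, :]), axis=2)
        c = (dists <= r).mean(axis=1)                             # similarity rate of each subsegment
        return np.mean(np.log(c))

    return phi(m) - phi(m + 1)

regular = [0, 1] * 50
irregular = np.random.default_rng(1).integers(0, 2, 100)
print(approximate_entropy(regular), approximate_entropy(irregular))
```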
This study obtained a large number of behavior characteristics after quantifying the characteristics of the behavior data. The correlation between two variables is measured using the correlation coefficient. Its value ranges from -1 to 1, and a positive value denotes a positive correlation between the two variables. The stronger the correlation, the larger the absolute value of the correlation coefficient. The calculation formula is

r = Σ_i (x_i − x̄)(y_i − ȳ) / √( Σ_i (x_i − x̄)² · Σ_i (y_i − ȳ)² ). (3)

Confidence is the premise used when discussing the correlation between two variables, and the absolute value of the correlation coefficient represents the strength of the correlation. If the value of r is too small, the calculated correlation coefficient result is not credible. In the correlation analysis of this study, when the confidence value is less than 0.09, the correlation result is considered reliable.
The GBDT algorithm uses a Boosting-based ensemble learning method that iterates weak learners to form a strong learner. The weak learners are decision trees (regression or classification trees). The specific flow of the GBDT algorithm is as follows.
First, initialize the weak learner, as shown in the following formula:

f_0(x) = argmin_c Σ_{i=1}^{n} L(y_i, c). (4)

Secondly, for each iteration s = 1, 2, ⋯, S, the negative gradient for each sample is calculated as shown in the following formula:

r_is = −[∂L(y_i, f(x_i)) / ∂f(x_i)] evaluated at f = f_{s−1}, i = 1, 2, ⋯, n. (5)

Then, the residuals (negative gradients) of these n samples are fitted by a regression tree with leaf regions R_js, j = 1, 2, ⋯, J, and the optimal constant in each leaf is given by the following formula:

c_js = argmin_c Σ_{x_i ∈ R_js} L(y_i, f_{s−1}(x_i) + c). (6)

Finally, the strong learner is continuously updated according to formula (6), and the following formula is obtained:

f_s(x) = f_{s−1}(x) + Σ_{j=1}^{J} c_js · I(x ∈ R_js). (7)

The final strong learner is shown in the following formula:

F(x) = f_S(x) = f_0(x) + Σ_{s=1}^{S} Σ_{j=1}^{J} c_js · I(x ∈ R_js). (8)

This formulation can flexibly handle various types of data, allows parameters to be adjusted in a short time, and has high prediction accuracy. The loss function is used to enhance robustness to outliers, and the weight of each classifier is considered comprehensively.
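The iteration can be made concrete with a deliberately simplified sketch that uses the squared-error loss, for which the negative gradient is just the residual; the model described above is a classification GBDT, so this regression version is only meant to illustrate the fit-residuals-and-accumulate loop, and all names and toy data are hypothetical.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def gbdt_fit(X, y, n_rounds=50, learning_rate=0.1):
    """Minimal gradient boosting for squared-error regression."""
    f0 = float(np.mean(y))                # constant initial learner minimizing the loss
    pred = np.full(len(y), f0)
    trees = []
    for _ in range(n_rounds):
        residual = y - pred               # negative gradient for each sample
        tree = DecisionTreeRegressor(max_depth=2).fit(X, residual)
        pred = pred + learning_rate * tree.predict(X)
        trees.append(tree)
    return f0, trees

def gbdt_predict(X, f0, trees, learning_rate=0.1):
    return f0 + learning_rate * sum(t.predict(X) for t in trees)

# Toy data: two behavior features predicting a comprehensive score.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = 3 * X[:, 0] - 2 * X[:, 1] ** 2 + rng.normal(scale=0.3, size=200)
f0, trees = gbdt_fit(X, y)
print(float(np.mean((gbdt_predict(X, f0, trees) - y) ** 2)))
```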
This function shows a stable advantage in quantifying behavior complexity. It uses the characteristics of the simplest and most common data changes in the quantitative relationship to quantify the contingency and complexity of the behavior pattern structure. This function is similar in spirit to approximate entropy, in that it judges the similarity between subsequences. However, the judgment methods are quite different, and the calculation dimension of this function is more diversified than that of approximate entropy. The calculation rule of x_i is given in formulas (9) and (10), where p_j is the number of changes when the time interval is j, and the final complexity is the average of the numbers of changes of the subsequences. p_j is calculated as follows: when j = 2, if the two characters are different, it is recorded as a change, and p is the sum of all the changes. When j ≥ 2, compare whether the strings formed by the first j − 1 and the last j − 1 symbols in a subsection have the same number of changes. If the numbers of changes are different, record it as a change, and p_j is the sum of the numbers of changes of all subsections with a length of j. This index integrates and quantifies the structural information of time series. According to the formula of entropy, the greater the entropy, the lower the predictability and the worse the stability of behavior. Generally speaking, students with excellent comprehensive scores have better behavior stability. In other words, there should be a negative correlation between entropy and students' comprehensive scores. However, according to the correlation coefficient results in the previous section, there is a positive correlation between entropy and students' comprehensive scores in the data set of this paper. This phenomenon is caused by the behavior characteristics of students in different comprehensive grade ranges. Thus, the probability density functions of students' approximate entropy indicators in different comprehensive score intervals are drawn, as shown in Figures 3 and 4. Figures 3 and 4 depict the probability density distributions of FSA determined by various behaviors, and Figure 5 depicts the probability density distribution of FSA determined by the approximate entropy of various behaviors. The probability near a specific value on the X-axis is indicated by the value of the Y-axis in the image, and the probability in a specific interval of the X-axis is the integral of the probability density curve over this region.
Experimental Results and Analysis
To demonstrate the efficacy of the integrated comprehensive performance prediction model proposed in this paper, three models are built in the same experimental setting: a single random forest model, a GBDT model, and an Xgboost model. We compare the accuracy, precision, recall, and F-measure evaluation indexes; the comparison results are displayed in Table 2.
As can be seen from Table 2, in terms of the above four evaluation indexes, the three single models are generally consistent, and the accuracy of each single model is 55.34% higher than the experimental baseline. For the random forest model, the lower the correlation among features, the better the classification effect of the model; the random forest is also less sensitive to missing data and classifies well with less data, which is why the random forest model has a better classification effect on the test set. For the GBDT and Xgboost models, the algorithm training needs to go through many iterations, and the calculation amount is much higher than that of the random forest algorithm; by increasing the computational complexity, the prediction performance of the model can be improved. Generally speaking, by comparing and analyzing the prediction results of students' comprehensive scores by the above three models, the Xgboost model learns the extracted students' behavior characteristics better than the random forest and GBDT models, and the accuracy and precision of the prediction model obtained by training are relatively higher, so the reliability of its predictions is also relatively higher. The fusion comprehensive performance prediction model proposed in this paper is compared with the single prediction models, and the comparison results are shown in Figure 5.
As shown in Figure 5, in comparison to the single random forest, GBDT, and Xgboost models, the fusion prediction model proposed in this paper improves significantly on all four indexes, and its higher prediction accuracy makes it more suitable for predicting students' comprehensive grades. At the same time, it is evident that a single model's classification accuracy for predicting students' overall performance is low, and algorithm optimization can only slightly enhance the accuracy of the final prediction. The choice of data features and the integration of models are what ultimately determine the model's accuracy. The rationale for why the fusion model is superior to a single model can be examined theoretically. The fusion model based on the Boosting algorithm combines several classification models, allowing it to fully take into account each algorithm when observing data from various data spaces and structures and to fully exploit the benefits of various algorithms. From the perspective of model optimization, training a single model runs the risk of falling into a local minimum, which could result in relatively poor generalization performance. However, after weighted fusion, training multiple basic learners can significantly decrease the likelihood of entering a local minimum.
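The weighted fusion described above can be sketched as follows on synthetic data; scikit-learn's GradientBoostingClassifier and a logistic regression are used as stand-ins for the GBDT and Xgboost learners, the weights are taken from training accuracy purely for brevity (a held-out validation split would normally be used), and all data are synthetic.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for students' behavior features and performance labels.
X, y = make_classification(n_samples=1500, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

base_models = {
    "rf": RandomForestClassifier(n_estimators=200, random_state=0),
    "gbdt": GradientBoostingClassifier(random_state=0),
    "lr": LogisticRegression(max_iter=1000),     # stand-in for Xgboost
}
for m in base_models.values():
    m.fit(X_tr, y_tr)

# Weight each base learner by its (training) accuracy, then average the
# predicted probabilities with those weights.
weights = np.array([accuracy_score(y_tr, m.predict(X_tr)) for m in base_models.values()])
weights = weights / weights.sum()
fused_proba = sum(w * m.predict_proba(X_te)[:, 1]
                  for w, m in zip(weights, base_models.values()))
fused_pred = (fused_proba >= 0.5).astype(int)

for name, m in base_models.items():
    print(name, round(accuracy_score(y_te, m.predict(X_te)), 3))
print("fused", round(accuracy_score(y_te, fused_pred), 3))
```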
To sum up, the fusion of models can only improve the final experimental results to a certain extent. Whether the data sets were well preprocessed, and whether the behavior characteristics that effectively affect comprehensive academic performance were fully extracted, also affect the final result of the model, and this is not reflected in the fusion itself. At the same time, according to the basic principles of different algorithms, different classification models show obvious differences in how they learn students' behavior characteristics. The extraction of students' behavior characteristics is a relatively subjective process, and the extraction of behavior characteristics that effectively affect students' comprehensive performance is not comprehensive enough, which also reflects the importance of preprocessing and data analysis.
In general, the research presented in this paper shows that it is possible to predict students' overall performance using information about their behavior. The prediction effect is constantly improving, from single prediction models to fusion models, but the actual effect of the model is not optimal overall. Analyzing the causes may reveal that the model cannot automatically learn the information from the original data and that manual statistical behavior characteristics perform poorly when applied to traditional methods. This underlines the significance of feature extraction and data analysis and indicates a path for further enhancing the accuracy of comprehensive performance prediction.
In addition, 70% of the student behavior data are chosen as training data and 30% as test data due to the sparseness of the data. The experiment makes use of the stacking feature and the CNN model as the base classifier. Numerous prediction indicators are used to gauge the effectiveness of the comprehensive performance prediction model. The CNN-LSTM network model with time series features outperforms the single CNN model when subjected to a consistent experimental environment and a set of evaluation criteria. In terms of thorough performance prediction, the CNN-LSTM network model with attention mechanism performs the best. The corresponding relationship between training times, accuracy, and loss on the test set and training set is depicted in Figures 6 and 7 below, respectively.
According to Figure 6, the model accuracy rate rises as training times increase on the training set and falls as training times rise on the testing set. It is known that the model with 12 training rounds produces the best results. Figure 7 shows the corresponding relationship for the loss.
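Since the exact architecture and hyperparameters are not listed here, the skeleton below is only a generic CNN-LSTM sketch in Keras with hypothetical input dimensions and random placeholder data; the attention mechanism of the best-performing variant is omitted.

```python
import numpy as np
import tensorflow as tf

# Hypothetical input: for each student, 30 days x 8 daily behavior features.
n_students, n_days, n_features = 1000, 30, 8
rng = np.random.default_rng(0)
X = rng.normal(size=(n_students, n_days, n_features)).astype("float32")
y = rng.integers(0, 2, size=n_students).astype("float32")   # placeholder labels

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(n_days, n_features)),
    tf.keras.layers.Conv1D(32, kernel_size=3, activation="relu"),  # local behavior patterns
    tf.keras.layers.MaxPooling1D(pool_size=2),
    tf.keras.layers.LSTM(64),                                      # longer-range time dependence
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# 70/30 split as in the experiment; 12 epochs, matching the best round count above.
split = int(0.7 * n_students)
model.fit(X[:split], y[:split], validation_data=(X[split:], y[split:]),
          epochs=12, batch_size=64, verbose=0)
print(model.evaluate(X[split:], y[split:], verbose=0))
```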
Conclusion and Prospect
Comprehensive quality is a comprehensive cognitive ability composed of attention, observation, memory, imagination, and thinking ability. The comprehensive quality factor is an important psychological factor that marks a person's quality. The human brain is a unified whole, which contains great learning and creative potential. The number of human brain cells is about 18 billion, but only more than one billion of them are always in an active state, and more than 90% of them are in a relatively static or sleeping state. In a person's life, only about 10% of brain cells are used. It can be seen that the human brain has great potential to be tapped. The capacity stored in the brain is astonishing. If we could push our brains to reach half of their working capacity, we could easily learn more than 70 languages, memorize an encyclopedia of the former Soviet Union, and finish the courses of 20 universities. Moreover, the right brain not only has a large memory capacity but also has incomparable advantages over the left brain in terms of cognition, such as concrete thinking ability, the ability to recognize space, the ability to understand complex relationships, and emotional expression and recognition ability. Therefore, the development of right-brain function is crucial to human development. However, traditional educational methods lay particular stress on reading, writing, mathematical operations, and rational thinking and mostly focus on the activities of the left brain, thus resulting in the overload of the left brain while the right brain is left idle, which results in the incomplete development of comprehensive quality. Therefore, the revelation of brain function by brain science is of great significance to our scientific construction of a quality education system. Only when the two hemispheres of the brain cooperate with each other and develop in a balanced way can people's comprehensive quality be highly developed. The function of the right brain is closely related to the performance of musical instruments. When playing the piano, the left and right hands alternately coordinate with each other, which promotes the coordinated development of the two hemispheres of the brain and makes thinking more agile. Therefore, there is a scientific basis for saying that mastering the playing skills of a musical instrument is conducive to the development of comprehensive quality factors. When students play the accordion, they should simultaneously read the treble and bass staves; quickly identify the high, low, long, short, continuous, broken, strong, and weak notes on the staves; change the speed and timbre; and control the bellows operation. Students should complete these comprehensive actions in an orderly manner at the same time and turn the music score into vivid sound, which will undoubtedly promote the development of students' comprehensive quality factors.
Big data and artificial intelligence advancements have facilitated the digital and intelligent transformation of conventional campus settings. In order to determine whether learning to play a musical instrument can enhance students' overall quality, this paper analyzes students' behavior using student data collected from the campus environment. The primary work contains the following: (1) The sources and characteristics of students' behavior analysis and comprehensive performance prediction are introduced. Current data on students' behavior analysis and comprehensive performance prediction are also explained, and the methods for comprehensive performance prediction based on conventional machine learning are categorised and summarised. This paper introduces the related research and applications of deep learning and sequence modelling by analyzing the issues encountered in the construction of students' comprehensive performance prediction models. (2) A data mining algorithm-based fusion model for performance prediction is created. A fusion model based on random forest, GBDT, and Xgboost is established in accordance with the conventionally manually extracted behavior characteristics. First, the weights of the single classification models for random forest, GBDT, and Xgboost are calculated using the Boosting algorithm. Next, the above single models are fused using the weighted average method. Finally, the fused model is compared to the single model to assess its efficacy. The findings demonstrate that it is possible to predict students' performance using information about their behavior. The prediction model based on the attention mechanism has higher prediction accuracy and better performance compared to the prediction model based on data mining, which confirms the importance of mastering musical instrument playing skills combined with data mining analysis of students' behavior in enhancing students' overall quality. Although this paper has made some achievements in extracting students' behavior characteristics and predicting students' grades accurately, in the data preprocessing stage we should study the data more deeply, choose a better data cleaning method, effectively remove abnormal data, fill in missing data, and then perform in-depth analysis of various behavior data to improve the granularity of the data and find more effective behavior characteristics. Further research is needed using the methods and ideas proposed in this paper.
Data Availability
The data used to support the findings of this study are available from the corresponding author upon request.
Conflicts of Interest
The author declares that there are no conflicts of interest. | 7,131.6 | 2022-09-10T00:00:00.000 | [
"Computer Science",
"Education"
] |
RF Spectrum Sensing Based on an Overdamped Nonlinear Oscillator Ring for Cognitive Radios
Existing spectrum-sensing techniques for cognitive radios require an analog-to-digital converter (ADC) to work at a high dynamic range and a high sampling rate, resulting in high cost. Therefore, in this paper, a spectrum-sensing method based on a unidirectionally coupled, overdamped nonlinear oscillator ring is proposed. First, the numerical model of such a system is established based on the circuit of the nonlinear oscillator. Through numerical analysis of the model, the critical condition of the system's starting oscillation is determined, and the simulation results of the system's response to Gaussian white noise and a periodic signal are presented. The results show that once the radio signal is input into the system, it starts oscillating when in the critical region, and the oscillating frequency of each element is fo/N, where fo is the frequency of the radio signal and N is the number of elements in the ring. The oscillation indicates that the spectrum resources at fo are occupied. At the same time, the sampling rate required of the ADC is reduced to 1/N of the original value. A prototype circuit is designed to verify the functionality of the system, and the sensing bandwidth of the system is measured.
Introduction
In a crowded electromagnetic environment, high spectral efficiency and optimal communication performance are achieved by a cognitive radio communication system that senses spectrum holes and adopts artificial intelligence techniques to adaptively adjust the transmission power, carrier frequency, and modulation system parameters in real time, allowing the system to adapt to changes in the external environment [1]. In a cognitive radio communication system, spectrum sensing is an important component that refers to obtaining radio-spectrum usage information by a cognitive user through a variety of signal-detection and -processing means. From the point of view of the function layers of wireless networks, spectrum sensing mainly involves the physical and data-link layers. The physical layer mainly focuses on a variety of specific local-detection algorithms, and the link layer mainly on user collaboration and optimization of local sensing, collaborative sensing, and the sensing mechanism [2].
In recent years, many local-detection methods have been proposed, with energy detection being the most common. In the energy-detection method, the average energy of the signal sampling is compared with a threshold to determine whether the spectrum is used [3]. The realization of this method is simple, and does not require prior information of the primary user, but because of the uncertainty regarding the noise power, the energy cannot be effectively detected, and the sensing time is increased when the signal-to-noise ratio (SNR) is lower than a certain threshold [4,5]. The energy detector cannot distinguish the main user signal from the noise and other interference, which leads to a high false-alarm rate. In order to improve its performance, the power spectral density separation (PSC) method can effectively reduce the false-alarm rate by calculating the ratio of the sub-band power to the total bandwidth power [6]. On this basis, the bandwidth can be scanned by a tunable tracking filter, which can be used to extract the spectrum occupancy information of several specific sub-bands [7]. Another type of spectrum-detection method is based on signal characteristics, including cyclostationary features [8,9]. In these methods, the cyclic spectrum density is obtained by a fast Fourier transform (FFT) after sampling the cyclic autocorrelation function, and the peak value occurs when the spectrum is occupied. Compressed sensing can also be used to obtain a flag bit to detect the occupancy of a spectrum according to the symbol-bit information [10]. In a multiple-antenna system, the characteristic value of the array's signal correlation matrix can be used to detect the frequency spectrum [11,12]. In the case of unknown noise, power, and location of the main user information, the blind estimation of the spectrum can be carried out using the moment feature [13].
In view of the problem that local-detection methods are not reliable in cases of shadowing and deep fading, cooperative spectrum sensing among users in the link layer is needed [14]. The key to this method is optimizing how the sensing results from multiple cognitive users are merged to obtain the final sensing result. For the weighted combination method, the frog-leaping algorithm can be used to obtain optimal weights to improve the probability of correct detection [15,16]. In order to reduce the network load of cooperative spectrum sensing, the double-threshold cooperative spectrum-sensing algorithm based on trust has better flexibility [17]. In order to overcome the influence of channel fading, an adaptive global optimization algorithm has been proposed to determine the relay node set, which solves the problem of performance degradation induced by redundant relay interference, nonoptimal detection-threshold designs, channel transmission error rate, and other factors [18].
Although cooperative spectrum sensing is able to compensate for the shortcomings of local-detection methods to a certain degree, it is still necessary to shorten the detection time and reduce the false-alarm rate to improve the spectrum-sensing ability of a single cognitive user, taking into account the network latency, traffic load, and algorithmic complexity. In general, spectrum sensing first requires sampling the RF signals according to Nyquist's law. With the continuous increase of the frequency of the carrier signal, an ADC's sampling rate must also increase. Thus, its resolution and dynamic range will become worse, which will lead to a decline in spectrum-detection performance, or to an increase in the cost of the ADC under the same conditions [19].
In this paper, a spectrum-sensing method based on a unidirectionally coupled, overdamped nonlinear oscillator ring is proposed. First, weak-signal detection by a nonlinear oscillator is a type of time-domain signal-processing technology with stronger detection ability than spectral methods, higher-order statistics, and similar approaches [20][21][22]. Secondly, the nonlinear oscillator has an active self-tuning capability and can be synchronized with an external periodic driving signal under specific conditions. In addition, the circuit of the nonlinear oscillator is relatively simple, which can, in turn, simplify the structure of the cognitive radio system. In this paper, we discuss the theory of the structure of a nonlinear oscillator-ring system and the critical conditions for the system to operate as a spectrum detector.
Basic Structure of a Coupled, Overdamped Duffing Oscillator Ring
A Duffing oscillator is a type of nonlinear oscillator that can be expressed as a second-order nonlinear differential equation,

d²x/dt² + δ dx/dt − αx + βx³ = γη(t), (1)

where δ controls the size of the damping, α controls the size of the stiffness, β controls the nonlinearity of the restoring force, γ controls the amplitude of the external driving force, and η(t) indicates the external driving force. Moving the left-hand-side terms of Equation (1), other than the inertial term, to the right-hand side, we obtain

d²x/dt² = −δ dx/dt + αx − βx³ + γη(t), (2)

where d²x/dt² is the inertial force, which can be ignored in the overdamped case. Equation (2) is then rewritten as

δ dx/dt = αx − βx³ + γη(t), (3)

which is called the Langevin equation. When α > 0 and β > 0, U(x) = −αx²/2 + βx⁴/4 is a bistable potential function, which is used to describe the motion of a unit-mass particle in a double-well potential [23]. Assuming that β = 1, α = 1 and γ = 1, x_s1 = 1 and x_s2 = −1 are the two equilibrium points and x_un = 0 is the non-equilibrium point. As shown in Figure 1, the relationship between the bistable potential and x shows that the motion converges quickly to one of the two equilibrium points when the external force is missing. For general β and α, the equilibrium points are x_s = ±√(α/β) and the barrier height at x_un = 0 is ΔU = α²/(4β).

Figure 1. Potential function of a bistable system, showing that the motion can quickly converge to one of the two equilibrium points when the external force is missing.

If the nonlinear oscillators are unidirectionally coupled into a ring as shown in Figure 2, with N oscillators in the ring, the coupled bistable oscillators are expressed as Equation (4), obtained by adding to each overdamped element a linear coupling term, with coupling coefficient k, that feeds in the state of the previous element x_{i−1}, where i = 1, 2, ..., N. For spectrum sensing in a cognitive radio, the coupled oscillator ring should not oscillate until it senses a radio signal from the antenna; oscillation of the ring indicates that the radio-frequency spectrum is occupied. Thus, the critical point at which the system starts oscillating must be defined. Letting the radio signal received by the antenna be r(t) = a(t)cos[ω_c t + θ(t)], where a(t) is the signal amplitude, θ(t) is the signal phase and ω_c is the carrier frequency, and feeding the radio signal into each element, Equation (4) is rewritten as Equation (5).
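As a quick numerical check of the bistable potential just described (a hedged sketch using the illustrative values α = β = 1, not part of the original analysis), the equilibria and the barrier height can be verified directly:

```python
import numpy as np

def potential(x, alpha=1.0, beta=1.0):
    """Bistable potential U(x) = -alpha*x**2/2 + beta*x**4/4."""
    return -alpha * x**2 / 2 + beta * x**4 / 4

alpha, beta = 1.0, 1.0
x_eq = np.sqrt(alpha / beta)               # stable equilibria at +/- sqrt(alpha/beta)
barrier = alpha**2 / (4 * beta)            # barrier height at x = 0
print(x_eq, potential(x_eq))               # -> 1.0 and -0.25 (bottom of the well)
print(barrier, potential(0.0) - potential(x_eq))   # both evaluate to 0.25
```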
Dynamic Model
The practical circuit of a bistable, overdamped, nonlinear oscillator shown in Figure 3 has been used to detect weak signals [24]. The circuit can be divided into two parts: a linear part composed of two field-effect transistors (FETs), and a nonlinear part comprising two transconductance operational amplifiers. A model describing the nonlinear behaviour of this circuit is

C dV_i/dt = −gV_i + I_s tanh[c_s(V_i − r(t))] + I_c tanh[c_c(V_dc − V_{i−1})], (6)

where C is the load capacitance; gV_i = I_sc − I_o, where I_o is the sum of the steady-state currents of the linear and nonlinear parts and I_sc = I_p − I_n is the linear part of the effective current in the saturation state of the transistors; I_p and I_n are the leakage currents through the N-channel and P-channel FETs; V_i is the oscillator's output and V_{i−1} is the output of the previous oscillator; c_s, c_c and β are process parameters; I_s and I_c are the bias currents of the main operational transconductance amplifier (OTA) and the coupling OTA, respectively; and r(t) is the signal to be detected. Equation (6) is a variant of Equation (5) and likewise has the characteristics of an overdamped bistable state.

Figure 3. Circuit of an overdamped nonlinear oscillator, which can be an element of a unidirectionally coupled, overdamped nonlinear oscillator ring.

To form a ring as shown in Figure 2, the output of the previous element is coupled into each element by the coupling OTA shown in Figure 3, and the element's own output is coupled to the next element. The generation of the oscillation is related to N, the number of elements in the ring: the system is in a stable state when N is even and is not stable when N is odd [25]. For the circuit shown in Figure 3, C, I_o, I_sc, I_s and I_c must be adjusted to appropriate values to generate the periodic signal. In Equation (6), I_s is the nonlinear coefficient representing the bistability of the circuit, and I_c is the coupling coefficient among the nonlinear oscillators; c_s and c_c are constants during signal processing. Setting N = 3, c_s = c_c = 1, C = 0.1 pF, g = 1/(1000 Ω), r(t) = 0 V, V_dc = 0 V, I_s = 120 µA and I_c = 100 µA, and substituting into the system dynamics equations (Equation (7)), the numerical-simulation waveform of each oscillator is shown in Figure 4. While the frequency of each oscillator is the same, the phase difference between them is 2π/3. Next, the critical point at which the system starts oscillating, as shown in Figure 4, must be discussed, including the effect of the coupling and nonlinear coefficients. This provides enough information to control the ring system for spectrum-sensing applications.
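The following hedged sketch integrates a normalized version of Equation (6) for N = 3 with the forward-Euler method, simply to illustrate that the ring settles into a sustained oscillation. The currents are expressed in normalized units (scaled by C), so they are illustrative values rather than the 120 µA/100 µA circuit settings.

```python
import numpy as np

def simulate_ring(i_s=1.2, i_c=1.0, g=1.0, v_dc=0.0, dt=1e-3, steps=60000):
    """Forward-Euler integration of the three-element ring (normalized units)."""
    v = np.array([0.2, 0.1, 0.1])                  # slightly asymmetric start
    history = np.empty((steps, 3))
    for k in range(steps):
        prev = np.roll(v, 1)                       # V_{i-1}: unidirectional coupling
        dv = -g * v + i_s * np.tanh(v) + i_c * np.tanh(v_dc - prev)
        v = v + dt * dv
        history[k] = v
    return history

waves = simulate_ring()
v1 = waves[30000:, 0]                              # discard the transient
print("peak-to-peak of V_1:", v1.max() - v1.min())  # > 0 indicates oscillation
# The three waveforms share one frequency with a 2*pi/3 phase shift (cf. Figure 4).
```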
Relationship between Oscillation Frequency and Currents (I_c, I_s)

For spectrum-sensing applications, it is a prerequisite that the unidirectionally coupled oscillator ring generate a periodic oscillation; its state-transition condition is therefore critical. For this purpose, the fixed points of the system are analyzed according to Equation (6), and the bifurcation points are determined as the coupling and nonlinear coefficients are varied. Letting N = 3, r(t) = 0 V, V_dc = 0 V, ḡ = g/C, Ī_s = I_s/C and Ī_c = I_c/C, the system can be represented in the normalized form of Equation (8).

Figure 4. Numerical-simulation oscillation waveform of the system when N = 3, c_s = c_c = 1, C = 0.1 pF, g = 1/(1000 Ω), r(t) = 0 V, V_dc = 0 V, I_s = 120 µA, and I_c = 100 µA.

If all Jacobian eigenvalues have negative real parts, the fixed points are stable along the corresponding eigenvectors [26]. The coupled system is rewritten in the more compact form dx_i/dt = f(x_i, x_{i−1}, ḡ, Ī_c, Ī_s), i = 1, ..., N. For the coupled system with N = 3, the Jacobian at the origin, (x_1, x_2, x_3) = (0, 0, 0), is given by Equation (9). Letting −ḡ + Ī_s = I_gs, the Jacobian eigenvalues are λ_1 = I_gs + Ī_c and λ_2,3 = I_gs − Ī_c/2 ± i(√3/2)Ī_c. From the eigenvalues, there are two local bifurcation points apart from the origin: a steady-state bifurcation at Ī_c = −I_gs and a Hopf bifurcation at Ī_c = 2I_gs. The bifurcation diagram of the system is shown in Figure 5, which shows the steady-state bifurcation point at Ī_c = −1; a pitchfork bifurcation occurs at Ī_c = 2, producing the two branches of unstable nontrivial equilibrium points. Once the unstable bifurcation point Ī_cc is reached, the system begins to oscillate; when ḡ = 2 and Ī_s = 1, Ī_cc = 2.

The critical coupling coefficient and the frequency of the system oscillation can be determined according to Equation (8). Although the oscillation frequency of the system can be roughly estimated from Figure 4, an accurate computation of the oscillation period can be obtained with a decoupling method. As shown in Figure 1, the time taken by a particle moving from the left (negative) state to the right (positive) state is dominated by the escape from the negative state across the potential barrier, while the time taken to "roll" down to the positive state after crossing the barrier is negligible. Figure 4 shows that the remaining elements are approximately in a steady state while one element climbs over the potential barrier. Therefore, the system can be decoupled when calculating the cycle of a single element, and the coupling term is regarded as a constant.
The calculation of the oscillation period is divided into two parts, namely the transition from the positive state to the negative state and vice versa. Assuming that element 1 is located at the positive minimum at t = 0, the time t_1 for the evolution from the positive state to the negative state is obtained from the integral of dV_1/f_1(V_1) between the two states (Equation (10)), where f_1(V_1) = −gV_1 + I_s tanh(V_1) − I_c tanh(V_3+), and V_1+ and V_3+ are the positive minima of elements 1 and 3, respectively. Letting h_1(V_1) = −f_1(V_1), Equation (10) is rewritten as Equation (11). Because h_1(V_1) has a sharp peak at the inflection point V_1m = sech⁻¹(1/√(I_s/g)), h_1(V_1) is expanded around V_1m as Equation (12); when the integration limit tends to infinity, the integral evaluates to Equation (14). According to the same principle, the other part of the oscillation period, t_2, is obtained by calculating the transition of element 2 from the negative state to the positive state (Equation (16)), where f_2(V_2) = −gV_2 + I_s tanh(V_2) − I_c tanh(V_1−), h_2(V_2) = −f_2(V_2), and V_2− and V_1− are the negative minima of elements 2 and 1, respectively. In the end, the period of the superposition of the three elements' oscillation signals is T_Σ = t_1 + t_2, and the corresponding frequency is given by Equation (17).

In order to find the critical point of the system oscillation, the potential function corresponding to Equation (6) is expressed as Equation (18). Using it to find the critical coupling point, and requiring f(V_1) = 0 and f′(V_1) = 0 at the point V_1m, we obtain

gV_1m − I_s tanh(V_1m) + I_cc tanh(V_3+) = 0 and −g + I_s sech²(V_1m) = 0, (20)

and solving for I_cc gives Equation (21), which describes the relationship between the critical coupling coefficient I_cc and the critical nonlinear coefficient I_s. According to Equations (17) and (21), the variation of the oscillation frequency with the currents (I_c, I_s) is shown in Figure 6: the oscillation frequency of the system increases as I_c increases and decreases as I_s increases.
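Instead of evaluating the decoupling integrals, the trend stated above (oscillation frequency rising with I_c and falling with I_s) can also be checked numerically. The following hedged sketch uses the same normalized forward-Euler ring model as before and measures the period of one element from its rising zero crossings; parameter values are illustrative assumptions, not the paper's circuit currents.

```python
import numpy as np

def ring_period(i_s, i_c, g=1.0, dt=1e-3, steps=120000):
    """Estimate the oscillation period of one element from rising zero crossings."""
    v = np.array([0.2, 0.1, 0.1])
    trace = np.empty(steps)
    for k in range(steps):
        prev = np.roll(v, 1)
        v = v + dt * (-g * v + i_s * np.tanh(v) + i_c * np.tanh(-prev))
        trace[k] = v[0]
    x = trace[steps // 2:]                              # drop the transient
    ups = np.where((x[:-1] < 0) & (x[1:] >= 0))[0]      # rising zero crossings
    return float(np.mean(np.diff(ups))) * dt if len(ups) > 1 else None

print(ring_period(i_s=1.2, i_c=1.0))
print(ring_period(i_s=1.2, i_c=1.5))   # expected: shorter period (higher frequency)
print(ring_period(i_s=1.5, i_c=1.0))   # expected: longer period (lower frequency)
```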
Spectrum Sensing
Although we have shown that the unidirectionally coupled, overdamped nonlinear oscillator ring can generate an oscillation signal once the coupling coefficient (current) exceeds the critical coupling point, it cannot be said, however, that it can sense the RF spectrum. Only when the output of the antenna is fed to the system, and the frequency of the system can be locked to the frequency of the external RF signal, can the occupancy of the spectrum be recognized. In addition, the influence of the noise in the spectrum-sensing channel creates a factor of uncertainty.
Assume first that only the Gaussian white noise √(2D)ξ(t) output from the antenna is considered, where the noise is a random process with variance D and mean 0. Letting r(t) = √(2D)ξ(t), Equation (6) is rewritten as Equation (22). If the forward-Euler integration method is used for the numerical analysis of the differential equations, then under the conditions c_s = c_c = 1, V_dc = 0 V, ḡ = g/C, Ī_s = I_s/C and Ī_c = I_c/C, Equation (22) becomes Equation (23). Through numerical simulation in MATLAB, the bifurcation characteristics of the system with Gaussian white noise are obtained under different noise variances. Table 1 lists the critical coupling points for different noise variances and shows that the variation of the critical coupling point is minor when Gaussian white noise is fed into the system. Figure 7 shows the spectrum of the system oscillation waveform when the noise variance is 20, 10, 0, −10, −20 and −30 dBm; except for the 20 dBm case, the frequency of the oscillation waveform is the same in all cases.

Table 1. Critical coupling points for different noise variances D (dBm) (N = 3, c_s = c_c = 1, C = 0.1 pF, g = 1/(1000 Ω), V_dc = 0 V, I_s = 120 µA, I_c = 100 µA).

Figure 6. Variation of the oscillation frequency with the currents (I_c, I_s), which indicates that the oscillation frequency of the system can be determined by (I_c, I_s) (N = 3, c_s = c_c = 1, C = 0.1 pF, g = 1/(1000 Ω), r(t) = 0 V and V_dc = 0 V).

Next, assume that only the RF signal output from the antenna is considered; spectrum sensing requires a radio signal to be fed into the system. We know from Figure 6 that different critical currents (I_sc, I_cc) put the system into a critical state on the verge of oscillation. When the RF signal r(t) = A(t)cos[ωt + φ(t)] is fed into the system as in Equation (6), where A(t) is the instantaneous amplitude, φ(t) is the instantaneous phase and ω is the carrier frequency, Equation (6) is rewritten as Equation (24).
The system represented by Equation (24) is a nonautonomous, or forced-oscillation, system. According to the theory of nonlinear oscillators, when the difference between the frequency of the external signal and the free oscillation frequency of the nonlinear oscillator is small enough, the frequency of the oscillation is locked to the external signal. Therefore, such a system functions as a spectrum-sensing device, operating in the critical region of oscillation when no signal is present. Figure 8 shows the oscillation and non-oscillation regions related to (I_c, I_s); the boundary between the two regions marks the critical transition from the non-oscillation state to the oscillation state. When there is no signal, the system stays in the non-oscillation region; when the antenna output contains a signal, the system crosses the critical point into the oscillation region, according to Equation (24) and Figure 8, and each element in the system then oscillates as shown in Figure 4. Consider a radio signal to be detected, s(t), with a carrier frequency of 2.421 GHz and a power of −50 dBm. At first the currents are set to (I_s = 220 µA, I_c = 300 µA), so the system is in the critical region and there is no oscillation waveform. When the radio signal is fed into the system, the output waveform of each element is as shown in Figure 9: the frequency of the oscillation waveform {x_1(t), x_2(t), x_3(t)} is locked to the carrier signal s(t), and its amplitude is far greater than that of the signal.

Figure 9. Output waveform of each element when an external radio signal with frequency f_s is fed into the system; the frequency of the oscillation waveform of each element is locked to f_s/3 (N = 3, c_s = c_c = 1, C = 0.1 pF, g = 1/(1000 Ω), V_dc = 0 V, I_s = 220 µA, I_c = 300 µA, f_s = 2.421 GHz).
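The following hedged sketch mimics this sensing behaviour in normalized units: the ring is biased just below its critical coupling so that noise alone leaves it quiet, and the presence of a small sinusoidal carrier pushes it into large oscillations. The currents, drive frequency, amplitude and decision threshold are illustrative assumptions, not the paper's 2.421 GHz / −50 dBm operating point; a real design would bias much closer to the critical point to detect far weaker signals.

```python
import numpy as np

def ring_response(amp, f_in=0.05, i_s=1.2, i_c=0.07, g=1.0,
                  noise_std=0.01, dt=1e-3, steps=200000, seed=0):
    """Output of element 1 when a sinusoid of amplitude `amp` (plus noise) drives the ring."""
    rng = np.random.default_rng(seed)
    t = np.arange(steps) * dt
    drive = amp * np.cos(2 * np.pi * f_in * t) + noise_std * rng.normal(size=steps)
    v = np.array([0.8, 0.8, -0.8])            # quiet state below the oscillation threshold
    out = np.empty(steps)
    for k in range(steps):
        prev = np.roll(v, 1)                  # unidirectional coupling from V_{i-1}
        dv = -g * v + i_s * np.tanh(v - drive[k]) + i_c * np.tanh(-prev)
        v = v + dt * dv
        out[k] = v[0]
    return out[steps // 2:]                   # discard the transient

def spectrum_occupied(amp, swing_threshold=1.0):
    out = ring_response(amp)
    return float(out.max() - out.min()) > swing_threshold

print(spectrum_occupied(0.0))   # noise only: ideally False (ring stays quiet)
print(spectrum_occupied(0.3))   # carrier present: ideally True (large output swing)
```

This toy model only illustrates the occupied/idle decision; the f_s/3 frequency-division locking reported for Figure 9 appears when the carrier is commensurate with the ring's natural switching rate and is not reproduced by these illustrative parameters.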
Circuit Experiments

Based on the circuit and the structure of the unidirectionally coupled, overdamped nonlinear oscillator shown in Figure 2, we designed an experimental spectrum-sensing circuit composed of three elements. The circuit, which includes a nonlinear oscillator ring and an ADC, is shown in Figure 10.

The setup for the spectrum-sensing experiment is shown in Figure 11. First, the critical point (I_sc, I_cc) is adjusted so that the circuit operates in the critical region of oscillation. Next, the signal generator is used to generate the RF signal that is fed to the circuit. When the frequency of the signal falls into the spectrum-sensing range, the circuit starts to oscillate; the frequency of the oscillation signal is 1/3 of that of the RF signal. As shown in Figure 12, by changing the intensity and frequency of the input signal, the relationship between the spectrum-sensing bandwidth and the amplitude of the input signal, or the conductivity g, can be obtained by observing whether the circuit oscillates. In Figure 12, the region labeled "unlocked" is where the system is not synchronized to the external RF signal, while the region labeled "locked" is where the system is synchronized to the external RF signal.

Figure 12. Relationship between the spectrum-sensing range of the system and the amplitude of the input signal intensity, or the conductivity g (I_sc = 220 µA, I_cc = 300 µA).
Discussion

Because the overdamped Duffing oscillator cannot oscillate by itself, the inertial term in the Duffing equation can be ignored, which simplifies the model analysis in this paper. The simplified model is a bistable system. When N overdamped bistable systems are unidirectionally coupled into a ring, oscillation may occur under specific conditions. The circuit of this bistable system is shown in Figure 3 and is modeled by Equation (6). Through analysis, the oscillation frequency of the ring system is determined mainly by the currents (I_s, I_c). The relationship between the free-running oscillation frequency and these currents is shown in Figure 6, and the response of the system is divided into oscillation and non-oscillation regions, as shown in Figure 8. When the system is used for spectrum sensing, it operates in the critical non-oscillation region. Once an RF signal appears in the channel, the system enters the oscillation region and its oscillation frequency is locked to the RF signal. The result obtained by numerical simulation is shown in Figure 9, which indicates that this phenomenon occurs as long as the currents (I_s, I_c) are appropriate. The frequency of each element is locked to 1/N of the frequency of the external radio signal. This result provides two benefits for spectrum sensing: (1) weak radio-signal detection is converted into detection of a much stronger oscillator waveform, which relaxes the requirement on the ADC's dynamic range; and (2) the ADC's sampling rate is reduced. In addition, regarding the Gaussian white noise in the radio channel, the data reported in Table 1 and Figure 7 show that it has no real effect on the oscillation of the system.
Through practical circuit experiments, the spectrum-sensing functionality is verified. Because the frequency of the system is determined by the currents (I_s, I_c), it is necessary to determine the critical currents (I_sc, I_cc) according to the operating frequency band; afterwards, the frequency of the system can be locked to external radio signals. From the experiments, the relationship between the spectrum-sensing range of the system and the amplitude of the input signal intensity, or the conductivity g, is obtained. The data show that the spectrum-sensing bandwidth increases with the amplitude of the input signal; in addition, increasing g in Equation (6) also expands the spectrum-sensing range. If a cognitive radio system needs to be aware of a band beyond that covered by a single-ring system, multiple ring systems operating at different currents (I_s, I_c) can be combined, with each system covering a specific frequency band; thus, spectrum sensing over the entire frequency band can be achieved. Furthermore, a more sophisticated current-control circuit is worth studying in order to extend the frequency band of a single-ring system in the future.
The conventional method requires A/D sampling and complicated digital signal processing, which is time-consuming. The proposed scheme, in contrast, performs spectrum sensing in the time domain, so the time needed to detect the presence of the primary signal is shorter and the probability of interfering with the primary user is reduced.
Conclusions
In this paper, a spectrum-sensing method based on a unidirectionally coupled, overdamped nonlinear oscillator ring is discussed in detail. The ring system is composed of N overdamped Duffing oscillators, each simplified to a bistable system. An overdamped Duffing oscillator can be realized by a simple circuit, which is easy to implement as an integrated circuit. If the oscillators are unidirectionally coupled into a ring, the system will spontaneously generate oscillations related to the critical currents (I_sc, I_cc). The critical currents divide the response of the system into oscillation and non-oscillation regions. When the system operates in the critical non-oscillation region, it does not oscillate. However, once external RF signals are fed into each element of the system, the elements start to oscillate and the frequency is locked to the RF signal. Even if the RF signal is weak, the system still exhibits this characteristic. Regarding the usual Gaussian white noise in radio channels, there is no obvious effect on the oscillation of the system. These features are not only used to achieve spectrum sensing, but they also reduce the requirements on the ADC's dynamic range and sampling rate. The circuit experiments show that the spectrum-sensing bandwidth is related to the amplitude of the detected RF signal and the conductivity of the element. If multiple spectrum-sensing systems operating with different currents (I_sc, I_cc) are combined, so that each system covers a different frequency band, wider-bandwidth spectrum sensing can be achieved.
"Computer Science"
] |
Software Requirements Conflict Identification: Review and Recommendations
Successful development of software systems requires a set of complete, consistent and clear requirements. A wide range of stakeholders with various needs and backgrounds participate in the requirements engineering process. Accordingly, it is difficult to completely satisfy the requirements of each and every stakeholder. It is the requirements engineer's job to trade off stakeholders' needs against the project resources and constraints. Many studies assert that failure in understanding and managing requirements in general, and requirements conflicts in particular, is one of the main causes of exceeding cost and allocated time, which in turn results in project failure. This paper aims at investigating the different reasons for requirements conflicts and the different types of requirements conflicts. It provides an overview of existing research works on identifying conflicts and discusses their limitations in order to yield suggestions for improvement. Objective: To provide an overview of existing research studies on identifying software requirements conflicts and to identify limitations and areas for improvement. Method: A comparative literature review was conducted by assessing 20 studies dated from 2001 to 2014. Keywords—software requirements; requirements engineering; requirements conflicts
I. INTRODUCTION
In requirements engineering, the term conflict involves interference, interdependency or inconsistency between requirements [1].
Different studies state that failure in managing requirements conflicts is one of the main reasons for failure in software projects, caused by cost overruns and lack of time [2]. It is essential to detect and resolve conflicts in early phases of the project lifecycle to prevent re-iterations of all phases [3]. Recent research studies report a high number of conflicting requirements: in [4], n² conflicts are reported among n requirements, whereas [5] reported that 40%–60% of requirements were in conflict; in addition, functional and nonfunctional requirements were found to account for equal percentages of the conflicts.
Most research has also shown the risks of working with requirements that are in conflict with other requirements. These risks include schedule and budget overruns, which can lead to project failure; at the very least, they result in extra effort being expended.
The remainder of the paper is organized as follows: Section II gives an overview of requirements conflicts, the different reasons for requirements conflicts and the different types of requirements conflicts. Section III presents in detail the existing techniques for identifying requirements conflicts and a comparison between them. Then, Section IV discusses the limitations and research gaps in previous works and gives some recommendations that should be taken into consideration when working to find practical techniques for detecting conflicts between requirements. Finally, a conclusion of the review is given.
II. REQUIREMENTS CONFLICT
This section explains the meaning of requirements conflicts, the different reasons that may cause conflict between requirements and the different types of requirements conflicts.
A. Definition of Requirements Conflict
Conflicting requirements are a problem that occurs when a requirement is inconsistent with another requirement [7]. Consistency between requirements requires that no two or more requirements contradict each other [8]. In requirements engineering, the term conflict involves interference, interdependency or inconsistency between requirements [1]. Kim et al. [9] gave a good definition of requirements conflict as: "The interactions and dependencies between requirements that can lead to negative or undesired operation of the system." An example of a conflict in nonfunctional requirements is the tension between performance and security: the client wants certain functionality to be completed in minimal time (e.g., calculate something and display it on screen), while also requiring a secure protocol for data transfer and double-password access control.
B. Causes of Requirements Conflict
There are different reasons that cause conflicts between stakeholders' requirements. One good categorization of the reasons for conflicts is presented in [10]; it classifies the reasons into technical reasons and social reasons. Technical reasons are caused by the following difficulties: • A massive quantity of requirements can lead to conflicts between them.
• Changes in requirements during system development phases. These changes may occur after the addition of new requirements or the update of old ones [14].
• A complex system domain can lead to misunderstanding of requirements and, therefore, to conflicts between them.
The social difficulties that lead to requirements conflicts are as follows: • The system has different stakeholders with diverse interests who usually interact with each other, which causes conflicts.
• Changes in the system's stakeholders by adding new stakeholders with different needs or by changing stakeholders' requests.
Therefore, there are different sources of inconsistencies between requirements, and these may jeopardize the success of the software development. Researchers have been working to find various solutions for this problem.
C. Types of Requirements Conflict
The literature review has shown that there are no predefined classifications for conflicts in requirements. Each work provides a different classification for the conflicts it finds, based on the technique used to detect them.
Poort and de With [15] grouped functional requirements based on nonfunctional requirements; this means finding all primary functional requirements that share similar nonfunctional requirements and grouping them together. Two types of conflicts are then defined: grouping conflicts, which are caused by differences in the grouping of functions, and in-group conflicts, which are conflicting requirements within one function group. For example, consider three function groups called workflow, data entry and analysis. For data entry and analysis, the security requirements are more restrictive than in the workflow group, whereas the modifiability requirements for analysis are more stringent than those for data entry and workflow. Sadana and Liu [16] analyzed functional and nonfunctional requirements and built a functionality and quality-attribute hierarchy. Two types of conflict in NFRs are then defined, based on a comparison of all the lowest-level NFRs, if a conflict is still detected among the NFRs: mutually exclusive conflicts and partial conflicts. Mutually exclusive conflicts are defined as follows: NFRs A and B are in mutually exclusive conflict if all the lowest-level requirements in NFR A have a conflict with the lowest-level requirements in NFR B. Partial conflicts are defined as follows: NFRs A and B are in partial conflict if some of the lowest-level requirements in NFR A have a conflict with the lowest-level requirements in NFR B. A small sketch of this distinction is given below.
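The following hedged sketch illustrates the mutually-exclusive versus partial distinction just described: given the lowest-level sub-requirements of two NFRs and a set of conflicting pairs, it classifies the NFR-level conflict. The data structures and names are illustrative assumptions, not taken from Sadana and Liu's framework.

```python
def classify_nfr_conflict(nfr_a, nfr_b, conflicting_pairs):
    """nfr_a, nfr_b: lists of lowest-level requirement ids.
    conflicting_pairs: set of (id_a, id_b) pairs known to conflict."""
    hits = sum(1 for a in nfr_a if any((a, b) in conflicting_pairs for b in nfr_b))
    if hits == 0:
        return "no conflict"
    return "mutually exclusive" if hits == len(nfr_a) else "partial"

security = ["S1_encryption", "S2_audit_log"]
performance = ["P1_response_time", "P2_throughput"]
pairs = {("S1_encryption", "P1_response_time")}      # only one low-level clash
print(classify_nfr_conflict(security, performance, pairs))   # -> "partial"
```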
Heng and Ming [17] defined three types of inconsistent requirements based on multi-coordinated views of requirements: the case where one stakeholder has an incomplete requirement while others have complete requirements; overlapping requirements, where one part of the processes in one set overlaps with another but not fully; and totally disjointed requirements, where two views of the requirements are completely disjoint.
Butt et al. [2] defined different conflicts based on the classification of requirements into mandatory, essential and optional. Mandatory requirements are a set of functional and nonfunctional requirements. Essential requirements are the constraints on the mandatory requirements. Optional requirements are requirements whose conflicts, if any, would not affect the acceptance of the system. For example, in a hostel management system for a university: • The system should allow the warden to assign a student a seat in his hostel (Mandatory requirement).
• The system should maintain a log of all allotments and vacations in his hostel (Essential requirement).
• The system should allow the warden to shuffle multiple students' seats (Optional requirement).
Kim et al. [9] defined two types of requirements conflicts depending on the cause of the conflict and on the authoring structure, which is action (verb) + object + resource: • Source conflict: occurs when two requirements use the same resource.
For example, with cellular phones, when a phone call comes from a number that should not be answered, the automatic response function will try to answer the call while the reception-refusal function will prevent the call from being answered.
As another example, a fire control function is required together with an intrusion control function in a home integration system. When those two functions are executed simultaneously, they will both try to send messages (a fire message and an intrusion message) using the same resource (the telephone service) at the same time, which will lead to a resource conflict.
Moser et al. [13], [18] defined three types of conflicts that can be detected: conflict between a requirement and a constraint (CRC), conflict between a requirement and a guideline (CRG), and conflict between requirements (CRR). They also gave two classifications for conflicts based on the number of requirements involved: simple conflicts (between two requirements) and complex conflicts (between three or more).
Urbieta et al. [19], [20] defined three types of conflicts in Web applications:
• Structural conflicts: differences in the data that different stakeholders expect to be presented on a Web page. • Navigational conflicts: these occur when two Web application requirements contradict the way in which links are traversed, which in turn produces navigational conflicts; that is, having two targets for a single source.
• Semantic conflicts: these happen when the same real-world object is described using different terms.
Chentouf [21] defined seven types of conflicts: 1) Duplicated requirements: If two requirements are exactly the same or one is included in the other.
2) Incompatible requirements: if two requirements are ambiguous, incompatible or contradictory. a) Two operation frequencies: when the same agent is required to perform the same operation on the same object, but at two different frequencies. b) Start-forbid: when the same event causes the same operation to be both performed and forbidden. c) Forbid-stop: when the same operation is stopped under a certain condition event and, at the same time, is unconditionally forbidden in another requirement. d) Two condition events: when the same operation is executed, stopped or forbidden on two different events. 3) Assumption alteration: when the output of one requirement's operation is part of the inputs (assumptions) or outputs (results) of the other's operation. a) Input-output: when one of the requirements performs its operation on an object (output) that is an input in another requirement. b) Output: this happens if one requirement alters the result (output), or part of it, of another requirement.
Mairiza and Zowghi [21] explained the different categories of conflicts in NFRs as: • Absolute conflict: represents a pair of NFR types that are always in conflict. For example, security and performance, or availability and privacy.
• Relative conflict: represents a pair of NFRs that are claimed to be in conflict in some cases but not in all. For example, usability and security, or usability and performance.
• Never conflict: represents a pair of NFR types that are never in conflict in software projects. They may contribute either positively, through support or cooperation, or may be indifferent to one another. For example, accuracy and security, or usability and maintainability.
In general, we can classify requirements conflicts based on the types of requirements involved: functional requirements and nonfunctional requirements. An example of a conflict in nonfunctional requirements is security (privacy metric) versus usability (ease-of-function-learning metric), so there is a trade-off between them; the developer must then choose a satisfactory solution to find the right balance of attributes. Another example is: • R1: After three consecutive failed login attempts, the account would be locked by the system.
• R2: Once the account is locked, the system sends an account lock notification email to the account's owner.
• R3: Once an account is locked, the system would also send an SMS message to the account's owner to notify him about the situation.
• R4: If a user has already received a notification via email, he will not receive the same notification via SMS.
• There is a conflict between R2, R3 and R4.
Another good classification for requirements conflicts is the one illustrated in [21].
III. REQUIREMENT CONFLICT IDENTIFICATION TECHNIQUES
Owing to the importance of accurate and complete requirements, researchers have tried to identify detection techniques and proposed solutions for requirements conflicts.
This section discusses the different existing detection techniques and their categorization. At the end, a comparison and analysis of the techniques is summarized in a table.
The proposed techniques can be classified in different ways; the simplest classification is into negotiation and automation techniques. In negotiation techniques, stakeholders and software engineers manually discuss and analyze requirements to detect any conflicts [13]. Some call this approach an informal technique that can be achieved by hiring experts to detect inconsistencies using their experience [22]. This method has some disadvantages, because it may take a long time and much effort to negotiate between different stakeholders; additionally, hiring experts can be very expensive and leaves the process prone to errors. In automation approaches, software engineers use tools to help with analyzing and managing requirements [13].
In [6], three approaches are proposed to detect requirements conflicts. The ontological approach uses an ontology to extract conflicts between terms and then between requirements. The methodological approach compares requirement representations to find conflicts and resolve them. The technological approach provides a specific technique or automation to detect potential conflicts.
The methodological approach is almost the same as the negotiation approach, since both are manual processes and depend on human effort. Additionally, the technological approach is similar to the automation technique, since both utilize tools to solve the problem of requirements conflicts.
Another classification of current detection approaches is into formalization-based approaches, model-based approaches and stakeholder-priority approaches [23]. Formalization-based approaches use formal specifications of requirements to support the search for conflicts between them. The drawbacks of this approach are the time and effort needed to formalize the requirements, and any mistake made during formalization may lead to incorrect conflict detection. The model-based approach structures the requirements into specific models before conflict identification; if the approach uses a model that is already used in the system, then developing it is fine, but if it uses a different model, this creates additional steps and therefore extra time and effort. The third approach depends on the stakeholders' discussion and the stakeholders' preferences.
A. Existing Techniques
The literature shows that requirements engineering has been one of the most active research fields in recent years. Researchers are continuously working to improve requirements quality and to resolve difficulties that may affect requirements completeness or accuracy. One of the most common problems is requirements conflicts and, because of the importance of this topic as mentioned in Section II-B, many works have presented different techniques to detect and resolve conflicts between requirements.
This section discusses these techniques, which can be placed in three categories: 1) manual techniques, performed manually by requirements engineers; 2) automatic techniques, applied automatically using software tools; and 3) general frameworks, which detect conflicts without using special techniques.
The different techniques are presented in ascending order based on their dates.
1) Manual: Most of the proposed methods are performed manually by software engineers with the help of stakeholders. Heisel and Souquières [3] presented a heuristic algorithm to detect feature interactions in requirements. The algorithm uses schematic versions of formalized requirements and consists of two parts: precondition interaction analysis, to determine any two requirements that might both be applied, and postcondition interaction analysis, to determine the candidate incompatible requirements. As the algorithm is 'heuristic', the candidates need to be checked with the software engineers and stakeholders to determine whether they are actual conflicts or not.
Robinson [6] used root requirements analysis to detect requirements interactions. The technique is composed of three procedures: first, rewrite the requirements in structured form; then, produce the root requirements hierarchies; finally, analyze the root requirements to determine the ordering of the requirements according to their degree of expected conflict. The case study results demonstrated that using root requirements analysis is more accurate and detects more conflicts than not using it.
Poort and de With [15] presented a non-functional decomposition (NFD) model that gives a new classification of requirements: primary functional requirements and supplementary requirements, the latter classified into secondary functional requirements, quality-attribute requirements and implementation requirements.
The technique defines two types of conflict: grouping conflicts, caused by differences in the grouping of functions, and in-group conflicts, which happen within one function group. To solve in-group conflicts, requirements are split into different functions, and the new functions are included in other function groups; this process repeats until no in-group conflict is found. Sadana and Liu [16] proposed a framework to analyze the conflicts among nonfunctional requirements using an integrated analysis of functional and non-functional requirements.
Conflict detection is performed on the high-level NFRs based on the relationship between quality attributes, constraints and functionality. The FR and NFR hierarchies are built and integrated to produce the high-level NFRs.
Conflict detection in NFRs is based on the relationships among ISO 9126 quality attributes. Two types of conflict in NFRs are defined: mutually exclusive and partial conflicts.
Liu [13] utilized an ontological approach to analyze conflicts in the requirements specification of activity diagrams. The requirements conflict process starts by building an action-state ontology and drawing the activity diagram for the existing requirements. It then detects requirements conflicts based on seven proposed rules: shortcut conflict, initial state conflict, final state conflict, sequence conflict, action state addition conflict, action state deletion conflict and process length conflict.
Heng and Ming [17] proposed a non-mathematical technique called multi-coordinated views that shows the different views of multiple stakeholders. The methods used for displaying the different views are color and size. Three types of inconsistent requirements can be found: when one stakeholder has an incomplete requirement while another stakeholder has a more complete requirement, fully overlapping requirements, and totally disjointed requirements. Conflict resolution is done through an agent communication protocol such as JADE with ACL.
Mairiza and Zowghi [5] proposed an ontological framework (sureCM) to manage the conflicts between security and usability requirements. The outputs of the system are a list of conflicts, the nature of each conflict based on its impact on the different components of software development, and a conflict-resolution strategy.
Butt et al. [2] proposed a Mandatory, Essential and Optional (MEO) strategy for requirements conflict resolution. The strategy defines three types of requirements: mandatory requirements, essential requirements and optional requirements.
The output of the framework is a requirements matrix containing the conflicting requirements, if any, and the suggested resolution time: prevention for mandatory requirements, detection and removal for essential requirements, and containment for optional requirements. A case study shows that the user acceptance tests for system performance, quality and conformity to user needs were passed successfully.
Mairiza et al. [11] applied an experimental approach to design a framework that manages the relative conflicts among NFRs. A suitable experiment is designed to apply the metrics and measures of the NFRs together with the functionality of the system and how the functionality is implemented (operationalization). The result of the experiment is the satisfaction level of the NFRs in the system. A two-dimensional conflict-relationship graph is created to determine whether there is a conflict between the two NFRs and the severity of any existing conflict, i.e., whether it is a strong or weak conflict, depending on the shape of the graph. Moreover, Mairiza et al. [24] proposed the novel idea of utilizing TOPSIS (Technique for Order of Preference by Similarity to Ideal Solution) to resolve nonfunctional requirements conflicts. TOPSIS is a goal-based technique for finding the alternative that is nearest to the ideal solution.
The framework takes a two-dimensional graph that shows the relationship between two NFRs. Then, a decision matrix is constructed based on the graph. The technique calculates the distance from each alternative to the ideal solution and chooses the closest one, i.e., the solution that maximizes both NFRs. Alebrahim et al. [25] presented a structural method to detect candidate requirements interactions between functional requirements. The proposed method consists of three phases. The first phase is to remove any conflicts after analyzing the problem diagrams. In the second phase, the set of candidate conflicting requirements is reduced using information about whether requirements have to be accomplished in parallel or not. In the last phase, the candidate conflict set is reduced by checking whether the combination of their preconditions can be fulfilled. A real-life example was studied, and the results show that the number of possible interactions was decreased and thus the time for looking into requirements interactions decreased by 95%; the precision was 33% with a perfect recall of 100%.
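Returning to the TOPSIS technique mentioned above, the following hedged sketch shows its ranking step: each alternative is scored by its relative closeness to an ideal solution. The decision matrix, weights and criteria are illustrative assumptions, not data from the cited study.

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """matrix: alternatives x criteria; benefit[j] is True if larger is better."""
    m = np.asarray(matrix, dtype=float)
    norm = m / np.linalg.norm(m, axis=0)            # vector-normalize each criterion
    v = norm * np.asarray(weights, dtype=float)     # apply criterion weights
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    worst = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_best = np.linalg.norm(v - ideal, axis=1)
    d_worst = np.linalg.norm(v - worst, axis=1)
    return d_worst / (d_best + d_worst)             # closeness coefficient in [0, 1]

# Three candidate designs scored on security and performance satisfaction.
scores = topsis([[0.8, 0.5], [0.6, 0.7], [0.4, 0.9]],
                weights=[0.5, 0.5], benefit=[True, True])
print(scores, "-> pick alternative", int(np.argmax(scores)))
```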
2) Automatic: The word 'automatic' here means using software tools to analyze and detect requirements conflicts instead of doing so manually.
Egyed and Grunbacher [26] used automated traceability techniques to eliminate false conflicts and cooperation. The approach automatically analyzes the requirements to identify requirements that conflict based on their attributes; attributes might be indifferent to one another, cooperative or conflicting. Then, the trace analyzer automatically identifies the trace dependencies among the requirements. Based on the knowledge of trace dependencies, the system can determine to what extent the requirements overlap: if two requirements overlap, they can be in conflict, whereas if there is no overlap between them, they cannot be in conflict. Kim et al. [9] presented a systematic process to detect and manage requirements conflicts based on requirements partitioning in natural language. A supporting tool (RECOMA) has been built and two types of conflicts are defined, source conflict and activity conflict. Requirements conflict detection is done in two steps: first, a syntactic method automatically identifies the candidate conflicting requirements; then, a semantic method is used to find the actual requirements conflicts through a list of questions. By automating the syntactic analysis, the number of requirements to be compared semantically is reduced. Two case studies are presented, and the results demonstrate that the number of requirement comparisons is dramatically reduced, and thus the time and effort are decreased. Kamalrudin et al. [8] explained how to use a traceability approach to manage the consistency between textual requirements, abstract interactions and Essential Use Cases (EUCs). An automated tracing tool (Marama AI) was built to help users extract abstract interactions from the textual requirements, map the type of interaction and create the EUC model. It supports traceability and inconsistency checking between the three forms. Experimental results show that 94% of the participants agreed that it is useful, and all said it is user-friendly and easy to use. Moser et al. [13], [18] proposed an automatic semantic-based approach for requirements conflict detection. The proposed solution consists of two main phases. The first step is to link requirements written in natural language to semantic concepts to build the project ontology. The requirements are then automatically and semantically analyzed to identify possible conflicts, using sets of assertions that should be true for all existing facts. They defined three types of conflicts that can be detected: conflict between a requirement and a constraint (CRC), conflict between a requirement and a guideline (CRG) and conflict between requirements (CRR). The evaluation results show that the prototype tool (OntRep) found all conflicts, while manual conflict analysis found 30%–80% of the conflicts; also, the correctness of the proposed approach is 100%, compared to 58.8% false positives in the manual analysis.
Urbieta et al. [19], [20] proposed a model-driven approach to detect requirement conflicts in Web applications at an early stage of software development. The approach starts by automatically listing candidate structural and navigational conflicts through structural analysis using the Navigational Development Techniques (NDT) model. Then, a semantic analysis of the requirements, formalized using a Domain Specific Language (DSL), is applied to the candidate conflicts to avoid false positives, i.e. candidates that are not actually in conflict.
Conflicts are resolved manually using the proposed conciliation rules or through stakeholder negotiation. Compared to the manual approach, the evaluation shows that the system detects 100% of the inconsistencies and reduces the time by 78%, which saves 44% of the budget.
Nguyen et al. [27] proposed the Knowledge-Based Requirements Engineering (KBRE) framework. The domain knowledge and the semantics of requirements are centralized using an ontology, and a requirements goal graph is used to detect requirement inconsistencies and overlaps. An explanation for each detected inconsistency is provided automatically. The case study shows that the performance of the system is satisfactory, measured by the running time to detect inconsistencies and the precision of detecting inconsistent requirements.
Chentouf [21] presented a solution to OAM&P (Operation, Administration, Management and Provisioning) requirements conflicts. The proposed method uses Extended Backus-Naur Form (EBNF) as the representation language for requirements. The system automatically validates each requirement statement against validation rules and then compares every pair of requirements to detect conflicts according to seven conflict inference rules. Seven types of conflicts were defined, and a proposed solution for each type was presented. Scalability tests show that the proposed solution achieves an acceptable computation time of less than a minute for more than 10,000 requirements, and that it scales well as the number of requirements increases.
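To make the pairwise, rule-based comparison concrete, the sketch below represents each requirement as a small structured record (roughly in the spirit of an EBNF-constrained statement) and applies a single illustrative inference rule: two requirements conflict if they mandate different values for the same parameter under the same condition. The record fields, the rule, and the example requirements are hypothetical and do not reproduce Chentouf's grammar or his seven rules.

```python
from dataclasses import dataclass
from itertools import combinations

@dataclass(frozen=True)
class Requirement:
    rid: str        # requirement identifier
    condition: str  # when the requirement applies
    parameter: str  # the managed parameter it constrains
    value: str      # the value it mandates

requirements = [
    Requirement("R1", "on alarm", "notification_channel", "sms"),
    Requirement("R2", "on alarm", "notification_channel", "email"),
    Requirement("R3", "on startup", "log_level", "debug"),
]

def conflicting(a: Requirement, b: Requirement) -> bool:
    """Illustrative inference rule: same condition and parameter, different mandated values."""
    return a.condition == b.condition and a.parameter == b.parameter and a.value != b.value

conflicts = [(a.rid, b.rid) for a, b in combinations(requirements, 2) if conflicting(a, b)]
print(conflicts)  # -> [('R1', 'R2')]
```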
3) General Framework: Some works cannot be classified as either manual or automatic techniques; they are therefore considered general frameworks for detecting conflicts between requirements. Shehatam et al. [22] proposed a three-level interaction detection framework (DRI-3). Level-1 uses informal approaches to detect accurate, domain-known interactions with the help of experts. Level-2 identifies requirement interactions using semi-formal approaches, where semi-formal means systematic steps without formalized methods. Level-3 applies formal approaches to detect accurate interactions.
Additionally, the paper presented a set of guidelines describing which techniques from DRI-3 can be used depending on the values of different project attributes. A case study was carried out to evaluate the efficiency of using the model compared with experts not applying it. The results show that the number of requirement comparisons decreased by 18%.
Mairiza and Zowghi [28] presented the results of an investigation of research on NFR conflicts, which resulted in a catalogue of conflicts among NFRs. The catalogue is a two-dimensional matrix that represents the interrelationships among twenty types of NFRs.
It shows three categories of relationships between the NFRs: absolute conflict for NFRs that are always in conflict, relative conflict for pairs of NFRs that are sometimes in conflict, and never in conflict for NFR pairs that do not conflict anywhere in the literature on NFR conflicts.
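Such a catalogue can be represented as a symmetric lookup table over NFR types. The sketch below is a hypothetical fragment with invented relationships; the actual catalogue in [28] covers twenty NFR types.

```python
# Hypothetical fragment of an NFR conflict catalogue: each unordered pair of NFR types
# maps to "absolute" (always conflict), "relative" (sometimes conflict), or "never".
CATALOGUE = {
    frozenset({"security", "performance"}):          "relative",
    frozenset({"security", "usability"}):            "relative",
    frozenset({"confidentiality", "auditability"}):  "absolute",
    frozenset({"reliability", "maintainability"}):   "never",
}

def relationship(nfr_a: str, nfr_b: str) -> str:
    """Look up the catalogued relationship between two NFR types (order-independent)."""
    return CATALOGUE.get(frozenset({nfr_a, nfr_b}), "unknown")

print(relationship("performance", "security"))   # -> relative
print(relationship("usability", "portability"))  # -> unknown (not in this fragment)
```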
B. Comparison between Existing Works
This section analyzes and summarizes the comparison between the different techniques to provide a quick, general review of the work done in this area.
To offer a better understanding and analysis of the existing techniques, they are classified into different categories as shown in figure 1. The categorization is as follows: • The first classification is based on the conflict identification method: whether it is done manually by the requirements engineers or automatically using software tools. A class of general frameworks is added for works that detect conflicts without using specific techniques.
• The second classification is focused on the type of requirements that the technique will be applied to: functional or nonfunctional requirements.
• The third classification determines the scope of the proposed approach: whether it covers only conflict detection, detection plus analysis of the conflicting requirements to organize them into different conflict types, or also offers a resolution technique.
• The last classification is based on the requirement representation used: whether the detection technique uses a specific formalization, structures the requirements in a particular model, or uses an ontology.
Table I summarizes the previous works, listed by reference number, together with the conflict analysis approach used to identify conflicts and the category of the proposed method (manual, automatic, or a general framework). It also states the type of requirements the technique is applied to and the scope of the technique (i.e. identify, analyze, resolve). It further indicates what representation was used to support the technique (formalization, structural model, or ontology). The last column indicates whether the proposed technique was supported by an evaluation. For example, the first row corresponds to a manual technique. The previous works that were studied include 20 papers; these are the papers most closely associated with the problem of requirement conflicts. They propose different approaches for conflict analysis and detection. For the automatic techniques, the conflict analysis approach can be classified into the following four groups: (1) semantic approaches, for techniques that use an ontology, such as [13], [18]; (2) syntactic approaches, where a syntax analysis is performed on the requirement specification, such as [9]; (3) graphical analysis, where a specific model is used, such as [20], [19], [27]; and (4) traceability approaches, where a traceability technique is used, such as [26], [8].
The main classification shows that twelve of the works (more than half of them) are manual techniques performed by software engineers, whereas only seven are automated tools that help them find conflicts in requirements (see figure 2).
The analysis of the works, as shown in figure 3, demonstrates that eleven of the twenty works address functional requirements and only six address nonfunctional requirements, while three address both.
The scope of the proposed solutions varies. As figure 4 shows, almost all of the research focuses on the problem of identifying conflicts, while thirteen works analyze the conflicts to give different classifications. Only five studies give guidelines and propose resolution approaches.
Most techniques use some representation of the requirements to help with the analysis and identification of conflicts. Figure 5 indicates that the representation methods used can be divided into three types: an ontology if the technique uses semantic analysis of the requirements, as in [5], [27], [13], [18]; a structural model if graphical analysis is used, as in [20], [19], [8], [29]; or formalization methods, which differ across works: a schematic version in [3], structured requirements in [6], two canonical forms in [16], a semi-formal ontology-driven domain-specific requirement language in [17], a DSL in [20] and [19], OWL in [27], and EBNF in [21]. Some proposed techniques use more than one type of representation: [20], [19] use an ontology and a DSL as the formalization method, while [27] uses an ontology, OWL as the formalization method, and a goal graph as a structural model.
Analysis of the previous works shows that only half of them were evaluated to test the effectiveness of the proposed method, as shown in figure 7.
Given the importance of evaluation, further analysis was conducted on the evaluated works. Table II summarizes the evaluated works, listed by reference number. The second column presents the evaluation data used to test the system. The literature shows that most works use case studies to test the effectiveness of the proposed method, except for two works: one uses a survey and the other an experiment. The third column explains the goal of the evaluation. The objectives include assessing utility, measuring user satisfaction, evaluating effectiveness, demonstrating tool feasibility, testing usefulness and ease of use, and testing the completeness and consistency of the proposed method.
The fourth column explains the method used in the evaluation. Almost all works compare the number of detected conflicts, the validity of the detected conflicts, and the cost when using the proposed method against not using any specific approach. Finally, the evaluation results are presented in the last column.
IV. DISCUSSION
The previous sections have discussed in detail the existing works on detecting and managing requirements conflicts. The topic nevertheless remains active among researchers in the field of requirements engineering.
The problem of requirements conflicts can be divided into two main parts: identifying the conflicts and resolving them. This paper focused on identifying requirements conflicts. The literature review demonstrated that most techniques proposed to decrease the risks caused by requirements conflicts are manual, while the automated approaches are tools that still rest on human analysis. That may incur costs to the project due to human error and wrong decision making.
There are still many gaps in the previous works on identifying requirements conflicts. Detecting conflicts manually takes substantial time and effort, which may delay the project; in addition, it is fallible because it relies on human judgment. Some techniques have tried to automate the detection process by using or building specific tools, and applying automation decreases human effort and time. However, all of the automated approaches are still based on human analysis to detect and resolve conflicts. Moreover, most techniques are proposals that have not been evaluated for their efficiency in detecting and resolving conflicts.
Some important issues should be considered when developing practical techniques for detecting conflicts between requirements. First, define exactly what a requirement conflict means and what it includes, in order to find a suitable technique for catching it. Then, determine the type of requirements the technique will work on and which representation of the requirement specification is most suitable. Also, determine when the technique can be applied and in which phase of software development. As a final step, determine how to measure the efficiency of the proposed technique.
V. CONCLUSION
Requirements engineering is a critical part of software development that plays an important role in software project success. However, there are several issues, caused by incorrect requirements, that may result in project failure; requirements conflict is one of these problems.
This paper provided a literature review of requirements conflict research and analyzed it to show the limitations and gaps in previous works. A more detailed analysis was also conducted on the works that were evaluated, to illustrate the evaluation methods and data used. The literature review demonstrated that most techniques proposed to detect requirements conflicts and decrease their risks are manual, while the automated approaches are tools based on human analysis, which may incur costs to the project due to human error and wrong decision making. Moreover, most of the proposed approaches were not evaluated to measure their efficiency. Finally, important issues were given as general recommendations for proposing requirements conflict techniques.
Fig. 1. Categorization of existing techniques for requirements conflicts
Fig. 3. Analysis results based on type of requirements
Fig. 4. Analysis results based on scope of the approach
TABLE I. COMPARISON BETWEEN EXISTING WORKS IN REQUIREMENTS
TABLE II. COMPARISON BETWEEN EVALUATED WORKS
Goal: assess the utility. Method: compare the number of conflicts detected using root analysis and without using it. Results: using the root analysis technique, 72 conflicts were detected, while without it only 9 conflicts were detected.
[2] Data: case study, the MEO-strategy applied during the build of a Hostel Management System as part of a university management system. Method: a feedback workshop was conducted to collect users' feedback. Results: the users' acceptance test for system performance, quality, and conformity to user needs was achieved successfully.
[30] Data: case study, the proposed approach used in a real-life example in the smart grids domain. Goal: validate the proposed approach. Method: compare the number of possible requirement interactions with and without the problem diagram to measure the effort (time) needed to detect interactions, and measure the precision and recall of the problem-domain approach. Results: the number of possible interactions decreased and thus the time for looking into requirement interactions decreased by 95%; the precision was 33% with a perfect recall of 100%.
[9] Data: case study, the proposed approach applied to a Home Integration System (HIS) and a cellular phone domain. Goal: demonstrate the tool's feasibility. Method: compare the total number of comparisons needed to find conflicts using the proposed automated tool against the manual approach by developers, and compare the time and cost of the two approaches. Results: the number of comparisons using the manual approach is 378 for HIS and 666 for the cellular phone case, compared to 79 and 100 with the automated approach; as the number of comparisons decreases, the time and cost decrease.
[8] Data: survey with 8 software engineering post-graduate students. Goal: test usefulness and ease of use. Method: a five-point Likert scale was used to evaluate the usefulness and ease of use of the proposed approach. Results: 94% of the participants agreed that it is useful, and all said it is user friendly and easy to use.
[13], [18] Data: real-world industrial case study with 6 project managers and a requirements expert. Goal: evaluate the effectiveness. Method: compare the number of conflicts detected using the proposed method with the manual approach, and compare the percentage of correctness of the two approaches. Results: the prototype tool (OntRep) found all conflicts while manual conflict analysis found 30%-80% of the conflicts; the correctness of the proposed approach was 100%, compared to 58.8% false positives in the manual analysis.
[20], [19] Data: experiment, simulation in a real environment of Mosaico. Goal: measure the efficiency and effectiveness. Method: calculate the number of inconsistencies detected and compare the time and cost with the manual approach. Results: the system detects 100% of the inconsistencies and the time is reduced by 78%, which saves 44% of the budget.
[27] Data: case study on a traveler social networking system. Goal: evaluate the effectiveness. Method: measure the performance of the system in the number and precision of detected inconsistencies. Results: the performance of the system is satisfactory, measured by the running time to detect inconsistencies and the precision of detecting inconsistent requirements.
[21] Data: proof-of-concept example and simulation test. Goal: test completeness, consistency, and scalability. Method: compute the computation time for different numbers of requirements. Results: an acceptable computation time of less than a minute for more than 10,000 requirements; the approach also scales well as the number of requirements increases.
[22] Data: case study in the smart homes domain. Goal: evaluate the efficiency. Method: compare the number of comparisons done by an expert when applying the approach and without applying it. Results: the number of requirement comparisons decreased by 18%.
| 8,477 | 2016-01-01T00:00:00.000 | [
"Computer Science",
"Engineering"
] |
Religion, Populism, and the Politics of the Sustainable Development Goals
This article examines the Sustainable Development Goals (SDG) framework as a political project in tension with its universal and multilateral aspirations to serve as a counterbalance to narrow populist visions increasingly dominating global politics. Building upon Laclau and Mouffe’s theory of populism and their notion of ‘radical democracy’, we conceptualise the SDGs as a struggle for hegemony and in competition with other styles of politics, over what counts as ‘development’. This hegemonial struggle plays out in the attempts to form political constituencies behind developmental slogans, and it is here that religious actors come to the fore, given their already established role in organising communities, expressing values and aspirations, and articulating visions of the future. Examining how the SDG process has engaged with faith actors in India and Ethiopia, as well as how the Indian and Ethiopian states have engaged with religion in defining development, we argue that a ‘radical democracy’ of sustainable development requires a more intentional effort at integrating religious actors in the implementation of the SDGs.
global development policy and practice, faith-based organisations (FBOs) are more likely than before to receive donor funding for development and humanitarian work, and evidence for the positive role that faith actors can play in social welfare is as likely to be stressed as the challenges such engagement can bring (Tomalin, 2013, 2015). The consultative framework of the SDGs and its potential for including faith actors in its vision of sustainability prompted us to conduct exploratory research in Ethiopia, India, and the UK, aimed at finding out, firstly, whether the civil society consultation process was as successful as claimed with regard to the inclusion of faith actors and their perspectives, and secondly, how religious organisations are engaging in the early implementation phase of the SDGs. In the context of a research networking project funded by the UK Arts and Humanities Research Council, titled 'Keeping Faith in 2030: religions and the sustainable development goals', we conducted three participatory workshops in India, Ethiopia, and the UK from 2017 to 2019, and obtained ten key informant interviews. The workshops gathered representatives from the most relevant national and international FBOs and involved various activities to discuss how they were involved in the consultation and negotiation process to set the SDGs as well as how they were interpreting, adopting and implementing them in their work with local communities. 1 Each followed the same format of small group discussions of set questions, so that comparisons could be made. These were recorded, and detailed notes taken, and our findings were published in a series of reports and articles (see esp. Haustein and Tomalin, 2019; Tomalin and Haustein, 2020; Tomalin et al., 2019, see also https://religions-and-development.leeds.ac.uk/researchnetwork).
The findings from this preliminary research indicate that the SDGs are far from forming an alternative, internationalist development platform to counter populist agendas, at least as far as the important dimension of religion is concerned. Their reach among FBOs was extraordinarily low, with few involved in the global consultations. Where FBOs did engage with the SDGs, this was mostly prompted by top-down reporting structures rather than the usefulness of the framework. Furthermore, while some interlocutors saw potential in the SDGs for holding their governments to account, the early implementation reports in the Voluntary National Reviews (VNRs) of India and Ethiopia suggest the opposite: governments are subverting and rescripting the SDG agenda to suit their populist politics.
While these initial findings need to be deepened in further studies, they point to an emerging hiatus between the aspiration of the SDGs to form a multi-lateral development effort and the significant state capture the SDG process allows. This calls into question the juxtaposition between the internationalist framework of the SDGs and the rise of local or national populisms. Can the SDG process and its multi-stakeholder framework be positioned as a suitable antidote to the rise of populism? Or is the implementation of the SDGs threatened if not thwarted by populist governments? Are both processes even in competition, or do they cater to different constituents and political echo chambers? How do religions participate in each, and how have religious actors been addressed by the SDG process and populists in India and Ethiopia? And finally, is it justified to see them juxtaposed as inherently different political processes, with the SDGs following a liberal, inclusive ethos to development, and populism an instrumentalist and potentially exclusive one?
These questions prompt us to rethink the relationship between international and national development policy in order to understand the challenges the SDGs are facing in this early implementation period, especially in the mobilisation of local facilitators such as FBOs in restrictive national environments. We begin by drawing upon the theory of populism offered by Laclau and Mouffe (1987), which offers a way of breaking down the problematic binary between 'progressive' multi-lateral politics and 'retrograde' particularistic populist politics, instead viewing populist strategies as endemic to politics as a whole. Next we apply this framework to the SDGs. While others have drawn attention to the politics of the SDG framework for masking a Northern neo-liberal agenda under the guise of universalism (e.g. Fukuda-Parr and McNeill, 2019; Gabay and Ilcan, 2017), our engagement with Laclau and Mouffe (1987) offers an original contribution to debates in this area in reframing the SDG process as a form of development populism in itself that stands in competition rather than in contrast with nationalist politics. We will also show that religions operate in the same terrain of constructing 'the people' of development, making them a vital resource within this competition. In the remaining two sections, we present our findings from India and Ethiopia from this Laclauian perspective to illustrate how the aspirations of the SDG platform are being side-lined by national governments and fail to resonate at the grassroots level, but also how religious actors offer a resource to overcome these challenges. We conclude the article with comparative reflections on the questions set out above.
Hegemony and empty signifiers: on the rationality of populism
In the introduction to this themed section, the editors note that Laclau and Mouffe's notion of populism has an unexplored potential for the policy world (1987). We agree, because Laclau and Mouffe open up new analytical space for how policy frameworks succeed and fail. Rejecting the problematic distinction between populist and non-populist discourses, Laclau and Mouffe focus on how political majorities are organised around a perceived foundational identity. For policy frameworks, such as the SDGs, the question of their success does not depend on the rationality or persuasiveness of their content alone, but on how they manage to mobilise a plurality of actors under a common demand or signifier. This requires moving beyond satisfying elites (which is just as likely to be a marker of liberal political frameworks as those termed 'populist') and instead enabling a broad hegemonial struggle, a 'radical democracy', as to what symbolises and encapsulates this common demand.
Laclau and Mouffe's understanding of society is captured in their theory of discourse and rests upon the rejection of any ontological ground beyond the 'play of differences' (Laclau, 2005: 69). This leaves no privileged, a priori vantage point for political analysis, like class, (social) function, or material conditions. All such identities and distinctions are seen to arise within the play of difference itself, as indeed is any signification, as de Saussure has shown (Laclau, 1994: 168). This interdependence of all identity articulations leaves us with the problem of an open-ended system: how can political discourses assert meaning and values if everything is relative? Laclau and Mouffe conclude that a preliminary closure or boundary is necessary, because without a 'fictitious fixing of meaning there would be no meaning at all' (Laclau, 1996: 205). Therefore, social identities like family, ethnicity, or nation are only possible by introducing arbitrary boundaries. Politics becomes the necessary ground for the emergence of social identities.
Within a system of relational differences, such boundaries or closures cannot be represented by another difference, but only as an antagonistic outside, as 'pure negativity' (Laclau, 1994: 170). This is made possible by subverting the system of differential relations, by 'privileging the dimension of equivalence to the point that its differential nature is almost entirely obliterated' (Laclau, 1994: 171). Put in simple political terms: 'our' differences pale in comparison to 'those' people who threaten our very existence as a social group.
This constitutive equivalence is represented within any discourse by what Laclau and Mouffe called a 'floating signifier' (Laclau and Mouffe, 1987) or 'empty signifier' (Laclau, 1994). This denotes an element of the equivalential chain, which has been stripped of its referent to stand for the equivalence itself. Populist movements therefore arise when differential demands are absorbed into a chain of equivalences under a name that originated from a particular demand but came to signify the movement itself. Examples for this abound, from 'liberté, égalité, fraternité' to 'make America great again.' Which signifier from the equivalential chain takes on this role is the result of a 'particular conjuncture' and produces a 'hegemonic relationship' (Laclau, 1994: 175), because constructing an empty signifier is not 'a conceptual operation of finding an abstract common feature' but 'a performative operation constituting the chain as such' (Laclau, 2005: 97). The forceful articulation of the empty signifier produces the perceived equivalence; it becomes tangible through the shared slogan.
It is easy to see that this act of 'constructing the 'people'' (Laclau, 2005: 66) is not a proprietary feature of populism but points to the structure of politics as a whole. This is why Laclau bemoans the intellectual vacuity of the category of populism and suspects that behind its dismissal or downgrading is the 'denigration of the masses' (Laclau, 2005: 63). Populism is no longer understood as a 'fixed constellation but a series of discursive resources which can be put to very different uses' (Laclau, 2005: 176). Moreover, the 'minimal unit of analysis' is no longer the group, but the 'socio-political demand' (Laclau, 2005: 224), so that the question of analysing 'populism' is not how and why 'populist' politicians lead 'the people' astray, but how demands are being mobilised and drawn together to 'construct "a people".' This allows for a more fine-grained analysis of political mobilisation than the juxtaposition of populism with liberal democracy. Laclau and Mouffe develop from this the project of 'radical democracy': rather than narrowing the field of politics in the name of a universal (as in a Leninist approach), the new left needs to attain hegemony by renouncing the 'discourse of the universal' (Laclau and Mouffe, 1987: 192) and aim for a diversification of demands under a maximally broadened chain of equivalences.
Populism, the SDGs, and religion: constructing 'the people' of development
The universalism of the SDGs has been juxtaposed against an inward-looking mood that increasingly influences global politics (Glennie, 2019; Marschall and Klingebiel, 2019). Our analysis, however, leads to a different perspective. From the vantage point of Laclau and Mouffe, it is important to suspend the value-laden juxtaposition of universalism versus nationalism in favour of understanding them as engaged in one and the same hegemonial struggle over 'development': How do both the SDG framework and nationalist politicians seek to mobilise social demands in the interest of collecting constituencies? This draws attention to surprising parallels. For instance, the phrase 'leave no one behind' was not only used as the underpinning motto of the SDGs, part of a push for more inclusiveness in development thinking after the MDGs (OECD, 2018; Kharas et al., 2019), but also as a powerful election slogan by Narendra Modi, the Hindu nationalist prime minister of India, even before its adoption by the SDGs (Malhotra, 2020). Wraight provides a critique of the notion of the 'left-behind' as 'an inherently value-laden term' thereby lending itself to blaming perpetrators or 'valorising communities as "victims"' (2018: 7, 9). This, we argue, highlights that the phrase is a construction employed by both the SDGs and populist governments, where each defines a population that is to be 'caught up' according to their own normative standard. This is not to argue that Modi and the SDGs are pursuing a similar programme or agree on what counts as progress and development. Modi's construction of the people who are 'left behind' does not include religious and ethnic minorities outside the Hindu populace and represents a specific hegemonial occupation of the 'left behind' (i.e. the repressed, 'backward' Hindu). However, the shared use of this slogan does point to the parallelism of their hegemonic operation in constructing 'the people' of development. A number of differential demands for better nutrition, water, education, gender equality, infrastructure, etc. are subsumed under the empty signifier of the 'left behind', which constitutes the discourse of global development and nationalistic progress rhetoric alike. In this sense, both the universalism of the SDGs and the nationalism of 'populist' governments are engaged in a hegemonic struggle: who speaks for the 'left behind', how will they be 'caught up', and to what landmarks of progress? In a political sense, the question of the success and indeed sustainability of the SDGs, therefore, does not hinge on an abstract global competition between liberal universalism and national populism, but on how effectively national governments and the UN SDG process assert their hegemony over defining the equivalential chain of deprivation and development in their chosen settings. Therefore, in this Laclauian perspective, 'populist' politicians such as Modi are not the antagonistic outside of development discourse, but its direct competitors in mobilising and uniting the same constituents and their ambitions.
Brown, writing more generally on the concept of 'sustainability' in politics, has similarly argued that the notion of sustainable development has served the 'existing powers' in preventing a threat to their dominant position by capturing 'sustainability' as an 'empty signifier' and denying its radical potential (2016: 130). In order to correct these failures, Brown looks to civil society as the only place where 'new articulations of sustainability can [...] take shape' (Brown, 2016: 130). This would entail drawing in diverse constituencies and their different demands for 'a more authentic sustainability politics' (Brown, 2016: 128) and utilising the 'empty signifier' of 'sustainability' as a basis for a radical democracy against the current system that fails to articulate a collective future (Brown, 2016: 128-9). While we agree with Brown in general, we would caution against valorising the notion of civil society as a golden ticket to greater inclusion or democracy. Rather, we side with Munck (2002), who understood 'global civil society' as a 'floating signifier' following Laclau. This means demarcating the notion of global civil society as part of a struggle for hegemony and a possible site for 'radical democracy.' The SDGs have become part of this hegemonic struggle, claiming to speak with and for civil society in collecting and articulating global development aspirations. The question then becomes how successful this claim to 'the people of development' has been, especially in competition with national governments, and whether the SDGs offer a platform that can be broadened in the direction of 'radical democracy' as a way to circumvent narrow national and global agendas. We recognise that the SDG framework is not lacking in universal ambition, which might indeed make it seem suitable to broadening the equivalential chains of 'sustainable development' in a trajectory of 'radical democracy'. But just as 'radical democracy' is not a rhetorical ambition but a hegemonic process, the success of the SDG framework will ultimately depend on the extent to which it manages to connect with various vectors of civil society in order to mobilise and unite local development demands in a global movement, instead of relying on narrow national planning frameworks and their political purposes. This is where religions come to the fore as an important factor.
Religions are understood here in the Laclauian sense not as universal ontological entities but as systems that produce (often historically sedimented) discursive closures via equivalential chains (Bergunder, 2014). This has a number of implications. Firstly, what is included in this equivalential chain and what forms its 'floating' or 'empty signifier' may differ from context to context. This means to study in detail what sort of equivalences religious identities and actors stand for and what antagonistic closures they demarcate -or, in other words, who or what is seen to be a legitimate (even if contested) part of a particular religious discourse and what is framed as inimical or entirely irrelevant to its inner logic. This may or may not include political statements, the provision of welfare, dietary regimes, environmental guidance and so on, depending on the particular historical constellation of a religious discourse. Secondly, if religions are reconceptualised as discursive articulations within the same political space of competing 'empty signifiers', the question is how their demands can be included in a 'radical democracy' of development. While secularist thought no longer completely dominates the social and political sciences, as well as global development policy and practice, where multi-lateral frameworks such as the SDGs do engage with religion they do so in a narrow and instrumentalist way that picks and chooses which type of religion and faith actors to partner with. A Laclauian-inspired approach to religion and the SDGs, seeking to effect a radical democracy, needs to move beyond such an instrumentalist approach and instead link up with the diverse ways that faith and faith actors inspire and frustrate the aims of the SDGs. Finally, rather than the premise of discursive closure resting upon inclusion in a moral and spiritual community with shared religious beliefs, Laclau's approach lends itself to thinking about religious identity as a fluctuating political rather than fixed social category where one's belonging in a community is a dynamic of constantly actualised inclusion/exclusion, as in India.
Thus, religious communities operate in the same discursive terrain as the SDGs, mobilising social demands and development ambitions under empty signifiers such as 'salvation', 'dīn', or 'dharma'. Therefore, the question for the remainder of this article is whether and how the SDG agenda can be broadened to include religious actors in India and Ethiopia in a 'radical democracy' of development by broadening the empty signifier of 'sustainable development' to include its counterparts within religious discourse. So far, a perspective of instrumentalism prevails, in which religious communities are recruited to help deliver progress in the name of the SDGs but have had next to no input in shaping the goals and their targets (Tomalin et al., 2019: 109-110). In addition, beyond collaboration in securing basic needs, religious actors are often seen in global development discourse as a hindrance to achieving progress. This is what makes a local perspective interesting: how do religious actors view and engage with the SDG goals and targets, and do they see the same discrepancies as global development actors?
We will now discuss these challenges with regard to India and Ethiopia, drawing upon the above-mentioned workshops in both countries. This will be guided by the following questions: Firstly, how does the state implement the SDG framework and what does this mean for the local hegemony of the UN's SDG process? Secondly, how does the nexus of politics, development, and religions play out in each country? And finally, how do faith actors engage with the SDG framework and what might this mean for the local potential of the SDG process?
Religion, populism, and the politics of the SDGs in India
The Government of India has engaged actively in the SDG process, submitting Voluntary National Reviews (VNRs) in 2017 and 2020 (NITI Aayog, 2017, 2020). The 2017 VNR claimed that 'the country's war against poverty has become fundamentally focused on social inclusion and empowerment of the poor', clearly aligning with the SDG slogan to 'leave no-one behind' (NITI Aayog, 2017: v). The malleability of the empty signifier 'leave no-one behind' has enabled the Indian government to domesticate the SDG agenda to suit its populist politics, asserting local hegemony over the SDG process. From the outset of the SDG process, the government stressed that the 'country's national development goals are mirrored in the SDGs [...] The memorable phrase Sabka Saath Sabka Vikas, translated as "Collective Effort, Inclusive Development" ... forms the cornerstone of India's national development agenda' (NITI Aayog, 2017: v). Given that the Indian government is driven by a Hindu nationalist agenda, which can be characterised as a form of 'religious populism' (Zuquete, 2017), its construction of 'inclusive development' is at odds with the SDG notion of 'leave no-one behind'. Brubaker argues that populist movements, figures, and regimes claim 'to speak in the name of "the people" and against various "elites"' (2017: 359). For Hindu nationalists, the people are the Hindus and the enemy, or the antagonist outsider, to reference Laclau, are the 'English-speaking, Westernized, uprooted elites' (Jaffrelot and Tillin, 2017: 5). In this version of populism, we find Muslims, and indeed other religious minorities, as enemies outside the Hindu cultural community, outside the equivalential chain of the 'left behind'. This has given rise to the cultural and socio-economic marginalisation of Muslims in particular (Sachar, 2006; Shariff, 2016). Those who have drawn attention to Muslim disadvantage in India, particularly since the publication of the 2006 Sachar Committee Report, are involved in a tussle to lengthen the chain of the 'left behind'. Empty signifiers such as Hindu rastra, the Hindu nation, serve nationalism in pulling together the demands of different groups of citizens who have been convinced that they have suffered at the hands of the secular liberal elite and their preference for serving Muslim needs.
Kazmin tells us that 'during his first prime ministerial campaign in 2014, Modi played down the Hindutva [Hindu nationalist] aspect of his political persona' (Kazmin, 2019). However, according to Vaishnav, 'since clinching a historic re-election [...] the BJP appears to have prioritized majoritarianism over economic renewal' (Vaishnav, 2020). This has included the criminalisation of the Muslim instant triple talaq, or instant divorce, in July 2019 (Woodyatt and Pokharel, 2019), viewed by many Muslims as an example of the state interfering in religion; the revoking of the autonomy of Muslim-majority Indian-administered Kashmir in August 2019 (Hussain, 2019); the decision of the supreme court in November 2019 to permit the rebuilding of a Hindu temple in Ayodhya on the disputed site where the Babri Mosque was pulled down by Hindu militants in 1992 (Ellis-Petersen, 2019); the passing in December 2019 of the Citizenship Amendment Bill, which allows illegal migrant members of all religions apart from Islam the right to Indian citizenship (Kuchay, 2019); as well as increasing 'cow vigilantism', or lynchings of cattle traders (HRW, 2019). This is against the backdrop of a
shrinking civil society space, where the state increasingly stands in the way of civil society organisations (CSOs) mobilising and advocating on behalf of the marginalised and minorities. CSOs/NGOs that are faith-based face additional challenges, since the Modi government is particularly sensitive to those that are viewed as engaging in conversion activities. Given this political context, it is striking that, on the one hand, the 2020 SDG VNR articulates an increased commitment to 'leave no-one behind' and to localise the SDGs via the village-level gram panchayat system of local governance and the consultation of over 1000 CSOs in the production of the report (NITI Aayog, 2020), but, on the other hand, makes no reference to religion as an identity marker or as a source of continuing social, political and economic inequality, nor to the role of faith actors in representing marginalised communities or localising the SDGs.
When we held our 'Keeping Faith in 2030' workshop in New Delhi, India, in December 2017, we learnt that most of the participants, which included representatives of Christian, Buddhist, Muslim and Hindu national and international faith-based organisations, had been unaware of the local consultation meetings to decide the SDGs, although some had been more recently involved in the consultation to set the national-level indicators for the SDGs (Tomalin et al., 2017). We heard that national and local faith leaders had not been involved in these consultations, and where faith-based organisations (e.g. World Vision) did take part they did so as civil society groups rather than as FBOs. Those FBOs that were involved in the SDG consultations appeared to be those who were already 'at the table' and could comfortably converse in the international development lexicon (Tomalin, 2019). While most participants told us that the SDGs had not changed the way they worked, the SDG framework was shaping how they branded their activities and articulated them to the outside world. This did not mean that the SDG framework was benignly irrelevant in its impact. The majority of those present at the workshop represented minority faith traditions in India, namely Christianity, Buddhism and Islam, and for them the SDG framework had a very real impact on their work in enabling them to link to networks and mechanisms outside the parochialisms and populisms that perpetuated their marginalisation, and enabled them to build solidarity with others transnationally.
When it came to discussion of the relationship between religion and the SDGs, two approaches were apparent. Firstly, most participants stressed that the SDGs were secular or universal goals (i.e. not relating to the particularistic views of certain religions), and they did not use religious language to reference or contextualise the SDGs. When they publicly spoke about themselves as faith-based organisations, they did so as representatives of the interests of marginalised groups that shared a faith tradition rather than as a moral community that has a privileged perspective, grounded in religious teachings, on how to achieve change. This emphasis on FBOs as social rather than moral entities is, we suggest, a reaction to rising Hindu nationalism that makes it dangerous for minority faiths to claim public moral space. However, it also reveals an important disconnect with the global discourse driving the 'turn to religion' by international development policy and practice over the past couple of decades, which addresses them as moral communities. Secondly, within local faith communities (and away from public view) some FBOs were using religious language to articulate the SDGs to their constituencies, particularly against religious views being used to justify attitudes and behaviours (such as gender inequality) that would compromise SDG success (Tomalin and Haustein, 2020).
Religion, populism, and the politics of the SDGs in Ethiopia
Development targets and achievements are at the forefront of Ethiopia's political discourse, in pursuit of a Chinese model of state-led economic growth. With high GDP growth rates in the last fifteen years, rising exports, and foreign investment in infrastructure, Ethiopia has the stated aim of becoming a middle-income country by 2025. Ethiopia has engaged actively with the SDG process, building on its significant successes in pursuing the preceding MDGs. It was one of fifty countries worldwide to provide data in preparation of the SDG agenda, joined forces with nine other African countries in preparing the 'Common African Position', and submitted a Voluntary National Review (VNR) in 2017 (FDRE, 2017).
At the same time, it is very clear that this engagement is mainly for the benefit of foreign diplomacy, because within the country the SDGs have not been adopted as a development framework. Ethiopia pursues its own five-year development plans, the so-called Growth and Transformation Plans (GTPs), with the still current GTP II adopted in 2015, just after the country signed on to the SDGs. The GTP II pursues a much narrower vision of development, focused almost entirely on economic growth, and the 2017 VNR has defended this approach against the SDG framework. More welfare-oriented goals, such as gender equality, are slotted into economic targets, and the performance indicators in the VNR line up with the GTP II rather than the SDG targets (FDRE, 2017: 8-9, 47-50). The VNR sets out this prioritisation in unmistakable terms: 'Thus, in the context of Ethiopia, implementing the current Second Growth and Transformation Plan (GTP II) and its successors means implementing the SDGs. There is and will be one national development plan in which the SDGs are mainstreamed' (FDRE, 2017: 41). This nationalisation of the SDGs was in line with the top-down mode of development planning and the strict controls on CSOs that were imposed until recently. Most significantly, the 2009 Proclamation on Charities and Societies (PCS) had established a problematic dividing line between advocacy and development, mainly in order to cut off foreign assistance to Ethiopian NGOs working in the area of human rights (Woldegiorgis, 2009). For religious actors, this entailed a strict separation between religious (advocacy) and charitable (development) activities, which was not practicable when addressing developmental issues connected to religious discourses, such as FGM/C. Moreover, the PCS also created the Charities and Societies Agency (CSA), which was staffed by the government and given oversight of the entire CSO sector, with substantial rights to intervene.
When Abiy Ahmed came to power in 2018, the political space for civil society actors opened up considerably. The new 2019 Civil Society Proclamation (CSP) lifted advocacy funding restrictions, removed many obstacles for the involvement of foreign actors, and placed the CSA on a more representative, democratic footing. Some limits remain, but the positive impact of the CSP on the CSO sector is already visible (Staberock and Christopoulos, 2019). At the same time, development has become an even more integral part of Abiy's political platform. Seeking to transcend the divide-and-conquer strategy of the ethno-regional federalism installed and maintained by his predecessors, Abiy has dissolved the coalition of ethnic parties governing the country and formed a new national party from most of its former constituents, which will be the incumbent in the upcoming elections. The name of the new party is 'Prosperity Party', its stated motto is 'turning a prosperous Ethiopia into reality', and its indicators of prosperity are similarly wide-ranging to the SDGs, from basic needs and healthcare to quality education, peace, and environmental care (Prosperity Party, 2019: 5).
Moreover, Abiy has brought religion to the fore in his political rhetoric (Feyissa and Haustein, forthcoming). As a Pentecostal with a multi-religious background, he has departed sharply from the symbolic secularism of the state under his predecessors (one of whom was a Pentecostal as well, see Haustein, 2013), and engages religion in multiple settings: from adding the phrase 'God bless Ethiopia' to all his political speeches to lecturing religious leaders and brokering peace within religious communities. Abiy's political career began with inter-religious peace-building efforts and in his PhD thesis he argued that religions are an important source of social capital in the country (Feyissa and Haustein, forthcoming). It is no wonder then that he is seeking to employ religious communities as a resource for his vision of Ethiopian unity, progress and development. In a particularly instructive speech, given at a large gathering of a Pentecostal youth organisation, Abiy linked his prosperity agenda to his faith by declaring to his rapt audience that by God's power, Ethiopia would prosper to become one of the top five African economies by 2030.
In contrast with Abiy's religious populism, Ethiopia faces enormous political struggles at the moment, from violent ethnic riots, with millions of internally displaced citizens, to unresolved conflicts over Abiy's political reforms and the indefinitely postponed national election (justified with the COVID-19 pandemic). Armed conflict with Oromo separatist movements in the west and with the remnants of the Tigray regional government in the north have further highlighted the precariousness of Ethiopia's national unity, and the government has been returning to more coercive measures in the imprisonment of journalists and opposition figures. It thus remains to be seen whether Abiy's religio-political construction of 'the people of development' will prevail, and through which political means. What is certain, however, is that the politicisation of development promises has only increased, risking a further marginalisation of the SDG framework in Ethiopian political discourse. When we ran our workshop in Ethiopia in September 2018 with a representative selection of NGOs and FBOs (Haustein and Tomalin, 2018), we found that none of the assembled organisations had been involved in the SDG consulting process, and few had even heard about the SDGs, mostly through the reporting structures of international NGOs/FBOs (Haustein and Tomalin, 2018). An overwhelming majority stated that the SDGs would make little or no difference to their work, which was guided rather by their own programmatic priorities, their constituents' needs, and the demands of Ethiopia's centralised development planning. While the latter constraint may have eased a bit, it is difficult to argue that the political capture of the development sector has markedly reduced under Abiy. Furthermore, given the breadth and generality of the SDGs, participants struggled to see the benefits of this framework for their work, even as they acknowledged the general usefulness of international development co-ordination under this umbrella. The secularity of the SDGs and potential tensions between religious values and some targets were noted, but not seen as a substantial problem, because the FBOs present saw themselves as translators between global development aspirations and local cultures. They did note, however, that the SDG framework's lack of references to ethics and morality made it less suitable for translation into religious language. Here too, it would seem that Abiy's religious mobilisation rhetoric had the upper hand. Overall, there was sustained interest in getting to know the SDGs as a global development framework, but given their generality and political marginalisation in Ethiopia, it was hard to see how religious actors and other local development organisations could be mobilised around this framework for a common global agenda for development.
Discussion and conclusion
In the conclusion to his Laclauian analysis of the semantics of sustainability, Brown noted: Despite the many attempts to provide it a definition, sustainability remains an 'empty' term in practice, having no precise content. This lack of precision should certainly attract critique, particularly when it enables 'empty gestures' on the part of politicians and other key decisionmakers. The versatility that comes with sustainability's lack of fixed meaning has certainly enabled elites to present it in ways that suit their own agenda, as has clearly been the case with the 'sustainable development' approach. However, such critique should not blind us to the potential of sustainability to open new social and political opportunities (Brown, 2016: 130).
We agree with this hopeful note in principle, as well as with Brown's warning that the current configuration of sustainability as 'sustainable development' fails to achieve this promise. This is not just because the vision of sustainability is too narrowly tied to global elites, as Brown seems to suggest, but also because the implementation framework has relied on nation state actors more than on local CSOs. Drawing upon our research on faith actors' engagement with the SDGs in Ethiopia and India, we argue that this failure of the SDG process to engage with local CSOs sets national governments up against the UN framework, not in a competition between 'populism' and 'multi-lateralism', but in a mutual hegemonic struggle over who speaks on behalf of the 'left behind' or 'the people' of development. Both countries have astutely used the SDG process to define their own national agendas for development, which in both cases severely limits the equivalential chains the SDG framework has sought to build. Of course, Ethiopia and India are somewhat unique cases, since both are well-known as countries that are more resistant than others to perceived outside influences. So comparative research in other countries would have to show if similar dynamics are in play there as well. Given the similar process in the SDG implementation and reporting everywhere, we would suspect that this may well be the case, because in its over-reliance on nation state actors the UN agenda risks strong government capture of the process everywhere.
In order to circumvent this hegemonial struggle between the UN and national governments, the SDG framework needs to rely much more on CSOs including those that are faith-based. During the setting of the goals, this was an emphasis of the SDG process, even though our research would suggest that its reach was not as deep as proclaimed. In the current implementation the target setting and reporting structure appears to depend entirely on national policy, with no concerted efforts to integrate local civil society actors. This can have very uneven effects and fails to maximise the impact of the SDG framework. Our research in India, in particular, has shown that some civil society actors see a potential in the SDG platform for counter-balancing narrow national discourses, and this needs to be developed.
Furthermore, we see a potency in the mobilisation of religious actors for the SDGs, especially in contexts like India and Ethiopia, where religions are an integral part of nationalist mobilisation platforms. A closer integration of religious actors in an international sustainable development platform would certainly be a contribution to preventing their co-optation or marginalisation at the hand of national populists. This would require a more intentional effort than simply enlisting religious actors for the implementation of the SDGs. Instead, the chain of equivalences currently signified by 'sustainable development' would need to embrace a larger diversity of religious views about progress, development, and sustainability, a further emptying out of the 'empty signifier' in the interest of a wider mobilisation. This is not a call for 'religious adjustments' in the goals or targets, especially as our interlocutors in India and Ethiopia have embraced the SDGs as a multinational, religiously neutral framework and have signalled less doctrinal conflict than global discourses about religion and development would suggest. What is needed instead is a stronger commitment to a radical democracy of sustainable development that can still signify a common cause and aspiration for a better future even when disagreements and differences in the values, trajectories, and measurements of development come to the fore between religious communities and the globalist, multilateral framework of the SDGs. Our workshops certainly suggested that religious actors are interested in buying into the SDG framework if it helps them transcend national limitations and allows them to play an intermediary role between international development discourse and the local culture of their respective religious communities. | 8,892.8 | 2021-01-27T00:00:00.000 | [
"Political Science",
"Philosophy",
"Sociology"
] |
Load Balanced Congestion Adaptive Routing for Mobile Ad Hoc Networks
In mobile ad hoc networks, congestion is a major issue that affects the overall performance of the network. Load balancing alongside congestion is another major problem in mobile ad hoc network (MANET) routing, owing to differences in the link costs of routes. Most of the existing routing protocols provide solutions to load balancing or congestion adaptivity separately. In this paper, a congestion adaptive routing with load balancing, that is, load balanced congestion adaptive routing (LBCAR), is proposed. Transferring load from congested nodes to less busy nodes, and involving other nodes that can take part in a route, can improve the overall network lifetime. In the proposed protocol, two metrics, the traffic load density and the link cost associated with a routing path, are used to determine the congestion status. The route with low traffic load density and maximum lifetime is selected for packet transmission. The performance of the network using LBCAR has been analyzed and compared with the congestion adaptive routing protocol (CRP) in terms of packet delivery ratio, average end-to-end delay, and normalized routing overhead.
Introduction
Advancements in wireless technology are driving all communication systems to go wireless, and it is expected that all such systems and devices will be connected to some network. Today's wireless networks are one of the major areas of communication and are flooded with multimedia and other allied services carrying various data types. It is also true that wired communication systems cannot be done away with, as they offer high bandwidth and unmatched reliability. The main difference between wireless and wired networks lies only in the communication channel and the mode of communication. In the past, the most common networks were wired communication networks with fixed infrastructure, and even wireless networks used to have fixed infrastructure and control, such as cordless telephones, cellular networks, Wi-Fi, microwave and satellite communication, and so forth.
Ad hoc wireless networks are infrastructure-less networks in which two or more devices are equipped with wireless communication, networking, and routing capabilities, even while mobile. Ad hoc networks do not have fixed topologies covering a large area; these topologies may change dynamically and unpredictably at any moment as nodes move. Traditional routing protocols normally used for internet-based wireless networks cannot be applied directly to ad hoc wireless networks, because some common assumptions do not hold for such dynamically changing networks and mobile nodes. The availability of bandwidth is an important issue in ad hoc networks. Thus, these networks present a difficult challenge in the design of routing protocols, where each node participates in routing by forwarding data dynamically based on the network connectivity. As the network uses a wireless channel for communication, the links are affected by propagation loss, shadow fading, and multipath Rayleigh fading. In ad hoc networks, due to the movement of mobile nodes and the multipath effect, a packet transmission is subject to Rayleigh fades. If the signal-to-noise ratio (SNR) at some stage drops below a certain threshold, packets will contain an excessive number of errors and will be dropped due to high noise. This cause of packet loss degrades network performance significantly. Thus the SNR is a good indicator of link quality and can be determined from the hardware. Different SNRs cause different bit error rates (BERs). The bit error rate [1] can be expressed as $\mathrm{BER} = \frac{1}{2}\,\mathrm{erfc}\!\left(\sqrt{\frac{P_r}{N}\cdot\frac{W}{R}}\right)$, where $P_r$ is the received power, $W$ is the channel bandwidth, $N$ is the noise power, $R$ is the transmission bit rate, and erfc is the complementary error function. Most wireless networks typically measure SNR as a performance parameter. When only one packet is received by the receiver, the SNR [1] may be calculated as $\mathrm{SNR} = P_r / N$. If more than one packet arrives at the receiver simultaneously, the SNR is calculated as $\mathrm{SNR} = P_s / (N + \sum_{i=1}^{n} I_i)$, where $\sum_{i=1}^{n} I_i$ is the interference component, $P_s$ is the signal strength of the packet of interest at the receiver, and $n$ is the number of packets that arrive at the receiver simultaneously [1].
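A small Python sketch of these SNR and BER relationships is given below. The symbols in the original expressions were lost in extraction, so the 0.5·erfc(·) form assumes a BPSK-style link and the function and argument names are illustrative; this is not necessarily the exact expression used in [1].

```python
import math

def snr(received_power, noise_power, interference=0.0):
    """Signal-to-noise(-plus-interference) ratio.

    With a single arriving packet the interference term is zero; when several
    packets arrive simultaneously, `interference` is the sum of the other
    packets' received powers (all quantities in watts).
    """
    return received_power / (noise_power + interference)

def ber_from_snr(snr_linear, bandwidth_hz, bit_rate_bps):
    """Approximate bit error rate for a BPSK-like link.

    Scales the SNR by the bandwidth-to-bit-rate ratio to obtain Eb/N0 and
    applies BER = 0.5 * erfc(sqrt(Eb/N0)).
    """
    eb_n0 = snr_linear * bandwidth_hz / bit_rate_bps
    return 0.5 * math.erfc(math.sqrt(eb_n0))
```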
When a sending node broadcasts packets, it piggybacks its transmission power $P_t$. On receiving the packets, the intended node measures the received signal strength $P_r$, which for the free-space propagation model [2] obeys $P_r = \frac{P_t\,G_t\,G_r\,\lambda^2}{(4\pi d)^2}$, where $\lambda$ is the wavelength of the carrier, $d$ is the distance between sender and receiver, $G_t$ and $G_r$ are the (unity) gains of the transmitting and receiving omnidirectional antennas, respectively, and $P_t$ and $P_r$ are the transmitted and received powers, respectively. The wireless ad hoc network shown in Figure 1 considers mobile nodes that are not supported by any external device or control mechanism and whose communication range is determined by the coverage area of the individual node. It may be seen that the sending and destination nodes are connected using multihop communication and thus need a congestion-free path to achieve reliable communication.
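The free-space model above can be evaluated directly in code; the sketch below simply computes the Friis relationship, with unity antenna gains as defaults.

```python
import math

def friis_received_power(p_t, wavelength, distance, g_t=1.0, g_r=1.0):
    """Free-space (Friis) received power: P_r = P_t*G_t*G_r*lambda^2 / (4*pi*d)^2."""
    return p_t * g_t * g_r * wavelength ** 2 / (4 * math.pi * distance) ** 2
```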
Devices in mobile ad hoc networks should be able to detect the presence of other devices and perform the setup necessary to facilitate communication and the sharing of data and services. However, due to limited bandwidth and a common channel shared by all nodes, congestion has become more challenging in wireless ad hoc networks [3]. Congestion is always considered to be the main factor degrading network performance: it leads to packet losses and bandwidth degradation and also wastes time and energy by invoking congestion recovery algorithms. Various techniques have been developed in an attempt to minimize congestion and to increase the capacity of wireless ad hoc networks. Some protocols have been proposed that inform the transmitting nodes about the current level of network congestion and help transmitting stations reroute or delay their transmissions according to the congestion level and the protocols used. A load balancing technique that shares the traffic load evenly among all the nodes that can take part in transmission has been proposed recently [4] to enhance the overall capacity and throughput of the network. According to that proposal, transferring load from congested nodes to less busy nodes, and involving other nodes that can take part in a route, can improve the overall network lifetime. A number of congestion adaptive and load balanced algorithms have been proposed separately, but in this paper a congestion adaptive routing with load balancing, that is, load balanced congestion adaptive routing (LBCAR), is proposed. It considers two metrics, the traffic load density and the lifetime associated with a routing path, to determine the congestion status and the weakest node of the route, and the route with low traffic load density and maximum lifetime is selected for packet transmission. The proposed scheme is expected to adapt to sudden changes in the level of traffic load and to find a suitable path even under congestion and node energy constraints. The proposed algorithm is also suitable for bursty traffic and performs well under higher traffic conditions in the network.
The remainder of the paper is organised as follows.
A review of four related routing protocols is presented in Section 2. Detailed observations on the constrained environment are discussed in Section 3. Section 4 describes congestion and some congestion control mechanisms. The proposed congestion control protocol is presented in Section 5. Section 6 gives the simulation parameters, and Section 7 presents the analysis of simulation results for the proposed congestion adaptive protocol. Finally, Section 8 concludes this paper and identifies topics for further research.
Related Work
The wireless MANET has become a most promising and rapidly growing area, based on self-organized and rapidly deployable networks. Due to its flexible features, the MANET attracts various real-world application areas where the network topology changes very quickly. However, it has certain drawbacks, which are being addressed at various levels of research. The main weaknesses of MANETs are limited bandwidth, battery power, computational power, and security. Research on MANETs is continuously being carried out on routing, congestion control techniques, congestion adaptive techniques, load balancing, and security. Many congestion adaptive mechanisms have been proposed in the literature; some of the important congestion adaptive techniques for MANETs are considered here as background for this work. Research in MANETs has mainly focused on designing routing protocols to cope with the dynamics of ad hoc networks. Several protocols in the literature have been developed specifically to cope with the limitations imposed by ad hoc networking environments and their various constraints. In [5], a distance vector algorithm, ad hoc on-demand distance vector (AODV), was presented; it is an on-demand route acquisition system in which nodes that do not lie on active paths neither maintain any routing information nor participate in any periodic routing table exchanges. In [6], further improvements to the performance of dynamic source routing (DSR) were presented, for example, to allow scaling to very large networks and to add new features to the protocol, such as multicast routing and adaptive quality of service (QoS) reservations and resource management. In [7], an innovative approach, highly dynamic destination-sequenced distance-vector routing (DSDV), was presented, which models mobile computers as routers that cooperate to forward packets to each other as needed. This approach can be utilized at the network layer (layer 3), or below the network layer but still above the MAC layer software in layer 2. In [8], a new distributed routing protocol, WRP, was presented for a packet radio network, which works on the notion of the second-to-last hop node to a destination. In [1, 2, 9], routing with congestion awareness and adaptivity in MANETs (CRP) was presented. This protocol tries to prevent congestion from occurring in the first place and to be adaptive should congestion occur. Every node appearing on a route warns its previous node when it is prone to be congested. The previous node uses a "bypass" route for bypassing the potential congestion area to the first noncongested node on the primary route. Traffic is split probabilistically over these two routes, primary and bypass, thus effectively lessening the chance of congestion occurrence. In [10], an efficient congestion adaptive routing protocol (ECARP) for MANETs was proposed that outperforms the other routing protocols under heavy traffic loads. ECARP is designed to ensure the high availability of alternative routes and to reduce the rate of stale routes. This is achieved by increasing the parameters of routing protocols (especially in AODV) that normally take more time for link recovery.
The number of packets in the buffer has been used to determine the congestion status of nodes. In [11], a congestion aware routing protocol for mobile ad hoc networks (CARM) was proposed which employs the retransmission-count-weighted channel delay and buffer queuing delay, with preference for less congested, high-throughput links to improve channel utilization. Whenever multimedia data such as video, audio, and text is streamed, traffic increases and the mobile ad hoc network becomes congested. In [12], a congestion adaptive AODV (CA-AODV) routing protocol was developed for streaming video in mobile ad hoc networks, especially designed for multimedia applications. Since video data is very sensitive to delay and packet loss, the measurement of congestion here depends on the average packet delivery time and the packet delivery ratio. In [13], a congestion adaptive routing mechanism was presented which is applied to a reactive ad hoc routing protocol, denoted as the congestion adaptive ad hoc on-demand distance vector routing protocol. The main characteristic of the mechanism is its support for finding an alternate route, in case of congestion on the primary route, on the basis of the buffer status of the neighbor and the buffer status of the next node on the primary route. This approach works in coordination with AODV. In [14], a hop-by-hop congestion aware routing protocol (CARP) was developed which employs a combined weight value as a routing metric, based on the data rate, queuing delay, link quality, and MAC overhead, in its standard cost function to account for the congestion level. The route with the minimum cost index is selected, based on the node weights of all the in-network nodes from the source node to the destination node.
Due to interference between the channels of the paths, multipath routing increases the end-to-end delay and does not work well in highly congested networks. In [15], a congestion aware multipath dynamic source routing protocol (CAWMP-DSR) was proposed to find the maximum number of node-disjoint paths using multipath DSR. A set of disjoint multipaths is generated, and the problem of end-to-end delay is handled using a correlation factor measurement; as a result, both the end-to-end delay and the overhead improved. In [16], the original DSR protocol was modified to define the occurrence of congestion by monitoring and reporting multiple resource utilization thresholds as QoS attributes, using multipath routing and load balancing during periods of congestion to improve QoS in MANETs for CBR multimedia applications. In this protocol, the battery level and queue length are used as the key resource utilization parameters. In [17], the authors discussed protocols for all-to-all dissemination in ad hoc wireless networks. They evaluated the performance of the GOSSIP3 dissemination protocol under varying network loads and concluded that MAC layer congestion awareness is important for improving application-level efficiency. In [18], a congestion aware routing protocol for mobile ad hoc networks (CARM) was proposed, which employs the retransmission-count-weighted channel delay and buffer queuing delay, with preference for less congested, high-throughput links to improve channel utilization. In [4], the congestion-adaptive load-aware routing protocol DLAR defined the network load of a mobile node as the number of packets in its interface queue. In [3], the authors developed a congestion adaptive AODV (CA-AODV) routing protocol for streaming video in mobile ad hoc networks that provides an alternate noncongested path if a node becomes congested. In [19], a hop-by-hop congestion aware routing mechanism was proposed; however, it was directed toward congestion adaptivity only. In [20], a workload-based algorithm was proposed, which considered the workload of the path and the network for finding the route with less congestion, and was appropriate for low loads only. In [21], an efficient congestion adaptive routing protocol (ECARP) was proposed for MANETs, designed to ensure the high availability of alternative routes, to reduce the rate of stale routes, and to reduce the rate of the broken-route removal process by increasing parameters of the routing protocols (especially in AODV) such as active route timeout, route reply wait time, reverse route life, TTL start, TTL increment, TTL threshold, and delete period, which normally take more time for link recovery. In [22], multipath routing with load balancing was proposed, but it did not consider the variable congestion status of the network. In [23], congestion aware routing methods were studied, and it was found that none of the algorithms takes both aspects into consideration in the same protocol, which prompted us to find a solution by considering load balancing as well as a congestion adaptive scheme in the proposed method.
Congestion and Congestion Adaptive Routings
In wireless ad hoc networks, congestion is a cause for concern and needs to be addressed to achieve better network performance. The basic concepts are briefly revisited below.
Congestion in MANETs.
In a mobile ad hoc network, congestion is a global issue, involving the behavior of all the hosts, all the routers, the store-and-forward processing within the routers, and so forth, and it occurs because resources are limited. Chief metrics for monitoring congestion are the percentage of all packets discarded for lack of buffer space, the average queue lengths, the number of packets that time out and are retransmitted, the average packet delay, and the standard deviation of packet delay. Congestion control is a method of regulating the total amount of data entering the network so as to keep traffic levels at an acceptable value. Various techniques have been developed in an attempt to minimize congestion in communication networks. In addition to increasing capacity and data compression, they include protocols for informing transmitting devices about the current level of network congestion so that they can reroute or delay their transmissions accordingly. Congestion occurs when the input traffic rate exceeds the capacity of the output lines, the routers are too slow to perform bookkeeping tasks (queuing buffers, updating tables, etc.), and the routers' buffers are too limited.
Congestion Adaptive routings in MANETs.
Routing protocols have been classified according to their basic initiation and counter mechanisms. Besides the classification of routing protocols based on the network structure, there is another dimension for categorizing routing protocols: congestion adaptive routing versus congestion unadaptive routing. Routing protocols in which congestion is reduced after it has occurred are congestion unadaptive, and all congestion control routings belong to this group; routings in which the chances of congestion occurrence are minimized are congestion adaptive. Congestion adaptive routing tries to prevent congestion from occurring in the first place, rather than dealing with it reactively. In congestion adaptive routing, the route is adaptively changed based on the congestion status of the network. Every node appearing on a route warns its previous node when it is prone to be congested. The previous node uses a "bypass" route for bypassing the potential congestion area to the first noncongested node on the primary route. Traffic is split probabilistically over these routes, thus effectively lessening the chance of congestion occurrence. If a node is aware of potential congestion ahead, it finds a bypass that will be used in case congestion actually occurs or is about to occur. Part of the incoming traffic is sent on the bypass, reducing the traffic coming to the potentially congested node. Congestion may be avoided as a result.
Load Balancing in MANETs.
Load balancing can be defined as a methodology to distribute or divide the traffic load evenly across two or more network nodes in order to mediate the communication and also to achieve redundancy in case one of the links fails. The benefits of load balancing include optimal resource utilization, increased throughput, and lower overhead. The load can also be distributed unequally over multiple links by manipulating the path cost involved. In mobile ad hoc networks, balancing the load can evenly distribute the traffic over the network and prevent early expiration of overloaded nodes due to excessive power consumption in forwarding packets. It can also allow an appropriate usage of the available network resources. Existing ad hoc routing protocols do not have a mechanism to convey load information to the neighbors and cannot evenly distribute the load in the network. On-demand routing protocols such as AODV initiate route discovery only if the current topology changes and the current routes are no longer available. In high mobility situations where the topology is highly dynamic, existing links may break quickly. It may be safe to assume that in such scenarios on-demand routing protocols like AODV and DSR can achieve a load balancing effect automatically by searching for new routes and using different intermediate nodes to forward traffic.
In contrast, in scenarios where the same intermediate nodes are used for longer periods of time, the on-demand behavior may create bottlenecks and cause network degradation due to congestion, leading to long delays; in addition, the caching mechanism in most on-demand routing protocols, which lets intermediate nodes reply from cache, can cause a concentration of load on certain nodes. It has been shown that an increase in traffic load degrades network performance in MANETs. In other words, if the topology changes are minimal, this behavior results in the same routes being used for a longer period of time, which in turn increases the traffic concentration on specific intermediate nodes. The early expiration of nodes can cause an increase in the control packets and in the transmission power of other nodes to compensate for the loss. Furthermore, it can result in network degradation and even an early expiration of the entire ad hoc network. Besides, using the same node for routing traffic for a longer duration may result in an uneven usage of the available network resources, such as bandwidth. A network is less reliable if the load among network nodes is not well balanced.
Protocol Description
Wireless ad hoc networks have two types of routing protocols. The protocols considered here are mostly from the reactive category, as these have been trusted for wireless ad hoc networks in higher traffic scenarios. In reactive routing protocols, a node does not attempt to continuously determine the routes within the network topology; instead, a route is searched for when it is required, which saves channel bandwidth. An example of such a protocol is the ad hoc on-demand distance vector routing protocol (AODV).
AODV (Ad Hoc On-Demand Distance Vector Routing) Protocol.
AODV builds routes using a route request/route reply query cycle. When a source node desires to transmit data to a destination, it searches for a route to reach the target node. As it does not have a route, it broadcasts a route request (RREQ) packet across the network. Nodes receiving this packet update their information for the source node and set up backwards pointers to the source node in their route tables. In addition to the source node's IP address, current sequence number, and broadcast ID, the RREQ also contains the most recent sequence number for the destination of which the source node is aware. A node receiving the RREQ may send a route reply (RREP) either if it is the destination or if it has a route to the destination with a corresponding sequence number greater than or equal to that contained in the RREQ. If this is the case, it unicasts a RREP back to the source. Otherwise, it rebroadcasts the RREQ, and nodes keep track of the RREQ's source IP address and broadcast ID. If they receive a RREQ which they have already processed, they discard it and do not forward it. As the RREP propagates back to the source, each node sets up a forward pointer to the destination. Once the source node receives the RREP, it may begin to forward data packets to the destination. If the source later receives a RREP containing a greater sequence number, or the same sequence number with a smaller hop count, it may update its routing information for that destination and begin using the better route. As long as the route remains active, it will continue to be maintained. A route is considered active as long as data packets periodically travel from the source to the destination along that path. Once the source stops sending data packets, the links will time out and eventually be deleted from the intermediate nodes' routing tables. If a link break occurs while the route is active, the node upstream of the break propagates a route error (RERR) message to the source node to inform it of the now unreachable destination(s). After receiving the RERR, if the source node still desires the route, it can reinitiate route discovery. Various techniques have been developed in an attempt to minimize congestion in communication networks, such as congestion adaptive routing (CRP), which tries to prevent congestion from occurring in the first place and to be adaptive should congestion occur.
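As a rough illustration of the RREQ handling just described, the sketch below shows how an intermediate node might process a route request. The attribute and helper names (node.seen, rreq.broadcast_id, unicast_rrep, rebroadcast, etc.) are hypothetical names used only for this example, not part of any particular AODV implementation.

```python
def handle_rreq(node, rreq):
    """Sketch of intermediate-node RREQ processing in AODV."""
    key = (rreq.source, rreq.broadcast_id)
    if key in node.seen:                       # already processed this RREQ: drop it
        return
    node.seen.add(key)

    # Set up the reverse route (backwards pointer) towards the source.
    node.routing_table[rreq.source] = (rreq.previous_hop, rreq.source_seq, rreq.hop_count)

    route = node.routing_table.get(rreq.dest)  # (next_hop, seq_no, hop_count) or None
    if node.address == rreq.dest or (route is not None and route[1] >= rreq.dest_seq):
        node.unicast_rrep(rreq)                # reply: destination or sufficiently fresh route
    else:
        rreq.hop_count += 1
        node.rebroadcast(rreq)                 # otherwise keep flooding the request
```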
CRP (Congestion Adaptive Routing Protocol).
This protocol tries to prevent congestion from occurring in the first place and to be adaptive should congestion occur. CRP [9] is a congestion adaptive unicast routing protocol for MANETs. Every node appearing on a route warns its previous node when it is prone to be congested. The previous node uses a "bypass" route for bypassing the potential congestion area to the first noncongested node on the primary route. Traffic is split probabilistically over these two routes, primary and bypass, thus effectively lessening the chance of congestion occurrence. CRP is on-demand, and its first component is congestion monitoring. When the number of packets coming to a node exceeds its carrying capacity, the node becomes congested and starts losing packets. A variety of metrics can be used by a node to monitor its congestion status; the major ones are the percentage of all packets discarded for lack of buffer space, the average queue length, the number of packets timed out and retransmitted, the average packet delay, and the standard deviation of packet delay. In all of these cases, rising numbers indicate growing congestion. To describe the congestion status, a node is said to be green (i.e., far from congested), yellow (i.e., likely congested), or red (very likely or already congested). A node periodically broadcasts a UDT (update) packet, which contains the node's congestion status and a set of tuples [destination R, next green node G, distance to the green node m]. When a node receives a UDT packet from its next primary node regarding a destination, it becomes aware of the congestion status of that next node. If the next node is yellow or red, congestion is likely ahead if data packets continue to be forwarded on the link to it. Since CRP tries to prevent congestion from occurring in the first place, the node starts to discover a bypass route toward the next green node of its next primary node, known from the UDT packet. The bypass search is similar to the primary route search. After the bypass route is found, the traffic is split between the primary and bypass routes with equal probability, effectively reducing the congestion at the next primary node. To adapt to congestion due to network dynamics, the splitting probability is modified periodically based on the congestion status of the next primary node and the bypass route. The congestion status of a bypass route is the accumulated status of the nodes along that bypass. To keep the protocol overhead small, CRP tries to minimize the use of multiple paths and does not allow a node to use more than one bypass. Therefore, bypass route discovery is only initiated by a node if no bypass currently exists at that node. The protocol overhead for using a bypass is also reduced because of short bypass lengths. A bypass is removed when the congestion is totally resolved, so CRP does not incur heavy overhead for maintaining bypass paths. The bypass maintenance cost is further reduced because a bypass is typically short and a primary node can create at most one bypass. The recovery from a link breakage is realized gracefully and quickly by making use of the existing bypass paths.
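The following sketch illustrates the two ideas at the heart of CRP: congestion-status classification and probabilistic splitting over primary and bypass routes. The occupancy thresholds, function names, and `send` interface are illustrative assumptions, since the protocol description above does not prescribe specific values.

```python
import random

def congestion_status(queue_len, queue_capacity, yellow=0.5, red=0.8):
    """Classify a node as green / yellow / red from its buffer occupancy."""
    occupancy = queue_len / queue_capacity
    if occupancy >= red:
        return "red"
    if occupancy >= yellow:
        return "yellow"
    return "green"

def forward(packet, primary_next, bypass_next, p_primary=0.5):
    """Split traffic probabilistically between the primary and bypass routes."""
    if bypass_next is None or random.random() < p_primary:
        primary_next.send(packet)
    else:
        bypass_next.send(packet)
```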
These protocols generally help in managing congestion, but they are not capable of considering the congestion state of the network as a whole, adapting to changes in traffic, and finding a solution that does away with congestion by adopting the new state, so as to enhance the performance of the network.
Proposed Load Balanced Congestion Adaptive Routing (LBCAR) Protocol
In a process to find a solution for avoiding congestion in the network by adapting to instantaneous changes in the congestion state, a new algorithm is proposed in this paper. The proposed congestion adaptive algorithm is capable of countering congestion in the network and is referred to as the load balanced congestion adaptive routing (LBCAR) algorithm. In this protocol, each node maintains a record of the latest traffic load estimations at each of its neighbors in a table called the neighborhood table. This table keeps the load information of the local neighbors at each node. Neighbors that receive a load-update packet update the corresponding neighbor's load information in their neighborhood tables. LBCAR is a new load balanced congestion adaptive technique proposed to reduce congestion and to maximize the network operational lifetime. The metric traffic load density is used to determine the congestion status of the route, and the link cost is used to determine the lifetime of the route. The route with low traffic load intensity and maximum lifetime is selected for packet transmission, which effectively limits the idealized maximum number of packets transmittable through a route containing a weak node of minimum lifetime and high traffic load intensity. Each node $i$ samples its interface queue length in the MAC layer periodically. If $q_i(k)$ is the $k$th sample value and $N$ is the number of samples taken over a period of time, the traffic load of node $i$ is defined as $TL_i = \frac{1}{N}\sum_{k=1}^{N} q_i(k)$. If the total length of the interface queue of node $i$ in the MAC layer is $Q_{\max}(i)$, the traffic load intensity function of node $i$ is defined as $\rho_i = TL_i / Q_{\max}(i)$. The link cost for a link $(i, j)$ contains two parameters: a node-specific parameter $E_i$, the residual battery energy of node $i$, and a link-specific parameter $E_{i,j}$, the energy spent on the retransmissions necessary in the face of link errors. $E_{i,j}$ is determined by $E_t$, the energy involved in a single packet transmission, $n$, the number of hops involved in retransmission, and $p$, the packet error probability. It is considered that node $i$ has $K$ neighboring nodes and that all their traffic load intensity functions are known. These values are sorted in ascending order, and each neighbor $j$ gets a sequence number $\mathrm{seq}(j)$, with $1 \le \mathrm{seq}(j) \le K$, corresponding to its traffic load intensity $\rho_j$. The forwarding probability of the data for node $j$ is then computed from $\mathrm{seq}(j)$ and a factor related to the existing traffic load of the neighboring nodes.
According to the network load, each node computes the link cost using the formulae above. In this way, traffic is split according to the traffic load intensity and link cost. The overloaded nodes are protected by using nodes with lighter traffic load to establish the route, so as to balance the network load, lessen congestion, improve data transmission efficiency, and maximize the network lifetime. A flowchart of this algorithm is shown in Figure 2.
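A minimal sketch of the route-selection logic described above is given below. Because the exact combination of traffic load intensity and link cost is not recoverable from the extracted text, the scoring rule and the field names (`max_load_intensity`, `min_lifetime`) are illustrative assumptions only.

```python
def traffic_load_intensity(queue_samples, queue_capacity):
    """Average sampled queue length normalised by the maximum queue size."""
    avg_load = sum(queue_samples) / len(queue_samples)
    return avg_load / queue_capacity

def select_route(candidate_routes):
    """Pick the route whose weakest node is least loaded and longest lived.

    Each candidate route is a dict carrying the maximum traffic load
    intensity along the path and the minimum residual lifetime derived
    from the link cost.
    """
    return min(candidate_routes,
               key=lambda r: (r["max_load_intensity"], -r["min_lifetime"]))
```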
The flowchart indicates how the link cost and the other parameters considered are used for load balancing and congestion adaptivity in the network.
Simulation Parameters
6.1. Simulation Setup. The network simulations have been carried out using QualNet 5.2. The simulation parameters are given in Table 1.
Performance Metrics.
In this paper, the performance metrics such as packet delivery ratio, average end-to-end delay, and normalized routing overhead were calculated and evaluated for AODV, CRP, and LBCAR.
Packet Delivery Ratio.
Packet delivery ratio is the ratio of the number of data packets successfully received at the destinations to the number of data packets generated by the sources.
Average End-to-End Delay.
The average end-to-end delay is a measure of the average time taken to transmit each packet of data from the source to the destination. Higher end-to-end delay is an indication of network congestion.
Normalized Routing Overhead.
The ratio of the amount in bytes of control packets transmitted to the amount in bytes of data received.
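For clarity, the three metrics can be computed directly from simulation traces, as in the following sketch; the argument names are illustrative.

```python
def packet_delivery_ratio(received_data_pkts, generated_data_pkts):
    """Data packets received at destinations / data packets generated by sources."""
    return received_data_pkts / generated_data_pkts

def average_end_to_end_delay(delays_seconds):
    """Mean per-packet source-to-destination delay."""
    return sum(delays_seconds) / len(delays_seconds)

def normalized_routing_overhead(control_bytes_sent, data_bytes_received):
    """Bytes of control packets transmitted per byte of data received."""
    return control_bytes_sent / data_bytes_received
```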
The simulation has been carried out according to these performance metrics and the selected variations of the proposed algorithm, and the results are recorded accordingly.
Simulation Results
In this paper, the results are presented as the average over 10 runs of each simulation setting. LBCAR has been compared with AODV and CRP. Figures 3-5 show the packet delivery ratio, average end-to-end delay, and normalized routing load with varying packet rates for AODV and LBCAR. Figure 3 shows the packet delivery ratio for AODV and LBCAR with varying packet rates. It shows that the packet delivery ratio for LBCAR is greater than that of AODV for all values of packet rate. When the traffic load was high, AODV could not handle congestion; the reason, again, was the ability of LBCAR to adapt to network congestion. Figure 4 shows the average end-to-end delay for AODV and LBCAR with varying packet rates. It shows that LBCAR has nearly the same or somewhat smaller delay than AODV. An interesting observation was that the delay variation in LBCAR was less than that of AODV, making LBCAR more suitable for multimedia applications. Hence LBCAR outperforms AODV in terms of average end-to-end delay. Figure 5 shows the normalized routing load for AODV and LBCAR with varying packet rates. It shows that the normalized routing load for LBCAR is smaller than that of AODV for all values of packet rate; thus, normalized routing overheads have also decreased to an extent. Overall, the performance of LBCAR is slightly better than that of AODV in all respects.
Comparison of LBCAR with CRP.
In this section the proposed protocol has been compared with the CRP algorithm, as shown in Figures 6-8. Figure 6 shows the packet delivery ratio for LBCAR and CRP with varying packet rates; both routings give almost the same packet delivery ratio except at high packet rates. Figure 7 shows the average end-to-end delay for LBCAR and CRP with varying packet rates. LBCAR gives slightly lower average delay than CRP; therefore, LBCAR outperforms CRP in terms of average end-to-end delay. Figure 8 shows the normalized routing overhead for LBCAR and CRP with varying packet rates. LBCAR shows higher normalized routing overhead than CRP. Thus LBCAR is outperformed only in terms of normalized routing overhead, while matching or bettering CRP in terms of packet delivery ratio and average delay.
Conclusion
In this paper, a load balanced congestion adaptive routing (LBCAR) protocol has been proposed. The simulation has been carried out as per the selected parameters. The performance of the MANET has been analyzed and compared with AODV in terms of packet delivery ratio, average end-to-end delay, and normalized routing overhead. LBCAR outperformed AODV and reduced congestion. LBCAR has also been compared with congestion adaptive routing (CRP), and it has been found that the packet delivery ratio is almost the same for both routing protocols. The average delay is slightly lower in LBCAR compared to CRP, but the normalized routing overhead of LBCAR is higher than that of CRP. The key property of LBCAR is its adaptability to congestion. LBCAR suffers fewer packet losses than routing protocols that are not adaptive to congestion. This is because LBCAR tries to prevent congestion from occurring in the first place, rather than dealing with it reactively. The noncongested-route concept in the algorithm helps when the next node may become congested: if a node is aware of congestion ahead, it finds a noncongested route that will be used in case congestion is about to occur. Part of the incoming traffic is split and sent on the noncongested route, reducing the traffic coming to the congested node. Thus congestion can be avoided, and with LBCAR the traffic load is more balanced and the probability of packet loss is reduced. The results also show the scalability of the protocol and its robustness for large networks.
Figure 3: Packet delivery ratio at different packet rates.
Figure 8: Normalized routing overhead at different packet rates.
There are a variety of conditions that can contribute to congestion; they include, but are not limited to, traffic volume, the underlying network architecture, and the specification of devices in the network (e.g., buffer space, transmission rate, processing power, etc.). Network congestion can severely deteriorate network throughput. Congestion not only leads to packet losses and bandwidth degradation but also wastes time and energy on congestion recovery. If no appropriate congestion control is performed, this can lead to a congestion collapse of the network, where almost no data is successfully delivered. Congestion control is necessary for avoiding congestion and/or improving performance after congestion. Congestion control schemes are usually composed of three components: congestion detection, congestion feedback, and sending-rate control. Practically, congestion detection can be processed in intermediate nodes or receivers. The criteria for congestion detection vary with protocols. Congestion can be determined by checking queue lengths. It can also be indirectly detected by monitoring the trend of throughput or response time. | 8,082.2 | 2014-07-01T00:00:00.000 | [
"Computer Science",
"Business"
] |
Modeling Composite Labels for Neural Morphological Tagging
Neural morphological tagging has been regarded as an extension to POS tagging task, treating each morphological tag as a monolithic label and ignoring its internal structure. We propose to view morphological tags as composite labels and explicitly model their internal structure in a neural sequence tagger. For this, we explore three different neural architectures and compare their performance with both CRF and simple neural multiclass baselines. We evaluate our models on 49 languages and show that the neural architecture that models the morphological labels as sequences of morphological category values performs significantly better than both baselines establishing state-of-the-art results in morphological tagging for most languages.
Introduction
The common approach to morphological tagging combines the set of a word's morphological features into a single monolithic tag and then, similarly to POS tagging, employs multiclass sequence classification models such as CRFs (Müller et al., 2013) or recurrent neural networks (Labeau et al., 2015; Heigold et al., 2017). This approach, however, has a number of limitations. Firstly, it ignores the intrinsic compositional structure of the labels and treats two labels that differ only in the value of a single morphological category as completely independent; compare for instance the labels [POS=NOUN,CASE=NOM,NUM=SG] and [POS=NOUN,CASE=NOM,NUM=PL] that only differ in the value of the NUM category. Secondly, it introduces a data sparsity issue, as the less frequent labels can have only a few occurrences in the training data. Thirdly, it excludes the ability to predict labels not present in the training set, which can be an issue for languages such as Turkish where the number of morphological tags is theoretically unlimited (Yuret and Türe, 2006). (The source code is available at https://github.com/AleksTk/seq-morph-tagger.)
To address these problems we propose to treat morphological tags as composite labels and explicitly model their internal structure. We hypothesise that by doing that, we are able to alleviate the sparsity problems, especially for languages with very large tagsets such as Turkish, Czech or Finnish, and at the same time also improve the accuracy over a baseline using monolithic labels. We explore three different neural architectures to model the compositionality of morphological labels. In the first architecture, we model all morphological categories (including POS tag) as independent multiclass classifiers conditioned on the same contextual word representation. The second architecture organises these multiclass classifiers into a hierarchy: the POS tag is predicted first, and the values of morphological categories are predicted conditioned on the value of the predicted POS. The third architecture models the label as a sequence of morphological category-value pairs. All our models share the same neural encoder architecture based on bidirectional LSTMs to construct contextual representations for words (Lample et al., 2016).
We evaluate all our models on 49 UD version 2.1 languages. Experimental results show that our sequential model outperforms the other neural counterparts, establishing state-of-the-art results in morphological tagging for most languages. We also confirm that all neural models perform significantly better than a competitive CRF baseline. In short, our contributions can be summarised as follows: 1) We propose to model the compositional internal structure of complex morphological labels for morphological tagging in a neural sequence tagging framework; 2) We explore several neural architectures for modeling the composite morphological labels; 3) We find that the tag representation based on the sequence learning model achieves state-of-the-art performance on many languages; 4) We present state-of-the-art morphological tagging results on 49 languages on the UDv2.1 corpora.
Related Work
Most previous work on modeling the internal structure of complex morphological labels has occurred in the context of morphological disambiguation-a task where the goal is to select the correct analysis from a limited set of candidates provided by a morphological analyser. The most common strategy to cope with a large number of complex labels has been to predict all morphological features of a word using several independent classifiers whose predictions are later combined using some scoring mechanism (Hajič and Hladká, 1998;Hajič, 2000;Smith et al., 2005;Yuret and Türe, 2006;Zalmout and Habash, 2017;Kirov et al., 2017). Inoue et al. (2017) combined these classifiers into a multitask neural model sharing the same encoder, and predicted both POS tag and morphological category values given the same contextual representation computed by a bidirectional LSTM. They showed that the multitask learning setting outperforms the combination of several independent classifiers on tagging Arabic. In this paper, we experiment with the same architecture, termed as multiclass multilabel model, on many languages. Additionally, we extend this approach and explore a hierarchical architecture where morphological features directly depend on the POS tag. Another previously adopted approach involves modeling complex morphological labels as sequences of morphological feature values (Hakkani-Tur et al., 2000;Schmid and Laws, 2008). In neural networks, this idea can be implemented with recurrent sequence modeling. Indeed, one of our proposed models generates morphological tags with an LSTM network. Similar idea has been applied for the morphological reinflection task (Kann and Schütze, 2016;Faruqui et al., 2016) where the sequential model is used to generate the spellings of inflected forms given the lemma and the morphological label of the desired form. In morphological tagging, however, we generate the morphological labels themselves.
Another direction of research on modeling the structure of complex morphological labels involves structured prediction models (Müller et al., 2013;Müller and Schütze, 2015;Malaviya et al., 2018;Lee et al., 2011). Lee et al. (2011) introduced a factor graph model that jointly infers morphological features and syntactic structures. Müller et al. (2013) proposed a higher-order CRF model which handles large morphological tagsets by decomposing the full label into POS tag and morphology part. Malaviya et al. (2018) proposed a factorial CRF to model pairwise dependencies between individual features within morphological labels and also between labels over time steps for cross-lingual transfer. Recently, neural morphological taggers have been compared to the CRF-based approach (Heigold et al., 2017;Yu et al., 2017). While Heigold et al. (2017) found that their neural model with bidirectional LSTM encoder surpasses the CRF baseline, the results of Yu et al. (2017) are mixed with the convolutional encoder being slightly better or on par with the CRF but the LSTM encoder being worse than the CRF baseline.
Most previous work on neural POS and morphological tagging has shared the general idea of using bidirectional LSTM for computing contextual features for words (Ling et al., 2015;Huang et al., 2015;Labeau et al., 2015;Ma and Hovy, 2016;Heigold et al., 2017). The focus of the previous work has been mostly on modeling the inputs by exploring different character-level representations for words (Heigold et al., 2016;Santos and Zadrozny, 2014;Ma and Hovy, 2016;Inoue et al., 2017;Ling et al., 2015;Rei et al., 2016). We adopt the general encoder architecture from these works, constructing word representations from characters and using another bidirectional LSTM to encode the context vectors. In contrast to these previous works, our focus is on modeling the compositional structure of the complex morphological labels.
The morphologically annotated Universal Dependencies (UD) corpora (Nivre et al., 2017) offer a great opportunity for experimenting on many languages. Some previous work have reported results on several UD languages (Yu et al., 2017;Heigold et al., 2017). Morphological tagging results on many UD languages have been also reported for parsing systems that predict POS and morphological tags as preprocessing (Andor et al., 2016;Straka et al., 2016;Straka and Straková, 2017). Since UD treebanks have been in constant development, these results have been obtained on different UD versions and thus are not necessarily directly comparable. We conduct experiments on all UDv2.1 languages and we aim to provide a baseline for future work in neural morphological tagging.
Neural Models
We explore three different neural architectures for modeling morphological labels: multiclass multilabel model that predicts each category value separately, hierarchical multiclass multilabel model where the values of morphological features depend on the value of the POS, and a sequence model that generates morphological labels as sequences of feature-value pairs.
Notation
Given a sentence $w_1, \ldots, w_n$ consisting of $n$ words, we want to predict the sequence $t_1, \ldots, t_n$ of morphological labels for that sentence. Each label $t_i = \{f_{i0}, f_{i1}, \ldots, f_{im}\}$ consists of a POS tag ($f_{i0} \equiv \mathrm{POS}$) and a sequence of $m$ category values. For each word $w_i$, the encoder computes a contextual vector $h_i$, which captures information about the word and its left and right context.
Decoder Models
Multiclass Multilabel model (MCML) This model formulates morphological tagging as a multiclass multilabel classification problem. For each morphological category, a separate multiclass classifier is trained to predict the value of that category (Figure 1 (a)). Because not all categories are always present for each POS (e.g., a noun does not have a tense category), we extend the morphological label of each word by adding all features that are missing from the annotated label and assign them a special value that marks the category as "off". Formally, the model can be described as $P(t_i \mid h_i) = \prod_{j=0}^{M} P(f_{ij} \mid h_i)$, where $M$ is the total number of morphological categories (such as case, number, tense, etc.) observed in the training corpus. The probability of each feature value is computed with a softmax function, $P(f_{ij} \mid h_i) = \mathrm{softmax}(W_j h_i + b_j)$, where $W_j$ and $b_j$ are the parameter matrix and bias vector for the $j$th morphological feature ($j = 0, \ldots, M$). The final morphological label for a word is obtained by concatenating the predictions for individual categories while filtering out off-valued categories.
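A minimal PyTorch sketch of this decoder is shown below; the class and parameter names are illustrative, and the encoder producing the context vector h is assumed to exist elsewhere.

```python
import torch
import torch.nn as nn

class MCMLDecoder(nn.Module):
    """Sketch of the multiclass multilabel decoder: one independent softmax
    classifier per morphological category, all conditioned on the same
    contextual word vector h produced by the encoder. `category_sizes`
    maps category name -> number of values (including the special "off"
    value); the dimensions are illustrative.
    """
    def __init__(self, hidden_dim, category_sizes):
        super().__init__()
        self.heads = nn.ModuleDict({
            cat: nn.Linear(hidden_dim, n_values)
            for cat, n_values in category_sizes.items()
        })

    def forward(self, h):  # h: (batch, hidden_dim)
        # One log-softmax distribution per category; off-valued predictions
        # are filtered out when the final label is assembled.
        return {cat: torch.log_softmax(head(h), dim=-1)
                for cat, head in self.heads.items()}
```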
Hierarchical Multiclass Multilabel model (HMCML) This is a hierarchical version of the MCML architecture that models the values of morphological categories as directly dependent on the POS tag (Figure 1 (b)): $P(t_i \mid h_i) = P(f_{i0} \mid h_i) \prod_{j=1}^{M} P(f_{ij} \mid f_{i0}, h_i)$. The probability of the POS is computed from the context vector $h_i$ using the respective parameters: $P(f_{i0} \mid h_i) = \mathrm{softmax}(W_0 h_i + b_0)$. The POS-dependent context vector $l_i$ is obtained by concatenating the context vector $h_i$ with the unnormalised log probabilities of the POS: $l_i = [h_i ; W_0 h_i + b_0]$. The probabilities of the morphological features are computed using the POS-dependent context vector: $P(f_{ij} \mid f_{i0}, h_i) = \mathrm{softmax}(W_j l_i + b_j)$. Sequence model (SEQ) The SEQ model predicts complex morphological labels as sequences of category values. This approach is inspired by neural sequence-to-sequence models commonly used for machine translation (Cho et al., 2014; Sutskever et al., 2014). For each word in a sentence, the decoder uses a unidirectional LSTM network (Figure 1 (c)) to generate a sequence of morphological category-value pairs based on the context vector $h_i$ and the previous predictions. Under this model, the probability of a morphological label $t_i$ is $P(t_i \mid h_i) = \prod_{j=0}^{m} P(f_{ij} \mid f_{i0}, \ldots, f_{i(j-1)}, h_i)$. Decoding starts by passing the start-of-sequence symbol as input. At each time step, the decoder computes the label context vector $g_j$ based on the previously predicted category value, the previous label context vector, and the word's context vector. The probability of each morphological feature-value pair is then computed with a softmax.
At training time, we feed the correct labels as inputs, while at inference time we greedily emit the best prediction from the set of all possible feature-value pairs. Decoding terminates once the end-of-sequence symbol is produced.
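The following PyTorch sketch illustrates greedy decoding in the SEQ model. The vocabulary of category=value pairs, the embedding size, and the way the word's context vector is fed to the LSTM cell are illustrative assumptions rather than the exact configuration used in the paper.

```python
import torch
import torch.nn as nn

class SeqLabelDecoder(nn.Module):
    """Greedy LSTM decoder that emits one category=value pair per step."""
    def __init__(self, hidden_dim, n_pairs, embed_dim=64):
        super().__init__()
        self.embed = nn.Embedding(n_pairs, embed_dim)        # embeddings of feature-value pairs
        self.cell = nn.LSTMCell(embed_dim + hidden_dim, hidden_dim)
        self.out = nn.Linear(hidden_dim, n_pairs)

    def greedy_decode(self, h, sos_id, eos_id, max_len=20):
        """h: (hidden_dim,) context vector of a single word from the encoder."""
        hx = torch.zeros(1, self.cell.hidden_size)
        cx = torch.zeros(1, self.cell.hidden_size)
        prev = torch.tensor([sos_id])
        label = []
        for _ in range(max_len):
            # Condition each step on the previous prediction and the word's context vector.
            inp = torch.cat([self.embed(prev), h.unsqueeze(0)], dim=-1)
            hx, cx = self.cell(inp, (hx, cx))
            pred = self.out(hx).argmax(dim=-1)
            if pred.item() == eos_id:
                break
            label.append(pred.item())
            prev = pred
        return label  # predicted pair ids, e.g. [POS=NOUN, CASE=NOM, ...]
```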
Encoder
We adopt a standard sequence tagging encoder architecture for all our models. It consists of a bidirectional LSTM network that maps words in a sentence into context vectors using character and wordlevel embeddings. Character-level word embeddings are constructed with a bidirectional LSTM network and they capture useful information about words' morphology and shape. Word level embeddings are initialised with pre-trained embeddings and fine-tuned during training. The character and word-level embeddings are concatenated and passed as inputs to the bidirectional LSTM encoder. The resulting hidden states h i capture contextual information for each word in a sentence. Similar encoder architectures have been applied recently with notable success to morphological tagging (Heigold et al., 2017;Yu et al., 2017) as well as several other sequence tagging tasks (Lample et al., 2016;Chiu and Nichols, 2016;Ling et al., 2015).
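A condensed PyTorch sketch of such an encoder is given below; the dimensions, the single-sentence batching, and the variable names are illustrative simplifications, not the exact configuration used in the paper.

```python
import torch
import torch.nn as nn

class SentenceEncoder(nn.Module):
    """Character-level BiLSTM word representations concatenated with
    pre-trained word embeddings and fed to a sentence-level BiLSTM.
    """
    def __init__(self, n_chars, word_emb, char_dim=50, char_hidden=50, word_hidden=200):
        super().__init__()
        self.char_emb = nn.Embedding(n_chars, char_dim)
        self.char_lstm = nn.LSTM(char_dim, char_hidden, bidirectional=True, batch_first=True)
        self.word_emb = nn.Embedding.from_pretrained(word_emb, freeze=False)
        word_dim = word_emb.size(1)
        self.sent_lstm = nn.LSTM(word_dim + 2 * char_hidden, word_hidden,
                                 bidirectional=True, batch_first=True)

    def forward(self, char_ids, word_ids):
        # char_ids: (n_words, max_chars); word_ids: (n_words,) for one sentence
        _, (h_n, _) = self.char_lstm(self.char_emb(char_ids))
        char_repr = torch.cat([h_n[0], h_n[1]], dim=-1)        # final fwd/bwd char states
        tokens = torch.cat([self.word_emb(word_ids), char_repr], dim=-1)
        context, _ = self.sent_lstm(tokens.unsqueeze(0))       # (1, n_words, 2*word_hidden)
        return context.squeeze(0)                              # context vectors h_i
```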
Experimental Setup
This section details the experimental setup. We describe the data, then we introduce the baseline models and finally we report the hyperparameters of the models.
Data
We run experiments on the Universal Dependencies version 2.1 treebanks (Nivre et al., 2017). We excluded corpora that did not include a train/dev/test split, word form information, or morphological features. Additionally, we excluded corpora for which pre-trained word embeddings were not available. The resulting dataset contains 69 corpora covering 49 different languages. Tagsets were constructed by concatenating the POS and morphological annotations of the treebanks. Table 1 gives corpus statistics. We present type and token counts for both training and test sets. For the training set, we also show the average and maximum number of tags per word type and the size of the morphological tagset. For the test set, we report the proportion of out-of-vocabulary (OOV) words as well as the number of OOV tag tokens and types.
In the encoder, we use fastText word embeddings (Bojanowski et al., 2017), complemented by character-level embeddings. In Table 1, we also report for each language the proportion of word types for which the pre-trained embeddings are available.
Table 1 caption: For training sets we report the number of word tokens and types, the average (Avg) and maximum (Max) tags per word type, the proportion of word types for which pre-trained embeddings were available (% Emb), and the size of the morphological tagset (# Tags). For the test sets, we also give the total number of tokens and types, the proportion of OOV words (% OOV), and the number of OOV tag tokens and types.
Baseline Models
We use two models as baseline: the CRF-based MARMOT (Müller et al., 2013) and the regular neural multiclass classifier.
MarMoT (MMT) MARMOT (available at http://cistern.cis.lmu.de/marmot/) is a CRF-based morphological tagger which has been shown to achieve competitive performance across several languages (Müller et al., 2013). MARMOT approximates the CRF objective using a pruning strategy which enables training higher-order models and handling large tagsets. In particular, the tagger first predicts the POS part of the label and, based on that, constrains the set of possible morphological labels. Following the results of Müller et al. (2013), we train second-order models. We tuned the regularization type and weight on the German development set and, based on that, use L2 regularization with weight 0.01 in all our experiments.
Neural Multiclass classifier (MC) As the second baseline, we employ the standard multiclass classifier used by both Heigold et al. (2017) and Yu et al. (2017). The model consists of an LSTM-based encoder, identical to the one described above in Section 3.3, and a softmax classifier over the full tagset. The tagset sizes for each corpus are shown in Table 1. During preliminary experiments, we also added a CRF layer on top of the softmax, but as this made the decoding process considerably slower without any visible improvement in accuracy, we did not adopt CRF decoding here. The multiclass model is shown in Figure 1 (d).
The inherent limitation of both baseline models is their inability to predict tags that are not present in the training corpus. Although the number of such tags in our data set is not large, it is nevertheless non-zero for most languages.
Training and Parametrisation
Since tuning model hyperparameters for each of the 69 datasets individually is computationally demanding, we optimise parameters on Finnish, a morphologically complex language with a reasonable dataset size, and apply the resulting values to the other languages. We first tuned the character embedding size and the character-LSTM hidden layer size of the encoder on the SEQ model and reused the obtained values for all other models. We tuned the batch size, the learning rate, and the decay factor for the SEQ and MC models separately, since these models are architecturally quite different. For the MCML and HMCML models we reuse the values obtained for the MC model. The remaining hyperparameter values are fixed. Table 2 lists the hyperparameters for all models. We train all neural models using stochastic gradient descent for up to 400 epochs and stop early if there has been no improvement on the development set within 50 epochs. For all models except SEQ, we decay the learning rate by a factor of 0.98 after every 2500 batch updates. We initialise biases with zeros and parameter matrices using the Xavier uniform initialiser (Glorot and Bengio, 2010).
Words in training sets with no pre-trained embeddings are initialised with random embeddings. At test time, words with no pre-trained embedding are assigned a special UNK-embedding. We train the UNK-embedding by randomly substituting the singletons in a batch with the UNK-embedding with a probability of 0.5.
Results
Table 3 presents the experimental results. We report tagging accuracy for all word tokens and also for OOV tokens only. A full morphological tag is considered correct if both its POS and all morphological features are correctly predicted. First of all, we can confirm the results of Heigold et al. (2017) that the performance of neural morphological tagging indeed exceeds that of a CRF-based model. In fact, all our neural models perform significantly better than MARMOT (p < 0.001, as indicated by a Wilcoxon signed-rank test). The best neural model on average is the SEQ model, which is significantly better than both the MC baseline and the other two compositional models, whereby the improvement is especially well pronounced on smaller datasets. We do not observe any significant differences between the MCML and HMCML models, neither in the all-words nor in the OOV evaluation setting.
We also present POS tagging results in the rightmost section of Table 3. Here again, all neural models are better than CRF which is in line with the results presented by Plank et al. (2016). For POS tags, the HMCML is the best on average. It is also significantly better than the neural MC baseline, however, the differences with the MCML and SEQ models are insignificant.
In addition to full-tag accuracies, we assess the performance on individual features. Table 4 reports macro-averaged F1-scores for the SEQ and MC models on universal features. The results indicate that the SEQ model systematically outperforms the MC model on most features.
Analysis and Discussion
OOV label accuracy Our models are able to predict labels that were not seen in the training data. Figure 2 presents the accuracy of test tokens with OOV labels obtained with our best-performing SEQ model, plotted against the number of OOV label types. Datasets with zero accuracy are omitted. The main observation is that although the OOV label accuracy is zero for some languages, it is above zero on about half of the datasets, a result that would be impossible with the MARMOT or MC baselines.
7 As indicated by the Wilcoxon signed-rank test.
Error Analysis
Figure 3 shows the largest error rates for distinct morphological categories for both the SEQ and MC models, averaged over all languages. We observe that the error patterns are similar for both models, but the error rates of the SEQ model are consistently lower, as expected.
Stability Analysis To assess the stability of our predictions, we picked five languages from different families and with different corpus sizes, and performed five independent train/test runs for each language.
Hyperparameter Tuning It is possible that the hyperparameters tuned on Finnish are not optimal for other languages, and thus tuning hyperparameters for each language individually would lead to different conclusions than currently drawn. To shed some light on this issue, we tuned hyperparameters for the SEQ and MC models on the same subset of five languages. We first independently optimised the dropout rates on word embeddings and on the encoder's LSTM inputs and outputs, as well as the number of LSTM layers. We then performed a grid search to find the optimal initial learning rate, the learning rate decay factor and the decay step. Value ranges for the tuned parameters are given in Table 6. Table 7 reports accuracies for the tuned models compared to the mean accuracies reported in Table 5. As expected, both tuned models demonstrate superior performance on all languages, except for German with the SEQ model. Hyperparameter tuning has a greater overall effect on the MC model, which suggests that it is more sensitive to the choice of parameters than the SEQ model. Still, the tuned SEQ model performs as well as or better than the MC model on all languages.
Comparison with Previous Work Since UD datasets have been in rapid development and different UD versions do not match, direct comparison of our results to previously published results is difficult. Still, we show the results taken from Heigold et al. (2017), which were obtained on UDv1.3, to provide a very rough comparison. In addition, we compare our SEQ model with a neural tagger presented by Dozat et al. (2017), which is similar to our MC model, but employs a more sophisticated encoder. We train this model on UDv2.1 on the same set of languages used by Heigold et al. (2017). Table 8 reports evaluation results for the three models. The SEQ model and Dozat's tagger demonstrate comparable performance. This suggests that the SEQ model can be further improved by adopting a more advanced encoder from Dozat et al. (2017).
Conclusion
We hypothesised that explicitly modeling the internal structure of complex labels for morphological tagging improves the overall tagging accuracy over the baseline with monolithic tags. To test this hypothesis, we experimented with three approaches to model composite morphological tags in a neural sequence tagging framework. Experimental results on 49 languages demonstrated the advantage of modeling morphological labels as sequences of category values, whereas the superiority of this model is especially pronounced on smaller datasets. Furthermore, we showed that, in contrast to baselines, our models are capable of predicting labels that were not seen during training. | 4,992.4 | 2018-10-01T00:00:00.000 | [
"Computer Science"
] |
SMARTPHONE BASED AUTOMATIC PRICE DETERMINATION OF AGRICULTURAL PRODUCTS
Keywords: Agricultural products, market equilibrium price, sellers, buyers.
Smartphones are increasingly becoming an integral part of our lives. We carry them with us all the time and use them for various tasks, including browsing the internet, email, social websites and more. We also perform online shopping with our smartphones. In this paper, a new automatic price determination algorithm for agricultural products is proposed that can be used by both sellers and buyers from their smartphones. The buyers place their demands for the agricultural products they require with the sellers of a particular market, and the sellers likewise place their supplies. The algorithm then automatically calculates the market equilibrium price using a learning-rate-based iterative distributed price determination algorithm. As a result, both the sellers and buyers can save time in finding suitable prices for the agricultural products. The performance results show that the algorithm is stable and reaches the market equilibrium price within a few milliseconds.
INTRODUCTION
In any market, the price of products varies from seller to seller. The price of an agricultural product also fluctuates based on the supply of the product and the demand of the buyers, and the agricultural market is no exception. As a result, the buyer has to visit different sellers' profiles to check the price within a market. On another dimension, the price varies significantly from market to market. There are other factors that affect the price of agricultural products, for example drought, flood, rain, external supplies and production. Therefore, it is very difficult to predict the price of agricultural products without visiting the sellers within a market and also other markets. This problem affects not only the buyers but also the sellers. Some independent sellers may be selling below the minimum price without knowing the actual price in the market and thus not getting a proper return on their time, energy and monetary investments. The bottom line is that the price depends on a number of factors. In this paper, an automatic price determination algorithm is proposed that can be used on smartphones; it collects the supplies of agricultural products from the sellers and the demands of the buyers for their lists of agricultural products. The algorithm then automatically calculates the market equilibrium price using an iterative distributed algorithm. As a result, both the sellers and the buyers will be able to know the actual price of an agricultural product for a particular day. This will benefit both the sellers and buyers by reducing the risk of losses.
In this paper, market equilibrium is used to determine the price of agricultural products automatically. It is the state in which the supply of a product from sellers becomes equal to the demand from the buyers (Varian, 1992), (Mankiw, 2006). In market equilibrium, the underlying competition among sellers and/or buyers is inherently captured. Therefore, it can demonstrate the general trend of the market involving both the sellers and buyers. In a market, there can exist a product shortage or surplus, which directly affects the price of the product. If there is a shortage, the price will increase, whereas the price will decrease for a surplus. In real markets, market equilibrium exists in different forms and scales. It can consist of a single seller-buyer pair or a group of multiple sellers and buyers. The area covered by the sellers and buyers can span from a single small market to the whole world market. For example, house prices may increase in an area if there is a large demand for that area, but if there is no one to live there, the price will fall. Similarly, the oil market is a case at the scale of the whole world market: less supply and increased demand for oil increase the price of oil, while an increase in supply and less demand will reduce it.
Market equilibrium approaches have been adopted in various fields, such as microeconomics (Sharpe, 1964), (Black, 1972), automobile industries (Berry, Levinsohn, & Pakes, 1995), the share market (Admati, Pfleiderer, & Zechner, 1994), banking (Besanko & Kanatas, 1993), spectrum resource trading (Niyato & Hossain, 2008b), supermarkets (Smith, 2004), house prices and rents (Ayuso & Restoy, 2006), etc. Market equilibrium has also been analysed alongside other approaches. For example, (Niyato & Hossain, 2008a) compares the market equilibrium algorithm with competitive and cooperative markets. Market equilibrium is found to generate the most stable solution among the three models, and it also ensures the smallest profit for the sellers, which is also beneficial for the buyers. Besides, the market equilibrium approach has less communication overhead compared to the other models.
In this paper, market equilibrium is used because it represents the general trend of the market and captures the underlying competition of the sellers and/or buyers. Besides, it is more stable than other methods and its communication overhead is also the lowest.
MATERIALS AND METHODS
Different hardware and software tools have been used to model the agricultural market scenario with multiple sellers and buyers. The hardware used was a laptop with an Intel Core i7 processor, 8 GB RAM and the Windows 8.1 operating system. The software tools include emulated Android devices running different Android operating systems, Matlab (Attaway, 2013) and the Java programming language (Schildt, 2014). The system model is shown in Figure 1.
Figure 1. System model for agricultural market scenario
Figure 1 shows that there can be a number of sellers and a number of buyers. Sellers may or may not have smartphones. There is also a central coordinator that collects all the supplies from the sellers and all the demands from the buyers using online and offline forms. The central coordinator is a server where the distributed market equilibrium algorithm calculates the price of the agricultural products. Each of the sellers will have different products to sell in the market. The buyers will have different demands for different products. The market equilibrium price will be calculated for each of the products based on the supplies from the sellers and the demands from the buyers.
To model the supply and demand of an agricultural product, a utility function (Pindyck & Rubinfeld, 2009), (Niyato and Hossain, 2010) is used, which defines the satisfaction of the buyers. The greater the utility of a product, the higher the demand from a buyer. Based on the basic supply and demand theory of microeconomics (Pfitzner, 1993), the utility function for the buyer is defined in terms of the amount of product the seller is willing to sell, the offered price per unit of product, and a constant. The utility, or satisfaction, of the buyer will initially increase and then saturate for a higher amount of the agricultural product and for a lower offered price.
The demand function for the buyer can be obtained from the utility function in (1), which also defines the profit for the buyer. By setting the derivative of (1) with respect to the product amount to zero, we obtain the demand function for the buyer. This demand function represents the amount of product required by the buyer at a given price.
The profit, or utility, of the seller can be defined as the income earned from selling the product to the buyer; it is a function of the total amount of product and of the price per unit of product charged by the seller. The supplied amount is obtained by differentiating this expression with respect to the amount and setting the derivative to zero. The resulting supply function gives, for a given price, the amount for which the profit of the seller is maximized. In market equilibrium, the supply function becomes equal to the demand function. In this state, there is no excess supply from the seller or excess demand from the buyer; thus, the market equilibrium is obtained by equating the demand function (2) with the supply function (4). Since the seller and the buyer may have only limited information about each other's supply and demand, the market equilibrium is obtained iteratively through the central coordinator. In the initial stage, the buyer observes the advertised price from the seller for a particular product and submits its demand through the central coordinator. The seller submits the price p[0] to the coordinator, which then computes the required product amount using the demand function (2) and sends the updated price back to the seller. The central coordinator thus updates the price using

p[t+1] = p[t] + α (D(p[t]) − S(p[t])),

where α is the learning rate that weighs the difference between demand D and supply S. The higher the value of α, the more weight is given to the difference relative to the previous price. A positive value of the difference implies excess demand from the buyer, and a negative value indicates excess supply. Thus the price is gradually adjusted based on the supply and demand. This iteration continues until the price difference |p[t+1] − p[t]| becomes less than a predefined threshold value.
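A minimal sketch of this iterative adjustment is given below. The linear demand and supply functions are purely illustrative stand-ins for the functions derived from the utility model above, and the numbers are arbitrary; only the update rule itself follows the description in the text.

```python
# Iterative price adjustment toward market equilibrium. The linear demand and
# supply functions below are illustrative assumptions; alpha is the learning rate.
def demand(price):           # assumed: demand falls as price rises
    return max(0.0, 100.0 - 2.0 * price)

def supply(price):           # assumed: supply rises with price
    return 3.0 * price

def equilibrium_price(p0=1.0, alpha=0.03, threshold=1e-6, max_iters=10_000):
    p = p0
    for _ in range(max_iters):
        # positive gap => excess demand => raise price; negative gap => lower it
        p_next = p + alpha * (demand(p) - supply(p))
        if abs(p_next - p) < threshold:
            return p_next
        p = p_next
    return p

print(equilibrium_price())   # converges near the analytic equilibrium, 20.0
```

For these toy functions the update converges for small α, while a sufficiently large α makes the iteration overshoot and oscillate, matching the qualitative behaviour the paper reports for larger learning rates.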
RESULTS AND DISCUSSION
Supply adaptation for different learning rates is shown in Figure 2. When the learning rate is 0.03, the supply converges to a single value after a few iterations. When the learning rate becomes larger (e.g., 0.05 or 0.07), the supply fluctuates continuously, because smaller values of the learning rate ensure that the value reaches the equilibrium state gradually.
Figure 2. Supply adaptation with different learning rates
Similarly, demand adaptation for different learning rates is shown in Figure 3. The demand also reaches a stable value after a few iterations for a learning rate of 0.03. Figure 4 shows the adaptation of the price for different learning rates. All these figures represent the stages of reaching the market equilibrium gradually. It is important to choose an appropriate learning rate: if a very small value (e.g., 0.001) is chosen, it will take many iterations to reach the market equilibrium, while larger values of the learning rate will prevent the algorithm from reaching the market equilibrium at all. Figure 5 shows the trajectory diagram of supply and demand for a learning rate of 0.03. It shows the steps for reaching the market equilibrium, at which the supply is equal to the demand. From the results, we can see that it takes only a few iterations to determine the market equilibrium price. The time to reach the market equilibrium was found to be less than 1 millisecond.
Figure 3. Demand adaptation for different learning rates
Figure 5. Trajectory diagram for different learning rates
| 2,430.4 | 2017-08-27T00:00:00.000 | [
"Agricultural and Food Sciences",
"Computer Science"
] |
A Three-Dimensional Model of Small Signal Free-Electron Lasers
Coherent electron cooling is an ultra-high-bandwidth form of stochastic cooling which utilizes the charge perturbation from Debye screening as a seed for a free-electron laser. The amplified and frequency-modulated signal that results from the free-electron laser process is then used to give an energy-dependent kick on the hadrons in a bunch. In this paper, we present a theoretical description of a high-gain free-electron laser with applications to a complete theoretical description of coherent electron cooling.
I. INTRODUCTION
Coherent electron cooling (CeC) [1] is a new cooling method for intense relativistic hadron beams, to be implemented first at the proposed MEeRHIC/eRHIC upgrade to the RHIC accelerator at Brookhaven National Lab. Schematically similar to the stochastic cooling already implemented at RHIC [2], CeC has the advantage that its coherent bandwidth is on the order of the resonance wavelength of the operating free-electron laser, so that the cross-correlation that leads to heating and therefore saturation of the stochastic cooling system is not encountered in CeC.
To achieve a complete theoretical description of Coherent Electron Cooling, models for the propagation of a phase space perturbation through the pick-up [3] and kicker [4] were developed and presented [5]. All these calculations are based upon an infinite electron beam with κ − 2 energy spread [18]. However, an exact analytical solution for the high gain free-electron laser in the small signal regime, given an initial phase space perturbation, had not yet been developed.
A number of analytical models have been developed for the transverse laser profile for an FEL. A set of equations for the full dynamics of a three-dimensional FEL with betatron oscillations were first written down in [6]. Universal scaling for the gain of the FEL in terms of the energy spread, emittance and focusing properties were developed in [7]. A fully three-dimensional Maxwell-Vlasov equation was studied in [8] and ultimately a procedure for exact and variational solutions to the laser eigenmodes was presented in [9]. These results focus primarily upon an eigenmode of the generated laser field, without consideration for developing solutions to the phase space density of the electron bunch. In this paper, we present a theoretical picture of the full dynamics of the electron phase space distribution, neglecting betatron oscillations, with an intent of using this result in application to CeC.
In Section 2, we present an overview of the configuration of Coherent Electron Cooling, and discuss briefly the existing results in the pick-up and kicker sections. With the context of this work in mind, we then present a derivation for the dynamics of a high-gain free-electron laser seeded with an initial phase space perturbation in Section 3. This leads to an equation for an arbitrary transverse distribution of an otherwise infinitely long electron beam. In Section 4, we analyze the case of an infinitely wide beam, which leads to a Green function for the 3D FEL process with an infinitely wide beam. A mode expansion method is considered for a finite beam in Section 5. To conclude, we consider the specifics of applying these results to Coherent Electron Cooling in Section 6.
II. OVERVIEW OF COHERENT ELECTRON COOLING
Coherent Electron Cooling is schematically identical to stochastic cooling [10], with a pick-up which gathers information about the position and energy of the individual particles in the hadron beam, an amplifier which takes this signal and amplifies it, and then a kicker which takes this information and uses it to deliver an energy-dependent non-conservative kick which decreases the longitudinal energy spread of the hadron beam.
FIG. 1: Schematic of Coherent Electron Cooling
For CeC, the pick-up is a co-moving electron bunch and hadron beam in a drift, where the individual hadron signals are the Debye-screened charge perturbations described in [3]. The amplifier of the signal is the free-electron laser which we describe in this paper. The kicker is a chicane which offsets the hadrons from their initial signals so that they are displaced from a local maximum of the electron density in such a way that hadrons with energy greater than the design energy lose energy, whereas hadrons with energy less than the design energy gain energy. In both the kicker and the pick-up, it is necessary that the time spent co-moving between the hadrons and the electrons be shorter than a full plasma oscillation, or else the signal will be greatly diminished.
Because the bandwidth of the coherent kicks from the amplified signal is on the order of the resonant wavelength of the FEL, which for most CeC applications is on the order of a few hundred nanometers, the crosscoherence that arises in stochastic cooling is negligible. Thus, the cooling system will continue to reduce the energy spread of the hadron beam until another effect is encountered.
To describe the FEL process, we present a theory that follows closely the derivation for the FEL instability derived in [11], with the slight modification that the transverse profile of the electron beam is uniform. We then inject this result into the existing results for the kicker and pick-up, and determine an exact form for the cooling decrement. But first we begin with a single-particle description of the dynamics.
III. MAXWELL-VLASOV EQUATIONS
Consistent with the derivations of the high-gain FEL in [12] and [13] and summarized in [11], we begin with the equations of motion for single particles in an undulator subject to the radiation field generated by the collective dynamics of the rest of the beam. The Hamiltonian equations of motion for small energy deviation and high energy (γ ≫ 1) are written in terms of A_w, the undulator vector potential (here we consider only helical undulators), A_⊥, the laser field, and A_z, the longitudinal space charge. The scalar potential has been removed by a choice of gauge transformation, and p_z has been used as the generator of longitudinal translations.
The Vlasov equation is derived from the conservation of single-particle phase space volume. Following the canonical description of instabilities in plasmas [14], we assume that the phase space density of the electron beam is given by f = f_1 + f_0, where f_0 is a thermal background and f_1 is the instability. Furthermore, we assume that |f_1| ≪ |f_0|. This justifies (i) dropping the term proportional to A_⊥² that would appear in equation (1b) and (ii) dropping terms proportional to f_1² or higher. Carrying out these approximations, and knowing that A_⊥, A_z ∝ f_1, we obtain the linearised equation of motion, where K = eA_w/m_e c² is the undulator parameter. Absent from this description is an accounting for the transverse betatron oscillations that arise from the confining FODO lattice used on the electron beam in the undulator. In fact, all the transverse dynamics of this theory arise from the Maxwell equations, and it is assumed that the current distribution will follow this transverse distribution.
Solving the single-particle equations of motion for an electron in an undulator leads to a relationship involving the longitudinal current density j_z ≈ −ec ∫ dH f_1(H, z, t). We consider the transverse laser field in Fourier space, with its Fourier transform defined over r_⊥. The transverse Maxwell equation then follows after Fourier transforming over r_⊥. It is assumed that the envelope function Ã_⊥ is slowly varying in the longitudinal direction, so that higher-order derivatives in z are small compared to the first derivative. This allows us to drop terms that go as ∂²_z Ã_⊥ relative to k_r ∂_z Ã_⊥.
By dropping oscillating terms that are 2k_w z out of phase with the laser field and defining a corresponding Fourier transform of j_z, we obtain an expression for A_w · A_⊥ in Fourier space, where the initial laser field has been set to zero, as is the case for CeC. For the proof-of-principle experiment, space charge will be a non-negligible component of the system. To account for space charge, we consider the longitudinal electric field, which takes a corresponding form under this Fourier transform. All of this is identical to the one-dimensional theory in [11] except for the additional phase factor of k_⊥² c z / 2νω_r that appears in the definition of j̃_z, which acts as a detuning.
By applying an identical Fourier transform of the type performed on the current density to the phase space density, and assuming a thermal background, we obtain the coupled Maxwell-Vlasov equation for the phase space density of the FEL-amplified electron bunch with an initial phase space perturbation, where F = F(E) is the normalized energy distribution and H = E + E_0, with E_0 the average energy of the electron beam. R̃ is the Fourier transform of the transverse bunch profile. The equation of motion is identical in form to that of the one-dimensional theory in [11], with the exception of the added transverse detuning term k_⊥² c / 2νω_r. Regardless of whether the beam is infinite or finite in transverse extent, the dynamics are characterized by the inverse gain length Γ and the Pierce parameter ρ, defined as in the one-dimensional theory. To obtain the longitudinal current density, we apply the definition j̃_z ≈ −ec ∫ dE f̃_1 to equation (11). Introducing the normalized detuning, space charge parameter, energy and transverse wave vector as Ĉ, Λ̂_p, Ê and k̂_⊥ gives a cleaner, dimensionless form. At this point, the method of solution depends on whether the beam is to be considered finite or infinite in transverse size, which is to say whether the transverse dimension of the electron bunch r_0 is large compared to the diffraction length scale d of the FEL. On the other hand, whether the transverse spatial extent of the initial perturbation can be modeled profitably as a delta function in real space (which would be much simpler) depends on the ratio of the Debye radius to the transverse length scale, r_D/d. If the Debye radius is much smaller than d, r_D/d ≪ 1, then the physics of a point perturbation in transverse space should match very closely the physics of the initial phase space perturbation. If r_D/d ∼ 1 then the actual physical distribution is necessary. If r_D/d ≫ 1 then we expect the FEL to be essentially one-dimensional. By necessity, r_D ≪ r_0 for the models utilized in [3] and [5] to be valid. These considerations hold for both the infinite and finite beam solutions.
IV. INFINITE BEAM SIZE
We first consider a beam that is infinite in the transverse direction, as it is analytically simpler than the finite beam size but still contains a reasonable amount of physics in its own right. This can be considered in terms of the ratio r 0 /d, where r 0 is the typical transverse width scale of the electron beam and d is the diffraction length scale of the FEL. If r 0 /d ≫ 1 then the beam is effectively infinite and the treatment in this section is useful. Otherwise the finite beam solution of the next section needs to be employed.
For an infinite beam, R̃(q − k_⊥) = δ(q − k_⊥), so equation (11) reduces to a form identical to the equations of motion for the one-dimensional FEL [11], with the identification Ĉ_3D = Ĉ − k̂_⊥². Due to this similarity, we omit many of the details and cut to the solution by Laplace transform for the current, whose denominator determines the dispersion relation. Equation (15) immediately gives the linear response function in Laplace space for the current density perturbation in terms of an initial phase space perturbation, such that K is the linear response function of the modulated current density to an initial phase space perturbation. We will use this function to calculate a Green function for the FEL phase space distribution, which we will denote G_FEL.
By inserting equation (15) back into equation (11), and Laplace transforming f̃_1 in the ẑ coordinate, we obtain an expression comparable to eqn. (15) for the phase space density of the perturbation of the e-beam. The form of this equation allows us to write down the Green function for the phase space density of an infinitely wide e-beam in an FEL amplifier, with the new FEL phase space density given in Laplace-Fourier space. It is interesting to note that this Green function can be clearly divided into two parts. The first part represents Landau damping and single-particle non-cooperative motion in the FEL undulator. This process does not lead to gain, and the term representing it can be dropped in a description of the FEL process. The second part contains the growing roots of the dispersion relation, and represents the cooperative gain process of the FEL. It is this Green function that is of practical application for the theory of Coherent Electron Cooling.
The dynamics in the ẑ variable are determined by the roots of the dispersion relation. There is another pole, from the s + i(Ĉ_3D + Ê) term in the denominator, but the pole associated with this term will either oscillate or decay, and therefore does not represent amplification as a result of the FEL process, but rather a Landau damping of the initial perturbation due to its own energy spread.
As an example calculation, we consider an initial phase space perturbation that is monoenergetic, instantaneous in time, and a point source. We place this in the context of a cold electron beam, where the dispersion relation is well known.
In Fourier space, the transform of the initial condition is infinitely broad in the k̂_⊥ and Ĉ variables. Inserting this directly into the Green function calculation and taking the inverse Laplace transform over s gives a sum of three purely oscillating terms together with the three modes of the FEL process. The resulting expression is extremely cumbersome, and its physical intuition is already embodied in the Green function. We therefore only consider the single growing root of the FEL process from here on.
The phase space density is then approximately given by the contribution of the growing mode, where s_+ is the root of the dispersion relation with positive real part. As expected, all dependence on k̂_⊥ has dropped out, and only Ĉ_3D remains as the natural Fourier parameter for the infinite electron beam.
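As a numerical illustration of extracting the growing root, the sketch below solves the standard one-dimensional cold-beam dispersion relation s(s + iC)² = i. This specific form is an assumption used here for illustration (space charge neglected, energy spread zero), with C standing in for the detuning Ĉ_3D of the infinite-beam case rather than the paper's full dispersion integral.

```python
# Numerical sketch: growing root of the cold-beam dispersion relation.
# ASSUMPTION: the standard 1D cold-beam form s (s + iC)^2 = i is used here
# (no space charge, no energy spread), with C playing the role of C_3D.
import numpy as np

def growing_root(C):
    # s(s + iC)^2 = i  <=>  s^3 + 2iC s^2 - C^2 s - i = 0
    roots = np.roots([1.0, 2j * C, -C**2, -1j])
    return roots[np.argmax(roots.real)]   # mode with the largest growth rate

for C in [0.0, 1.0, 2.0]:
    s = growing_root(C)
    print(f"C = {C:4.1f}  Re(s) = {s.real:+.4f}  Im(s) = {s.imag:+.4f}")
# At zero detuning Re(s) = sqrt(3)/2 ~ 0.866, the familiar maximum growth rate;
# the growth rate shrinks as the detuning increases and vanishes for large C.
```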
Recalling the definition of the Fourier-transformed phase space density, it is now useful to transform the integrals into integrals over Ĉ_3D and k̂_⊥, so that we can determine the dynamics of this initial perturbation in real space.
Recalling the definitions of the parameters leaves an integral in which ξ = ω_r(z/c − t) + k_u z is the ponderomotive phase. It is interesting to note that, although the detailed temporal information cannot be extracted from this integral immediately, the transverse profile can be calculated directly. Because the result is a pure phase, it carries no intrinsic transverse size information. The trouble arises from the equal value given to all k_⊥ by an infinitely small point source, which allows the signal to propagate transversely instantaneously, as we have not properly accounted for a Lorentz-covariant description of the transverse electron dynamics. A slightly less mathematically pathological case is an initially Gaussian transverse distribution, infinitely short in duration. In this case, the same separation occurs; the resulting width goes as σ_r² ∼ σ̂_0² − i(ρξ + ẑ), and the profile is Gaussian rather than sinusoidal.
V. FINITE BEAM SIZE
Having considered the simpler case of the transversely infinite beam, we now turn our attention to the case of a finite transverse beam profile. To achieve this, we consider an expansion in the eigenmodes of the transverse beam profile, as the Maxwell-Vlasov equation for a finite beam is an integral equation with the beam profile function as its kernel. From there, we can separate out the transverse and longitudinal dynamics, and observe that in real space there is no spreading of the eigenmodes, consistent with optical guiding.
A. Eigenmode Expansion
For the case in which R̃ is not a delta function, it is beneficial to expand the current density solutions in the eigenmodes of the R̃ kernel; for this section we drop the overhats and subscripts to simplify the notation. This is best calculated by expanding R̃(k − q) as a matrix in terms of some orthonormal basis. We shall consider such an example calculation later, but for now we assume such an eigenbasis is already known. For any reasonably smooth definition of the transverse beam profile, R̃(k − q) = R̃(q − k), that is, the kernel of the eigenvalue equation is Hermitian [15]. This being the case, we know that the eigenvectors are orthogonal and the eigenvalues are all real, with the orthogonality condition given for square-normalized eigenfunctions. Expanding the integral of the longitudinal current density in a series of the eigenmodes and looking back at the definition of the Fourier transform for the current, it is clear that the e^{ik_⊥² ẑ} terms will cancel, so there is no change in the transverse extent of the current perturbation, which is consistent with the optical guiding discussed in the literature [16].
The current equation (14) can then be reduced to a system of coupled equations for the expansion coefficients, which determines the dispersion relation for each growing mode. It is worth noting, at this point, that the ℓ index could refer to multiple indices; in particular, since this is a two-dimensional model it could refer to both the azimuthal and axial indices, as will be the case when we consider the Gaussian beam profile below. For that particular case, the different azimuthal modes are uncoupled in the Q matrix, so the radial modes for a particular azimuthal mode are the ones coupled by the Q matrix, while differing azimuthal modes do not mix. This will become apparent during the calculation below.
B. Gaussian Profile
As an example of this calculation, we consider a Gaussian transverse beam profile. The procedure for solving the initial value problem is as follows:
1. Calculate the eigenfunctions and corresponding eigenvalues of equation (28).
2. Calculate Q_{m,ℓ} to determine the correct dispersion relation.
3. Invert equation (32) and solve for the initial value problem.
Each of these steps should be identical for any other transverse bunch profile; we present only the Gaussian case here. We begin with a Gaussian beam profile, whose Fourier transform is also Gaussian, and the eigenfunctions therefore satisfy the corresponding integral equation. It is most convenient to consider this particular form in Cartesian coordinates, and in keeping with this we expand the eigenfunctions as products of one-dimensional functions, ψ_ℓ = χ_n(µ_x)χ_m(µ_y), where each of the individual χ satisfies a one-dimensional eigenvalue equation and the resulting eigenvalue for ψ_ℓ is ω_ℓ = λ_n λ_m. It is convenient to define a normalized variable µ so that the eigenvalue equation becomes dimensionless. The appropriate scaling of the full eigenvalue with the transverse beam size is then given by ω̂_ℓ = λ̂_m λ̂_n. To calculate the normalized eigenvalues, we expand the kernel of this single-variable integral equation in terms of Hermite polynomials, as they are already related to the paraxial Maxwell equations [17].
It turns out from the properties of the Hermite polynomials that even orders couple only to even orders and odd orders only to odd orders, so each χ_m is a series in either even or odd Hermite polynomials. In this case, the matrix equation for the even Hermite polynomials is given approximately by its leading matrix elements. To good approximation, the expansion can be carried out for the first two Hermite functions in the series, so we consider the two-mode case for the principal even mode. To validate these numerical results we take the matrix to the next order, i.e. to order H_4(µ) in the expansion. We can conclude from this that the largest eigenvalue can be accurately determined to within 3% with the 2 × 2 matrix expansion, and from analysis of the eigenvector components the H_4(µ) contribution is negligibly small compared to the other two components for the eigenvector with the maximal eigenvalue.
Carrying out a similar procedure for the H_1(µ)-H_3(µ) eigenmode gives a maximal eigenvalue λ̂_odd = 1.7161 and eigenvector v_odd = (0.8456, 0.5339). It is now necessary to calculate the various matrix elements of Q. For the purposes of orderly book-keeping, we define the modes ψ_even = χ_even(µ_x)χ_even(µ_y) (42a) and the corresponding odd and mixed combinations. The individual Q are given by Q_even = 2.51446/L⁴, Q_odd = 6.35275/L⁴, and Q_+ = Q_− = 4.43333/L⁴. The growth rate for these parameters is given in Figure 2, with L̂ = 3.
To recap, we have calculated an eigenbasis for the transverse beam profile, yielding a linear superposition of even-and odd-numbered Hermite polynomials, and their corresponding eigenvalues. The series is truncated at two dominant modes, and because of the particular nature of the Hermite polynomial expansion basis, the Q matrix is diagonal. If Q had off-diagonal matrix elements, there would be "gain leakage" between the connected eigenvectors.
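A truncated basis expansion of this kind can be cross-checked numerically by discretizing the kernel directly (the Nyström method) rather than expanding in Hermite polynomials; this is a swapped-in numerical technique, and the Gaussian kernel width and normalization below are illustrative assumptions rather than the paper's exact scaling.

```python
# Independent check of the leading eigenvalues of a symmetric Gaussian kernel,
# K(mu, mu') = exp(-(mu - mu')**2 / (2 * w**2)), by direct grid discretization
# (Nystrom method). Width w, grid and normalization are illustrative only.
import numpy as np

def dominant_eigenvalues(w=1.0, n=400, half_width=20.0, k=4):
    mu = np.linspace(-half_width, half_width, n)
    h = mu[1] - mu[0]
    kernel = np.exp(-(mu[:, None] - mu[None, :])**2 / (2.0 * w**2))
    # eigenvalues of the integral operator f -> integral K(mu, mu') f(mu') dmu'
    vals = np.linalg.eigvalsh(kernel * h)
    return np.sort(vals)[::-1][:k]

print(dominant_eigenvalues())
# Comparing these leading values against those of a low-order basis truncation
# gives an estimate of how much is lost by keeping only a couple of modes.
```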
C. One-Dimensional Limit
Because the eigenvalues are totally independent of the transverse size, and only Q depends on it, it is straightforward to pass directly to the one-dimensional beam limit of the dispersion relation. Redefining the normalization accordingly, the dispersion relation takes the form
s − i (s + iC)² (1 + isΛ_p²) + (1 + iΛ_p² ω_m^{2/3}) Q = 0.   (45)
The scaling is such that, for large beams, the portion of this dispersion relation identical in form to the one-dimensional dispersion relation comes to strongly dominate over the perturbative correction for finite size, carried by the value of Q_m. For the case of an infinitely large transverse size, all functions are eigenmodes and all eigenvalues are unity; therefore we can obtain the one-dimensional limit in this way.
VI. DISCUSSION
We have presented a theoretical model for the dynamics of a high-gain free-electron laser with threedimensional effects. The model is analytically solvable up to a numerical Fourier transform, and for that reason is useful for benchmarking the massive tracking programs used to simulate FELs. All results in this paper are reduced to a handful of dimensionless numerical Fourier transforms.
When applying the finite beam case, we observe that only the principal four modes grow rapidly. The higher-order modes have eigenvalues substantially smaller than these modes, and can be neglected in comparison to the principal ones. We can therefore conclude from this model that an FEL can be effectively characterized by only a handful of well-understood eigenmodes. Furthermore, this particular model includes optical guiding by consideration of the transverse eigenmodes of a stationary beam. By contrast, we observe spreading of the infinite beam case at a slower than linear rate.
The principal goal of this solution to the three-dimensional FEL equations is to develop an understanding of the charge modulation at the end of the undulator. A thorough understanding of the phase information of the FEL instability is necessary to properly calibrate the chicane and inject the hadrons with a proper displacement with respect to the local charge maxima of the bunch. This model provides the phase information up to a three-dimensional Fourier integral, which is well-bounded and provides adequate benchmarking for numerical simulations.
The existing analytical models for the kicker and pick-up of CeC involve an infinitely large electron beam, or equivalently require that the initial perturbation be small compared to the transverse size of the electron beam. The results are also obtained analytically for the κ-2 distribution. To match up with these theories, we consider the case where R̃(k_⊥ − q) = δ(k_⊥ − q), with the corresponding dispersion relation for a κ-2 distribution. The results for κ-2 are not presented in this paper, but it is straightforward to obtain the dispersion relation from the dispersion integral, and we can now consider a complete description of the phase space evolution of the electron bunch through the CeC process.
This analytical model was developed to provide benchmarking for the proof of principle CeC system to be implemented at RHIC. For the FEL for the proof of principle, the transverse size of the electron bunch is r 0 ≈ 3 mm, the resonant wavelength is λ r ≈ .5 µm, and a gain length of approximately Γ −1 = 3 m. In this case, the transverse length scale d ≈ .35 mm and it is expected that the three-dimensional infinite beam theory should be a reasonable description of the FEL amplifier portion of CeC. At present this model has no way of coping with a transverse momentum spread in the initial phase space perturbation or with betatron oscillations, because all of the dynamics are taken directly from Maxwell's equations. As such, it is not clear what effect transverse momentum spread and betatron oscillations will have on the phase information of the amplified signal. Numerical modeling or a more complete theoretical description are necessary to account for these effects. | 5,954.2 | 2011-03-14T00:00:00.000 | [
"Physics"
] |
Forensic Analysis on False Data Injection Attack on IoT Environment
False Data Injection Attack (FDIA) is an attack that can compromise Advanced Metering Infrastructure (AMI) devices, whereby an attacker misleads real power consumption reporting by falsifying meter usage from end-users' smart meters. Due to the rapid development of the Internet, cyber attackers are keen on exploiting domains such as finance, metering systems, defense, healthcare, governance, etc. Securing IoT networks such as the electric power grid or water supply systems has emerged as a national and global priority because of the many vulnerabilities found in this area and the impact of attacks through internet of things (IoT) components. In this modern era, better awareness and improved methods to counter such attacks in these domains are a necessity. This paper aims to study the impact of FDIA in AMI by performing data analysis on network traffic logs to identify digital forensic traces. An AMI testbed was designed and developed to produce the FDIA logs. Experimental results show that the forensic traces found in the evidence logs collected through forensic analysis are sufficient to confirm the attack. Moreover, this study has produced a table of attributes for evidence collection when performing a forensic investigation of FDIA in the AMI environment. Keywords—Advanced Metering Infrastructure (AMI); False Data Injection Attack (FDIA); man in the middle (MITM); internet of things (IoT); forensic analysis
I. INTRODUCTION
The Internet of Things (IoT) offers many benefits and advantages to people in the current modern era [1], and has proven to be beneficial even in our daily lives. IoT is a system of interrelated intelligent devices that are provided with unique identifiers and given the ability to connect with other devices by exchanging information over a communication network. The IoT is seen as one of the foremost areas of future development and is attracting tremendous attention from a wide range of industries [2]. IoT will play a major role in improving many sectors such as manufacturing, public security, health care, accommodation, entertainment, environmental protection, agriculture, industrial monitoring, intelligent transportation, and traditional metering systems.
However, little consideration has been paid to how IoT adoption may affect IoT devices' security; weaknesses such as lack of authentication and insecure communication are among the main problems in most IoT devices [3]. These vulnerabilities can lead to many forms of attack, such as malware injection, SQL injection [4], false data injection (FDI), man-in-the-middle (MITM) [5], zero-day exploits, distributed denial-of-service (DDoS), DNS tunnelling [6], and many more. Since the case of the Mirai botnet in 2016, in which over 600,000 IoT devices were targeted to launch cyberattacks that reached 620 Gigabits per second at the peak, the amount of malware in the cyber world has kept growing, posing increasing threats to cybersecurity in the face of other types of aggravated attacks.
There is also concern about one particular IoT environment, the Advanced Metering Infrastructure (AMI). AMI is a system consisting of modern electronic-digital hardware and software, which enables intermittent data measurement and continuous remote communication. The system provides several important capabilities that were not previously possible or had to be performed manually, for instance the ability to remotely and automatically measure power usage, connect and disconnect service, and monitor voltage. FDIA is one of the most prominent attacks that can impact AMI as countries around the globe implement AMI in their infrastructure. Like the MITM attack, FDIA is oriented toward creating falsified data, which the attacker injects from compromised smart meters to change the actual values sent by another smart meter in the AMI. This threat can negatively affect both utilities and customers, as it is difficult to investigate from the logs available in the AMI [7]. This paper aims to simulate the impact of FDIA on the IoT environment and perform forensic analysis of the digital traces in the data obtained.
In the next section of this paper, related literature on cyber attacks in the smart grid was reviewed. Subsequently, Section III presents the development of a testbed that is used to simulate the cyber attack in components of the smart grid. Section IV presents the result from the simulation and how forensic investigations are done to investigate FDI attacks in the smart grid environment. Section V provides a conclusion to the paper.
A. False Data Injection Attack (FDIA)
The operation of the smart grid faces extreme consequences when smart meters have been compromised and report false power consumption. Most current cases involve the crime of electricity theft. However, a few other sorts of data falsification attacks are conceivable, such as FDIA. AMI would be badly affected by this kind of attack, as data falsification is difficult to detect.
In [8], the authors work on detecting data-falsification injection attacks focusing on smart grid systems. They developed an effective, real-time scheme to detect FDIA in smart grids, in which they evaluate the reliability of state estimations by exploiting spatial-temporal correlations and trust-based voting. The objective of the study is to minimize the harm from the threat of FDIA in smart grids by using these solutions to detect an attack. The case study was done by simulation of the smart grid and of the proposed solutions for detecting malicious FDIA. It is suggested that powerful countermeasures are necessary, as these kinds of attacks can become highly potent threats and FDIA is evolving to employ anti-forensic techniques that prevent detection of the attack.
In [9], the study proposed a system to detect cyber-attacks that aim to sabotage the Instrumentation and Control (I&C) environment. The study intends to provide a last line of defense to sabotage attacks. A system called Goosewolf was produced which has the capability to detect when an adversary has manipulated the process control of the Programmable Logic Controller (PLC). The result obtained in that study shows that the proposed system is effective in checking the capabilities of the PLC and the ability to detect FDIA.
Another study [10] focused on statistical anomaly detection techniques to address the difficulties of detecting data falsification in AMI. To identify compromised smart meters under deductive and additive attacks, the authors proposed a trust model based on Kullback-Leibler divergence. Moreover, techniques such as generalized linear and Weibull-function-based kernels were proposed for camouflage and conflict attacks. After comparing performance under various attacks, namely additive, deductive, camouflage and conflict, they found that their models achieve high true-positive detection rates, with an average false-positive rate of just 8 per cent for most of the attacks conducted.
B. IoT Testbeds
For the purpose of better understanding the vulnerabilities of IoT devices, researchers utilize security testbeds designed to simulate attacks in a particular environment. The author in [11] presented a testbed for securing IoT devices that can be used as a penetration testing platform to evaluate the risks and vulnerabilities of IoT devices. The penetration tests included port scanning, vulnerability scanning, downgrade attacks, searching for exploits, brute forcing directories, passwords and port services, and SSL configuration. The software used to perform the testing included Snitch, ZAP, Wascan, Skipfish, Nmap, TLS proper, SSLScan, Nikto, Wireshark, Ettercap, Dirb, SQLmap, WAFWooF, Metasploit, Dex2jar, Binwalk, and UART. The network protocols used were WIFI and BLE. Penetration testing for this analysis was conducted on a smart bulb and an IP camera. The vulnerabilities found were very common problems in IoT-based products, such as no firewall, authentication in plain text, open ports, lack of certificates, etc. Moreover, paper [12] presented a testbed designed to analyze security issues in IoT devices. This testbed specified design and architecture prerequisites to support the development of penetration testing for the purpose of cybersecurity forensic investigation. The authors conducted tests based on the security vulnerabilities in IoT products such as Amazon Echo, Nest Cam, Phillips hue, SENSE Mother, Samsung SmartThings, Witching HOME, WeMo Smart Crock-Pot, and Netatmo Security Camera. The study was conducted using WIFI and Bluetooth. For control and administration, processes and events were handled using NI TestStand software, a closed-source package that runs only on Windows OS, which is highly restrictive and proprietary. A major downside is the resulting limitation in network penetration testing capabilities, since the software cannot handle passive capture of packets, wireless cards, and other network or low-level functionalities.
In [13], researchers used SecuWear to recognize weaknesses in commercial hardware. The testbed collects the data needed for distinguishing different attacks, thereby assessing the security of wearable devices. Besides, it provides a means of collecting information and performing attacks in a network that used WIFI and BLE. The software used to perform the vulnerability assessments and penetration testing was Wireshark. In that study, eavesdropping and Denial of Service (DOS) attacks were executed. The results of the study found that SecuWear vulnerabilities may be similar to certain open-source issues, such as false positives when recognizing security issues.
Fig. 1 shows the topology of our testbed that is used to perform FDIA. The testbed consists of four main hardware components: two Raspberry Pi 4 Model B units, a computer and a switch. The smart meter (192.168.1.13) generates random data to mimic a real smart meter and sends the generated data to the data collector, which contains one virtual machine running the Ubuntu 21.04 operating system and acts as a data collector using the MySQL version 10.14.9-MariaDB database, receiving incoming data from the smart meter. The attacker smart meter (192.168.1.11) acts as an attacker performing FDIA, attempting to tamper with the smart meter data sent to the data collector. A comparative analysis between normal traffic logs and logs during the attack was made to verify the FDIA investigations in the IoT environment. Forensic evidence was analyzed based on the packets captured using Wireshark in the form of PCAP files.
IV. DISCUSSION OF FINDINGS
The experiments conducted on the testbed were carried out in two phases. The first phase of the experiment was "Normal operation", and the second phase was "Under attack". The details of the experiments are explained later in this section. Fig. 2 shows the flow of the experiment during normal traffic and under attack.
A. Normal Operations
During the normal traffic phase, the smart meter shown in Fig. 1, with IP address 192.168.1.13 and MAC address Raspberry_bc:06:f9, sends data to the data collector, whose database has IP address 192.168.1.20 and MAC address VMware_98:1f:74. The smart meter sends power consumption data at 10-second intervals to imitate one week of data recorded at 30-minute intervals. In this experiment, 137 readings were collected using the Wireshark version 3.4.5 packet capturing tool. The consumption transmission script runs for 25 minutes to collect a total of roughly 135 to 140 readings. Fig. 3 shows the data sent by the smart meter to the data collector: the power consumption value with its timestamp. Fig. 4 shows sample data sent by the smart meter to the data collector in the MySQL database. The first column shows the record number in the database, the second column shows the ID of the smart meter, the third and fourth columns display the timestamp of when the data was accepted, and the last column shows the value of the power consumption. As shown in Fig. 5, the data collector answers the ARP request of the smart meter, and the smart meter replies with its own MAC address, Raspberry_bc:06:f9. Samples of the ARP tables for the smart meter and the data collector are shown in Fig. 6 and Fig. 7. Based on the data gathered during the normal operations experiment, no anomalies were detected in Wireshark, in the ARP cache on the smart meter, or in the ARP cache on the data collector. The data sent from the smart meter has the same value as the data stored in the database.
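The behaviour of the consumption-transmission script can be sketched as below: a random reading is generated and inserted into the collector's MySQL/MariaDB database every 10 seconds. The table and column names, credentials, and value range are illustrative assumptions, not the testbed's actual schema.

```python
# Sketch of the smart-meter side: generate a random power-consumption reading
# every 10 seconds and insert it into the data collector's MySQL/MariaDB
# database. Table/column names, credentials and value range are assumptions.
import random
import time
from datetime import datetime

import mysql.connector  # pip install mysql-connector-python

conn = mysql.connector.connect(host="192.168.1.20", user="meter",
                               password="secret", database="ami")
cur = conn.cursor()

METER_ID = "SM-13"
for _ in range(137):                                 # ~25 minutes of readings
    reading = round(random.uniform(0.5, 5.0), 2)     # illustrative value range
    cur.execute(
        "INSERT INTO consumption (meter_id, recorded_at, value) "
        "VALUES (%s, %s, %s)",
        (METER_ID, datetime.now(), reading),
    )
    conn.commit()
    time.sleep(10)                                   # 10-second interval
```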
B. Under Attack
For the attack simulation, the smart meter with IP address 192.168.1.13 sends data as usual. However, another Raspberry Pi is included to imitate the attacker in this phase. The attacker, with IP address 192.168.1.11 and corresponding MAC address Raspberry_bc:06:27, performs ARP spoofing on the smart meter and the data collector in the topology. Once the attacker has intercepted and changed the power consumption value, the tampered packet is forwarded back to the data collector over IPv4.
By performing ARP spoofing on the legitimate smart meter and the data collector, the attacker machine becomes the gateway for both of these devices. The attacker can now sniff traffic and perform further attacks, as the attacker already has access to the transferred data. All communication between the smart meter and the data collector now has to pass through the attacker's machine before reaching its destination.
A packet manipulation script is used to change the value of the power consumption. In this experiment, the power consumption is increased by 12 on every reading. The difference is shown in Fig. 8, where the data generated and sent to the data collector does not tally with Fig. 9, which shows that the data accepted by the data collector was not the legitimate data sent by the smart meter. The data in the database has been modified because it was intercepted and then sent on to the data collector by the attacker. Fig. 10 shows the view from the attacker machine: every intercepted reading is modified, with an increment of 12 applied, and then forwarded to the destination. For the MITM part, this study successfully performs packet manipulation by using pattern-searching tools and some modifications to the iptables rules so that only the packets that need to be modified are passed through. The evidence captured using Wireshark is explained based on Fig. 11. Note the highlighted line: the attacker sends a broadcast reply telling the data collector that the smart meter's MAC address is now at the attacker's MAC address, Raspberry_bc:06:27. There is also evidence of duplicate IP address use in the collected traces. Fig. 12 and Fig. 13 show the corresponding ARP caches during the attack. Based on the data gathered during the under-attack experiment, anomalies were detected in Wireshark, in the ARP cache on the smart meter, and in the ARP cache on the data collector. The data sent from the smart meter has a different value from the data stored in the database, as it was changed by the attacker.
C. Forensic Analysis
In this section, the PCAP file that stored all the digital evidence was extracted and analyzed. The analysis and comparison of the collected evidence in this study are used for in-depth analysis. Fig. 14 shows the steps taken during the forensic analysis.
The analysis process begins by collecting the packets captured using Wireshark from the client during normal traffic and while under attack. Packets are also collected from the data collector during normal traffic and while the network is under attack. The records from the normal traffic phase are used as a benchmark for the comparative analysis to investigate the FDIA in the AMI. Fig. 15 shows that Wireshark captured another MAC address (bc:06:27). In addition, Fig. 16 shows that the use of duplicate IP addresses was reported. This strengthens the evidence collected, as shown in Fig. 12 and Fig. 13. It could be observed that the IP address 192.168.137.13, which was earlier known to be the IP address of the smart meter, now has two MAC bindings: Raspberry_bc:06:f9 (the initial MAC address) and Raspberry_bc:06:27 (owned by the attacker machine in the network), which is the outcome of ARP poisoning/spoofing. Fig. 16 shows that the time to live (TTL) of the packet from the client was 64 (left), and it was still 64 when it reached the data collector (right). This is normal, as there is no router involved in this topology. However, Fig. 17 shows that the TTL differs between when the data was sent from the smart meter (left) and when it was accepted at the data collector (right) while the network was under attack.
As shown in Fig. 17, when the data was sent out from the smart meter (left) the TTL was 64, but when it reached the database the TTL of the packet was 63, indicating that the packet had travelled somewhere else before reaching the data collector. In the normal topology the smart meter is assumed to deliver data directly to the data collector through a switch, with no router involved, so the TTL of the packet should not be modified. The decrement occurs because the attacker intercepted the packet and modified its data before forwarding it to the real destination. As displayed in Fig. 18, the time taken for the data collector to respond when the smart meter wants to send data is much lower under normal traffic than when under attack. Fig. 19 shows the average time taken by the data collector to respond to the smart meter's send requests; there is a large gap between the response times during normal traffic and while the network is under attack. It can be concluded that the delay observed while the network is under attack is caused by the longer path and the extra processing the data undergoes: the data travels through the attacker, is processed there, and is only then forwarded to the destination. Based on the forensic analysis conducted, the changes detected in Wireshark include a single IP address having two different MAC addresses, one belonging to the legitimate smart meter and the other to the attacker; other detected changes are the TTL of the packet and the time taken for the data collector to respond to the request query.
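The indicators identified above (duplicate IP-to-MAC bindings, unexpected TTL decrement) can be checked programmatically over a capture file. The sketch below is a hypothetical helper, not part of the study's toolchain; the capture file name, the smart meter IP, and the expected TTL of 64 are analyst-supplied assumptions:

# Hypothetical FDIA-indicator scan over a Wireshark/tcpdump capture file.
from collections import defaultdict
from scapy.all import rdpcap, ARP, IP

def scan(pcap_file, meter_ip="192.168.137.13", expected_ttl=64):
    packets = rdpcap(pcap_file)
    bindings = defaultdict(set)       # IP -> set of MAC addresses claiming it
    ttl_anomalies = 0

    for pkt in packets:
        if pkt.haslayer(ARP):
            bindings[pkt[ARP].psrc].add(pkt[ARP].hwsrc)
        if pkt.haslayer(IP) and pkt[IP].src == meter_ip and pkt[IP].ttl != expected_ttl:
            ttl_anomalies += 1        # TTL changed although no router is expected

    for ip, macs in bindings.items():
        if len(macs) > 1:             # one IP claimed by several MACs -> ARP spoofing
            print(f"duplicate binding: {ip} -> {sorted(macs)}")
    print(f"packets from {meter_ip} with unexpected TTL: {ttl_anomalies}")

scan("under_attack.pcap")

Running the same scan on the normal-traffic capture and the under-attack capture, as done in the comparative analysis above, makes the anomalies stand out against the benchmark.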
V. CONCLUSION
This study's primary motivation was to study the impact of FDIA in the IoT environment and to perform forensic analysis on the digital traces obtained. Based on the data obtained from the experiments, the proposed list of attributes for forensic analysis could be useful to trace FDIA. Future work should explore different types of attacks, such as buffer overflow payloads that may result in a system crash and create a path for attackers to initiate further malicious actions. Future studies may also focus on integrating forensic-by-design principles into the design of any critical system, because it is very difficult to determine what has happened if there is no log or other proof; if the system can produce a series of events, it becomes much easier for a forensic investigator to reconstruct those events and to identify the available sources and types of potential evidence in such cases. | 4,481.4 | 2021-01-01T00:00:00.000 | [
"Computer Science",
"Engineering",
"Environmental Science"
] |
Decay of I-ball/oscillon in classical field theory
I-balls/oscillons are long-lived, spatially localized solutions of real scalar fields. They are produced in various contexts in the early universe, such as during the evolution of the inflaton and of the axion. However, their decay process has long been unclear. In this paper, we derive an analytic formula for the decay rate of I-balls/oscillons within classical field theory. In our approach, we calculate the Poynting vector of the perturbation around the I-ball/oscillon profile by solving a relativistic field equation, from which the decay rate of the I-ball/oscillon is obtained. We also perform a classical lattice simulation and numerically confirm the validity of our analytical formula for the decay rate.
Introduction
Scalar fields are essential ingredients in particle physics and cosmology. They are ubiquitous in many low energy effective field theories, as they provide concise descriptions of spontaneous symmetry breaking. The scalar fields corresponding to the Nambu-Goldstone bosons also appear in many well-motivated high energy theories. In the de facto standard model of the cosmic inflation, inflation is driven by the scalar potential of a scalar field, the inflaton [1][2][3][4]. The scalar fields are also indispensable if supersymmetry is realized in nature.
In this paper, we study the time evolution of the I-ball/oscillon which appears in real scalar field theories. The I-ball/oscillon has long been recognized as a spatially localized solitonic state appearing in a real scalar field theory [5][6][7]. The I-ball/oscillon is associated with a conserved charge, the adiabatic charge I [8,9], just as the topological solitons (i.e., domain walls, monopoles, cosmic strings) [10][11][12][13] and the non-topological solitons (i.e., Q-balls) [14][15][16][17][18][19] are associated with their corresponding conserved (topological) charges. The I-ball/oscillon can also be regarded as a Q-ball in the non-relativistic field theory, where the adiabatic charge I is reduced to the charge of an approximate U(1) symmetry related to particle number conservation [20]. These two pictures are consistent with each other since the adiabatic invariant I is well conserved when the quadratic potential dominates the scalar potential, and hence when the non-relativistic limit is valid.
The I-ball/oscillon is produced in various contexts of the early universe. For example, the oscillations of the inflaton after inflation can lead to a strong inhomogeneity through the self-resonance, which results in the formation of the I-ball/oscillon [21][22][23][24][25][26][27]. The inflatonic I-ball/oscillon formation produces gravitational waves, and their spectrum is studied in refs. [28,29]. The axion can also form the I-ball/oscillon, which is sometimes called the "axiton" [30][31][32]. The axion [33][34][35][36][37] is the Nambu-Goldstone boson associated with spontaneous symmetry breaking of the Peccei-Quinn symmetry [38,39], which is the most attractive solution to the strong CP problem [40]. Due to axiton formation, the axion can be spatially localized in the universe, which could have a significant impact on axion search experiments.
The conservation of the adiabatic charge I, or of the U(1) charge in the non-relativistic limit, is not exact. Accordingly, the I-ball/oscillon is not completely stable and decays eventually. Although the physics of the I-ball/oscillon has been studied in many papers [9,[41][42][43][44][45][46][47][48], the decay process of the I-ball/oscillon has not been fully understood. It is only recently that an analytic formula for the I-ball/oscillon decay has been derived, based on the Q-ball picture, where the decay rate is calculated in the Feynman diagrammatic approach [49,50].
The main purpose of this paper is to revisit the decay process of the I-ball/oscillon. In our approach, we solve the relativistic classical field equation of the perturbation around the I-ball/oscillon solution. By calculating the Poynting vector of the perturbation, we estimate how the localized energy of the I-ball/oscillon leaks out, which gives the decay rate of the I-ball/oscillon. Because our analysis only uses the classical field equation, it is more straightforward than the analysis in [49,50]. Our analysis also clarifies the physical picture of the I-ball/oscillon decay. The decay process is just a leakage of the localized energy of the I-ball/oscillon via a classical emission of the relativistic modes of the scalar field. We also validate our analytical formula of the decay rate by performing a classical lattice simulation.
The organization of this paper is as follows. In section 2, we briefly review the I-ball/oscillon solution in both the Q-ball picture and the adiabatic invariant picture. In section 3, we calculate the I-ball/oscillon decay rate by solving a relativistic field equation of the perturbation around the I-ball/oscillon configuration. In section 4, we perform a classical lattice simulation to validate our perturbative analysis. Finally, in section 5, we summarize our results.
I-ball/oscillon solution
In this section, we briefly review the I-ball/oscillon solution in a real scalar field theory. In 2.1 we explain the Q-ball description of the I-ball/oscillon following refs. [20,49] and see that the I-ball/oscillon is associated with particle number conservation. In 2.2, we re-derive the I-ball/oscillon solution by using the conservation of the adiabatic charge [8,9]. The I-ball/oscillon profiles derived by these two approaches coincide with each other when the quadratic term dominates the scalar potential.
I-ball/oscillon as Q-ball
Let us consider a classical field theory of a real scalar field φ with a Lagrangian density
in which we assume a scalar potential with coupling constants g_n. The equation of motion of the field follows from this Lagrangian, where V' = ∂V/∂φ, and so does the corresponding energy density. Let us take the non-relativistic limit by expanding φ(x) in terms of a complex scalar field Ψ, where we assume |∂_0 Ψ| ≪ |mΨ|, |∂_0² Ψ| ≪ |m² Ψ|, and |∇² Ψ| ≪ |m² Ψ|. By substituting φ_NR into the Lagrangian and the energy density and taking their time average over a time scale much longer than m⁻¹ but much shorter than that of the time variation of Ψ(t, x), the terms proportional to e^{inmt} (n ≠ 0) drop out. The resultant effective Lagrangian and the time-averaged energy density are given in eqs. (2.7) and (2.8). In this approximation terms with an odd number of Ψ vanish, and the time-averaged Lagrangian exhibits a U(1) symmetry which corresponds to the conservation of the particle number. The conserved charge is given in eq. (2.10). It should be stressed that no particle creation is allowed via the interaction terms in the non-relativistic limit, which is the reason why we have an approximate U(1) symmetry. Now, let us find a Q-ball solution for a given Q_0 by the Lagrange multiplier method, because the field configuration of the Q-ball is obtained by minimizing the time-averaged energy for a given charge.
Here, the time-averaged energy density E does not coincide with the effective Hamiltonian density derived from the effective Lagrangian in eq. (2.7), with Π = Ψ̇ − imΨ being the canonical momentum of Ψ†. The corresponding symmetry is Ψ → Ψ' = e^{iα}Ψ.
Then, a Q-ball solution should satisfy eq. (2.15), where V'_eff(ψ) denotes the derivative with respect to ψ. The necessary condition for the existence of solutions of eq. (2.15) then follows. The parameter ω = m − µ is chosen so that the solution carries the prescribed charge Q_0, and the total energy of the solution follows from these definitions; the relation between the energy and the charge can be shown by taking the derivative with respect to ω and using the equation of motion eq. (2.15). Finally, let us comment on the relation between the time-averaged energy density eq. (2.8) and the Hamiltonian density eq. (2.6). For the I-ball/oscillon solution, these densities are related through the charge density of the I-ball/oscillon, q_0 = 2(m − µ)ψ²(r). Thus, the I-ball/oscillon solution which minimizes E for a given value of Q_0 also minimizes H_NR.
I-ball/oscillon from adiabatic invariance
The I-ball/oscillon solutions are obtained in ref. [8] as localized scalar field configurations which minimize their time-averaged energy for a given adiabatic charge I. The adiabatic invariant is approximately conserved when the scalar field dynamics is dominated by a quadratic potential. The adiabatic invariant is defined in eq. (2.23), where ω is the angular frequency of the oscillating field and the overbar denotes the average over one period of the oscillation. The I-ball/oscillon solution is obtained by minimizing E_λ, where λ is the Lagrange multiplier and V denotes the scalar potential in eq. (2.2). Since the I-ball/oscillon solution exists when the mass term dominates the scalar potential, the solution can be written as φ(t, x) = 2ψ(x) cos(ωt) in good approximation, where ω is nearly equal to but less than m. Thus we define µ as µ = m − ω ≪ m. Using φ = 2ψ cos(ωt), E_λ can be rewritten such that the time-averaged scalar potential coincides with V_eff(ψ) in eq. (2.9). Assuming the configuration is spherical, i.e. ψ(x) = ψ(r), the I-ball/oscillon solution is obtained from the resulting radial equation with the appropriate boundary conditions. The Lagrange multiplier λ is determined by using the equation of motion for φ. Multiplying this equation by cos(ωt) and averaging over a period, we obtain
As a result, we see that the I-ball/oscillon solution associated with the adiabatic charge I_0 is the same as the one derived in the previous section. The correspondence between the two approaches is made more evident by noting that the U(1) charge Q is nothing but the adiabatic charge for the I-ball/oscillon solution. It should again be emphasized that the conservation of the adiabatic charge and of the approximate U(1) charge is valid when the scalar potential is dominated by the quadratic term, which makes the oscillation frequency of the real scalar field very close to m, i.e. µ ≪ m.
Analytical calculation of I-ball/oscillon decay
In this section, we derive a formula for the scalar radiation from the I-ball/oscillon in classical field theory. For a given I-ball/oscillon solution, we solve the equation of motion of the perturbation and calculate the energy loss rate of the I-ball/oscillon.
Scalar radiation from I-ball/oscillon
Let us take the I-ball/oscillon (Q-ball) solution at t = t_0 and consider the perturbation ξ around it, φ(x) = 2ψ(r) cos(ωt) + ξ(x). When the perturbation is small, i.e., |ξ| ≪ |ψ|, the right-hand side of the equation of motion in eq. (2.3) can be approximated by its expansion to first order in ξ around the I-ball/oscillon background. In this approximation, the back reaction of the radiation is neglected. The equation of motion of ξ then follows.
To solve the equation of motion of ξ, let us assume that the I-ball/oscillon is placed at t_0 → −∞, so that ξ is radiated constantly. In this setup, the equation of motion can be easily solved by using the Fourier transformed fields, where G_ret is the Green function satisfying (□ + m²)G_ret = δ(x) with the retarded boundary condition, i.e. ε > 0. Here, ρ(t, x) denotes the right-hand side (times −1) of eq. (3.7). It should be noticed that the source at t' only affects ξ(t) for t > t'. The domain of p_0 is (−∞, ∞), as it just parameterizes the frequency. By using the Fourier transformation of ρ(t, x) in eq. (3.7), ρ̂(p_0, p) is written as
\[ \hat\rho(p_0,\mathbf p)=\sum_{n\ge 1}\sum_{k=0}^{n}\frac{\pi}{2^{n}}\,{}_{n}C_{k}\,\big[\delta(p_0-(n-2k)\omega)+\delta(p_0+(n-2k)\omega)\big]\,\tilde\rho_{n}(p)\,, \quad (3.12) \]
\[ \tilde\rho_{n}(p)=\int d^{3}x\,\rho_{n}(r)\,e^{i\mathbf p\cdot\mathbf x}=4\pi\int dr\,\rho_{n}(r)\,r\,\frac{\sin pr}{p}\,, \quad (3.13) \qquad \tilde\rho_{n}(-p)=\tilde\rho_{n}(p)\,. \quad (3.14) \]
Thus, ρ̂(p_0, p) does not depend on the direction of p but only on p = |p|. Solving the equation of motion of ξ (see appendix A for a detailed derivation), we obtain the asymptotic radiation in terms of
\[ \tilde\psi_{n}(p)=4\pi\int dr\,\psi_{n}(r)\,r\,\frac{\sin pr}{p}\,, \quad (3.16) \]
for r → ∞. Here, the summation over k is taken only for ω_nk > m, and hence n = 1 does not contribute since m − µ < m. Therefore ρ_1, and hence V_eff, do not contribute to the scalar radiation. Now, let us estimate how the localized energy around the I-ball/oscillon leaks out to r → ∞. The energy loss rate of the I-ball/oscillon is given by the flux of the Poynting vector T^{0r}. By averaging over time, we obtain the time-averaged flux for r → ∞. By using this time-averaged Poynting vector, the decay rate of the I-ball/oscillon for a given energy E and a charge Q_0 = I_0 is given in eq. (3.22), which is finite at r → ∞. By using Γ, the lifetime of an I-ball/oscillon with an initial charge Q_i = I_i follows, where Q_cr is the critical value of the charge below which no stable I-ball/oscillon exists (see the next subsection).
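Several display equations in this passage did not survive extraction; the logic can nevertheless be restated schematically (a hedged paraphrase, not the paper's exact expressions): the time-averaged radial flux through a large sphere gives the energy-loss rate, the decay rate is the fractional loss per unit time, and the lifetime follows by integrating over the charge, using E ≃ mQ in the non-relativistic regime:
\[ \frac{dE}{dt}\;\simeq\;-\lim_{r\to\infty}\int_{S^{2}} r^{2}\,\overline{T^{0r}}\,d\Omega\,, \qquad \Gamma(Q_0)\;\equiv\;\frac{1}{E}\left|\frac{dE}{dt}\right|\,, \qquad \tau\;\simeq\;\int_{Q_{\rm cr}}^{Q_i}\frac{dQ}{\Gamma(Q)\,Q}\,. \]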
Example
Here we estimate the decay rate for a specific potential. In the following, we consider the scalar potential with g_4 = −3! and g_6 = 0.4 × 5! m⁻² to conform with the analysis in [49]. The scalar potential with these parameters satisfies the I-ball/oscillon (Q-ball) condition in eq. (2.18). In figure 1, we show the I-ball/oscillon configuration for a given ω. It is seen that ψ(r) is well described by the Gaussian profile for ω ≳ 0.9. The profile deviates from the Gaussian shape for smaller ω (e.g. ω = 0.85).
[Figure 2. Left) The relations between ω (blue), ψ_0 = ψ(r = 0) (yellow), and R_Q (green) and Q_0 = I_0. The normalization of the parameters Q_0 and ψ_0 is smaller than in [49] by a factor of two. Right) The enlarged plot of ω.]
[Figure 3. Plots of ψ̃_n(ω_nk) for given Q_0. We see that ψ̃_5(ω_50) is subdominant compared with ψ̃_3(ω_30) and ψ̃_5(ω_51).]
In figure 2, we show ω (blue), ψ_0 = ψ(r = 0) (yellow), and R_Q (green) as functions of Q_0 = I_0, which reproduce figure 1 in [49]. Here, ψ_0 and R_Q are defined by fitting the profile with a Gaussian. In the figure, we show only the parameters for stable solutions, i.e. dω/dQ_0 < 0 [49]. There is no stable solution for charges smaller than the critical value, Q_cr ≃ 10^1.9. We also plot ψ̃_n(ω_nk) for given Q_0 = I_0 in figure 3. The figure shows that ψ̃_5(ω_50) is subdominant compared with ψ̃_3(ω_30) and ψ̃_5(ω_51). This can be understood from the fact that the emission of the mode with ω_30 = ω_51 = 3ω corresponds to the first excited state, while that of ω_50 corresponds to the second excited state.
[Figure 4. The time derivative of the I-ball/oscillon energy (left) and the decay rate Γ (right) for given Q_0.]
Figure 4 shows the absolute value of dE/dt (left) and the decay rate Γ (right) for given Q_0. As Γ is dominated by the contributions from ψ̃_3(ω_30) and ψ̃_5(ω_51), the positions of the zeros of the decay rate are determined by the zero points of g_4 ψ̃_3(ω_30)/3! + g_6 ψ̃_5(ω_51)/5!, though the decay rate is not exactly vanishing at these points due to the contributions of other modes such as ψ̃_5(ω_50). The I-ball/oscillon loses its energy gradually by emitting relativistic radiation at the rate given in the figure. As there is no stable I-ball/oscillon solution below Q_cr ≃ 10^1.9, the I-ball/oscillon rapidly decays once the charge reaches Q_0 = I_0 = Q_cr (see section 4.2).
Validation of the analytic decay rate
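For reference, the Gaussian fitting ansatz and the example potential can be written out explicitly; the overall g_n normalization shown here (couplings entering as g_n φ^n/n!) is an assumption about the authors' conventions, suggested by the factorial values quoted for g_4 and g_6:
\[ \psi(r)\simeq\psi_{0}\,e^{-r^{2}/R_{Q}^{2}}\,,\qquad V(\phi)=\frac{1}{2}m^{2}\phi^{2}+\frac{g_{4}}{4!}\phi^{4}+\frac{g_{6}}{6!}\phi^{6}=\frac{1}{2}m^{2}\phi^{2}-\frac{1}{4}\phi^{4}+\frac{1}{15\,m^{2}}\phi^{6}\,. \]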
Setup of numerical simulation
To confirm the validity of the analytical calculation in the previous section, we perform a classical lattice simulation of the time evolution of a real scalar field φ. We calculate the relation between the I-ball/oscillon charge Q_0 = I_0 and the time derivative of the I-ball/oscillon energy Ė.
In the simulation, units of energy and time are taken to be m and m⁻¹. We also assume that the configuration of φ is spherically symmetric in three spatial dimensions, so the equation of motion of φ is given in eq. (4.2). The potential is the same as that adopted in 3.2, where g_4 = −3! and g_6 = 0.4 × 5!. To avoid the divergence of the second term on the right-hand side of eq. (4.2), we impose a regularity condition at the origin. At the boundary r → ∞, we impose the absorbing boundary condition (see the appendix of reference [52] for details). Under this condition, radiation of the real scalar field emitted from the I-ball/oscillon is absorbed at the boundary, so that we can calculate the dynamics of the I-ball/oscillon correctly. For the initial condition, we use the theoretical I-ball/oscillon profile for a given ω_ini together with the corresponding initial time derivative. We choose ω_ini properly to acquire the desired value of the I-ball/oscillon charge Q_0. The other simulation parameters are shown in table 1. We develop our own classical lattice simulation code, in which the time evolution is calculated by the fourth-order symplectic integration scheme and the spatial derivatives are computed by the fourth-order central difference scheme. To check the correctness of the code, we have confirmed that the results do not significantly change when we set different simulation parameters (box size L, grid size N, time step ∆t).
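As an illustration of this kind of setup, here is a minimal spherically symmetric lattice evolution in Python. It is a sketch only, not the authors' code: it uses a simple leapfrog step and second-order spatial differences rather than the fourth-order schemes quoted above, a crude fixed outer boundary instead of the absorbing one, the same assumed g_n/n! normalization of the potential as noted earlier, and illustrative Gaussian initial parameters.

import numpy as np

# Minimal sketch (not the authors' code) of the spherically symmetric evolution
# phi_tt = phi_rr + (2/r) phi_r - V'(phi) in units m = 1, with the assumed
# normalization V = phi^2/2 + g4 phi^4/4! + g6 phi^6/6!.
g4, g6 = -6.0, 48.0                       # g4 = -3!, g6 = 0.4 * 5!

def dV(phi):
    return phi + g4 * phi**3 / 6.0 + g6 * phi**5 / 120.0

L, N, dt = 60.0, 1200, 0.01
r = np.linspace(0.0, L, N)
dr = r[1] - r[0]

psi0, RQ = 0.4, 5.0                       # illustrative Gaussian initial profile
phi = 2.0 * psi0 * np.exp(-(r / RQ) ** 2)
pi = np.zeros_like(phi)                   # phi_t = 0 initially

def spatial(f):
    """f'' + (2/r) f', regular at r = 0, crude fixed outer edge."""
    out = np.zeros_like(f)
    out[1:-1] = (f[2:] - 2.0 * f[1:-1] + f[:-2]) / dr**2 \
              + (f[2:] - f[:-2]) / (dr * r[1:-1])
    out[0] = 6.0 * (f[1] - f[0]) / dr**2  # 3 f''(0), using f'(0) = 0
    return out

for _ in range(20000):                    # leapfrog (kick-drift-kick)
    pi += 0.5 * dt * (spatial(phi) - dV(phi))
    phi += dt * pi
    pi += 0.5 * dt * (spatial(phi) - dV(phi))

# Energy localized within r < 20 as a rough proxy for the I-ball/oscillon energy.
grad = np.gradient(phi, dr)
V = 0.5 * phi**2 + g4 * phi**4 / 24.0 + g6 * phi**6 / 720.0
mask = r < 20.0
E = np.sum((0.5 * pi**2 + 0.5 * grad**2 + V)[mask] * 4.0 * np.pi * r[mask]**2) * dr
print(f"localized energy estimate after t = {20000 * dt}: {E:.4f}")

With the fixed outer edge, outgoing radiation eventually reflects back, so a production run would need the absorbing boundary condition described in the text.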
Result
In numerical simulations, we cannot calculate Q_0 or I_0 directly, since Q_0 is defined in terms of Ψ while I_0 is defined by an average over one period of the oscillation as in eq. (2.23). Instead, we approximate Q_0 = I_0 by a charge Q defined directly from the simulated field configuration. We also cannot calculate the relation for Q ≲ 10^1.9, because the field does not have a stable I-ball/oscillon solution in this range (see figure 6), so we remove the data after the I-ball/oscillon decay for clarity. The two results look slightly different at large charge (Q ≳ 10^2.5) because the approximation µ ≪ m may not be appropriate, as explained in 3.2. From this figure, we find that the result of the analytical calculation is almost in agreement with the simulation results.
where T_ave = 100 is the duration of the time average. This value is much larger than 2π/ω ∼ 10, but much smaller than the typical time scale of the I-ball/oscillon decay, 1/Γ ≳ 10^4. Thus T_ave = 100 does not affect the results of our simulations.
We also take the time average to calculate the I-ball/oscillon energy and compute Γ = Ė/E by the fourth-order central difference scheme. The results of the simulations are shown in figure 5, where they are compared with our analytical calculation (see figure 4). The figure shows that the analytical results are in good agreement with the results of the classical lattice simulation for Q_0 ≲ 10^2.5. On the other hand, for an I-ball/oscillon with a large charge Q_0 ≳ 10^2.5, the lattice results deviate from the analytical results. The deviation is partly because the approximation µ ≪ m is no longer valid for Q_0 ≳ 10^2.5 (see 3.2). Because we set the final time of the numerical simulation to t = 10^5, decay rates smaller than ∼ 10^−7 cannot be shown in figure 5.
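The post-processing just described can likewise be sketched; the following hypothetical snippet assumes a uniformly sampled time series of the (already time-averaged) localized energy E(t) and applies the fourth-order central difference mentioned above:

import numpy as np

# Hypothetical post-processing sketch: Gamma = |dE/dt| / E from a uniformly
# sampled, time-averaged energy series E(t). Not the authors' analysis code.
def decay_rate(t, E):
    dt = t[1] - t[0]
    # fourth-order central difference, valid at the interior points
    dEdt = (-E[4:] + 8.0 * E[3:-1] - 8.0 * E[1:-3] + E[:-4]) / (12.0 * dt)
    return np.abs(dEdt) / E[2:-2]

# Toy check with an exponential decay, Gamma_true = 1e-4:
t = np.arange(0.0, 1.0e5, 100.0)
E = 200.0 * np.exp(-1.0e-4 * t)
print(decay_rate(t, E)[:3])   # should be close to 1e-4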
As we mentioned in the previous section, there is no stable I-ball/oscillon solution for Q ≲ 10^1.9 ≃ 80. Accordingly, we expect the I-ball/oscillon to decay rapidly when its charge reaches Q_cr ≃ 80. This situation is realized in the numerical simulation for ω_ini = 0.910, as shown in figure 6. In this case, the I-ball/oscillon charge Q decreases to ≃ 10^1.9, after which the configuration rapidly decays.
Conclusion
In this paper, we have derived the decay rate of the I-ball/oscillon within classical field theory. Our method applies to various scalar field models (potentials) that exhibit long-lived, spatially localized and time-dependent solutions. Our analysis clarifies that the decay process is just a leakage of the localized energy of the I-ball/oscillon via a classical emission of the relativistic modes of the scalar field. From the point of view of the adiabatic charge, the decay process is caused by the deviation of the scalar potential from the quadratic one, for which the adiabatic invariant is not precisely conserved. From the point of view of the U(1) charge, it corresponds to the U(1) symmetry breaking due to the violation of the non-relativistic approximation by the emission of the relativistic modes.
To validate our analytical approach, we have performed a classical lattice simulation. There, the classical relativistic field equation is solved by setting the initial condition of the real scalar field to an I-ball/oscillon configuration. The results are in good agreement with the analytical result. For Q_0 ≃ 10^2.1, for example, the lifetime of the I-ball/oscillon is t ∼ 10^4 m⁻¹, which is expected from the estimation of the decay rate Γ (see figure 4). The agreement between the analytical result and the numerical simulation shows that the leading order approximation in our analytical calculation is sufficient to obtain the decay rate of the I-ball/oscillon, since the numerical calculation does not rely on the perturbative expansion of the solution around the I-ball/oscillon. | 4,953.2 | 2019-04-01T00:00:00.000 | [
"Physics"
] |
On the conformal higher spin unfolded equation for a three-dimensional self-interacting scalar field
We propose field equations for the conformal higher spin system in three dimensions coupled to a conformal scalar field with a sixth order potential. Both the higher spin equation and the unfolded equation for the scalar field have source terms and are based on a conformal higher spin algebra which we treat as an expansion in multi-commutators. Explicit expressions for the source terms are suggested and subjected to some simple tests. We also discuss a cascading relation between the Chern-Simons action for the higher spin gauge theory and an action containing a term for each spin that generalizes the spin 2 Chern-Simons action in terms of the spin connection expressed in terms of the frame field. This cascading property is demonstrated in the free theory for spin 3 but should work also in the complete higher spin theory.
Introduction
The purpose of this paper is to propose a set of field equations in three dimensions that describe a fully interacting conformal system consisting of a scalar field and the higher spin theory generated by the SO(3, 2) higher spin algebra. We will follow the work [1] where this approach to the problem was discussed in some detail and some results were found indicating that this may be worth pursuing further.
After giving the proposed equations in section 2 we will first explain the notation and content of them and then present some of the arguments leading to their particular form together with some explicit checks that will give some support for this proposal. The higher spin part of the theory has its origin in a gauged SO(3, 2) Chern-Simons theory which can be reformulated as a generalization to all higher spins of the standard spin 2 Chern-Simons theory for the spin connection. This will be elaborated upon in section 3 where a cascading trick is used to relate the two different Chern-Simons formulations of the higher spin theory. Some additional comments are collected in the Conclusions.
Thus our main goal will be to present two higher spin equations, one field strength and one unfolded equation, and to show that the spin 0 (Klein-Gordon) and spin 2 (Cotton) equations can be reproduced. Note, however, that these equations are taken from the topological gauging of three dimensional CFTs with eight supersymmetries [2,3], where all coupling constants are determined in terms of the gravitational one g. This need not be the case in non-supersymmetric theories like the ones we deal with here. Once the higher spin equations are presented we can discuss their consequences for the field equations for spin 3 and above. Only a few such comments will be given here, while a more extensive survey is left for a future publication. We may note already at this point that the spin 2 equation above will be augmented by new terms with more than two derivatives of the scalar φ(x), provided higher spin frame fields also appear. This is true also for the equations of spin 3 etc. and follows directly from the fact that φ has conformal dimension L^{−1/2} and that the number of derivatives in the spin s equation is 2s − 1, which implies that the spin s frame field itself is of dimension L^{s−2}. Also the Klein-Gordon equation will contain terms with higher spin fields and more than two derivatives on the scalar.
We will find it convenient to write the Cotton equation (1.2) in irreps of SO(1,2). The point is that the trace is exactly the Klein-Gordon equation, which means that the rest of the Cotton equation is in the irrep 5 and takes the form of eq. (1.3), where we recall that the Cotton tensor is already in this irrep. The purpose of this paper is to suggest two higher spin field equations containing component equations for all spins ≥ 2 coupled to a scalar field φ with a φ^6 potential, and which in particular reproduce both the above Klein-Gordon and spin 2 Cotton equations. Already in [1], where this approach was discussed but without source terms in either equation, it was shown that the correct curvature scalar term does arise in the Klein-Gordon equation, together with a spin 3 contribution,
\[ \Box\phi-\tfrac{1}{8}R\phi+\tilde f\,\phi=0\,, \quad (1.4) \]
where the spin 3 term contains the trace f̃ := e^µ_a f̃_µ^a(1,3). The field f̃_µ^a(1,3), which is an expression containing three derivatives of the spin 3 frame field e_µ^{ab}, is discussed briefly later in this paper. The reader is advised to consult [1] for definitions and more details on the spin 3 sector of the higher spin system. The problematic issue of constructing source terms was mentioned in this context at the end of that paper, and a suggestion for how it can be solved is presented in the next section.
The higher spin algebras together with the linearized versions of the zero field strength and unfolded equations have been discussed in many papers in the past, see, e.g., [4][5][6][7][8][9][10] and references therein. The conformal higher spin sector of the theory that is the subject of this paper is also analyzed in a recent paper by Vasiliev [11], where its relation to higher spin theory in AdS_4 is used to draw conclusions about Lagrangians etc. The scalar sectors, on the other hand, are not the same. In fact, the scalar considered in this paper is the one discussed in [9]. The linearized spin 3 frame field system used in section 4 below is discussed in the "metric" formulation in, e.g., [12]. Furthermore, the explicit analysis of the conformal higher spin system performed in this paper is closely related to the more formal approach of σ-cohomology developed by Shaynkman and Vasiliev, see, e.g., [13][14][15].
The conformal interacting higher spin equations
The two basic field equations for the SO(3,2) conformal higher spin (HS) theory coupled to a scalar field with fifth order self-interactions that we propose and study here are the unfolded equation
\[ D\Phi\,|0\rangle_q = S\,, \quad (2.1) \]
where D = d + A, and the following field strength equation valued in the SO(3,2) higher spin algebra
\[ F = T\,. \quad (2.2) \]
The higher spin setup
We now explain the notation and content of these equations following [1]. F = dA + A ∧ A is the HS field strength obtained from the HS gauge field A, with the expansion A = Σ_{n≥1} A_n. To understand the structure of this gauge field we give the parts of the higher spin conformal system that will explicitly play a role below, namely the spin 2 part
\[ A_1 = e^{a}P_{a}+\omega^{a}M_{a}+b\,D+f^{a}K_{a}\,, \]
where the generators of translations, Lorentz transformations, dilatations and special conformal transformations are, respectively, P_a, M_a, D and K_a, with their associated gauge fields e^a, ω^a, b and f^a, and the spin 3 part
\[ A_2 = e^{ab}P_{ab}+\tilde e^{ab}\tilde P_{ab}+\tilde e^{a}\tilde P_{a}+\tilde\omega^{ab}\tilde M_{ab}+\tilde\omega^{a}\tilde M_{a}+\tilde b\,\tilde D+\tilde f^{a}\tilde K_{a}+\tilde f^{ab}\tilde K_{ab}+f^{ab}K_{ab}\,. \quad (2.5) \]
The gauge fields (lower case quantities) and generators (upper case) of the HS algebra appearing in these expressions are all in irreps, i.e., the indices a_1...a_n are totally symmetric and traceless sets of three-dimensional vector indices. The fields e^{a_1...a_n} are the spin s = n + 1 frame fields, and we will call f^{a_1...a_n} the Schouten tensor, since D f^{a_1...a_n} + ... = 0 turns out to be the spin s = n + 1 Cotton equation, which is of order 2s − 1 in derivatives.
We emphasize here that all the fields depend only on the three dimensional space-time coordinates x^µ; there is thus no dependence on any other coordinates or auxiliary variables like the ones often appearing in Vasiliev's constructions of interacting higher spin theories in AdS.
The HS algebra can be defined as follows. Consider the so(2,1) ≈ sp(2,R) spinor variables q_α, p_α (with α, β, ... = 1, 2), which are hermitian operators satisfying [q_α, p^β] = iδ_α^β. The spin s = n + 1 HS generators are then given by all Weyl ordered polynomials in q_α, p_α of degree 2n; for example, the s = 2 generators are bilinear and the s = 3 generators quartic in q_α, p_α. By computing the algebra of these generators keeping only single commutator terms, we obtain the classical higher spin algebra based on the Poisson bracket used in this context in the original work on conformal higher spins in three dimensions [6]. Instead, by quantizing the variables q_α, p_α and Weyl ordering them as above, they generate the higher spin algebra of SO(3,2) relevant here. (A perhaps more appropriate definition of the Schouten tensor is used, e.g., in [12], which corresponds to the spin 3 field f̃^{ab} in (2.5). The notation adopted by Vasiliev often uses y_α, which corresponds to our q_α, p_α, while the auxiliary z_α "coordinates" have no analogue here.) Note that for all generators G(2n) (which are of order 2n in q_α, p_α)
with n vector indices, the ordering of the q and p operators does not matter, and they are thus automatically ordered as required. We give the operators q_α dimension L^{1/2} and p_α dimension L^{−1/2}, which means that both A and F will be dimensionless. This will be useful later when we discuss how to construct S and T on the RHSs of (2.1) and (2.2). With these rules all multiplications can be viewed as star products, which, however, has to be kept in mind since it is not explicitly shown by our notation.
We now turn to the LHS of (2.1). The derivative operator appearing there is just D = d + A, where A is as defined above. However, the scalar field Φ(x) is special and differs in its definition from A. Φ is expanded only in terms of the most special conformal generators K_{a_1...a_n}, which is the last one of the generators in each spin s = n + 1 field A_n above and contains only the variables p_α, in fact exactly 2n of them. We define the HS scalar field accordingly, where the first term defines the usual scalar field φ(x) that will appear conformally coupled to spin 2 and all higher spin frame fields coming from A, and with its own fifth order self-interaction in the Klein-Gordon equation. The vacuum used in (2.1) is defined to be annihilated by the q_α operators, making it translationally invariant in the sense that P_a|0⟩_q = 0. Although Φ itself does not contain any q_α operators, the fact that A does will lead to the appearance of interaction terms already for spin 2. In particular, a correctly normalized Rφ interaction term appears directly after starting the unfolding procedure, as observed in [1]. The scalar field is conformal and thus of dimension L^{−1/2}, so the LHS of (2.1) is a one-form of dimension L^{−1/2}, which must be true also for S on the RHS of that equation. We will propose an expression for S below, after explaining the structure of the second equation F = T.
The role of the vacuum in the unfolded equation (2.1) is clear and a well-known property of this kind of scalar field, see, e.g., [9]. However, one of the crucial points in this discussion is to understand the relation of the two field equations (2.1) and (2.2) where the former one involves the vacuum while the latter one does not and hence has components for every generator of the higher spin algebra. To make the following argument a bit more explicit we give the spin 2 and spin 3 equations coming from the generator decomposition of F = 0. Note, however, that the following spin 2 and 3 equations have been truncated to the single commutator terms for simplicity.
For spin 2 the equations are (in the gauge b_µ = 0) [5] the zero torsion condition, the equation relating the curvature to f^a, together with e^a ∧ f_a = 0 (2.14) and Df^a = 0 (2.15), where T^a = De^a = de^a + ǫ^a_{bc} ω^b ∧ e^c and R^a = dω^a + ½ ǫ^a_{bc} ω^b ∧ ω^c. The second of these is useful for us, since solving it for f_µ^a expresses it in terms of the Schouten tensor S_µν. The third equation in the above F = 0 spin 2 system then just says that the Schouten tensor is symmetric, and the last equation that it satisfies the Cotton equation.
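A hedged restatement of these spin 2 relations, assembled from the explicit expressions quoted later in the text rather than from the paper's original display equations, is:
\[ f_{\mu\nu}=\tfrac{1}{2}S_{\mu\nu}\,,\qquad S_{\mu\nu}=R_{\mu\nu}-\tfrac{1}{4}g_{\mu\nu}R\,,\qquad C^{\mu\nu}=\epsilon^{\mu\alpha\beta}D_{\alpha}\big(R_{\beta}{}^{\nu}-\tfrac{1}{4}\delta_{\beta}{}^{\nu}R\big)=0\,. \]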
For spin 3 we give only the cascading equations (see the next section) used to express the spin 3 Schouten tensor f µ ab in terms of the frame field e µ ab (the full system including the constraint equations is discussed in [1,16]) From the explicit structure of these equations (and the work in [1,6]) it should be clear that they can all be solved algebraically except for the very last one which is the Cotton equation and that this works for all spins. The result of this procedure thus expresses the spin s = n + 1 Schouten tensor f µ a 1 ...an in terms of the frame field e µ a 1 ...an , a relation that involves 2s − 2 derivatives. In order to be able to introduce interactions, i.e., a stress tensor on the RHS of all the Cotton equations for arbitrary spin we must relax the equation F = 0 and instead consider F = T where the RHS must have the property that all the field strengths F a 1 ...an (0, 2n) pick up the proper source terms.
Having concluded that all the component equations F(n_q, n_p) = 0 for n_q > 0 can be solved, we note that the field strength F reduces to its components along the generators K_{a_1...a_n}, i.e., it has become a field with the same structure as Φ defined above. This reduction of F may, however, not be compatible with the Bianchi identities. Also, other parts of F that are assumed zero here are probably only so in the linear analysis performed in [1]. As will be elaborated upon elsewhere [16], some of the constraint equations of the spin 3 system along other generators than K_{ab} will contain the spin 2 Cotton tensor and will thus be affected by the introduction of source terms. Hence this description of the structure of F implies that F cannot be made to act directly on the vacuum like in the unfolded equation (2.1), since then we would lose information. This suggests that the proper equation for F and T involving the vacuum is instead the integrability equation for (2.1), with F taking values in the whole HS algebra, from which it should be possible to construct T.
We have now explained how to view the two equations (2.1) and (2.2) and also defined the RHSs of these equations without giving any explicit expressions for them. The main result of this paper is that it is in fact possible to construct RHSs producing a fully interacting theory with back reaction in both equations. We emphasize here that we have not yet provided a complete proof that the equations we propose constitute a consistent system. In this context it might also be relevant to analyze the related equation written with the star product ⋆. Turning to the source terms, we start by constructing S. It was explicitly shown in [1] that the unfolded equation with a zero RHS gives rise to the conformal interaction term Rφ with the correct coefficient for three dimensions. The goal now is to construct a RHS such that also the fifth order interaction term is generated after unfolding the equation. In fact, also the scalar terms in the full Cotton equation (1.3) require the addition of a source term. The only structures that can be written down which are one-forms of dimension L^{−1/2} and could generate the wanted terms involve two free parameters λ_1 and λ_2. Here we have used the definitions of M^a and K^a as the spin 2 Lorentz and special conformal generators of dimension L^0 and L^1, respectively. Unfolding (2.1) indeed gives the correct Klein-Gordon equation at the spin 2 level (see below) and, interestingly enough, also the correct Cotton equation. As described for spin 3 in [1], this unfolding can be carried out further up in spin without any problems. There are, however, features involving infinite sets of higher spin terms in the full equations, which probably means that the equations have to be iterated and truncated at some desired high spin level. However, for this to work in the sense of producing the spin 2 Cotton equation with the correctly coupled scalar field as in (1.3), one further step is required. As we will make clear below, we have to make use of the possibility to shift the gauge fields in A by tensor terms; for spin 2 we choose a specific such shift. We will, however, not work with this shifted gauge field but instead move the tensor term over to the RHS of the unfolded equation. Combining this term with the one already in S, we find the modified RHS. Note that a corresponding term for the P_a generator does not exist, since the term P in A is of dimension zero so it cannot contain any factors of Φ. It may be mentioned in this context that the unfolded equation will itself produce the full spin 2 Cotton equation with the stress tensor as a source. For higher spins one may speculate about the structure of the corresponding Cotton equations. For spin 3, for instance, the Cotton tensor is in the irrep 7 and has dimension L^{−4}, compared to L^{−3} for spin 2, and hence the two-scalar terms must contain one further derivative, which should result from the unfolding. Also other, more complicated terms are possible, with derivatives distributed between scalars and higher spin frame fields in various ways. It is even possible that there is a non-derivative e_{µab} φ^{10} term as a source in the spin 3 Cotton equation, which may come from further terms in the source S. For example, one may envisage terms containing e_µ^{ab}, which must involve e_µ^{ab} M_{ab}, e_µ^{ab} K_{ab}, etc., multiplied by |Φ|^4 Φ, |Φ|^8 Φ, etc. for dimensional reasons. How it is possible for such terms in S to affect the spin three equations will become clear below.
Note that the issue of whether or not terms like these will contribute also to the Klein-Gordon equation depends on traces like e µ a e µ ab being non-zero. However, this is probably not the case since they can be set to zero by higher spin "scale" transformations.
In a similar manner we may deduce the structure of T in the HS equation (2.2). T must be a two-form of zero dimension giving rise, after unfolding, to both ∂_µφ ∂_νφ and φ D_µ∂_νφ type terms. An especially intriguing fact is that derivative terms of the kind φ D_µ∂_νφ may only arise through unfolding. The T that has these properties will not be presented here, and we hope to come back to this question elsewhere. Note that a structure similar to S, i.e., one built on the two-forms ⋆P = ½ dx^µ ∧ dx^ν ǫ_{µν}^ρ e_ρ^a P_a, etc. (2.28), will not suffice, since terms with explicit derivatives seem to be needed. In fact, this follows directly from (2.20), which of course will imply relations between parameters in T and S. Nevertheless, it is the first term in (2.27) that has the correct structure to generate the required source term for the spin 2 Cotton equation in F = T. As for S, also T will contain HS contributions of the kind e_µ^{ab} P_{ab} etc.
Explicit unfolding
In order to perform some checks we need to unfold the scalar equation We start by computing the first few levels of the left hand side of the unfolded equation. At level n the expressions multiplying K a 1 ...an |0 q are (where D = d + ω(1, 1) and O(HS) indicates further higher spin terms that can be computed when needed) where we need to keep in mind that the uncontracted flat indices are always in irreps, i.e., in symmetrized traceless representations. This means that for levels n ≥ 1 each equation splits into three irreducible parts n − 1, n and n + 1 obtained by multiplying it with the level one generators P µ := e µ a P a , M µ := e µ a M a and K µ := e µ a K a , respectively. We refer to the resulting equations as n − , n 0 , n + , respectively. Applying this procedure to the n = 1 equation above we find At level 2 we find the following LHSs of the unfolded equation To get a feeling for the non-trivial information in these equations we again assume DΦ|0 q = 0 and continue by analyzing the first of the n = 2 equations, the 2 − one. To do that we need to use information from the two lower levels. This gives (dropping s ≥ 3 terms) which simplifies to Then using the fact (where the zero torsion condition is assumed) and the Klein-Gordon equation, we find the above 2 − equation to read We note then that this equation becomes an identity if we set where S µν is the Schouten tensor. As we have seen above in (2.16), the equation F = 0 also contains this information. The equation n = 1 + above can now be seen to play an interesting role in rewriting the Cotton equation in a way that will help us to guess source terms for the entire higher spin system. The point is that if we use (2.46) in the n = 1 + equation we can eliminate the term D (µ φ ν) = −D µ ∂ ν φ from the Cotton equation in (1.3). This gives Using the Ricci identity this equation reads and setting f µν = 1 2 S µν as found above it reduces to is the Cotton tensor. Again we find information present also in the equation F = 0. Thus is it clear that while the F = 0 equations contain, of course, only higher spin dynamics without scalar field sources the unfolded equation DΦ|0 q = 0 contains dynamical information for both the scalar field and the higher spin fields but without any non-trivial couplings between the scalar field and the higher spin ones. Introducing sources must thus be done for both equations in a consistent way. We will address this issue again below.
We now introduce the non-zero source terms (2.25): (2.51) One crucial property of this expression for the source is that it is zero at level n = 0, and that S M contributes only to the n 0 equations at level n while S K contributes only to the n − equations. Thus there are no source terms affecting the n + equations which therefore are the same as for DΦ|0 q = 0 where it is used to determine the fields φ a 1 ...a n+1 in terms of fields at lower levels. This is seen as follows: consider a general term at level n in the expansion of K µ Φ 5 |0 q = e µ a K a Φ 5 |0 q which we write as e µ a (Φ 5 ) b 1 ...bn K ab 1 ...b n−1 |0 q . Contracting it with P µ then gives K b 1 ...b n−1 . Contraction with M µ gives instead e µ a ǫ µ (a c K b 2 ...b n−1 )c = 0 and using K µ one gets e µa K µab 1 ...b n−1 = 0. S M works in a similar way with contributions only to the n 0 equations.
We will now continue to analyze the effects of adding the explicit source terms given in (2.25): In the case of S K we need the results We find the following contributions to the n − equations P a 6λK a Φ 5 |0 q = 0, n = 0, (2.61) With these results for the source terms of the unfolded equation where we have adopted the solution f µν = 1 2 S µν = 1 2 (R µν − 1 4 g µν R) to the n = 2 − equation in order to obtain the Cotton equation from the n = 2 0 equation. The Cotton equation obtained this way has the same structure as the one we were seeking namely the one given in (2.47) so this seems to be on the right track. A crucial further test is to construct the source T in the adjoint equation F = T such that the same Cotton equation results as discussed above after equation (2.27). This should be possible to do and we hope to come back to this in a future publication.
We should also emphasize another feature of the calculation leading to these results. The fact that the scalar self-interaction term K|Φ|^4 Φ gives rise to new terms in both the 1^− and 2^− equations leads to a consistency check, in the sense that these terms are seen to cancel in the 2^− equation, and hence the result quoted for f_µν is not affected by the addition of the K|Φ|^4 Φ term.
A cascading Lagrangian
In this section we will make use of a feature of the Chern-Simons gauge theory for the higher spin gauge field A that allows us to show that the component Lagrangian is naturally expressed in terms of the generalized spin connections as suggested in [1]. We will demonstrate explicitly that it is possible to derive such a Lagrangian once a certain subset of the F = 0 component equations are solved. This subset of equations, which will be called cascading, does not include the spin s Cotton equations which are therefore obtained by varying the resulting Lagrangian with respect to the frame fields for each spin s ≥ 2. F = 0 contains a number of other equations that one must use to determine other fields contained in A or prove are identities; for a complete discussion of the spin three situation see [1].
We use here the results coming from the single commutator terms in the spin 2 and 3 cases as examples of the technique. However, these examples make it plausible that this procedure works for all spins and in the full star product formulation where all multicommutators and non-linearities are included. Its origin is in the gauge Chern-Simons theory where the trace is in the higher spin algebra which should generate precisely the terms in the Lagrangian used in the cascading procedure described below. This Lagrangian can in principle be written out explicitly in terms of all the fields appearing in A [7]. This will produce very complicated expressions and seems useful only for lower spin truncations. To get from this first order formulation to a "second" order one in terms of only the frame fields seems even more complicated and any kind of simplifications that can be utilized in this context would be welcome. Below we will describe one potentially useful feature of this kind. The standard spin 2 Chern-Simons like action reads in terms of ω 1 := ω(1, 1) which leads to the field equation C µν = 0 for the Cotton tensor C µν = ǫ µ αβ D α (R βν − 1 4 g βν R). Here we use the notation from [1] for the s = 2 spin connection obtained by solving the zero torsion condition. In that paper it was suggested that this spin 2 action has a generalization to arbitrary spin in the sense that the action is naturally expressed in terms ofω n :=ω(n, n) which is a one-form in the irrep n of SO(1, 2). (A spin 3 example isω ab appearing in the expansion of A 2 in (2.5).) Here we will show how to derive the action for spin 3 which is of the suggested form Note that a cubic term with threeω 2 does not exist. However, once the multi-commutators are taken into account there will appear new interaction terms containing spin connections with arbitrarily high spin. Also other higher spin fields will occur in these interaction terms and to be clear about which fields we talk about we consider again the spin 3 higher spin gauge field in (2.5) A 2 = e ab P ab +ẽ abP ab +ẽ aP a +ω abM ab +ω bM a +bD +f aK a +f abK ab + f ab K ab . (3.4) Solving the first four of the equations in (2.17) will result in a cascading sequence of relations that will express the one-form field f ab , called the spin 3 Schouten tensor, in terms of the frame field e ab each step producing a new derivative. The last equation in (2.17) is then the spin 3 Cotton equation containing five derivatives. The action we derive here uses only the fieldω ab in A 2 denotedω 2 in (3.3). In fact, as we will see below also the fieldb in (2.5) will appear in the action but it turns out that this field is expressible in terms ofω 2 as shown in [1]. The main goal of this section is to give a simple procedure for deriving a Lagrangian that gives the full non-linear Cotton equation for any spin, and indeed for the whole higher spin system. This result follows provided some basic conditions to be specified below are met. Here we give the main ideas and the details only for spin 2 and 3 but it is likely that this method can be generalized to the whole higher spin theory.
We will now show how one can derive an action that automatically gives rise to the fifth order Cotton equation for this spin 3 system. In order to streamline the discussion we simplify the spin 3 equations in (2.17) as far as possible without destroying features of the system that are relevant for this particular discussion. First we note that the spin 3 equations contain fields from the spin 2 system that we can discard at this point but put back if a complete analysis is required. This statement applies to all terms containing the spin 2 fields ω a and f a but not to the terms with a dreibein e a . This reduces the equations to We now need to make use of the possibility to gauge fix the higher spin symmetries to further simplify these equations. As explained in [1] the fieldẽ a can be set to zero by using the symmetries related to the parametersΛ ab (2, 2),Λ a (2, 2), andΛ (2,2). This sets to zero the last term in the first equation above. In order to eliminate also the last term in the second and third equations we need to be able to choose a gauge whereω a = e aω andf a = e af . However, while this is possible forω a it is not so forf a . In the former case we can useΛ ab (1, 3) andΛ a (1, 3) to establish this fact but in the latter case we have onlỹ Λ ab (0, 4) at our disposal which means that the best we can do is to gauge fix tõ which unfortunately will complicate the situation somewhat. Instead of trying to construct a Lagrangian directly for the frame fields and then perform a variation with respect to the frame fields to obtain the Cotton equations these can be obtained in a manner that is slightly easier if we make use of the description of this system as a Chern-Simons gauge theory for the conformal group SO (3,2). In order to see how this is done we consider first the spin 2 Chern-Simons system which is given in terms of the gauge field A 1 = e a P a + ω a M a + bD + f a K a , (3.11) where the SO(3, 2) generators P a , M a , D, K a have been assigned one gauge field each. The exercise is then to solve the zero field strength equation F 1 = 0 which if decomposed along the different generators become (here the Riemann tensor is R a = dω a + 1 2 ǫ a bc ω b ∧ ω c and we have imposed the gauge b = 0) which we call the cascading equations while the remaining equation is a constraint on the solution of the cascading system above. The first equation is solved for the spin connection in terms of the dreibein, the second for f a in terms of the Riemann tensor with the result that f a µ e νa is just the symmetric Schouten tensor. The last equation is then a constraint that is automatically satisfied while the last of the cascading equations becomes the Cotton equation. The goal is now to use these cascading equations to show that the variation of the action gives the Cotton equation.
We start the cascading procedure from (3.14) The variation of L 1 is which would give the Cotton equation by demanding δL 1 = 0 if the last two terms could be gotten rid off. To achieve this we note first that the second term vanishes due to the torsion constraint after an integration by parts. To deal with the last term we add the standard Chern-Simons term where we have used the second equation in (3.12) in the last equality. Thus we obtain the Cotton equation as a result of varying the Lagrangian L = L 1 + L 2 . However, L 1 = 0 after an integration by parts as a consequence of the torsion constraint which is assumed solved in this analysis. This implies that the Lagrangian L 2 alone provides the Cotton equation when varied with respect to the dreibein e µ a . This derivation of the Cotton equation is a bit too trivial to be interesting but for spin 3 and higher it seems to simplify the calculation of the spin s Cotton equation quite a bit. Recall that these equations are of order 2s − 1 in derivatives.
We now turn to the spin 3 system and repeat these steps. To this end we note that the spin 3 Cotton equation in the simplified version given above follows trivially by varying the Lagrangian L 1 = e ab ∧ df ab , (3.18) with respect to the explicit frame field. However, this conclusion is only correct if we can eliminate the second term in its variation But this can be done by adding another term to the Lagrangian whose variation cancels the last unwanted term. The term we need to add is The reason this works is that in the variation the first term equals −de ab ∧ δf ab by using the field equation for e ab coming from F = 0 (recall that we are in a gauge whereẽ a = 0). To make use of this field equation is of course allowed here since it is algebraic and actually solved so that it is identically satisfied. This fact can now be used for all the "field equations" in F = 0 except the last one which is the five derivative Cotton equation.
Having established this cancelation we now need to add a further term to cancel also the second term in δL 2 above. The required term is whose variation can be written, again making use of the algebraic "field equations" this time the ones forf ab andẽ ab , as The remaining term is then the last one in the previous equation which we cancel by adding which varies into After canceling the last term against the same term coming from δL 3 we are left with the first term in δL 4 which we write as − 1 6 δω ab ∧ (dω ab − e a ∧f b ). (3.26) The first term in this expression is canceled by the variation of Now we can use the algebraic equations from F = 0 again to find that L 1 + L 2 = 0 and L 3 + L 4 = 0. Since the procedure stops here L 5 is actually (the main part of) the final answer and is precisely the Lagrangian proposed in [1]. We have, however, still one term that we need to deal with, namely the second term in (3.26), which as we will now see is of a slightly different nature. We start by adding Thus the last spin 3 term to add is We have therefore shown that the spin 3 part of the Lagrangian reads The second term on the RHS is related (see above) to the last one and we find the final form of the Lagrangian to be which is therefore expressed entirely in terms of the spin connectionω ab of the spin 3 sector.
One also needs to verify that the remaining constraint equations are satisfied as explained in [1]. It would then be interesting to see how the different steps in the cascading procedure are affected by increasing the spin, adding non-linear terms and coupling the system to other fields. For the spin 2 -spin 3 system these questions may be answered by the analysis of the full non-linear equations in [16]. As noted previously in this section the cascading trick is suggested by writing out the original Chern-Simons action for A using the trace over the higher spin algebra at each spin level separately as done in [7].
Conclusions
In this paper we have continued the approach to conformal higher spin theories in three dimensions set up recently in [1]. There it was emphasized that the unfolded equation DΦ|0 q = 0 for the higher spin algebra based on SO(3, 2), the conformal group in three dimensional space-time, realized in terms of two hermitian spinor operators q α , p α satisfying [q α , p β ] = iδ α β , produces the correct Klein-Gordon equation for a conformal scalar coupled to the spin 2 metric and its generalization to spin 3 6 .
Here we take this approach some steps further by proposing an unfolded equation for a scalar field coupled to all higher spins ≥ 2 including the φ 5 self-interaction term in the Klein-Gordon equation. The expected scalar interactions with the spin 2 and higher spin fields show also up in the field equations and are produced in the unfolding. That the correct spin 2 Cotton equation is obtained is checked explicitly while for spin 3 the corresponding equation can easily be derived from this setup but it needs to be checked independently. Such a check would strengthen the argumentation for the higher spin equations suggested here.
We also present a simple method by which the Lagrangian for each higher spin field can be derived starting from the F = 0 equation valued in the higher spin algebra. This is demonstrated in section 3 for a truncated version of the spin 2 and 3 equations and can be seen to be a consequence of expanding the Chern-Simons gauge theory action using the trace over the entire higher spin algebra. Then a cascading trick leaves the whole action written in a form where the spin connections for each spin play a central role. The spin connections are here expressed as s − 1 derivatives acting on the spin s frame field implying that L n = ω n dω n for s = n + 1 contains 2s − 1 derivatives as it should. The result is a "second" order formalism type Chern-Simons Lagrangian generalizing the standard one L 1 (ω 1 (e)) for spin 2 to all spins. The cascading is here only performed for the linearized theory but will most likely give the full answer once all interaction terms are included.
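A quick way to see the derivative counting stated above: if the spin connection ω n carries s − 1 derivatives of the spin s frame field, then a term of the schematic form L n = ω n dω n carries (s − 1) + 1 + (s − 1) = 2s − 1 derivatives, which matches the order of the spin s Cotton equation recalled in the previous section.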
This approach also suggests a way to write down a higher spin Lagrangian in the higher spin language. One may try to combine the gauge Chern-Simons action S = (1/2) Tr (AdA + ...) with a term containing the scalar field, where we have introduced a dual scalar field Φ(q) which is expanded in terms of even powers of q α instead of p α as for the ordinary scalar field Φ(p). By assumption the dual field Φ(q) = φ(x) + φ a (x)P a + ..., where P a = −(1/2)(σ a ) αβ q α q β , is non-zero on the vacuum p⟨0|, which is used to produce a well-defined inner product p⟨0|0⟩ q = 1.
This action can be expanded in component fields whose field equations should correspond to the scalar field equation D ⋆ DΦ(p)|0⟩ q = 0. The source S can probably be hidden in the covariant derivative. The unfolded equation DΦ(p)|0⟩ q = 0 could then be regarded as a solution to this equation, which means that some information is lost if one instead solves only D ⋆ DΦ(p)|0⟩ q = 0. It would be nice to have an action principle that directly generates the unfolded equation as the field equation. Interaction terms for the scalar field may also arise by considering actions of the kind S 6 = p⟨0|(Φ * (q)) m ⋆ (Φ(p)) n |0⟩ q with m + n = 6. (4.2) The actions considered here are dimensionless and the integrands are three-forms, but they do not seem to produce in a simple way the field equations used previously in this paper. The main reason for this is that, although the dual field Φ(q) is here assumed to start with φ(x), the terms of higher order in q α will probably be very complicated (even non-local).
Another potentially interesting aspect arises if this higher spin theory can be generalized to contain the topologically gauged spin theories derived in [2,3,17]. Then perhaps the background solutions found in [17,18] could be lifted to the higher spin theory which could then provide information about how to write, e.g., an action also in AdS 3 , Schroedinger and the semi-flat Schroedinger geometries discussed in [18,19]. If this turns out to work it would give additional support for a "sequential AdS/CFT" phenomenon as suggested in [20] where Neumann boundary conditions and the associated dynamical conformal boundary theories play a crucial role.
Bose-Hubbard dynamics of polaritons in a chain of circuit QED cavities
We investigate a chain of superconducting stripline resonators, each interacting with a transmon qubit, that are capacitively coupled in a row. We show that the dynamics of this system can be described by a Bose-Hubbard Hamiltonian with attractive interactions for polaritons, superpositions of photons and qubit excitations. This setup we envisage constitutes one of the first platforms where all technological components that are needed to experimentally study chains of strongly interacting polaritons have already been realized. By driving the first stripline resonator with a microwave source and detecting the output field of the last stripline resonator one can spectroscopically probe properties of the system in the driven dissipative regime. We calculate the stationary polariton density and density-density correlations $g^{(2)}$ for the last cavity which can be measured via the output field. Our results display a transition from a coherent to a quantum field as the ratio of on site interactions to driving strength is increased.
Introduction
In recent years, the investigation of condensed matter and quantum many-body systems with quantum simulators, artificial quantum many-body systems that offer unprecedented controllability and measurement access in the laboratory, has become an active research area and is currently receiving increasing attention. The technology employed for quantum simulators ranges from ultra-cold atoms [1] to ion traps [2] and systems of coupled cavities [3,4,5,6] among others.
Due to recent technological progress, arrays of coupled cavities and optical nano-fibers in which the trapped light modes couple to atoms are now becoming suitable devices for the generation of quantum many-body systems of polaritons [3,4,5,6,7,8,9,10], i.e. quantum-mechanical superpositions of atomic and photonic excitations. In these systems, it is of particular interest but also most challenging to reach a strongly correlated regime, where their dynamics differs most significantly from that of classical light fields. The key experimental requirement for reaching these conditions is the so-called strong coupling regime of the cavities or, respectively, of the fiber. This means that the coherent coupling between the light modes and the atoms or other optical emitters must be strong compared to the loss processes which are inevitably present in every device. A very impressive strong coupling regime has recently been realized in circuit cavities, making these devices an ideal platform for studying strongly correlated polaritons.
Circuit QED [13,14,15] was developed as a solid state equivalent to optical cavity QED, coupling Josephson qubits that act as artificial atoms with stripline resonators acting as cavities for microwave photons. Here, the reduced quasi one-dimensional mode volume of the stripline resonator and the enhanced dipole moment of the Josephson qubit with respect to atoms give rise to a pronounced strong coupling regime, where the coupling between resonator and qubit, g, significantly exceeds both the decay rate of the resonator, κ, and of the qubit, γ, i.e. g/κ ≫ 1 and g/γ ≫ 1. Moreover, since stripline resonators trap microwave photons, they are more than 1 cm long. The precision of current fabrication techniques thus makes it possible to build several resonators on the same chip that resonantly couple to each other via mutual photon tunneling [16,11,12]. The currently employed circuit QED technology thus permits building arrays of resonantly coupled cavities that each interact in a strong coupling regime with qubits. In this way it is one of the first setups to feature all properties which are needed to generate strongly correlated many-body systems of polaritons.
In this work we show that an effective Bose-Hubbard Hamiltonian for polaritons can be engineered in an array of stripline resonators that each couple to a transmon qubit [17]. Josephson qubits [18,19] come in basically three different flavours depending on the property that is controlled from the outside or rather the channel of the qubit environment coupling: flux- [20], phase- [21] and charge-qubits [22,23]. Here we consider a setup with transmon qubits [17] which are charge qubits (Cooper pair boxes) operated at sufficiently enhanced values for the ratio of Josephson energy, E J , over charging energy, E C , E J /E C ≥ 50 and are robust against decoherence caused by fluctuations of background charges. We emphasize that the effective Bose-Hubbard Hamiltonian we derive can be realized with resonators and qubits of readily existing technology.
Quantum phases for the ground state and low temperature thermal states of the Bose-Hubbard Hamiltonian have been studied with ultra-cold atoms trapped in optical lattices [1]. This system has also been employed to study the dynamics of non-equilibrium states that were prepared by sudden quenches of some lattice parameters [24]. In contrast, a realisation in an array of stripline resonators makes it possible to investigate the Bose-Hubbard Hamiltonian in a fundamentally different regime, where the resonator array is permanently driven by microwave sources to load it with photons and thus compensate for the excitations that are lost due to qubit relaxation and cavity decay. Whereas substantial understanding of equilibrium quantum phase transitions has been achieved, a lot less is known about these non-equilibrium scenarios, where the dynamical balance between loading and loss mechanisms leads to stationary states. It is the investigation of these stationary states that our approach to the Bose-Hubbard Hamiltonian is ideally suited for.
Experiments with transmon qubits [17,25] coupled to a stripline resonator are often conducted without directly measuring the state of the qubit but by spectroscopically probing the transmission properties of the resonator. In an experiment the effective Bose-Hubbard Hamiltonian will thus be operated out of thermal equilibrium in a driven dissipative regime [26,27,28,29]. In a suitable setup with a linear chain of resonators one would thus drive the first resonator with a coherent microwave input and measure the properties of the output signal at the opposite end of the chain.

Figure 1. Sketch of the proposed system to simulate Bose-Hubbard physics. Stripline resonators are coupled capacitively in a chain and each stripline resonator is coupled to a transmon qubit that gives rise to an on-site interaction for the polaritons. For definitions of J 0 and g, see (7) and (4).
In the regime we consider, this situation can be accurately described by a Bose-Hubbard Hamiltonian for polaritons with a coherent driving term at the first site and Markovian losses of polaritons due to cavity decay and qubit relaxation at all sites of the chain. In this scenario, the interplay of coherent drive and polariton loss leads to the emergence of steady states, for which we derive the particle statistics and characteristic correlations. In doing so we focus on the polariton statistics, in particular the density and density-density correlations, in the last resonator as these can be measured via the output signal. Our results show a transition from a coherent field to a field with strongly non-classical particle statistics as the ratio of on site interactions to driving strength is increased.
This work is divided into two main parts. In section 2 we show that the dynamics of our system can be described by a Bose-Hubbard model and in section 3 we present the results of our calculations for the polariton density and density-density correlations.
Transmon-QED and the Bose-Hubbard Model
To generate a Bose-Hubbard model with polaritons we consider an array of capacitively coupled stripline resonators with each resonator coupled to a transmon qubit, see figure 1. In this section, we first introduce the Hamiltonian that describes this setup and then show how it can be considerably simplified and transformed into a Bose-Hubbard Hamiltonian for two polariton species.
The full Hamiltonian
The full Hamiltonian of our setup is a sum of single-site Hamiltonians, H 1−site,i , that each describe a transmon qubit coupled to a stripline resonator, and of terms that describe the capacitive coupling between neighbouring stripline resonators, H C J ,i,i+1 . A transmon qubit can be regarded as a Cooper pair box that is operated at a large ratio E J /E C ≫ 1. This regime can be accessed by shunting the Josephson junction with an additional large capacitance and thereby lowering the charging energy E C = e 2 /(2C Σ ).
Here, C Σ = C J + C g + C B is the sum of the junction's capacitance, C J , the mutual capacitance with the stripline resonator, C g , and the shunting capacitance, C B . The Hamiltonian for one stripline resonator coupled to a transmon qubit is given in (2). Here, ω r is the resonance frequency of the isolated resonator and we have omitted the site index i for readability. Transmon qubits consist of two superconducting islands connected by Josephson junctions. n is the operator for the difference in the number of Cooper pairs on the two superconducting islands, n dc g the offset charge induced by an applied dc voltage and intrinsic defects, and φ is the operator for the superconducting phase difference between the two islands. We assume the transmon qubit to be placed in the antinode of the stripline resonator's field mode. This gives rise to an additional ac component in the offset charge, n ac g , proportional to the root mean square voltage of the vacuum field mode, V rms , with a the annihilation operator of photons in the stripline resonator. The offset charge n ac g thus induces a coupling between the transmon and photons in the resonator. For circuit QED setups one normally uses λ-resonators with the antinode located at the middle of the resonator.
The energy of the coupling capacitor between neighbouring transmission line resonators, e.g. sites i and i + 1, can be expressed in terms of the difference of the electrostatic potentials across the capacitor. Here, C r is the capacitance of the whole stripline resonator with respect to the ground plane and C J the capacitance of the capacitor that connects the two resonators. We assume the electrostatic potential in resonator i to have antinodes at the ends of the resonator and write it in terms of the creation and annihilation operators, a † i and a i . We now turn to simplifying the Hamiltonian (1) by a sequence of approximations.
Approximations to single-site terms
We first simplify the single-site terms, H 1−site , as in (2). For large E J /E C and low energies, the phase difference between the two islands remains small and we can expand the cosine in (2) around ϕ = 0 up to quartic order. Higher order terms can be neglected, c.f. [17]. The expanded Hamiltonian can then be rewritten in terms of bosonic creation and annihilation operators for the transmon qubit excitations. The terms linear in the creation and annihilation operators can be eliminated by performing a unitary transformation that displaces the creation and annihilation operators by constants r and s, respectively. r and s can now be chosen such that all terms linear in a and b cancel in the transformed Hamiltonian. Finally, the interaction between the transmon qubit and the field mode of the stripline resonator is reduced to an exchange interaction in a rotating wave approximation. To justify this rotating wave approximation we have to ensure that the interaction strength between the transmon qubit and the stripline resonator is small compared to the sum of the frequencies of the two, g/(ω r + ω q ) ≪ 1.
Parameters extracted from [30] are ω r = 43.6 GHz, E C = 0.4 GHz and a maximal value for E J /E C of 150. We choose C g eV rms /(C Σ ω r ) = 0.1, which is in agreement with the theoretical upper bound in [17], and find g/(ω r + ω q ) ≈ 0.1. The single-site Hamiltonian can thus be approximated by the exchange-interaction form described above.
Approximations to couplings between resonators
We now turn to simplifying the couplings between neighbouring resonators, H C J ,i,i+1 . We assume that C J ≪ C r , which implies that C J ω r /(2C r ) is small compared to the isolated cavity frequency ω r , i.e. C J /(2C r ) ≪ 1, and apply a rotating wave approximation to neglect those terms in the intercavity interaction that do not conserve the total photon number, where J 0 = (C J /C r ) ω r . The first term on the rhs of (7) can be absorbed into the single-site Hamiltonians by introducing a shifted resonator frequency, and the remaining term in (7) describes tunneling of photons between neighbouring resonators. Next, we explain how the simplified Hamiltonian can be transformed into a two-component Bose-Hubbard Hamiltonian.
The polariton modes
In the case of circuit QED with transmon qubits the coupling constant between photonic and qubit excitations is the dominating interaction energy of the system. Excitations of the whole system therefore cannot be characterized as purely photonic or qubit excitations in general. To obtain a more suitable description we introduce new creation and annihilation operators describing excitations commonly termed polaritons, where sin(θ) = g / √(g² + (Δω + √(Δω² + g²))²) and cos(θ) = (Δω + √(Δω² + g²)) / √(g² + (Δω + √(Δω² + g²))²), with Δω = ω r − ω q . The sine and cosine terms account for the transition of the character of the excitations from photonic to qubit excitations for the c + mode as the ratio E J /E C increases, and vice versa for the c − mode. Expressing the Hamiltonian, including H C J ,i,i+1 , in the polariton modes (9a-b), we obtain a Hamiltonian that consists of two harmonic chains for the c + and c − polariton modes, with ω ± = ((ω r + ω q ) ± √(Δω² + g²))/2, a term describing hopping from a c − mode at site i to a c + mode at site i + 1 and all other possible combinations, and a term describing the nonlinearity. We assume the frequencies of the two polariton modes to be well separated, apply another rotating wave approximation in which we neglect the term H cc , and convert the nonlinearity term H nlin into Kerr form, obtaining a renormalization of the polariton frequency and a density-density coupling between the polariton modes. This requires the difference in frequencies of the unperturbed modes involved, ω + − ω − = √(Δω² + g²), to exceed the magnitude of the coupling between the modes and the nonlinearity. Plugging in realistic values for the parameters, extracted for example from [30] (E C = 0.4 GHz, ω r = 43.6 GHz, E J /E C = 0–150), we realize that the second inequality is indeed fulfilled. Engineering the capacitance C J such that J 0 is of the order of E C /12, we can ensure that the first inequality is fulfilled as well. The rotating wave approximation eliminates the intermode exchange coupling and we obtain a Bose-Hubbard Hamiltonian for both modes, c + and c − , with a density-density coupling between them. We thus arrive at a two-component Bose-Hubbard model for the modes c + and c − with attractive interactions and a density-density coupling between both species. The two species are mixtures of the stripline resonator field mode and qubit excitations (9a-b), with different weights of the photonic or qubit contribution depending on the value of the mixing angle θ. For small values of E J /E C the c + polaritons become increasingly photonic. Consequently, their tunneling rate J + approaches the tunneling rate of bare photons, J 0 , and their on-site interaction U + vanishes. For large E J /E C , on the other hand, J + vanishes and the nonlinearity U + approaches the nonlinearity of the qubits, E C . For the c − polaritons, the roles of both limits are interchanged.
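As a small numerical illustration of the polariton parameters discussed above, the following Python sketch evaluates the mixing angle and the effective Bose-Hubbard quantities for given resonator and qubit frequencies. The expressions for sin(θ), cos(θ), ω ± and U + = E C sin⁴(θ) follow the text; the photon-weighted hopping rates J + = J 0 cos²(θ) and J − = J 0 sin²(θ) are an assumption, chosen only so that the two limits quoted above (J + → J 0 with U + → 0 for photon-like polaritons, J + → 0 with U + → E C for qubit-like ones) are reproduced. The numerical values in the calls are illustrative placeholders.

import numpy as np

def polariton_params(w_r, w_q, g, E_C, J0):
    # Mixing angle and effective Bose-Hubbard parameters (illustrative sketch).
    dw = w_r - w_q                                   # resonator-qubit detuning
    root = np.sqrt(dw**2 + g**2)
    norm = np.sqrt(g**2 + (dw + root)**2)
    sin_t, cos_t = g / norm, (dw + root) / norm      # sin(theta), cos(theta)
    w_plus = 0.5 * ((w_r + w_q) + root)              # upper polariton frequency
    w_minus = 0.5 * ((w_r + w_q) - root)             # lower polariton frequency
    U_plus = E_C * sin_t**4                          # magnitude of the attractive nonlinearity
    J_plus, J_minus = J0 * cos_t**2, J0 * sin_t**2   # assumed photon-weighted hopping rates
    return w_plus, w_minus, U_plus, J_plus, J_minus

# resonant and detuned examples, all quantities in the same (arbitrary) frequency units
print(polariton_params(w_r=43.6, w_q=43.6, g=5.0, E_C=0.4, J0=0.4 / 12))
print(polariton_params(w_r=43.6, w_q=30.0, g=5.0, E_C=0.4, J0=0.4 / 12))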
For each value of E J /E C , the separation between the resonance frequencies of c + and c − polaritons, |ω + − ω − |, is sufficiently large such that, in a scenario where we drive the first cavity by a microwave source, we can always adjust the frequency of the drive to only selectively excite one of the modes. For reasons that will become clear later we therefore choose the c + -polaritons to be our quantum simulator for a driven dissipative Bose-Hubbard model.
Validity of the approximations
To further illustrate the validity of our approximations we compare the eigenenergies of the full Hamiltonian H, c.f. (1), approximated under assumption (5), with the eigenenergies of the Bose-Hubbard Hamiltonian H (3) , c.f. (10). The single-site Hamiltonians summed up in the full Hamiltonian describe the interaction between transmon qubit and stripline resonator in a rotating wave approximation. This Hamiltonian has already been used to describe an experiment revealing the nonlinear response of a resonator and transmon qubit system with excellent agreement between theory and experimental data [33]. Therefore comparison of the eigenvalues of our Bose-Hubbard Hamiltonian and the eigenvalues of the full Hamiltonian provides a good means to estimate the effects of the approximations we made. For simplicity we restricted our model to two sites.
Both Hamiltonians conserve the total number of excitations and we can diagonalize them in each subspace with a fixed number of excitations independently. Eigenvalues of the full Hamiltonian in the one-excitation subspace are plotted as solid lines in figure 3a. Without qubits, the Hamiltonian of the two resonators has eigenmodes a ± = (a 1 ± a 2 )/ √ 2. In figure 3 a) we also plot the energies of these eigenmodes of the two coupled empty stripline resonators, marked by two horizontal dash-dotted gray lines, and the eigenenergy of the transmon qubit, marked by a dash-dotted gray line. In addition to the differences between the eigenenergies of the full Hamiltonian and the Bose-Hubbard Hamiltonian in the one-excitation subspace, we plot the differences in the two-excitation subspace in figure 3 c). These eigenenergies can be grouped for the Bose-Hubbard Hamiltonian according to the distribution of excitations among the two polariton species. Differences of eigenenergies for states with two c − polaritons are plotted in blue, for two c + polaritons in red, and for one c − polariton and one c + polariton in green. In the two-excitation subspace we have similar findings as in the single-excitation subspace. There are deviations in the anti-crossing region because of the neglected intermode polariton exchange interaction. In addition, states containing c − polaritons show deviations for small values of E J /E C due to the approximations of the transmon Hamiltonian, whereas c + polaritons do not.
Therefore the Bose-Hubbard Hamiltonian for the c + polaritons mimics the behaviour of the full Hamiltonian over the full range of E J /E C , provided the intersite coupling J 0 is at most of the order of the on-site nonlinearity E C and the polariton densities are not too high. To conclude: in a driven dissipative setup where we selectively excite the c + polaritons we do have a quantum simulator for a Bose-Hubbard Hamiltonian.
Polariton statistics in the driven dissipative regime
In this section we make use of the above explained mapping of the full Hamiltonian H to a two component Bose-Hubbard Hamiltonian H (3) and consider a chain of coupled resonators, where we coherently drive the first resonator and adjust the microwave drive frequency to selectively excite the c + -polaritons. In the driven dissipative regime we expect to explore new physics that go beyond the equilibrium features that are commonly examined in many body physics. We thus calculate the polariton density and the density-density correlations g (2) in a master equation approach and analyse the dependencies on the system parameters J + ,U + and the Rabi frequency of the microwave drive Ω.
First experimental realisations of coupled stripline resonators are expected to consist of only a few resonators. To closely approximate the expected experiments and to speed up numerical calculations, we thus focus on a minimal chain of only two resonators. More specifically, we consider two stripline resonators coupled to transmon qubits, where the first stripline resonator is driven by a microwave source and the output signal of the second cavity is monitored as a function of the microwave drive frequency and the ratio E J /E C , which can be controlled by applying an external magnetic flux to the transmon qubits, c.f. figure 4. This setup and very similar setups are currently being investigated in experiments, for example [16], and the spectroscopic measurement technique proposed here has already been demonstrated in single-site experiments, for example in [14].
The output fields are linear functions of the intra-cavity field in the second resonator and thus show the same particle statistics. We therefore calculate the polariton density and the g (2) -function for the second cavity. To do this, we use a master equation approach in which each element, the stripline resonators and the transmon qubits, couples to separate environments with decay rates denoted κ for the stripline resonators and γ for the transmon qubits. Absolute values can for example be extracted from [33], where γ = 3.7 MHz. Decay of the stripline resonator is due to the finite transparency of the coupling capacitors at both ends of the resonators, and decay rates, for example in [30], are κ = 5.7 MHz. Both environments, for the transmon qubit and the stripline resonator, are assumed to be in a vacuum state, which is a valid assumption at typical temperatures for circuit QED experiments of T = 15 mK. Therefore, in a master equation for a Hamiltonian expressed in the operators for the resonator field mode a and the transmon qubit b, the dissipators take the standard form used in the master equation (11). These can be cast into dissipators expressed in the polariton modes c + and c − . In the driven dissipative case, where we selectively excite the polariton c + mode, we can derive a hierarchy of coupled equations of motion for the operator mean values. We truncate this set of coupled equations by omitting couplings to mean values with n + m + k + l larger than some n max and solve the reduced set of equations of motion.
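For readers who prefer to reproduce these steady-state observables without implementing the truncated mean-value hierarchy, the following sketch computes the polariton density and g (2) of the last site for a driven dissipative two-site Bose-Hubbard chain by brute-force truncation of the Fock space, using the open-source QuTiP package. This is not the method used here (the text notes that the mean-value truncation is more efficient and becomes exact in the harmonic limit); it is only a cross-check, and all parameter values are illustrative placeholders rather than values fitted to the figures.

import numpy as np
from qutip import destroy, qeye, tensor, steadystate, expect

N = 6                        # Fock-space cutoff per site (convergence must be checked)
w_p, w_mw = 1.0, 1.0         # c_+ polariton frequency and microwave drive frequency
J_p = 0.04                   # intersite hopping J_+
U_p = -0.04                  # attractive on-site nonlinearity U_+
Omega = 0.001                # drive amplitude on the first site
Gamma = 0.002                # polariton loss rate Gamma_+

c1 = tensor(destroy(N), qeye(N))
c2 = tensor(qeye(N), destroy(N))

# Hamiltonian in the frame rotating at the drive frequency
H = (w_p - w_mw) * (c1.dag() * c1 + c2.dag() * c2)
H += -J_p * (c1.dag() * c2 + c2.dag() * c1)
H += 0.5 * U_p * (c1.dag() * c1.dag() * c1 * c1 + c2.dag() * c2.dag() * c2 * c2)
H += Omega * (c1 + c1.dag())                        # coherent drive on site 1

c_ops = [np.sqrt(Gamma) * c1, np.sqrt(Gamma) * c2]  # Markovian polariton losses
rho_ss = steadystate(H, c_ops)

n2 = expect(c2.dag() * c2, rho_ss)                                # density, last site
g2 = expect(c2.dag() * c2.dag() * c2 * c2, rho_ss) / n2**2        # g^(2), last site
print(n2, g2)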
To confirm the accuracy of our approach, we test its convergence with increasing n max . That is, we repeat the procedure for n max → n max + 1, compare the results, and increase the value of n max in case both results differ by more than some required threshold value. The advantage with respect to a method that truncates the Hilbert space at some maximal number of excitations is that our method becomes exact in the limit where the Hamiltonian becomes harmonic, which is the case for small values of E J /E C . Moreover, we observe a substantial decrease in CPU time for this method.

Figure 5. Logarithmic density plot of the polariton density c † 2 c 2 in the last cavity, plotted against E J /E C and the frequency of the microwave drive in units of the frequency of the stripline resonator, ω µw /ω r . Resonances in the density of polaritons arise where the microwave frequency matches one of the transition frequencies of the non-driven conservative system Hamiltonian H c+ .
Polariton density
We are interested in the field particle statistics in the driven dissipative regime and its dependence on the on-site nonlinearity U + , the intersite coupling J + and the strength of the microwave drive Ω. We therefore first consider the density of c + polaritons in the last resonator. Figure 5 shows the density of c + polaritons, c † 2 c 2 , in the second cavity as a function of the ratio E J /E C and the microwave drive frequency ω µw . The density of polaritons in the last cavity exhibits resonances when the microwave drive frequency matches one of the transition energies of the undriven conservative system Hamiltonian H c + and decreases rapidly away from these resonances because of the small decay rate Γ. One can clearly see the resonances due to transitions driven between the ground state and eigenenergies in the one-excitation subspace plotted in figure 3 a). Transitions from the ground state into a two-excitation state are much weaker owing to the finite Rabi frequency of the microwave drive Ω.
Density-density correlations
We now consider the density-density correlations g (2) in the last resonator. The g (2) -function is a quantity that describes the likelihood of measuring two photons at the same place. The g (2) -function of the last resonator is the normalized mean value of the second-order moment of the field operators in the last resonator, g (2) = ⟨c † +,2 c † +,2 c +,2 c +,2 ⟩ / ⟨c † +,2 c +,2 ⟩². Classical thermal fields have g (2) -values larger than or equal to unity, with a coherent field exhibiting a g (2) -value of 1. A g (2) -value below 1, meaning that the photons are anti-bunched, is a sufficient condition to call the field quantum mechanical, in the sense that there is no classical field showing the same results in measurements of the g (2) -function.
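The benchmark values quoted here are easy to verify numerically; the short QuTiP sketch below evaluates g (2) for a coherent, a thermal and a single-excitation Fock state (the cutoff and amplitudes are arbitrary illustrative choices).

from qutip import destroy, coherent_dm, thermal_dm, fock_dm, expect

N = 30
a = destroy(N)

def g2(rho):
    n = expect(a.dag() * a, rho)
    return expect(a.dag() * a.dag() * a * a, rho) / n**2

print(g2(coherent_dm(N, 2.0)))   # coherent field: g^(2) = 1
print(g2(thermal_dm(N, 2.0)))    # classical thermal field: g^(2) = 2 (>= 1)
print(g2(fock_dm(N, 1)))         # single excitation: g^(2) = 0 (anti-bunched)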
With recently developed refinements of microwave measurement techniques [31,32], measurements of g (2) -functions in circuit QED are now becoming feasible. In figure 6 we plot the g (2) -function of the field in the last stripline resonator. To get a more detailed insight into the processes leading to a given g (2) -value for specific parameters, we plot the g (2) -function along special values of the microwave drive frequency and the ratio E J /E C , marked by white lines in figure 6. Figures 8 and 9 show the results for the different paths, denoted by a), b), and c) in the density plot of the g (2) -function in figure 6, for the g (2) -function as well as the corresponding values for the density of polaritons in the last cavity, c † +,2 c +,2 , and the second-order moment, c † +,2 c † +,2 c +,2 c +,2 . For small values of E J /E C our system is basically linear because the nonlinearity U + /2 = (E C /2) sin⁴(θ) is negligible. A harmonic field mode driven by a coherent source is in a coherent state. Therefore the g (2) -function is equal to one for small values of E J /E C . As the nonlinearity grows for increasing values of E J /E C , the g (2) -function plotted against the ratio E J /E C and the frequency of the microwave drive ω µw becomes more structured. In the density plot of the g (2) -function in figure 6 we can identify resonances where the frequency of the microwave drive matches the eigenenergies of the unperturbed system without microwave drive and dissipation. These resonances manifest themselves as separating lines between bunching regions (values of g (2) > 1) and anti-bunching regions (values of g (2) < 1).
To understand the origin of these separating lines, it is illustrative to analyze our system in terms of a symmetric mode, d + , and an antisymmetric mode, d − , with d ± = (c +,1 ± c +,2 )/ √ 2, rather than the two localized modes c +,1 and c +,2 . In terms of d + and d − the Hamiltonian takes the form given in (12). The Hilbert space of the Hamiltonian H c+ can be described by two different bases: states that are labeled by the number of excitations in the collective modes, or states that are labeled by the number of excitations in the localized modes. The lines separating bunching and anti-bunching regions in figure 6 can now be identified with the energies of the 1-excitation states and the energy of a 2-excitation state.

Figure 7. Sketch of the energy spectrum of the Bose-Hubbard Hamiltonian H c+ for a two-site model for vanishing nonlinearity U + , compare a), and vanishing intersite coupling J + , compare b). For vanishing nonlinearity a microwave drive can drive multiple transitions, leading to a coherent state. Contrary to the linear case, for strong nonlinearity one can only drive a transition between two distinct states, as the energy differences between the eigenenergies are no longer degenerate.

To understand the origin of the anti-bunching regions for a microwave drive that is blue detuned with respect to the energies of the states (13a-c), and of the bunching regions for a red detuned microwave drive, one has to consider the spectrum of the Hamiltonian H c + . For small nonlinearity, that is for values of E J /E C < 50, the Hamiltonian (12) reduces to a Hamiltonian for two uncoupled harmonic oscillators described by the modes d + and d − with energies ω + − J + and ω + + J + , respectively. The eigenenergies in this situation are shown in figure 7 a). A microwave drive with frequency ω + − J + as depicted in figure 7 not only drives the transition from the ground state to the first excited state of the symmetric collective mode, |0 0⟩ cm → |1 0⟩ cm , but also all other transitions to higher excited states, |n 0⟩ cm → |n + 1 0⟩ cm . As a result, the steady state in this situation is always the coherent state, exhibiting a g (2) -value of 1. For slightly increased values of the nonlinearity that remain in the range U + < Γ c + , the system can still be described in terms of two weakly interacting collective modes. But the symmetric as well as the antisymmetric mode are subject to the nonlinearity and an intermode interaction, c.f. Hamiltonian (12). This can be seen in figure 8 a), where we plot the g (2) -values that deviate from the value of a coherent field. The g (2) -function shows anti-bunching regions for a microwave drive that is blue detuned with respect to the energies of the states (13a-c) and bunching regions for a red detuned microwave drive. To gain insight into the underlying physical principles in this situation we calculated the density c † +,2 c +,2 and the second-order moment c † +,2 c † +,2 c +,2 c +,2 by an iterative mean-field approach to solve the master equation (11) with the Hamiltonian written as in (12). Operator mean values of a single driven dissipative mode with Kerr nonlinearity can be computed exactly [34], and we extend this model in a mean-field way to incorporate the density-density coupling. With this method we get good agreement with the numerically exact values for the density in the last cavity and are able to compute values for the polariton density close to the system's eigenenergies (13a-c), where our numerical approach fails to converge. For details about the method please see Appendix A.
These results support our assertion that the system can be described by weakly interacting collective modes in the limit of small nonlinearities U + . In figure 8 a) numerically exact values are plotted as solid lines and values obtained by the above-mentioned mean-field method are plotted as dashed lines.
For strong nonlinearity U + and small intersite coupling J + , that is for values of E J /E C > 50, the c + polaritons become transmon excitations and the Hamiltonian H c + splits into two parts describing the first and the second transmon qubit, respectively. Here the collective modes d + and d − no longer decouple and the localized modes c 1 and c 2 become a more appropriate description of the system. The eigenenergy spectrum in this situation is shown in figure 7 b). The main difference to the spectrum without nonlinearity is that the microwave drive cannot be adjusted to drive multiple transitions. In order to drive the transition to the state |02⟩ s , for example, one has to adjust the microwave frequency to match half of the energy difference between the ground state and the 2-excitation state |02⟩ s , because it is a two-photon transition. Due to the anharmonicity of the eigenenergy spectrum no other transition can be driven. The difference between the microwave frequencies needed to drive the transitions from the ground state to |01⟩ s and to |02⟩ s amounts to U + /2, which is larger than the linewidth Γ c + . To get an estimate for the value of g (2) we simplify our model, assuming that the frequency of the microwave drive is adjusted such that it resonantly drives a transition between the ground state of our model, |00⟩ s , and some excited state |0n⟩ s . Provided the Rabi frequency Ω and the loss rates κ and γ are all small compared to the frequency separation between different resonance lines, the system can then be modeled by a two-level system consisting of the ground state of our model, |00⟩ s , and the excited state |0n⟩ s . In this situation the maximal population inversion one can reach in the steady state corresponds to ρ max = (1/2)(|00⟩⟨00| + |0n⟩⟨0n|), and the g (2) -value for this density matrix would be g (2) = tr[ρ max c † +,2 c † +,2 c +,2 c +,2 ] / (tr[ρ max c † +,2 c +,2 ])² = 2n(n − 1)/n², which is below one for a 1-excitation state and above one for every state containing more than two excitations. Therefore bunching areas arise if states containing more than two excitations are excited, and anti-bunching areas arise if only 1-excitation states can be excited and the photons pass the setup "one by one". In our Bose-Hubbard model the on-site nonlinearity is negative and hence all transition frequencies to states containing more than two excitations are red detuned with respect to transition frequencies to states containing only one excitation (13a-c). This is why bunching areas arise for a red detuned microwave drive and anti-bunching areas arise for a blue detuned microwave drive. If we adjust the microwave drive frequency for every value of E J /E C to match the eigenfrequency of the antisymmetric 1-excitation state, we get the transition from a perfectly uncorrelated field with g (2) = 1 to strongly correlated, anti-bunched field statistics with g (2) < 1, see figure 9. For a quantum phase transition of the ground state of the Bose-Hubbard Hamiltonian one would expect this transition as a consequence of the interplay of the intersite hopping J + and the on-site nonlinearity U + . For the driven dissipative system we observe that the interplay between the Rabi frequency of the microwave drive, Ω cos(θ), and the on-site nonlinearity U + determines the particle statistics. This can be seen in figure 9, where we plot the g (2) -function together with the intersite coupling, the on-site nonlinearity and the Rabi frequency of the microwave drive.
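Evaluating the two-level estimate g (2) = 2n(n − 1)/n² for the first few excitation numbers confirms the statement above (a minimal check, with no further assumptions):

for n in (1, 2, 3, 4):
    print(n, 2 * n * (n - 1) / n**2)
# -> 0.0, 1.0, 1.33..., 1.5 : anti-bunched only when a single-excitation state is addressed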
Summary
We have shown that a chain of capacitively coupled stripline resonators, each coupled to a transmon qubit, can be described by a Bose-Hubbard Hamiltonian for two species of polaritons. The validity of our approach has been checked for realistic parameters of the transmon qubits and stripline resonators and for low densities of polaritons. In a driven dissipative regime where a microwave source coherently drives the first cavity, one can selectively excite only one species of polaritons and investigate the properties of a driven dissipative Bose-Hubbard model. We calculated the density and the g (2) -function of the polaritons in the last resonator of a two-site setup and investigated their dependence on the microwave drive, the intersite coupling and the on-site nonlinearity. For vanishing nonlinearity the g (2) -function is approximately equal to unity, indicating a coherent field. With increasing nonlinearity, bunching and anti-bunching areas arise depending on the frequency of the microwave drive. For a microwave drive that is in resonance with a transition to a state with one excitation, the polaritons are anti-bunched. If, on the other hand, the microwave drive can resonantly excite states containing more than two excitations in a multi-photon transition, the polaritons become bunched. If we adjust the microwave drive frequency to match one of the system's single-excitation eigenenergies and compute the g (2) -function for different values of the on-site nonlinearity, intersite coupling and Rabi frequency of the microwave drive, we see a transition from coherent to anti-bunched field statistics. That is, the polaritons are uncorrelated for small nonlinearity and exhibit a transition to anti-bunched behaviour as the on-site nonlinearity becomes larger than the Rabi frequency of the microwave drive. All our findings could be explored in experiments based on readily available technology.

Figure 8. Plots of the g (2) -function, the density of polaritons c † 2 c 2 and the second-order moment c † 2 c † 2 c 2 c 2 in the second cavity for special values of E J /E C . For all plots the intersite coupling constant and the on-site nonlinearity are J 0 /ω r = 0.04 and E C /ω r = 0.04, and the decay rates of transmon qubit and stripline resonator are γ/ω r = 0.00008 and κ/ω r = 0.00004, respectively, but different Rabi frequencies of the microwave drive are applied: for a) Ω/ω r = 0.004 and for b) Ω/ω r = 0.001. Results obtained by numerical solution of the master equation are plotted as solid lines and results obtained by a mean-field approach with an exact single-site solution as dashed lines. Eigenenergies of the system without dissipation and driving are indicated by vertical dash-dotted lines. For E J /E C = 25 one can see clearly separated resonances for the symmetric and antisymmetric states d † ± |00⟩ and a two-photon resonance for the state d † + d † − |00⟩. The shape of the resonances at d † + |00⟩ and d † − |00⟩ is reproduced by the mean-field approximation and is therefore a single-mode feature. The two-photon resonance for the state d † + d † − |00⟩ is not reproduced by the mean-field approach, since it does not correctly incorporate the interactions between the d † + and d † − modes. For E J /E C = 125 multiple resonances arise, determined by the eigenenergies of the system without dissipation and driving.

Figure 9. A plot of the g (2) -function, the density of polaritons c † 2 c 2 and the second-order moment c † 2 c † 2 c 2 c 2 in the second cavity. The microwave drive frequency is chosen to match the transition from the ground state to the symmetric 1-excitation eigenstate. For all plots the intersite coupling constant and the on-site nonlinearity are J 0 /ω r = 0.04 and E C /ω r = 0.04, the decay rates for transmon qubit and stripline resonator are γ/ω r = 0.00004 and κ/ω r = 0.00008, respectively, and the Rabi frequency of the microwave drive is Ω/ω r = Γ/ω r = 0.00004. Results obtained by numerical solution of the master equation are plotted as solid lines and results obtained by a mean-field approach with an exact single-site solution as dashed lines. In resonance with the symmetric state, g (2) shows a transition from uncorrelated coherent-field particle statistics to anti-bunched correlated field particle statistics. The transition from coherent to anti-bunched is determined by the interplay of the Rabi frequency of the microwave drive and the on-site nonlinearity.
Design considerations for multi-terawatt scale manufacturing of existing and future photovoltaic technologies: challenges and opportunities related to silver, indium and bismuth consumption
To significantly impact climate change, the annual photovoltaic (PV) module production rate must dramatically increase from ≈135 gigawatts (GW) in 2020 to ≈3 terawatts (TW) around 2030. A key knowledge gap is the sustainable manufacturing capacity of existing and future commercial PV cell technologies imposed by scarce metals, and a suitable pathway towards sustainable manufacturing at the multi-TW scale. Assuming an upper material consumption limit of 20% of the 2019 global supply, we show that the present industrial implementations of passivated emitter and rear cell (PERC), tunnel oxide passivated contact (TOPCon), and silicon heterojunction (SHJ) cells have sustainable manufacturing capacities of 377 GW (silver-limited), 227 GW (silver-limited) and 37 GW (indium-limited), respectively. We propose material consumption targets of 2 mg W⁻¹, 0.38 mg W⁻¹, and 1.8 mg W⁻¹ for silver, indium, and bismuth, respectively, indicating significant material consumption reductions are required to meet the target production rate for sustainable multi-TW scale manufacturing in about ten years from now. The industry needs urgent innovation on screen printing technologies for PERC, TOPCon, and SHJ solar cells to reduce silver consumption beyond the expectations in the International Technology Roadmap for Photovoltaic (ITRPV), or the widespread adoption of existing and proven copper plating technologies. Indium cannot be used in any significant manufacturing capacity for PV production, even for futuristic 30%-efficient tandem devices. The current implementation of low-temperature interconnection schemes using bismuth-based solders will be limited to 330 GW of production. With half the silver-limited sustainable manufacturing capacity of PERC, the limited efficiency gains of SHJ and TOPCon cell technologies do not justify a transition away from industrial PERC, or the introduction of indium and bismuth limitations for SHJ solar cells. On the other hand, futuristic two-terminal tandems with efficiency potentials over 30% have a unique opportunity to reduce material consumption through substantially reduced series resistance losses.
Introduction
Approximately 25% of global greenhouse gas (GHG) emissions come from electricity and heat generation, with one of the main sources of CO 2 emissions being the burning of fossil fuels. 1 One critical approach to reducing GHG emissions is using cleaner and renewable energy sources such as solar energy, wind energy, geothermal energy, hydro energy, and biomass. To reduce the potential impact of fossil fuel usage on climate change, many countries have set targets for renewable energy penetration, for instance, 100% in Denmark, Switzerland, and the United Kingdom by 2050, 2 50% in Australia by 2030, 3 40% in India by 2030, 4 and 60% in China by 2050. 5 Notably, remarkable progress in the transition to renewable energy has already been made by some countries. For example, countries like Norway and Iceland have already achieved 100% of their electricity supply being produced from renewable energy only, such as hydro, geothermal and solar energy, 2 and 18 other countries have reached a level of 80%.
Every second, the amount of energy reaching the earth's surface from the Sun is enough to power humankind's energy requirements for approximately 2.7 hours. 6 Photovoltaic (PV) technologies have pronounced advantages in accessing the abundance of solar energy, predictable energy output based on the weather forecast, low land consumption, easy installation and maintenance, and low costs. Therefore, assuming a significant electrification of all energy sectors, using PV modules, with the direct conversion of sunlight into electricity, has great potential to play a central role in the future clean energy system. Although emissions occur during the manufacturing phase, due to little or no emissions during the operation phase, PV can greatly reduce the greenhouse gas emissions associated with electricity generation in the long term.
Historically, PV was born as an expensive technology to satisfy the need for energy in remote locations, for instance as high-efficiency devices for space applications. For terrestrial applications, the first commercially sold solar cell was priced at US$25 per cell with an efficiency of only around 10%. 7 Since then, a tremendous amount of effort has been put into developing new cell technologies and increasing cell efficiencies. To date, the average efficiency of the mainstream industrial passivated emitter and rear cell (PERC) technology has already reached 22.5-23%, [8][9][10] and an efficiency record for single-junction silicon solar cells of 26.7% was achieved by Kaneka et al. with an n-type silicon heterojunction (SHJ) solar cell with interdigitated back contacts. 11 Integrating another solar cell on a crystalline silicon solar cell to form a tandem structure, a record efficiency of 29.52% was demonstrated by Oxford PV. 12 Meanwhile, technological advancements and exponential growth in industrial scale have been dramatically reducing the manufacturing cost of solar modules by more than two orders of magnitude compared to that in 1980. Especially after 2008, the average selling price of commercial solar modules was reduced from US$4.12 W⁻¹ in 2008 to US$0.17 W⁻¹ in 2020, corresponding to a 24-fold reduction within 12 years. 13 A recent analysis by LAZARD estimates the levelized cost of energy (LCOE) of coal-fired power and utility-scale PV at US$65-159 MWh⁻¹ and US$31-42 MWh⁻¹, 14 respectively, demonstrating the great potential of PV as a cheap and sustainable replacement for the traditional fossil-fuel-based energy generation system.
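The cost figures quoted in this paragraph can be cross-checked with a few lines of arithmetic; the sketch below only re-uses the numbers given above, reading the LCOE unit as US$ per MWh.

module_2008, module_2020 = 4.12, 0.17                          # US$ per W
print(module_2008 / module_2020)                               # ~24-fold reduction in 12 years
lcoe_coal, lcoe_pv = (65, 159), (31, 42)                       # LAZARD LCOE ranges, US$ per MWh
print(lcoe_coal[0] / lcoe_pv[1], lcoe_coal[1] / lcoe_pv[0])    # utility PV roughly 1.5-5x cheaper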
In 2020, a total of 135 GW of PV modules was produced, which subsequently brought the cumulative installed capacity of PV to more than 756 GW, 10 accounting for about 4% of global electricity generation. 15,18,19 Historically, the PV industry has already exhibited the capability of fast growth in annual production capacity, with an average two-fold increase every three years. 10 However, the continued aggressive growth of the PV industry and its transition towards a major component of the global energy production system lead to a new concern about the availability of scarce elements being used for the manufacture of industrial solar cells and the deployment of photovoltaic modules in the field.
At a systems level, copper is required for cables and transformer windings in balance of system (BoS) components and for ribbons in cell interconnection in modules. The copper consumption of about 2800 kg MW⁻¹ in PV systems 20 is approximately twice that of nuclear, coal, or natural gas power plants. However, the copper consumption of solar is similar to that of on-shore wind and lower than that of off-shore wind, and of no significant concern for terawatt-scale manufacturing with an annual copper supply of more than 24.6 megatonnes, 21 particularly considering ongoing efficiency enhancements of solar panels. Aluminium is primarily used at the module level for aluminium framing, with a consumption of ≈9000 kg MW⁻¹ for typical 17% efficient modules. 22 With an even larger global aluminium supply of 130 megatonnes, 23 aluminium consumption in the PV industry also does not impose any significant material challenges. As another commonly used material in BoS components such as racking systems and transformers, steel has a high consumption level of around 30-45 tonnes per MW. 24 However, given the annual supply of 1800 megatonnes 25 and an average growth rate of 3-6% per year, the availability of steel also does not impose constraints on PV manufacturing at the TW scale.
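As a rough consistency check of the statement that these balance-of-system metals are not a bottleneck, the sketch below computes the share of the quoted annual supplies that would be consumed at the 2020 production level of 135 GW, using only the consumption figures given above (the steel figure uses the upper end of the quoted range).

production_MW = 135e3                                    # 2020 module production, MW
for name, kg_per_MW, supply_Mt in (("copper", 2800, 24.6),
                                   ("aluminium", 9000, 130),
                                   ("steel", 45e3, 1800)):
    use_Mt = kg_per_MW * production_MW / 1e9             # kg -> megatonnes
    print(name, round(100 * use_Mt / supply_Mt, 2), "% of annual supply")
# copper ~1.5%, aluminium ~0.9%, steel ~0.3% of the respective annual supplies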
The primary concern for photovoltaics is silver, due to its scarcity and widespread use in essentially all current implementations of industrial silicon solar cell technologies such as PERC, TOPCon, and SHJ. In addition, there are significant concerns for the use of indium if the manufacturing capacity of SHJ solar cells increases or for future tandem devices, and also for the use of bismuth in the low-temperature interconnection approach typical for SHJ solar cells.
In this work, we consider the impact of solar cell efficiencies and the physical geometries of metallic structures on the material consumption of silver, indium, and bismuth to assess the suitability of solar cell technologies for sustainable PV manufacturing at the terawatt scale. We then use the findings to highlight requirements for existing industrial solar cell technologies (PERC, TOPCon, and SHJ) and future implications for two-terminal (2T) tandem devices on Si-based bottom cells.
Global supply of silver, indium, and bismuth and industrial applications
The mass fraction of silver, indium, and bismuth in the earth's crust is estimated at 7.5 × 10⁻⁸, 2.5 × 10⁻⁷, and 8.5 × 10⁻⁹ kg kg⁻¹, respectively. 26 These values correspond to total material resources of approximately 2.1 × 10¹² tonnes, 6.9 × 10¹² tonnes, and 2.4 × 10¹¹ tonnes, respectively. However, realistically, only a certain fraction of these material resources can be considered as a usable reserve for the PV industry. This is because the proven reserve is based on the availability, accessibility, and feasibility of extracting the material both economically and technically.
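The quoted resource totals follow directly from the crustal mass fractions; the sketch below reproduces them, with the total mass of the Earth's crust (roughly 2.8 × 10²² kg) taken as an assumed round figure since it is not stated in the text.

M_crust_kg = 2.8e22                                           # assumed mass of the Earth's crust
for name, mass_fraction in (("Ag", 7.5e-8), ("In", 2.5e-7), ("Bi", 8.5e-9)):
    print(name, mass_fraction * M_crust_kg / 1e3, "tonnes")   # kg -> tonnes
# -> Ag ~2.1e12 t, In ~7.0e12 t, Bi ~2.4e11 t, in line with the values quoted above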
The recorded global silver (Ag) mineral reserve in 2019 was estimated at 560 kilotonnes. 27 During the past decade, the global supply level of silver has remained relatively stable, ranging between 2.8-3.0 × 10⁴ tonnes per year. Due to its high intrinsic value and excellent electroconductive quality, silver has a wide range of applications in modern society, such as silverware, jewellery, coins and medals, photography, industrial processes, and electronics, including the formation of high-quality contacts on solar cells. On the demand side, 'smarter' devices with more functions require a circuit design with increased complexity and, therefore, a higher silver consumption. For instance, a modern smartphone produced in 2012 has 1500-2700 mg of silver embedded per kg of circuit boards, compared to only 100-500 mg in a cellular phone in 2004. 28 In addition, all electric vehicles (EVs) and hybrid-electric vehicles (HEVs), as promising substitutes for conventional vehicles, consume 1-1.5 times more silver due to the high level of electrification. 29 Consequently, the large-scale deployment of EVs and HEVs, as a key component of fighting climate change, is expected to drive the total silver demand of the automotive sector from 1600 tonnes in 2019 (5% of global supply) to around 4500 tonnes by 2040 (15% of global supply), where almost half of the silver demand in the auto sector will be contributed by EVs and HEVs. 30 The aggressively increasing silver demand in these emerging industries will very likely raise concerns over the future availability and price of silver for PV and other applications.
Despite indium (In) being more abundant than silver, the usable fraction for indium (2.2 × 10⁻⁷%) is significantly lower than that of silver (2.7 × 10⁻⁵%). Indium is produced exclusively as a by-product of the processing of other metal ores, such as zinc smelting and refining, leading to a lower production cost than if it were produced by itself. 31 Therefore, the production capacity of the main product will impact the production rate and cost of indium. In 2019, the global indium reserve was estimated in the range of 15 000 tonnes 31 to 50 000 tonnes, 31,32 more than one order of magnitude lower than that of silver. In 2019, the total indium supply was 2100 tonnes (see Table 1), consisting of 968 tonnes from primary production and 1100-1200 tonnes from secondary production such as recovering and recycling. 31,33 On the demand side, more than 70% of indium is used in the production of indium tin oxide (ITO), which subsequently has broad applications in touch screens, flat-screen displays, and glass windows. The number of mobile phones and televisions is expected to continuously increase at a rate of 5-7% and 1.5-2% per year, reaching 24.2 billion 34 and 2.1 billion 35 by 2030, respectively. In addition, the demand for indium will be further increased as displays become larger. Indium is also frequently used to form alloys with other metals to make solder with a low melting temperature.
Bismuth (Bi) is one of the least toxic heavy metals but the least abundant of the three materials, by a factor of 9 and 30 lower than silver and indium, respectively. However, the global reserve for bismuth is estimated at 320 000 tonnes, approximately 57% of the silver reserve and 6-21 times larger than that of indium. The global production capacity of bismuth has dramatically increased by more than 3.5 times since 2000, especially during 2015-2016, when the production scale in China almost doubled. In 2019, a total of 21 000 tonnes of Bi was produced worldwide, of which more than 75% was contributed by China. 36 Bismuth has applications in a diverse set of industries such as pharmaceuticals, cosmetics, pigments, automotive, and fusible alloys. Due to its similar characteristics, bismuth is considered a promising non-toxic replacement for lead in various applications such as food processing equipment and ceramic glazes, 37 in response to growing environmental awareness and legislation prohibiting the use of lead. This is resulting in the development of new markets for bismuth, which are likely to increase demand. In addition, given that the majority of Bi is produced by a single country, large uncertainties and potential disruptions could occur in the global supply chain of Bi. Due to considerations of resource and environmental factors, the production capacities in some traditional major exporting countries such as Mexico and Bolivia are continuously decreasing, which is likely to increase the cost of Bi in the future.
A key concern for the PV industry with the use of silver, indium, and bismuth is that the expected duration of operation in the field for PV modules is 25 years. This creates a long period of delay before those scarce materials can be recycled and recovered from end-of-life PV modules. As such, although recycling of PV modules will be essential moving forward, it is of utmost importance to reduce material consumption in the first place to ensure sufficient materials remain for PV manufacturing at ever-increasing production capacities. Due to the significant reliance on silver by all existing mass-produced silicon solar cell technologies (PERC, TOPCon, and SHJ), the following sections are devoted to silver consumption. Subsequently, the use and limitations of indium and bismuth are discussed.
Silver consumption in silicon solar cell technologies
Industrial silicon solar cell technologies use silver in small amounts to form metal contacts that extract the photo-generated current from the solar cells. In 2020, an average industrially produced 21%-efficient solar cell used only 90-100 mg of silver. However, more than 25 billion cells were manufactured last year to achieve a production capacity of 135 GW, the equivalent of 20 Tuoketuo power stations (the largest coal-fired plant in the world [38]). This resulted in the PV industry using a total of 2860 tonnes of silver, 10.3% of the 2020 global silver supply [39]. The metal contacts are formed by screen printing of silver pastes, the mainstream metallization approach featuring in all major PV technologies such as PERC, TOPCon, and SHJ. The schematic diagrams of these cell structures can be found in Fig. 1. PERC is the industry-dominating technology with over 80% market share and represents a low-cost industrial implementation of the record 25% efficient PERL cell fabricated at UNSW in the 1990s [7]. The average efficiency in 2020 for PERC reported by ITRPV is ≈22.8%. Much higher efficiencies have been realized and reported by several companies, such as 23.39% by Trina Solar [40], over 23.95% by Jinko Solar [41], and a record efficiency of 24.06% by LONGi Solar [42]. For PERC, the use of silver in front busbars, fingers, and soldering pads allows a single print step to be used on the front surface for all key functions of metal/Si contact formation, electrical conduction in fingers/busbars, and solderability for interconnection. An image showing such an 'H-pattern' grid for silver contacts can be found in Fig. 1(e). The use of silver on the front of PERC, particularly for metal/Si interface formation, is favourable over the use of aluminium or copper. In particular, it avoids undesirable interactions of aluminium, which reacts with silicon at low temperatures (577 °C) to form a p-type region [43] that could punch through the shallow n-type emitter and shunt the device [44]. The use of copper-based pastes could lead to penetration of copper into the silicon, which can subsequently deteriorate the carrier lifetime [45], leading to degradation in cell performance. In addition, due to the relatively higher resistivity of both aluminium (35-50 μΩ cm) [46] and copper (≈30 μΩ cm) [47] screen-printing pastes compared to silver pastes (5-10 μΩ cm) [48,49], fingers with a much larger cross-sectional area would need to be formed with aluminium or copper pastes to provide the same conductivity as silver pastes, which would undesirably increase optical shading losses, particularly when used on the front surface.
On the rear side of PERC solar cells, cheaper and more abundant aluminium is used to form fingers and busbars for bifacial solar cells (or to cover the entire rear side for monofacial cells) as shown in Fig. 1(f). In this instance, interactions of aluminium with silicon are advantageously used to form an aluminium back-surface field (Al-BSF) at the contacted regions, as a simple and low-cost version of that implemented in the world-record PERL cell which reached 25% efficiency [7]. Due to the reduced incident illumination intensity on the rear surface, the restrictions of the metal coverage area for optical shading are relaxed. Consequently, much wider (≈100 μm wide) and more closely spaced aluminium fingers can be used to compensate for the lower conductivity of aluminium compared to silver, with an aluminium consumption of ≈200 mg in bifacial PERC solar cells. However, due to difficulties in soldering to aluminium, additional silver is required to form soldering pads on the rear side for interconnection. This is typically achieved using an Ag paste with 50-60% silver content by weight, compared to 80-90% silver content in the Ag pastes used on the front side [50]. As such, two printing steps are required for the rear surface. Overall, this results in the consumption of approximately 90-100 mg of silver per PERC solar cell fabricated on 166 × 166 mm² silicon wafers in 2020 [10], corresponding to a silver consumption of approximately 15.4 mg W⁻¹ (see Table 2).
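As a rough cross-check of how the per-cell silver figures translate into the mg W⁻¹ values quoted here and in Table 2, the following minimal sketch converts a per-cell silver mass into mg W⁻¹. The 1000 W m⁻² test irradiance and the specific 97 mg per-cell mass are our assumptions, chosen only to illustrate that ~90-100 mg on a 166 × 166 mm² wafer at ~22.8% efficiency lands near 15.4 mg W⁻¹.

```python
# Minimal sketch: convert per-cell silver usage to mg/W, as quoted in the text.
# Assumptions (not all stated explicitly above): standard test irradiance of
# 1000 W/m2 and that the full wafer area is active.

IRRADIANCE = 1000.0  # W/m2 at standard test conditions

def silver_mg_per_watt(mg_per_cell, cell_side_mm, efficiency):
    """Silver consumption in mg/W for a square cell of given side length."""
    area_m2 = (cell_side_mm / 1000.0) ** 2
    cell_power_w = area_m2 * efficiency * IRRADIANCE
    return mg_per_cell / cell_power_w

# 2020 PERC example from the text: 90-100 mg on a 166 x 166 mm2 wafer.
# With a cell efficiency of ~22.8% this reproduces the ~15.4 mg/W figure.
print(round(silver_mg_per_watt(97, 166, 0.228), 1))  # ~15.4 mg/W
```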
TOPCon and SHJ solar cells are generally considered promising candidates among academic and industry experts for next-generation high-efficiency industrial solar cells due to the use of 'passivating contacts', which overcome the efficiency limitations of conventional contact schemes such as those in PERC and PERL [51]. The highest efficiency for a tunnel-oxide passivated contact solar cell stands at 26.1% by Haase et al., also fabricated using a p-type wafer [52]. For this solar cell, however, both contacts were on the rear in an interdigitated structure (POLO-IBC). A recent result by Richter et al. achieved a record 26% efficiency for a solar cell with contacts on both surfaces [53]. This TOPCon solar cell was also fabricated using p-type wafers, slightly higher than the efficiencies achieved by the same group with n-type wafers at 25.8% [54]. Industrial TOPCon solar cells are fabricated on n-type wafers, with recent average efficiencies of 23.2% reported by ITRPV, while peak efficiencies as high as 25.25% have been reported by Jinko Solar [55]. For industrial n-type TOPCon solar cells, silver pastes are used on both front and rear surfaces, resulting in substantially higher silver consumption than PERC [10]. On the front, an Ag/Al paste (≈90% Ag by weight) is used to enable sufficient conductivity in fingers and busbars with a line resistivity of 5-10 μΩ cm to avoid excessive shading and resistive losses, while also ensuring the formation of high-quality ohmic contacts with the boron-diffused p-type emitters. For the n-type passivated contact on the rear of the device, specially designed silver pastes featuring more controllable etching rates are used to fire through the silicon nitride layer but avoid penetration through the polysilicon and tunnel oxide layers. Both of these pastes are fired at high temperatures, typically in a co-firing process. The estimated silver consumption for TOPCon in 2020 was 25.6 mg W⁻¹, approximately 66% higher than PERC (see Table 2).
The SHJ solar cell technology is responsible for the highest-efficiency silicon solar cell at 26.7%, fabricated on an n-type wafer with an interdigitated back-contact structure [56]. Industrial SHJ solar cells are also fabricated on n-type wafers but mostly feature screen-printed contacts on both surfaces. The average efficiency for industrial n-type SHJ solar cells is in the range of 23-24%, although efficiencies as high as 25.26% have been reported by LONGi Solar [57]. Industrial SHJ solar cells also use silver pastes for contacts on both surfaces. To avoid the severe deterioration of surface passivation quality that occurs at higher temperatures, the processing of SHJ solar cells is typically limited to temperatures below 200 °C [58]. As such, a low-temperature silver paste is required for both the front and rear contacts of SHJ solar cells, which is cured in the vicinity of 150-200 °C. Due to the restriction of low curing temperatures, SHJ silver pastes contain more silver particles and different solvents, additives, and curing agents than traditional silver pastes to ensure the proper formation and curing of contacts at low temperatures. Because of the low curing temperature, the low-temperature Ag pastes for SHJ solar cells tend to have a higher line resistivity (ρ_m) in the range of 10-20 μΩ cm [59], which is about a factor of two higher than the ρ_m of the high-temperature silver pastes typically used for PERC and TOPCon solar cells. However, significant progress has been made in improving the electrical properties of low-temperature cured Ag pastes, where a reduced line resistivity of 5-6 μΩ cm or even lower has been demonstrated [59]. Due to the need for silver contacts on both sides, the higher silver content within the pastes, and the relatively poor printability of such low-temperature pastes, more silver is required, such that the typical silver consumption for an SHJ solar cell is more than double that used for PERC (see Table 2).
As the efficiency of single-junction Si-based solar cells approaches the intrinsic limit of around 29% [60,61], multi-junction (tandem) devices, formed by stacking materials with different bandgaps to absorb light at different wavelengths in the solar spectrum, provide a promising pathway to surpass the efficiency limit imposed by single-junction devices. With a tandem structure, solar energy can be harvested and utilized more efficiently by reducing the thermalisation losses [62] from high-energy photons being absorbed in a small-bandgap material (e.g. UV light (>3.1 eV) being absorbed in silicon with a bandgap of 1.12 eV), or the transmission losses of photons with insufficient energy to excite materials with larger bandgaps. In this work, discussions on futuristic tandem solar cells will be focused on 2J&2T tandems fabricated on either PERC or SHJ bottom cells. For such tandem solar cells, it remains unclear what metallization technology will be used in a future mass-production environment. However, due to constraints on processing temperatures, it is likely that screen printing of low-temperature cured silver pastes will be more desirable and suitable than high-temperature co-fired pastes or evaporated metal contacts for the mass production of these tandem cells. The use of screen-printing metallization has been successfully demonstrated by Oxford PV on commercial-sized 2T perovskite/Si heterojunction tandem solar cells in their 100 MW pilot production line [64,65]. Therefore, the silver consumption in futuristic 2T tandem solar cells will also be assessed and discussed in this work, particularly given their unique current-voltage characteristics and opportunities to reduce silver consumption.
According to the 2021 ITRPV, over the next decade the cell efficiencies of PERC, TOPCon, and SHJ solar cells are expected to continuously improve alongside a gradual reduction in the silver usage per cell. Taking into account the expected cell efficiencies and silver consumption per cell, the silver consumption in mg W⁻¹ is expected to fall by 50-60% by 2031, which will substantially improve the material sustainability of PERC, TOPCon, and SHJ solar cells. However, TOPCon and SHJ are still expected to have a substantially higher silver consumption than PERC, by 63-68% (see Table 2).
The significantly higher silver consumption of TOPCon and SHJ solar cells compared with PERC greatly reduces the manufacturing capacity that can be sustained for a given percentage of the global silver supply used by the PV industry (see Table 2 and Fig. 2). Based on the 2020 cell efficiencies and silver consumption for PERC, TOPCon, and SHJ, each TW of annual production capacity for these technologies would consume 53.1%, 88.3%, and 116.9% of the 2019 global silver supply, respectively. Similarly, despite the lower projected silver consumption in 2031 for PERC, TOPCon, and SHJ, each TW of annual production for these cell technologies would still consume 29.3%, 47.6%, and 49.3% of the 2019 global silver supply, respectively.
To allow a PV manufacturing capacity of 3 TW per annum to fight climate change, Verlinden recently suggested that silver consumption must be reduced to below 5 mg W⁻¹ for all PV technologies to be sustainable [18], which is well below the ITRPV predictions for 2031, even for PERC. However, even at 5 mg W⁻¹, an annual production capacity of 3 TW would consume more than 50% of the current annual global silver supply. Considering the increasing silver demand from other industries, the fraction of the silver supply that the PV industry can sustainably use may in fact be much lower. The exact percentage of the global supply that the PV industry can use in the mid to long term is unclear, particularly when accounting for future PV recycling efforts and the expected lifespan of PV modules. However, given the current 25-30 year typical lifetime of commercial solar modules and the 20-30% growth rate of the industry, recycling and recovering silver from end-of-life modules is unlikely to provide significant relief of the pressure on silver supply in the short to mid term.
Fig. 2 shows that if the PV industry can sustainably use 20% of the 2019 global silver supply, this would correspond to a sustainable manufacturing capacity of 227 GW for TOPCon and 171 GW for SHJ, compared to 377 GW for PERC, based on 2020 efficiency and silver consumption levels. However, silver consumption has already fallen substantially over the last decade, by a factor of 5 from ≈90 mg W⁻¹. Such reductions in silver consumption are expected to continue into the future with ongoing technology development. With the reductions in silver consumption projected by ITRPV for 2031, the allowed manufacturing capacity would increase to around 700 GW for PERC, ≈420 GW for TOPCon, or ≈400 GW for SHJ.
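To make the arithmetic behind Fig. 2 explicit, the sketch below back-calculates the implied 2019 global silver supply from the "1 TW of PERC consumes 53.1% of supply" statement and then reproduces the 20%-of-supply capacities. The ≈29 000 tonne supply figure is inferred from the percentages above rather than quoted directly, so treat it as an assumption.

```python
# Sketch of the sustainable-capacity arithmetic behind Fig. 2 (values are
# inferred from the percentages quoted in the text, not independent data).

# 2020 silver consumption per technology, mg/W (PERC from Table 2; TOPCon and
# SHJ back-calculated from the 88.3% and 116.9% per-TW figures).
consumption_mg_per_w = {"PERC": 15.4, "TOPCon": 25.6, "SHJ": 33.9}

# Implied 2019 global silver supply: 1 TW of PERC (15.4 mg/W) = 53.1% of supply.
supply_tonnes = 15.4e12 * 1e-9 / 0.531  # mg -> tonnes; ~29,000 t
print(f"implied 2019 silver supply: {supply_tonnes:,.0f} t")

pv_share = 0.20  # assume PV may sustainably use 20% of the supply
for tech, mg_w in consumption_mg_per_w.items():
    budget_mg = pv_share * supply_tonnes * 1e9   # tonnes -> mg
    capacity_gw = budget_mg / mg_w / 1e9         # W -> GW
    print(f"{tech}: ~{capacity_gw:.0f} GW per year")  # ~377 / ~227 / ~171 GW
```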
As a result, the improvements in current screen-printing metallization technologies predicted by the ITRPV for 2031 are not sufficient to enable PV manufacturing at the TW or multi-TW level without using much more than 20% of the global silver supply, a level that is likely not sustainable. As shown, SHJ and TOPCon solar cells have approximately half the sustainable manufacturing capacity of PERC. Therefore, from a sustainability perspective, a transition to these technologies is not yet justified by the limited efficiency improvements that industrial TOPCon and SHJ solar cells offer over PERC. However, to allow a 3 TW production capacity of PV using only 20% of the global supply, regardless of technology, silver consumption needs to be below 2 mg W⁻¹. Against this 2 mg W⁻¹ target, the ITRPV-predicted silver consumption in 2031 for PERC, TOPCon, and SHJ solar cells is a factor of 4, 7, and 6 too high, respectively, to allow a 3 TW manufacturing capacity.
Apart from material sustainability, the LCOE of PV-generated electricity could be at risk due to the dependence on silver. For a typical industrial PERC solar cell, silver already contributes a large portion of the total manufacturing cost (US$0.075 per cell), corresponding to more than 60% of the non-wafer cell price and 6% of the total module cost [10]. Therefore, an increase in the silver paste price by a factor of two would increase the cost of a PV module by ≈6%. We have to expect that, in the next decade, if no replacement is found for silver in cell manufacturing, the total manufacturing cost of a solar cell and PV module will be strongly affected by the price of silver, which has been quite volatile in the last year.
Historically, the cost of capital equipment for manufacturing solar panels has been steadily decreasing at a rate of about 18% per year over the last decade, benefiting from the scale effect in the PV market, growing competition in the industry, and continuous technological development [10]. However, this trend does not apply to the price of silver and some other raw materials, where the law of supply and demand generally plays a central role in setting the price. Given the growing demand from all industries and the limited reserves and supply of silver, the supply-demand balance of silver will likely come under increasing pressure, which could drive up the price of silver and with it the manufacturing cost of a solar cell. Ironically, from a historical point of view, the biggest driving force behind silver price fluctuations so far appears to have been the huge volatility in the financial markets rather than the law of supply and demand, owing to the commodity attributes of silver. For example, the global financial crisis during 2008 to 2011 resulted in surging demand for silver as a hedge against investment risk, driving the price of silver from less than about 350 US$ kg⁻¹ up to almost 1760 US$ kg⁻¹ while industrial demand did not change significantly. A typical PERC solar cell consumes around 80-100 mg of silver and has a selling price of US$0.78 per cell, with silver corresponding to around 10% of that selling price. By contrast, a smartphone typically contains 200-300 mg of silver but has a much higher selling price of US$400-1500, so the cost of silver accounts for only 0.01-0.05% of the selling price. Consequently, solar cells have a far lower tolerance to fluctuations in the silver price without impacting overall cost. Therefore, the reliance on silver puts the LCOE of PV-generated electricity in a more vulnerable position, exposed both to possible long-term increases in the silver price driven by growing supply pressure and to unpredictable short-term volatility originating from the global financial market.
As a result, careful management of silver consumption within the PV industry will be critical for sustainable PV manufacturing and will also protect against potential silver price volatility in the future. The following section discusses the interdependencies between the physical geometry of screen-printed silver contacts, solar cell efficiency, and the corresponding silver consumption to assess the feasibility of existing and emerging technologies.
Physical constraints on silver reduction in screen-printed solar cells
The physical constraints on finger dimensions and geometry must be taken into account to ensure a feasible and realistic reduction of the silver consumption in fingers when heading towards more sustainable manufacturing practices for screen-printed solar cells. In this section, we derive the limits on finger geometries for both a shorter-term target of 5 mg W⁻¹ and a longer-term target of 2 mg W⁻¹.
The consumption of silver in screen-printed fingers can be understood simply in terms of finger spacing and cross-sectional area. An upper limit for the allowed silver consumption in fingers for a given cell technology is obtained for busbar-less interconnection technologies such as the SmartWire approach, whereby silver is only used for fingers. With the SmartWire technology, the conventional silver busbars and soldering tabs are replaced by copper wire coated with low-temperature solders such as tin-bismuth and supported by a polymer laminate sheet [66], thereby eliminating the silver usage associated with busbars and soldering tabs. The electrical contact between the copper wires and the underlying fingers is formed during the module lamination process in the vicinity of 130-170 °C by melting and re-flow of the low-temperature solder, without the need for the dedicated soldering step used in conventional interconnection approaches prior to lamination. In this context, limiting the silver consumption in finger regions to 5 mg W⁻¹ defines an allowed finger cross-sectional area for a given finger spacing and device performance. Fig. 3 shows the impact of finger spacing and cross-sectional finger area on the finger silver consumption for a 23.8% efficient PERC cell. Using the current 1.3 mm finger spacing of typical industrial PERC solar cells, the cross-sectional area must be reduced to less than 300 μm² to bring the finger silver consumption below 5 mg W⁻¹, compared to a current value of 500-600 μm². For TOPCon and SHJ solar cells, despite slight increases in efficiency, the need for silver fingers on both sides means the maximum allowable finger cross-sectional area for a given silver consumption is substantially smaller than for PERC. For 24.58% efficient TOPCon solar cells with a 1.5 mm finger spacing on both sides, the allowed cross-sectional area for fingers would be 170 μm², equating to a finger silver usage of 2.5 mg W⁻¹ on each of the surfaces. For the front and rear surfaces of 25.1% efficient SHJ solar cells, with finger spacings of 2 mm and 1 mm respectively, the allowed cross-sectional area for fingers would be even smaller, at 150 μm². With a more restricted finger silver usage of 2 mg W⁻¹, both TOPCon and SHJ solar cells would require the finger cross-sectional area to be reduced to 60-70 μm², compared to 120 μm² for PERC solar cells. On the other hand, the significantly increased efficiency potential of tandem solar cells naturally increases the allowed cross-sectional area for a given finger spacing. However, the largest part of the increase in the allowed cross-sectional area, to 270 μm² and 748 μm² for 2T tandem solar cells, comes from the increased front and rear finger spacings of 3 mm and 1.5 mm, respectively, as will be discussed in the following section on series resistance. With a total finger silver usage of 2 mg W⁻¹, tandems on SHJ and tandems on PERC could still allow reasonable finger cross-sectional areas of 108 μm² and 299 μm², respectively.
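The geometry argument above can be written down in a few lines. The sketch below estimates the allowed finger cross-sectional area for a given finger-silver budget; the effective density of silver in a fired finger (≈5 g cm⁻³, i.e. paste density × solids fraction) and the 210 × 210 mm² cell area are assumptions chosen to roughly reproduce the Fig. 3 values, so the outputs should be read as approximate.

```python
# Sketch of the finger-silver budget behind Fig. 3. The effective density of
# silver in a fired finger (paste density x solids fraction) is not stated in
# the text; ~5 g/cm3 is our assumption and reproduces the quoted numbers only
# approximately.

EFF_AG_DENSITY = 5.0e-12   # g per um^3  (5 g/cm3 expressed per um^3)
IRRADIANCE = 1000.0        # W/m2

def allowed_finger_area_um2(target_mg_per_w, finger_spacing_mm,
                            efficiency, cell_side_mm=210.0, sides=1):
    """Allowed finger cross-sectional area (um^2) for a target finger
    silver consumption, assuming uniform fingers spanning the full cell."""
    cell_power_w = (cell_side_mm / 1000.0) ** 2 * efficiency * IRRADIANCE
    ag_budget_g = target_mg_per_w * cell_power_w / 1000.0
    n_fingers = cell_side_mm / finger_spacing_mm
    total_length_um = sides * n_fingers * cell_side_mm * 1000.0
    return ag_budget_g / (EFF_AG_DENSITY * total_length_um)

# PERC example: 5 mg/W in fingers, 1.3 mm spacing, 23.8% efficiency
print(round(allowed_finger_area_um2(5, 1.3, 0.238)))   # ~300 um^2 (cf. Fig. 3)
# TOPCon: 5 mg/W split over two sides, 1.5 mm spacing, 24.58% efficiency
print(round(allowed_finger_area_um2(5, 1.5, 0.2458, sides=2)))  # ~170-190 um^2
```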
Using the PERC structure as the bottom cell in a tandem also presents a unique opportunity to retain Al fingers on the rear side and thus reduce silver consumption. However, this largely depends on the configuration of the top cell and the choice of interconnection layers: Al fingers can only be used when the n-type diffused emitter of the PERC cell faces the top cell. Otherwise, silver fingers are still required on both sides of the tandem device, leaving no significant advantage for tandem on PERC over tandem on SHJ in terms of silver consumption. In this work, we assume Al fingers and busbars are used on the rear side of tandem-on-PERC solar cells.
The choice of the optimal finger spacing is essentially a trade-off between series resistance losses and optical shading losses, in which a larger finger spacing leads to reduced optical shading but increased series resistance losses contributed by finger resistance, lateral resistance within the silicon or conducting layers, and contact resistance. As a result, the trend towards more lightly doped front emitters in PERC and TOPCon solar cells, together with reductions in finger width enabled by ongoing screen-printing development, will very likely lead to a continuously reduced finger spacing compared to current industrial solar cells. With a smaller finger spacing, the finger cross-sectional area that can be tolerated for a given finger silver consumption becomes even smaller. For instance, as shown in Fig. 3, the use of a 1 mm finger spacing instead of 1.3 mm in PERC solar cells reduces the allowable finger cross-sectional area from 300 μm² to 230 μm² for a finger silver usage of 5 mg W⁻¹.
For conventional interconnection technologies, extra silver is required for busbars and soldering pads for the interconnection of cells. This means that, to achieve a target value of total silver consumption for the device, the silver consumption in fingers must be reduced further to make room for the silver required in busbars and solder pads. In the case of 12 busbars (12BB) per solar cell and 18 soldering pads per busbar, 3.7-4.1 mg W⁻¹ is required for the busbar and tabbing regions of PERC and tandem-on-PERC solar cells, and 4.2-5.0 mg W⁻¹ for TOPCon, SHJ, and tandem-on-SHJ solar cell technologies (see Table 3). This would reduce the allowed cross-sectional area of fingers to a challenging 52 μm² for PERC. Tandem on SHJ would be restricted to an even more challenging 41 μm² due to the need for silver busbars and tabs on both surfaces. For tandem on PERC, however, a much more reasonable finger cross-sectional area of 200 μm² could be tolerated. Due to a busbar and tab silver consumption of almost 5 mg W⁻¹, the option of using a 12BB configuration with silver in all finger, busbar, and tabbing regions of SHJ and TOPCon solar cells is clearly unfeasible at the TW scale with a target total silver consumption of less than 5 mg W⁻¹. If the limit is reduced to 2 mg W⁻¹, no such technology is feasible with silver used in all finger, busbar, and tabbing regions.
On the other hand, if non-silver busbars such as copper or aluminium are used, the silver consumption constraint on fingers could be relaxed while still ensuring compatibility with standard soldering techniques through the use of silver tabbing regions. In this instance, 2.1-2.4 mg W⁻¹ is used for the tabbing regions of PERC and tandem-on-PERC devices, and 1.3-1.5 mg W⁻¹ for TOPCon, SHJ, and tandem-on-SHJ solar cells. A target of 5 mg W⁻¹ for the entire device would then limit the allowed finger cross-sectional area to 160 μm² for PERC, 100-120 μm² for SHJ and TOPCon, and a more manageable 200-440 μm² for tandem on SHJ and tandem on PERC. If reduced to 2 mg W⁻¹ for the total device, no such technology appears feasible with a 12BB design, even tandem on PERC, with an allowed cross-sectional area of 43 μm².
A summary of the allowable finger cross-sectional areas of the various solar cell structures in the different scenarios can be found in Fig. 4, in which the shaded region represents the cross-sectional areas that we consider to be technologically unfeasible or very challenging with existing screen-printing technologies, which we take as below 100 μm². With the smallest finger width currently demonstrated with screen printing being 20 μm [67], a cross-sectional area of less than 100 μm² would essentially require the average finger height to be reduced to less than 5 μm. Given the typical height of textured pyramids of 1-3 μm, such a low printed height raises significant concerns about the printability and reliability of such fingers.
With a silver consumption limited to 5 mg W⁻¹ for the whole device, the use of 12 silver busbars, as in current industrial solar cells, cannot be tolerated, as using silver busbars would reduce the allowable finger cross-sectional area to well below 100 μm² for PERC, TOPCon, SHJ, and tandem-on-SHJ solar cells, as shown in Fig. 4. One notable exception is the tandem-on-PERC solar cell, for which 5 mg W⁻¹ of silver could be sufficient for silver fingers, busbars, and tabs. For a more restricted silver consumption of 2 mg W⁻¹, neither silver busbars nor tabs can be used in any of these cell structures at the current silver laydown in busbar and tabbing regions if silver fingers are also used. In addition, even if all 2 mg W⁻¹ of silver were used in fingers, TOPCon and SHJ solar cells would still require the finger cross-sectional area to be reduced to around 60 μm², and the allowable finger cross-sectional area of PERC and tandem-on-SHJ is only slightly larger than 100 μm².
An area of critical research will be understanding the impact of greatly reduced cross-sectional areas of screen-printed fingers on the performance yield and printing reliability of solar cells in mass production. A recent study by Chen et al. indicated that for a 5-busbar design with 155 fingers, an optimal cross-sectional area of 300 μm² should be targeted, below which the efficiency would decrease [68]. However, this number can likely be reduced for a higher number of busbars, such as with the multi-busbar (MBB) technology currently gaining popularity in the industry, in which 9 (or even more) narrow busbars are used with small soldering tabs to replace the traditional 3-busbar or 5-busbar configuration [69]. In addition, state-of-the-art stencil printing in the laboratory [68] has achieved a cross-sectional area of approximately 200 μm² for a finger width of 20 μm, which is well above the allowed cross-sectional area for many of the configurations presented in Fig. 4.
Silver consumption can also be considered using parameters such as the coverage area and average printed height. Fig. 5 shows the relationship between the coverage area, the printed height, and the silver consumption in a typical industrial PERC solar cell. Here it is assumed that all parts of the device (i.e., fingers, busbars, and solder tabs) have the same printed height. An upper limit of the front metal coverage area for PERC can be assumed for the case of 35 μm wide fingers with a finger spacing of 1 mm, with silver also used for busbars and tabs in a 12BB design. In this case, the upper limit of the coverage area is 6.13%, and the average printed height must be below 4.7 μm or 1.9 μm to limit the front-surface silver to 5 mg W⁻¹ or 2 mg W⁻¹, respectively. On the other hand, a lower limit for the coverage area with continuous silver fingers can be taken as 20 μm wide fingers, as recently demonstrated in the laboratory [70], with a 1.3 mm finger spacing in conjunction with a busbar-less design. In this instance, the lower limit of the coverage area is 1.54%, and the average printed height must be below 15.2 μm and 6.1 μm for a total front-surface silver consumption of 5 mg W⁻¹ and 2 mg W⁻¹, respectively. For SHJ and TOPCon, which require silver on both surfaces, 35-40 μm wide fingers and the existing configuration of 12 busbars with soldering tabs result in a coverage area of 2.15% and 3.48% on the front and rear surfaces of SHJ, and 3.35% on both surfaces of TOPCon. As such, to limit the total silver consumption to 5 mg W⁻¹ and 2 mg W⁻¹, the allowed printed height is below 3.1 μm and 1.2 μm for SHJ, and 3.4 μm and 1.4 μm for TOPCon, respectively. The requirement for substantially reduced printed heights will likely raise significant concerns regarding the printability and reliability of such fingers in a mass-production environment, especially as the printed height approaches or falls below the height of the textured pyramids. Although the minimum printed height that can be tolerated in mass production remains unknown and will be an area of critical research, reducing the printed height will likely increase the chance of broken fingers and of damage to screens with thinner emulsion.
Fig. 4: Allowable finger cross-sectional area for various solar cell technologies with different finger silver consumption. The assumed cell area is 210 × 210 mm². Assumed efficiencies of PERC, TOPCon, SHJ, tandem on SHJ, and tandem on PERC are 23.83% [9], 24.58%, 25.11%, 29.15%, and 27.70%, respectively. Filled circles: total 5 mg W⁻¹ silver consumption with silver used in fingers, busbars, and tabs. Filled triangles: total 5 mg W⁻¹ consumption with silver used in fingers and tabs. The hashed region indicates an allowable finger cross-sectional area of less than 100 μm².
If the coverage area could be reduced to 1%, a substantial increase in the printed height would be allowed. Table 4 summarizes the coverage area for the different solar cell structures and its breakdown into fingers, busbars, and tabs. As shown, the busbars and tabs of a 12BB structure account for 0.57% of the coverage area in PERC and 1.14% in SHJ/TOPCon, which rules out the option of using silver busbars and tabs if the coverage area were limited to 1%. However, even with busbar-less interconnection, a 1% finger coverage area would require the finger width to be reduced to less than 13 μm for PERC with 1.3 mm finger spacing, 7.5 μm for TOPCon with 1.5 mm finger spacing, and 6.7 μm for SHJ with 2 mm and 1 mm finger spacing on the front and rear, well below the minimum finger width currently achieved for continuous silver fingers with screen printing in industry or in the laboratory. Therefore, innovation in the finger pattern, such as the use of intermittent silver fingers, or the development of new printing technologies, is required to achieve a finger coverage area of 1%. Alternatively, a lower metal coverage area, at the same print height, would enable significant silver savings. For example, with a total coverage area of 1% and an average print height of 5 μm, the silver consumption for the front surface of PERC would be only 1.1 mg W⁻¹, providing scope for innovation to enable sustainable TW manufacturing of screen-printed PERC solar cells without the need to transition to alternative metallization technologies.
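The coverage-area view of silver consumption used in Fig. 5 is also easy to sketch. As before, the effective density of fired silver (≈5 g cm⁻³) and the exact cell parameters are our assumptions, so the numbers are indicative only.

```python
# Sketch relating metal coverage area and average print height to silver
# consumption (the Fig. 5 relationship). The effective density of fired
# silver (~5 g/cm3) and the cell parameters are illustrative assumptions.

EFF_AG_DENSITY_G_CM3 = 5.0
IRRADIANCE = 1000.0  # W/m2

def silver_mg_per_w(coverage_fraction, height_um, efficiency,
                    cell_side_mm=210.0):
    """Silver consumption (mg/W) for a given coverage area and print height."""
    area_cm2 = (cell_side_mm / 10.0) ** 2
    volume_cm3 = area_cm2 * coverage_fraction * (height_um * 1e-4)
    mass_mg = volume_cm3 * EFF_AG_DENSITY_G_CM3 * 1000.0
    power_w = (cell_side_mm / 1000.0) ** 2 * efficiency * IRRADIANCE
    return mass_mg / power_w

# 1% coverage at 5 um print height on a ~23% efficient PERC front -> ~1.1 mg/W
print(round(silver_mg_per_w(0.01, 5.0, 0.23), 2))
```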
Impact of silver reduction on finger series resistance losses
One of the key functions of silver in all industrial solar cells is to conduct electricity along the fingers to the busbars for current extraction. In general, solar cell fingers are essentially uniform along their length in terms of width ($W_f$) and height ($t_f$), and a uniform spacing ($S_f$) is provided between fingers, as shown in Fig. 6. In this case, the differential resistive power loss in a finger is governed by eqn (1), where $x$ and $\mathrm{d}x$ are the position and width of the differential element along the length of the finger, $J_{\mathrm{mp}}$ is the current density of the cell at the maximum power point, $S_f$ is the finger spacing, and $\rho_m$ is the line resistivity of the finger:

$$\mathrm{d}P_{\mathrm{loss,finger\,resist}} = \left(J_{\mathrm{mp}} S_f x\right)^2 \frac{\rho_m}{W_f t_f}\,\mathrm{d}x. \qquad (1)$$

By integrating eqn (1) along a finger segment of length $S_{\mathrm{BB}}/2$ (half the busbar spacing), the absolute and relative power losses from the finger series resistance can be expressed as eqn (2) and (3), respectively:

$$P_{\mathrm{loss,finger\,resist,abs}} = \frac{J_{\mathrm{mp}}^2 S_f^2 \rho_m S_{\mathrm{BB}}^3}{24\,W_f t_f}, \qquad (2)$$

$$P_{\mathrm{loss,finger\,resist,rel}} = \frac{P_{\mathrm{loss,finger\,resist,abs}}}{J_{\mathrm{mp}} V_{\mathrm{mp}} S_f S_{\mathrm{BB}}/2} = \frac{J_{\mathrm{mp}}}{V_{\mathrm{mp}}}\cdot\frac{\rho_m S_f S_{\mathrm{BB}}^2}{12\,W_f t_f}. \qquad (3)$$

Both the absolute and relative power losses from finger series resistance exhibit an inverse linear dependence on the cross-sectional area of the fingers ($W_f \times t_f$). With increased busbar spacing ($S_{\mathrm{BB}}$), the current has to travel a longer distance along the fingers, leading to higher finger resistance losses. The use of a larger finger spacing for a given finger cross-sectional area also increases finger resistance losses, because each finger then collects and transports more current.

In this form, however, the dependence of finger resistance losses on silver consumption is not apparent. The silver mass $M_{\mathrm{Ag}}$ (in mg) contained in the fingers of a solar cell is given by eqn (4), where $\rho_f$ is the mass density of the fingers, $f_{\mathrm{Ag}}$ is the fraction of solid Ag in the fingers, and $W_{\mathrm{cell}}$ is the cell width:

$$M_{\mathrm{Ag}} = \rho_f f_{\mathrm{Ag}} \frac{W_{\mathrm{cell}}}{S_f} W_{\mathrm{cell}} W_f t_f. \qquad (4)$$

The busbar spacing can also be expressed in terms of the cell width and the number of busbars ($N_{\mathrm{BB}}$):

$$S_{\mathrm{BB}} = \frac{W_{\mathrm{cell}}}{N_{\mathrm{BB}}}. \qquad (5)$$

A new expression for the relative finger resistance power loss, eqn (6), is obtained by combining eqn (3)-(5):

$$P_{\mathrm{loss,finger\,resist,rel}} = \frac{J_{\mathrm{mp}}}{V_{\mathrm{mp}}}\cdot\frac{\rho_m \rho_f f_{\mathrm{Ag}} W_{\mathrm{cell}}^4}{12\,N_{\mathrm{BB}}^2 M_{\mathrm{Ag}}}. \qquad (6)$$

A key conclusion from eqn (6) is that, for uniform fingers, as is essentially the case in industrial silicon solar cells, the relative power loss from finger series resistance is fully determined by the number of busbars, the line resistivity, and the total mass of silver used in the fingers, $M_{\mathrm{Ag}}$. That is, an identical consumption of a given paste (i.e., identical $M_{\mathrm{Ag}}$ and line resistivity) and the same number of busbars result in the same relative power loss from finger series resistance, regardless of finger spacing and geometry (cross-sectional area). As such, reducing the finger silver consumption by 50% doubles the relative finger series resistance power loss. The relative finger resistive loss also has an inverse square dependence on the number of busbars, which favours interconnection technologies with a higher number of busbars (e.g., MBB technology) as an effective way to counteract the increased finger resistive losses caused by reductions in finger silver consumption. For example, the transition from a 9BB to an 18BB configuration would allow a reduction in the finger silver consumption by a factor of four without increasing finger resistive losses. Fig. 7 shows the relative power loss from the front finger series resistance of a typical PERC solar cell, assuming an efficiency of 23.8%, as a function of the silver consumption in the fingers and the number of busbars.
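The scalings stated above follow directly from eqn (6). The sketch below is a minimal implementation of that equation; the paste properties (ρ_m, ρ_f, f_Ag) and the operating point are illustrative assumptions, so only the relative comparisons in the final two lines should be taken at face value.

```python
# Sketch of eqn (6): relative finger series-resistance loss as a function of
# the finger silver mass and the number of busbars. Paste properties
# (rho_m, rho_f, f_Ag) and the operating point are illustrative assumptions.

def finger_loss_rel(m_ag_mg, n_bb, jmp_a_cm2=0.0385, vmp_v=0.60,
                    rho_m_ohm_cm=8e-6, rho_f_g_cm3=7.0, f_ag=0.9,
                    w_cell_cm=21.0):
    """Relative finger resistance loss (fraction) from eqn (6),
    assuming uniform fingers across the full cell width."""
    m_ag_g = m_ag_mg / 1000.0
    return (jmp_a_cm2 / vmp_v) * rho_m_ohm_cm * rho_f_g_cm3 * f_ag \
        * w_cell_cm ** 4 / (12.0 * n_bb ** 2 * m_ag_g)

# The scalings stated in the text follow directly:
base = finger_loss_rel(m_ag_mg=80.0, n_bb=9)
print(finger_loss_rel(40.0, 9) / base)    # 2.0  -> halving silver doubles loss
print(finger_loss_rel(20.0, 18) / base)   # 1.0  -> 18BB tolerates 4x less silver
```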
Another striking feature of eqn (6) is the dependence of finger resistance losses on the J_mp/V_mp ratio, which favours solar cell technologies with high-voltage, low-current-density output. Table 5 lists the cell performance and J_mp/V_mp ratios for a range of solar cell technologies, including PERC, TOPCon, SHJ, and tandems. As shown, TOPCon and SHJ allow a reduction in the J_mp/V_mp ratio of ≈10% and 8-15% compared to PERC, respectively. As such, assuming an identical grid design with the same line resistivity, the resistive losses of TOPCon and SHJ will be 10% and 15% lower than PERC, respectively. However, the most noticeable reduction in the J_mp/V_mp ratio comes from tandem solar cells. Specifically, tandem devices are composed of two solar cells made of materials with different bandgaps. The top cell has a larger bandgap (ideally in the range of 1.6-1.8 eV [71,72]) and absorbs the shorter-wavelength part of the solar spectrum, while the bottom cell has a smaller bandgap (ideally in the range of 0.9-1.2 eV [71,72]) to absorb the longer-wavelength part of the solar spectrum. Example J-V curves and external quantum efficiency (EQE) of an industrial PERC solar cell and a 2J&2T tandem solar cell [73] can be found in Fig. 8. Because the two cells are connected in series in the tandem device, and each absorbs a photon-weighted half of the solar spectrum, the generated current is half that of a single-junction silicon solar cell. In addition, due to the series connection of the two cells and the fact that the bandgap, and therefore the voltage, of the top cell is almost twice that of the bottom cell, the output voltage of such a 2J&2T tandem solar cell is increased by a factor of around 3 compared to a typical single-junction silicon solar cell, leading to a 5-6 times reduction in the J_mp/V_mp ratio, as also shown in Table 5. Therefore, 2T tandem solar cells are expected to have substantially reduced finger series resistance losses and could provide significant scope for reducing silver consumption in fingers, and hence improved sustainability compared to single-junction solar cells.
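A short numerical illustration of the J_mp/V_mp argument follows. The operating-point values below are typical-order assumptions (Table 5 itself is not legible in this extraction), used only to show how a roughly halved current and tripled voltage yield the quoted 5-6× reduction in J_mp/V_mp.

```python
# Illustrative J_mp/V_mp comparison between a single-junction PERC cell and a
# 2T tandem. The operating points below are assumed typical values, not the
# Table 5 entries, and serve only to demonstrate the ~5-6x reduction.

perc   = {"jmp_mA_cm2": 39.0, "vmp_V": 0.61}   # assumed single-junction PERC
tandem = {"jmp_mA_cm2": 19.5, "vmp_V": 1.85}   # assumed 2J&2T tandem

ratio_perc = perc["jmp_mA_cm2"] / perc["vmp_V"]
ratio_tandem = tandem["jmp_mA_cm2"] / tandem["vmp_V"]
print(round(ratio_perc / ratio_tandem, 1))  # ~6x lower J_mp/V_mp for the tandem
```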
Fig. 9 shows the relative power loss from finger series resistance as a function of the J_mp/V_mp ratio and the bulk resistivity of the metal paste, assuming a constant number of busbars (12BB) and a fixed cross-sectional finger area and finger spacing of 640 μm² and 1 mm, respectively. Bands of J_mp/V_mp ratios are shown for PERC, TOPCon, SHJ, and tandem devices according to the I-V properties in Table 5, along with a range of values for the bulk resistivity of typical screen-printed Ag, Al, and Cu pastes and the bulk resistivity of pure Ag. Cell technologies with a lower J_mp/V_mp ratio not only offer more scope for silver reduction and lower finger resistance losses but also have a better tolerance to materials with higher line resistivity. For example, assuming the same finger spacing, the slightly lower J_mp/V_mp ratio of TOPCon and SHJ solar cells could allow a 10-15% increase in the finger line resistivity without increasing the finger resistance losses compared to PERC solar cells. As for tandem solar cells, finger resistance losses similar to those of PERC solar cells with existing Ag pastes could be achieved on tandem devices with much more resistive but low-cost and abundant Cu pastes, enabling an additional pathway for reducing silver consumption in tandem solar cells.
The relative finger resistance power losses as a function of finger silver consumption for the various cell structures are shown in Fig. 10. It should be noted that for TOPCon, SHJ, and tandem-on-SHJ solar cells, an even distribution of silver between the front and rear fingers is assumed, which makes the calculated finger resistance loss values a lower limit for a given total finger silver consumption. In addition, the values for PERC and tandem on PERC only account for resistance power losses from the front silver fingers. Nevertheless, owing to the lower J_mp/V_mp ratio and the larger finger spacing, we estimate that tandem on SHJ will not only have significantly lower finger silver consumption than existing TOPCon and SHJ cells, due to the much larger finger spacing used, but also much lower finger resistance losses. For tandem-on-PERC solar cells, we expect that a finger silver consumption of less than 5 mg W⁻¹ can already be achieved with current industrial screen-printing technologies, with a finger width of 40 μm, a finger height of 16 μm, and a finger spacing of 3 mm. In addition, despite a finger silver consumption of only 4.3 mg W⁻¹, the finger resistance loss in such tandem solar cells is estimated to be around 0.07%rel, which is 3 times lower than that of current industrial PERC solar cells (0.21%rel) while using only half the finger silver. A summary of the estimated relative finger series resistance power losses for the different technologies is given in Table 6 for total silver consumptions of 5 mg W⁻¹ and 2 mg W⁻¹. As shown, even if extremely small cross-sectional areas or print heights were technically feasible, the lower silver consumption in fingers for the various configurations would result in prohibitively high series resistance power losses. This will likely place strict limits on the lowest silver usage that can be allowed for a given device.
Table 5: Reported efficiency (η), open-circuit voltage (V_OC), short-circuit current density (J_SC), fill factor (FF), current density (J_mp) and voltage (V_mp) at the maximum power point, and the ratio J_mp/V_mp for different cell technologies.
For SHJ and TOPCon solar cells, even with a silver-free interconnection scheme (no silver in busbars or tabbing regions), the relative power loss will be in the range of 1.2-1.3% and 3.1-3.4% for total silver consumption targets of 5 mg W⁻¹ and 2 mg W⁻¹, respectively. If silver is also used in the tab regions, these values increase to 1.7-1.9% and 11.7-13.6%, respectively. Such values would be prohibitively high for solar cells. The transition towards silver-free interconnection schemes with a higher number of busbars or wires can effectively reduce the losses from finger resistance in TOPCon and SHJ solar cells; however, a minimum of 27 or 75 wires would be required, for target total silver consumptions of 5 mg W⁻¹ and 2 mg W⁻¹ respectively, to maintain the same finger resistance losses as current industrial cells with 12 busbars. As for PERC solar cells, a total silver consumption of 5 mg W⁻¹ not only rules out the use of silver in all fingers, busbars, and tabs from the physical-constraints perspective discussed in the previous section; the finger resistance loss would also increase significantly from 0.21% to 2.11%, which would lead to a ≈0.5%abs efficiency loss with the current 12-busbar configuration.
Among all cell technologies, tandem on PERC exhibits the greatest potential for achieving both low silver consumption and low finger resistance losses. With 5 mg W⁻¹ of silver used in fingers, busbars, and soldering tabs, a very low finger resistance loss of 0.22%rel can still be expected for tandem-on-PERC solar cells. However, technical challenges likely remain regarding the integration of the high-temperature metallization on the rear of PERC with the low-temperature requirements of many top cells such as perovskites.
Prospects for silver reduction
Table 6: Summary of the estimated finger silver consumption allowed by the 5 mg W⁻¹ or 2 mg W⁻¹ targets and the corresponding finger resistance losses of the different cell technologies. Note: the power-loss values for PERC and tandem on PERC only include losses from the front silver fingers, while losses from both front and rear Ag fingers are included for TOPCon, SHJ, and tandem on SHJ. Assumed efficiencies: PERC 23.83% [9], TOPCon 24.58% [124], SHJ 25.11% [127], tandem on SHJ 29.15% [73], tandem on PERC 27.70%. The finger silver usage (mg W⁻¹) and finger R_s loss (%rel) entries are not recoverable from the extracted text.

Considering the above physical limitations and the impact of silver reduction on finger series resistance power losses for screen-printed solar cells, the development and deployment of novel screen-printing methods to reduce silver consumption, together with alternative silver-free metallization and interconnection technologies, must be accelerated to enable sustainable manufacturing at the TW scale. For screen-printed solar cells, the MBB technology as implemented today, with 12 busbars on a 210 mm solar cell, will not be feasible for manufacturing PERC, TOPCon, or SHJ at the TW scale, because the busbar and tabbing regions alone consume more than 4 mg W⁻¹ of silver for all of them. One option is to reduce the number of busbars, which is normally undesirable in a solar cell due to increased finger resistance losses. However, at a strictly limited silver consumption level, reducing the number of busbars (assuming unchanged busbar width) allows more silver to be used for the fingers, which may in fact lead to lower finger resistance. In addition, the maximum allowable finger cross-sectional area is also increased by reducing the number of busbars, which improves the reliability and printability of such fingers in a mass-production environment. For instance, if the number of busbars in a tandem-on-SHJ solar cell were reduced from 12 to 9 on a 210 mm cell (assuming unchanged busbar width), the allowable finger silver consumption would increase substantially from 0.76 mg W⁻¹ to 1.85 mg W⁻¹ for a total silver consumption of 5 mg W⁻¹. Subsequently, a lower finger resistance loss of 1.01%rel and a more manageable finger cross-sectional area of 98.5 μm² would be allowed, compared with a finger resistance loss of 1.65%rel and a finger cross-sectional area of 41.3 μm² if 12 busbars are assumed. Alternatively, tandem solar cells can tolerate greatly reduced paste conductivities, which makes it easier to use non-silver fingers and busbars.
The development and deployment of non-silver busbars (e.g., Al or Cu) or busbar-less technologies must be explored for their potential to reduce silver consumption in conventional busbar and tabbing regions, provided that they do not introduce additional material limitations. However, even with all silver used only for fingers, achieving the long-term target of 2 mg W⁻¹ will still be challenging with the finger designs currently used in the industry, especially for TOPCon and SHJ solar cells. A finger silver consumption of 2 mg W⁻¹ would only allow a finger cross-sectional area of 120 μm² for PERC and around 60 μm² for both TOPCon and SHJ, compared with 500-600 μm² in current industrial solar cells. In addition, the equation linking silver consumption directly to the relative power loss from finger series resistance highlights that such a dramatic reduction in finger silver consumption will lead to substantially higher finger resistance losses, with a 4-fold increase expected for PERC and ≈10-fold for TOPCon and SHJ solar cells. With this in mind, we cannot rely simply on pure silver fingers for the conduction of carriers to the busbars. Alternative materials or finger geometries and patterns must be developed to accelerate the reduction of silver consumption in fingers and allow a total silver consumption below 5 mg W⁻¹, or even 2 mg W⁻¹. One potential path is a print-on-print approach in which a seed layer of silver paste forms the metal-silicon interface areas and is capped by a non-silver conductor. Another approach is to use intermittent silver finger regions to form the metal/silicon interface and rely on non-silver conductors to connect the intermittent regions and provide lateral conduction to the busbars. This would overcome limitations based on the printing width capabilities of screen printing and simultaneously allow greater reductions in silver consumption.
Another route for reducing silver consumption that must be seriously considered by the PV industry for existing and future technologies is copper plating. Despite reported challenges related to adhesion and reliability [74], solar cell technologies incorporating copper plating have already been successfully deployed in large-scale production by numerous companies. For example, BP Solar used copper plating for its Saturn technology from 1992-2006 [75,76], based on the UNSW buried-contact solar cell. A recent study highlighted the field performance after 12 years of operation, noting durability comparable to standard screen-printed solar cells [77]. Suntech's Pluto technology also used plating and was scaled to 500 MW in the period 2009-2013 [78,79]. This approach was responsible for the world's first commercial p-type solar cell with an efficiency of over 20% [78]. Plating has also been successfully deployed for solar cells with passivated contacts, which is highly relevant for today's emerging industrial solar cells featuring passivated contacts [80], namely TOPCon and SHJ. For example, Tetrasun's Tetracell technology used plated contacts on top of passivation layers [81]. Similarly, SunPower's Maxeon back-contact technology uses copper plating [82,83]. SHJ solar cells with plated contacts have already been deployed by GS Solar [59]. There is also increasing interest in the academic community in plating on both TOPCon and SHJ solar cells, for example see ref. 84-86. The use of copper as a replacement for silver at the cell level would cause only a negligible increase in the overall copper consumption of PV technologies.
In all instances, futuristic tandem devices have a unique opportunity to greatly reduce material consumption, including silver, far beyond what is achievable with the technologies currently in mass production such as PERC, TOPCon, and SHJ solar cells. Due to the low J_mp/V_mp ratio and the strong dependence of finger resistance power losses on this ratio, tandem cells can better tolerate a reduced number of busbars or a reduced finger cross-sectional area without significantly impacting series resistance, which in turn enables a considerable reduction in silver consumption in tandem devices.
Prospects for emerging module technologies
In addition to advancements in cell technologies, several new interconnection approaches and module technologies, such as SmartWire, half-cell, and shingled modules, have been developed to improve the efficiency and output power at the module level and are currently gaining increasing attention from the industry. With a higher output power, the mg W⁻¹ consumption of silver at the module level is naturally reduced. However, because the increases in power are relatively small, the sustainable manufacturing capacity of PV modules is not expected to increase significantly. On the other hand, some of these module technologies could provide unique opportunities for considerable silver reductions at the cell level.
With the SmartWire technology, the interconnection between cells is achieved by copper wires coated with low-temperature solders in direct contact with the fingers [66], which eliminates the use of silver in traditional busbar and tabbing regions. In addition, the increased number of wires commonly featured in the SmartWire configuration [66] could provide greater tolerance to a reduced finger silver consumption, or to the increased finger resistivity of other materials (e.g. Al or Cu), without causing excessive increases in finger resistance losses. However, the additional usage of other scarce metals, specifically bismuth in the low-temperature solders, needs to be evaluated carefully to ensure that no outstanding concerns are raised by the availability and supply of bismuth for the SmartWire technology.
The concept of half-cell modules, as suggested by the name, is essentially to use pre-cut half cells rather than full-area cells in the module. With half cells, the current of each string in a module is effectively halved, leading to a significant reduction in the series resistance power loss [87], which is governed by the relationship P_loss = I² × R. It should be noted, however, that the amount of current collected by and travelling within each finger remains unchanged in the half-cell configuration, so the finger series resistance power losses are the same as in full-cell modules. As such, half-cell modules do not have a significant advantage over full-cell modules in reducing the silver consumption in fingers. In addition, the interconnection of half-cell modules relies on the conventional soldering process, for which busbars and soldering tabs are still needed, providing no obvious scope for silver reductions in those regions.
In shingled modules, each full-area solar cell is cleaved into 5 or 6 strips (also known as shingles), and those strips are 'shingled' together like roof tiles along their long edge and bonded with an electrically conductive adhesive (ECA) [88,89]. This approach eliminates the need for a conventional soldering process as well as the gap between cells that is normally required in most ribbon-connected modules. As such, the packing density can be improved, and the optical shading losses from busbars, soldering tabs, and ribbons can be avoided. Because the soldering process is omitted, silver soldering tabs are no longer needed in shingled modules. In addition, it is possible to replace conventional busbars with localized Ag pads [90,91] or to use busbar-less solar cells in a shingled module [92,93], especially with SHJ solar cells, for which the conductivity of the ITO layers on both sides can also contribute to current transport between shingles. Therefore, the shingled configuration presents a unique opportunity to considerably reduce the silver consumption in busbar and soldering tab regions. However, attention must also be paid to the silver consumption of the ECA if Ag particles, or Cu particles coated with Ag layers, are used as the filler material. Since a few grams of ECA are normally printed or dispensed on each shingle, the silver content of the ECA needs to be limited to a very low level to avoid excessive increases in silver usage. Careful evaluation of the trade-off between the silver content and the electrical properties, mechanical properties, and reliability is therefore of vital importance for the shingling technology. Meanwhile, other cheaper and more abundant materials should also be explored for ECAs.
On the other hand, within shingled solar cells the finger length over which current must travel is substantially longer than in conventional full cells with 9BB, resulting in significantly higher finger series resistance losses in shingled modules. For example, if each full cell is cleaved into 6 shingles, the finger length of each shingle is equivalent to that in a 3-busbar solar cell and is three times longer than that in a conventional 9-busbar solar cell, leading to a 9-fold increase in finger series resistance losses according to eqn (3). In this instance, reducing the finger silver consumption becomes even more challenging in current shingled designs with silver fingers due to the undesirably high power loss from finger series resistance. For example, as shown in Fig. 7, this would increase the power loss from 0.57% and 1.12% for a 9BB PERC solar cell to 5.13% and 10.08% for shingled solar cells (6 shingles), assuming finger silver consumptions of 5 mg W⁻¹ and 2 mg W⁻¹, respectively.
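The 9-fold penalty follows directly from the S_BB² dependence in eqn (3); the short check below simply applies that scaling to the two loss values quoted above (a sketch of the scaling, not a re-derivation of the Fig. 7 curves).

```python
# Check of the shingling penalty: eqn (3) scales with the square of the
# effective finger length (busbar spacing), so 3x longer fingers -> 9x loss.
length_ratio = 3           # 6 shingles behave like a 3BB cell vs. a 9BB cell
scale = length_ratio ** 2  # = 9

for loss_9bb in (0.57, 1.12):          # %rel values quoted for 5 and 2 mg/W
    print(round(loss_9bb * scale, 2))  # -> 5.13 and 10.08 %rel
```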
Indium consumption in the PV industry
In the PV industry, indium is predominantly used in the form of indium tin oxide (ITO) as a transparent conductive oxide (TCO) layer for SHJ solar cells. In-based alloys have also historically been used for low-temperature soldering and interconnection technologies such as SmartWire [66]. However, due to its high cost and scarcity, indium has subsequently been replaced by bismuth in those applications, as discussed in the next section. In addition, indium also has applications in copper-indium-gallium selenide (CIGS) thin-film solar cells, which were historically considered [31] the primary technology that could lead to an indium shortage. However, given the limited market share of thin-film solar cells (<5% of the total PV market) and the ongoing cost reductions of silicon solar cell technologies, indium consumption in thin-film solar cells will likely be insignificant compared to SHJ solar cell production.
In SHJ solar cells, an 80-100 nm thick ITO layer is typically used on each surface to form a thin transparent conductive layer. Based on the density of ITO with 90% In₂O₃ content by weight, this is equivalent to about 5.7 mg W⁻¹ of ITO and 4.2 mg W⁻¹ of indium, assuming a cell efficiency of 25.11%. A key function of the ITO layer is to provide lateral conduction for charge carriers before they are collected by the metallization grid. Other commercial Si-based solar cells, such as Al-BSF, PERC, and TOPCon solar cells, have sufficient lateral conductivity from the boron- or phosphorus-doped silicon layers or bulk, such that ITO layers are not required. Hence, indium is of no concern for the mainstream PERC technology or the emerging TOPCon solar cell technology.
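The ITO/indium mass balance quoted above can be reproduced in a few lines. The bulk ITO density of ≈7.14 g cm⁻³ is our assumption (it is not stated in the text); with 100 nm per side and a 25.11% efficient cell it reproduces the ~5.7 mg W⁻¹ (ITO) and ~4.2 mg W⁻¹ (In) figures.

```python
# Sketch of the ITO/indium mass balance for an SHJ cell. The ITO density
# (~7.14 g/cm3) is an assumed bulk value; with 100 nm per side and a 25.11%
# cell this reproduces the ~5.7 mg/W (ITO) and ~4.2 mg/W (In) figures above.

ITO_DENSITY = 7.14e3       # kg/m3, assumed bulk ITO density
IN2O3_WT_FRACTION = 0.90   # 90 wt% In2O3 in ITO
IN_IN_IN2O3 = 2 * 114.82 / (2 * 114.82 + 3 * 16.00)  # ~0.827 In by mass
IRRADIANCE = 1000.0        # W/m2

def indium_mg_per_watt(thickness_nm_per_side, sides, efficiency):
    ito_mass_mg_per_m2 = thickness_nm_per_side * 1e-9 * sides * ITO_DENSITY * 1e6
    power_w_per_m2 = efficiency * IRRADIANCE
    ito_mg_w = ito_mass_mg_per_m2 / power_w_per_m2
    return ito_mg_w, ito_mg_w * IN2O3_WT_FRACTION * IN_IN_IN2O3

ito, indium = indium_mg_per_watt(100, sides=2, efficiency=0.2511)
print(round(ito, 2), round(indium, 2))  # ~5.69 mg/W ITO, ~4.23 mg/W In
```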
For SHJ solar cells, however, the ITO layer also serves as an anti-reflection coating on top of the amorphous silicon layers, particularly on the front surface of the device, in contrast to layers such as PECVD-deposited silicon nitride for PERC and TOPCon solar cells. For some emerging solar cell technologies such as perovskite 94 and tandem solar cells, [95][96][97] ITO layers act both as anti-reflection coatings and as contacting layers that provide the necessary lateral conduction and form high-quality ohmic contacts with the metal electrodes at the front and rear; in tandem devices they may additionally be used as transport interlayers between the top and bottom cells.
Table 7 lists the ITO consumption of existing industrial SHJ solar cells and possible future scenarios, obtained from theoretical calculations, the literature, and private discussions with three SHJ solar cell manufacturers. Surprisingly, values from different sources exhibit large discrepancies: the theoretical calculation based on the volume and density of typical ITO layers yields the lowest indium consumption of 4.23 mg W⁻¹, while the highest number, from industrial manufacturers, is approximately 2.5 times the theoretically calculated value. It is unclear why the values in the literature vary so widely. Possible contributions include ITO lost on the wafer carriers or in the chamber, and the unusable portion of the sputter target. However, it should be noted that typical sputter tools claim utilization rates over 80%, 98 so the non-utilized ITO can only account for a small portion of the total indium consumption.
Currently, approximately 0.7 GW of SHJ solar panels are manufactured per year, 99 with 40-50 GW of production capacity planned. 100 Alarmingly, if all of this planned capacity were to use ITO, it would consume 170-540 tonnes of indium per year, already corresponding to 8.5-26.9% of the 2019 global indium supply. It is critical that solar cell manufacturers are aware of the limited supply of indium and the scale of use within the PV industry, to avoid investments in technologies that are not feasible at the TW scale.
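As a rough check of the 170-540 t yr⁻¹ figure, the sketch below multiplies the planned 40-50 GW of capacity by the indium consumption range discussed above (from the theoretical 4.2 mg W⁻¹ up to the roughly 2.5× higher manufacturer-reported values, taken here as about 10.7 mg W⁻¹). The ~2,000 t figure used for the 2019 indium supply is inferred from the percentages quoted in the text rather than stated explicitly.

```python
# Annual indium demand implied by planned SHJ capacity (lower and upper bounds of the range).
capacity_gw = (40, 50)            # planned SHJ production capacity
indium_mg_per_w = (4.2, 10.7)     # theoretical value vs. manufacturer-reported value
global_supply_t = 2000            # assumed 2019 indium supply implied by the quoted percentages

for gw, mg_w in zip(capacity_gw, indium_mg_per_w):
    tonnes = gw * 1e9 * mg_w * 1e-3 / 1e6   # W * mg/W -> mg -> g -> tonnes
    print(f"{gw} GW at {mg_w} mg/W -> {tonnes:.0f} t/yr "
          f"({100 * tonnes / global_supply_t:.1f}% of supply)")
# Roughly 170 t/yr (~8.5%) to 540 t/yr (~27%), in line with the range given above.
```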
The maximum allowable production capacity of SHJ solar cells as a function of the indium consumption and the fraction of the 2019 global indium supply is shown in Fig. 11. 20% of the 2019 global indium supply would only be sufficient to support around 35-95 GW of SHJ solar cell production, using the indium consumption reported by industrial manufacturers or the theoretically calculated value, respectively. For 1 TW of production capacity using 20% of the global indium supply, the indium consumption must be reduced to below ~0.38 mg W⁻¹, which would only allow 3.7 nm or 9 nm of ITO per side in SHJ solar cells based on the manufacturer-reported or theoretically calculated usage, respectively. For a 3 TW market, only 1.2-3 nm of ITO would be allowed per side. Even for a 30% efficient tandem solar cell at the 3 TW level, the total thickness of ITO must be below 1.4-3.6 nm to limit indium consumption to 20% of the global supply. It should be noted that if additional ITO layers were used as transport layers between the top and bottom cells of a tandem, the indium consumption would increase further depending on the thickness and exact chemical composition of such layers, making the use of ITO layers in tandem solar cells even less desirable for large-scale production.
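The capacity and thickness limits quoted above follow from dividing an assumed indium budget by the per-watt consumption. The sketch below again assumes a 2019 indium supply of roughly 2,000 t (inferred from the percentages in the text) and scales the allowed ITO thickness linearly with the allowed consumption.

```python
# Indium-limited SHJ capacity and the ITO thickness compatible with TW-scale production.
supply_t = 2000                     # assumed 2019 global indium supply
budget_mg = 0.20 * supply_t * 1e9   # 20% of supply, in mg

for label, mg_per_w, thickness_nm_per_side in [
    ("manufacturer-reported", 10.7, 100),
    ("theoretical", 4.23, 100),
]:
    capacity_gw = budget_mg / mg_per_w / 1e9
    print(f"{label}: {capacity_gw:.0f} GW at {mg_per_w} mg/W")

    for target_tw in (1, 3):
        allowed_mg_per_w = budget_mg / (target_tw * 1e12)
        allowed_nm = thickness_nm_per_side * allowed_mg_per_w / mg_per_w
        print(f"  {target_tw} TW -> {allowed_mg_per_w:.2f} mg/W, "
              f"~{allowed_nm:.1f} nm ITO per side")
# ~37-95 GW at current consumption; ~0.4 mg/W and a few nm of ITO per side for 1 TW,
# ~1-3 nm per side for 3 TW, consistent with the numbers quoted above.
```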
Such an aggressive reduction in the thickness of ITO layers is highly unlikely to be either realistic or appropriate from a device-fabrication perspective. The associated concerns and challenges will be discussed in detail in the next section. On the other hand, with a reduced ITO thickness of around 30 nm, which may be practical and feasible in stacked TCO layer arrangements, the resource-sustainable manufacturing capacity of SHJ solar cells would increase substantially, to 115-330 GW. However, this would still account for only around 5-10% of a 3 TW market, suggesting that SHJ solar cells using ITO in some form will remain a niche product.
Prospects for reduction in indium consumption
The severe limitations on the allowed thickness of ITO layers for sustainable PV manufacturing at the terawatt scale would greatly impair the ability of ITO layers to act as lateral transport and anti-reflection coating layers. Since the sheet resistance of an ITO layer is inversely proportional to its thickness, the resistivity of the ITO would have to be reduced by a factor of 33-83 for a 1.2-3 nm layer to retain the lateral conductivity of a 100 nm-thick layer. The material conductivity of ITO can be improved by increasing the carrier density, 101 but this adversely affects the optical properties through increased parasitic absorption of infrared (IR) light. 102 Alternatively, indium oxide layers doped with tungsten (IWO), 103 cerium (ICO), 104 or hydrogen (IO:H) 104,105 could be used to improve the sheet resistance compared with conventional ITO layers; in some cases a reduced parasitic absorption of IR light is also observed due to the improved carrier mobility, leading to better IR light management and increases in short-circuit current density. However, none of these is currently known to provide a resistivity low enough to support layers below 3 nm. In addition, the extremely thin layers required (below 10 nm) are likely to be challenging for scalable production and may form isolated islands or become amorphous, leading to a marked deterioration in the electrical properties [106][107][108] of the ITO. Moreover, reducing the thickness of ITO layers could also lead to significant increases in contact resistivity due to changes in current pathways and current crowding effects. As such, this essentially rules out the use of a single ITO or other doped indium oxide layer as the TCO for solar cells. Even for tandem devices, where the series resistance requirement is relaxed by the roughly 6-fold lower Jmp/Vmp ratio compared to a conventional SHJ solar cell, the required resistivity is likely still out of reach for indium-based TCOs. The severe restrictions on ITO thickness imposed by lateral conductivity also rule out the option of using indium-free dielectric layers in a stacked configuration together with an indium-containing ITO layer. However, the use of stacked layers with, for example, 20 nm of ITO capped by a non-indium-based TCO would greatly increase the sustainable manufacturing capacity compared to the present implementation of SHJ solar cells relying solely on ITO.
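The 33-83× figure follows directly from holding the sheet resistance R_sheet = ρ/t constant as the thickness t is reduced from 100 nm to 1.2-3 nm, as the short check below illustrates.

```python
# Required resistivity reduction to hold sheet resistance constant at reduced ITO thickness.
reference_thickness_nm = 100
for reduced_thickness_nm in (3.0, 1.2):
    factor = reference_thickness_nm / reduced_thickness_nm
    print(f"{reduced_thickness_nm} nm: resistivity must drop by ~{factor:.0f}x")
# ~33x at 3 nm and ~83x at 1.2 nm, matching the range given above.
```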
The overall optical reflection can potentially be minimized by adjusting the thickness or refractive index of additional anti-reflection coating layers for any given ITO thickness. [110][111] However, such layers would either need to be conductive, to allow effective electrical contact between the chosen anti-reflection coating and the metallization scheme, or patterned, to enable contact between the ITO layer and the metallization scheme. Moreover, any additional layers and processes increase the cost and complexity of the cell fabrication process.
The fabrication of back-contact SHJ solar cells could also approximately halve the indium consumption by requiring ITO on only one surface; such structures hold the highest efficiency reported to date for a silicon solar cell, 26.7%. 56 Similarly, the development of TCO-free SHJ solar cell structures is being explored, utilizing the bulk conductivity of the silicon wafer. A recent study by Li et al. achieved an efficiency of 22.3% with no TCO on the front surface and a SiNx layer used as the anti-reflection coating. 112 However, these approaches, if they still require ITO on one surface, fall well short of what is needed for sustainable terawatt-scale manufacturing of SHJ solar cells.
To enable sustainable manufacturing of SHJ solar cells and future tandem devices at the terawatt scale, indium-free TCO layers must be explored to completely overcome the limitations imposed by the indium supply. Aluminium-doped zinc oxide (AZO), one of the very few potential candidates, has attracted significant attention due to its low cost, the abundance of its constituent materials, and its capability of achieving efficiencies comparable to ITO-based SHJ solar cells. [119][120] Research on tandem devices must also focus on using indium-free TCO layers such as AZO. Without the widespread adoption of indium-free TCO layers, SHJ and future tandem technologies will only be suitable for niche applications.
Bismuth consumption in PV industry
In the crystalline silicon solar cell industry, bismuth-based alloys provide a promising low-temperature alternative to conventional Sn/Pb solders. The lead-free nature of Bi-based solders offers a more environmentally friendly option for the PV industry, which has long been criticized for using lead-containing ribbon coatings and soldering pastes, against the industry's credentials of providing clean, green energy. In addition, Bi-based alloys can be soldered at a much lower temperature, typically below 150 °C, compared to the soldering temperatures above 200 °C needed for conventional Sn/Pb solders. 121 Low-temperature Bi-based alloys can help avoid cell breakage, cell bowing, and the formation of microcracks by reducing the thermally induced stress caused by the mismatch of the thermal expansion coefficients of the Cu ribbon wires and the Si substrate, especially in solar cells fabricated on thinner and larger silicon wafers, a trend that is likely to continue in the future. 10 In addition, low-temperature soldering is particularly beneficial for SHJ solar cells, in which the surface passivation quality of the amorphous silicon layers could be
jeopardized by any high-temperature thermal processing. 58,122 The use of low-temperature alloys will likely also be important for future tandem solar cell technologies involving perovskites, which again impose temperature restrictions. Bi-based low-temperature interconnection methods, such as the busbar-less SmartWire technology and the MBB approach in conjunction with bismuth-coated wires/ribbons, will potentially become the standard interconnection technology for SHJ modules, and can also help to reduce silver consumption through narrower fingers and reduced laydown. For such applications, a thin coating of SnBi in the range of 3-5 μm thick may cover the copper wires. 121 It is noted that the MBB technology can also be used for PERC and TOPCon solar cells without requiring bismuth, owing to their relaxed thermal constraints for soldering. In terms of silver consumption, the busbar-less SmartWire technology would be preferred for SHJ solar cells over the MBB technology, which as currently implemented still requires silver busbars and tabbing regions.
While bismuth has many benefits and advantages, especially for SHJ solar cells with their severe thermal constraints, sustainable bismuth consumption at the TW scale may be difficult to achieve or maintain. Given that the annual production of bismuth (21,000 tonnes per year) is smaller than that of silver (28,000 tonnes per year), the bismuth consumption per cell will likely need to be smaller than that of silver. The allowed annual production capacity of solar cells using Bi-based interconnection wires for different Bi consumption levels is shown in Fig. 12. For the current busbar-less SmartWire configuration, with 24 wires and a typical wire diameter of 300 μm, each solar cell (210 × 210 mm²) would consume approximately 144 mg of Bi, corresponding to a bismuth consumption of 13.0 mg W⁻¹ at 25.11% cell efficiency. This architecture would require more than 60% of the 2019 global Bi supply to produce ~1 TW of solar cells; when limited to 20% of the global supply, the annual production capacity falls below 300 GW. On the other hand, when MBB schemes are used with low-temperature Bi-based wire coatings, ~40% lower bismuth consumption can be expected for the 12BB configuration, assuming a ribbon diameter of 350 μm and a coating layer thickness of 5 μm. This would allow a maximum annual production capacity of 560 GW, still well short of a TW target. The modest efficiency gain of tandem solar cells (30% versus 25% for SHJ solar cells) falls well short of what is required to enable sustainable bismuth consumption with the current number of wires, wire diameter, and SnBi coating thickness. For example, 1 TW of production of 30% efficient tandem modules with 24 wires would still consume 51% of the global bismuth supply.
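The ~144 mg of bismuth per cell and 13.0 mg W⁻¹ can be approximated from the wire geometry. The sketch below assumes 24 wires per side on both sides of a 210 mm cell, a 3 μm Sn42Bi58 coating around the full circumference of each 300 μm wire, and a solder density of about 8.6 g cm⁻³; the density and the per-side wire count are assumptions consistent with, but not explicitly stated in, the text.

```python
import math

# Bismuth in the SnBi coating of SmartWire interconnects on one SHJ cell.
wires_per_side = 24
sides = 2
wire_length_m = 0.210                 # wire spans the 210 mm cell edge
wire_diameter_m = 300e-6
coating_thickness_m = 3e-6
snbi_density_g_cm3 = 8.6              # assumed density of Sn42Bi58 solder
bi_weight_fraction = 0.58

# Thin cylindrical shell around each wire: volume ~ pi * d * t * L.
coating_volume_m3 = math.pi * wire_diameter_m * coating_thickness_m * wire_length_m
bi_per_wire_mg = coating_volume_m3 * 1e6 * snbi_density_g_cm3 * 1e3 * bi_weight_fraction
bi_per_cell_mg = bi_per_wire_mg * wires_per_side * sides

cell_power_w = 0.210 * 0.210 * 1000 * 0.2511
print(f"Bi per cell: {bi_per_cell_mg:.0f} mg, {bi_per_cell_mg / cell_power_w:.1f} mg/W")
# ~140 mg per cell and ~13 mg/W, close to the 144 mg and 13.0 mg/W quoted above.
```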
For a multi-TW market (e.g., 3 TW), bismuth consumption must be reduced to less than 3.5 mg W⁻¹ if 50% of the global bismuth supply were available to the PV industry. In a more realistic but restricted scenario with 20-25% of the bismuth supply used in PV, a bismuth consumption of no more than 1.4-1.75 mg W⁻¹ is required. Fig. 13 shows the calculated bismuth consumption per cell in mg W⁻¹ as a function of the number of wires and the wire diameter, assuming each wire is coated with a 3 μm thick layer of SnBi (58 wt% Bi). For 300 μm diameter wires, to reach a consumption of less than 1.4 mg W⁻¹, only 2-3 wires can be used per side, in contrast to the 24 wires of the standard SmartWire approach and the 12 ribbons of the MBB approach as currently used.
Prospects for reduction in bismuth consumption
Reducing the number of wires lowers bismuth consumption but significantly increases the resistive losses along the fingers and along the wires, which may negate the advantage of the bismuth-based SmartWire and MBB technologies over conventional interconnection technologies. For instance, finger resistance losses would increase by a factor of 16-36 compared to the current industrial 12BB configuration if only 2-3 wires were used per side. This will likely not be feasible for any single-junction technology, including PERC, TOPCon, and SHJ solar cells. As such, the use of SmartWire or MBB interconnection with SnBi coatings as a method to reduce Ag consumption in screen-printed SHJ solar cells will also face challenges for sustainable manufacturing at the TW level.
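The 16-36× increase follows from the quadratic dependence of finger resistance loss on the current-collection length between wires, which scales inversely with the number of wires, as the short check below shows.

```python
# Finger-loss penalty from reducing the number of interconnect wires per side.
reference_wires = 12   # current industrial 12BB / 12-wire configuration
for wires in (3, 2):
    penalty = (reference_wires / wires) ** 2
    print(f"{wires} wires per side -> finger loss increases ~{penalty:.0f}x")
# ~16x with 3 wires and ~36x with 2 wires, matching the range quoted above.
```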
Bi-based interconnection technologies for SHJ solar cells must therefore substantially reduce bismuth consumption. One option is to use tin-bismuth coatings with substantially lower bismuth content, although this will increase the melting temperature of the alloy. Alternative abundant low-temperature solder alloys must also be investigated. The use of electrically conductive adhesives, which have been used for the interconnection of shingled solar cells, 88,91 should also be considered, provided that such adhesives do not contain silver or other scarce elements.
For bismuth, tandem devices again offer a unique opportunity to greatly reduce consumption through their inherently lower resistive losses. For example, with a finger silver consumption of 5 mg W⁻¹, reducing the number of wires per side from 24 to 12 on a 210 mm cell effectively doubles the sustainable manufacturing capacity of 29.1% efficient tandem solar cells, from 325 GW to 650 GW. Meanwhile, although the finger resistance losses increase by a factor of 4, the finger resistance loss of such a tandem solar cell (0.41% relative) is still lower than that of an SHJ solar cell with 24 wires per side.
Conclusion and future outlook
As one of the key renewable energy resources in the future global energy supply, sustainable manufacturing of solar PV will become a growing concern as the industry rapidly heads towards a multi-TW scale. Copper, steel, and aluminium pose no significant supply risks given their abundance and large global production scale. In addition, continued efficiency increases in solar panels will substantially reduce the consumption of these materials in mg W⁻¹ terms over time. However, some technologies have a material intensity that outweighs their efficiency gain, such that the advantage of higher efficiency may not compensate for the higher material consumption.
The primary concern for sustainable PV manufacturing at the TW scale is silver, due to its widespread use in all major industrial solar cell technologies and because it contributes a significant fraction of the non-wafer fabrication cost of the solar cell. To enable a 3 TW market, the silver consumption must be reduced to less than 2 mg W⁻¹. The current silver consumption of industrial PERC solar cells is approximately 15.4 mg W⁻¹, while those of TOPCon and SHJ solar cells are approximately double, at 25.6 mg W⁻¹ and 33.9 mg W⁻¹, respectively, due to the reliance on silver for both the front and rear contacts. This results in respective silver-limited sustainable manufacturing capacities of 380 GW, 230 GW, and 170 GW, given 20% of the 2019 global silver supply. Although ITRPV projections expect a 50-60% reduction in silver usage over the coming decade for each of these mainstream technologies, the expected values in 2031 are 8.5 mg W⁻¹, 13.8 mg W⁻¹, and 14.3 mg W⁻¹, respectively, still well above the 2 mg W⁻¹ target. For PERC, the expected sustainable manufacturing capacity in 2031 would be 680 GW, approximately double that of TOPCon and SHJ solar cells. This again highlights that as long as industrial TOPCon and SHJ solar cells rely on silver screen-printed contacts on both the front and rear of the solar cells, in line with the ITRPV projections, the limited efficiency gains of TOPCon and SHJ solar cells over PERC do not justify a transition away from PERC.
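The silver-limited capacities quoted above can be reproduced by dividing a silver budget by the per-watt consumption. The sketch below uses the ~28,000 t annual silver supply mentioned earlier in the text and allocates 20% of it to PV; small differences from the quoted values likely reflect rounding in the supply figure.

```python
# Silver-limited annual production capacity, assuming 20% of the global silver supply for PV.
silver_supply_t = 28_000
budget_mg = 0.20 * silver_supply_t * 1e9     # tonnes -> mg

consumption_mg_per_w = {
    "PERC (2020)": 15.4,
    "TOPCon (2020)": 25.6,
    "SHJ (2020)": 33.9,
    "PERC (2031, ITRPV)": 8.5,
    "TOPCon (2031, ITRPV)": 13.8,
    "SHJ (2031, ITRPV)": 14.3,
}
for tech, mg_w in consumption_mg_per_w.items():
    print(f"{tech}: {budget_mg / mg_w / 1e9:.0f} GW")
# Roughly 360/220/170 GW today and 660/410/390 GW in 2031, in line with the
# ~380/230/170 GW and ~680 GW (PERC, 2031) figures quoted above.
```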
With industrial screen-printing technology as practised today, achieving the long-term target of 2 mg W⁻¹ will be challenging. Firstly, the typical configuration of 12 busbars with soldering pads already has a silver consumption of more than 4 mg W⁻¹, which rules out existing interconnection schemes such as the MBB technology. As such, the development and deployment of new interconnection technologies with significantly reduced or zero silver consumption, such as Al/Cu busbars or busbar-less technologies, are urgently required. Replacing silver metallization with aluminium or copper will not cause any supply issues for those materials, given that the consumption at the cell level is negligible compared to the existing usage in balance-of-system components. Secondly, current finger designs will also face challenges from increased finger resistance power losses and a much smaller allowable cross-sectional area as the finger silver consumption is reduced. Therefore, exploring alternative materials and novel metallization designs will also be of great importance for screen-printing technology. Despite being a significant deviation from the current industrial mainstream metallization approaches based on screen printing, solar cell technologies incorporating copper plating must also be strongly considered as a pathway to reduce silver consumption. Copper plating technology compatible with sustainable TW-scale manufacturing is already available and has been successfully deployed in large-scale production by numerous companies.
Emerging module and interconnection technologies can not only enhance the power output at the module level but also present additional challenges and opportunities for silver reduction at the cell level. For instance, in shingled module designs, the omission of the traditional soldering process eliminates the need for soldering tabs and can also reduce silver consumption in the busbar regions. However, the use of silver-based ECAs raises a new concern and needs to be carefully evaluated. One drawback of shingled cells is the significantly increased finger length, which leads to higher finger series resistance losses and imposes additional challenges on reducing the finger silver consumption.
Indium does not pose a challenge for the mainstream PERC or emerging TOPCon solar cell technologies. It only poses a potential challenge for the PV industry if technologies requiring TCO layers, such as SHJ solar cells and future tandem devices, are deployed. Current SHJ solar cells with 200 nm of ITO (100 nm on each surface) consume approximately 10.7 mg W⁻¹ of indium. This allows an extremely small sustainable manufacturing capacity of less than 40 GW. To enable a 3 TW market of solar cells using indium, the consumption must be reduced to 0.38 mg W⁻¹, which equates to no more than 3 nm of ITO even for a 30% efficient tandem solar cell. As such, the accelerated development and deployment of indium-free TCO layers is critical for current SHJ solar cells as well as for replacing interlayers in future tandem devices. Essentially, no solar PV technology requiring indium can be manufactured sustainably at scale.
Similarly, bismuth does not currently pose a challenge for existing PERC or TOPCon technologies. However, the reduced soldering temperature of Bi-based solders is attractive not only for SHJ solar cells but also for PERC and TOPCon technologies as a possible replacement for lead-based solders. The reduced soldering temperature also minimizes damage from the thermal mismatch between ribbons and busbars, which is particularly beneficial for solar cells on larger and thinner silicon wafers. As a result, the use of Bi-based solders could potentially expand to all of PERC, TOPCon, and SHJ solar cells in the future. With a typical SmartWire configuration (24 wires on 210 × 210 mm² cells, 300 μm wire diameter, and 3 μm thick coating layers), 20% of the 2019 global bismuth supply can support less than 300 GW of production with such technologies. As such, Bi-based interconnection technologies must substantially reduce bismuth consumption and investigate the possibility of using more abundant low-temperature solder alloys, for example tin-bismuth coatings with substantially lower bismuth content, or alternatively electrically conductive adhesives, which have been used for the interconnection of shingled solar cells. 88,91 Collectively, while the current implementations of industrial TOPCon and SHJ solar cells do not offer a substantially increased sustainable manufacturing capacity over PERC, two-terminal tandem devices are exciting high-efficiency solar cell structures that present a unique opportunity to improve sustainability over the currently dominant PERC solar cell technology. Firstly, there is the natural benefit of significantly higher solar cell efficiencies, in the vicinity of 30%. Secondly, and more importantly, owing to their low current density and high voltage output through spectrum splitting, the power losses contributed by series resistance components tend to be intrinsically lower in tandem structures, by a factor of 5-6 compared to PERC. This feature gives tandem devices a unique opportunity to reduce the consumption of silver and bismuth simultaneously without introducing excessive resistive losses.
Fig. 1 Schematic diagrams of (a) PERC solar cell, (b) TOPCon solar cell, (c) SHJ solar cell, (d) two-junction two-terminal tandem solar cell with SHJ bottom cell. Images of H-pattern grid with (e) Ag fingers, Ag busbars, and Ag soldering tabs (relevant for the front surface of PERC, and both the front and rear contacts of TOPCon and SHJ solar cells); (f) Al fingers, Al busbars, and Ag soldering tabs (relevant for the rear surface of PERC).
Fig. 3 Finger silver consumption for 23.8% efficient PERC cells as a function of front finger spacing and finger cross-sectional area. The wafer area is assumed to be 210 × 210 mm². The hashed region has a finger silver consumption above the 2 mg W⁻¹ target. Contour lines represent different finger silver consumption levels.
Fig. 5 Silver consumption as a function of printed metal coverage area and height in typical PERC solar cells. The assumed cell efficiency and area are 23.83% and 210 × 210 mm². Contour lines represent different silver consumption levels.
Fig. 6 Schematic diagram of busbars and fingers in the conventional H-pattern grid.
Fig. 8 JV curves and external quantum efficiencies (EQE) of a typical 21.9% efficient industrial PERC solar cell (black), a 28.1% efficient two-junction two-terminal Si-based tandem solar cell 73 (red), and the top (blue) and bottom (green) cells of the same tandem device.
Fig. 9 Relative finger resistance losses for different ratios of Jmp/Vmp and line resistivity of fingers. The same metallization pattern (1 mm finger spacing with 12 busbars) is assumed for all data points. Solid contour lines represent relative power losses from finger series resistance.
Fig. 10 Relative finger resistance losses as a function of finger silver consumption in various cell structures. All values are calculated for the 12BB configuration on 210 × 210 mm² cells. Assumed efficiencies of PERC, TOPCon, SHJ, tandem on SHJ, and tandem on PERC are 23.83%, 24.58%, 25.11%, 29.15%, and 27.70%, respectively. Filled squares: estimated losses with the current metallization design as shown in Table 3. Filled circles: total 5 mg W⁻¹ consumption with silver being used in fingers, busbars, and tabs. Filled triangles: total 5 mg W⁻¹ consumption with silver being used in fingers and tabs.
Fig. 11 The allowable annual production capacity as a function of the percentage of the 2019 global indium supply and the indium consumption (mg W⁻¹) per cell in typical SHJ solar cells. The assumed cell efficiency is 25.11% with an area of 210 × 210 mm². Shaded regions represent the range of indium consumption in current SHJ solar cells shown in Table 7.
Fig. 12 The allowable annual production capacity as a function of the percentage of the 2019 global Bi supply and the bismuth consumption per cell. The wafer size is assumed to be 210 × 210 mm².
Fig. 13 Calculated bismuth consumption as a function of the number of wires per side and the wire diameter, assuming a 3 μm coating of SnBi and a cell efficiency of 25.11%.
Table 1 Mass fraction, global reserves, and supply for silver, indium, and bismuth
Table 2 2020 silver consumption and efficiencies of typical industrial solar cells and projections for 2031 from the ITRPV. 10 The reference cell size is assumed to be 166 × 166 mm²
Table 3 Estimated silver consumption in fingers, busbars, and tabs of different solar cell technologies. The assumed cell area is 210 × 210 mm² with 12BB per cell and 18 soldering tabs on each busbar. Assumed efficiencies: 23.83% for PERC, 24.58% for TOPCon, 25.11% for SHJ, 27.7% for tandem/PERC, 29.15% for tandem/SHJ
Table 4 The estimated metal coverage area of fingers, busbars, and tabs in typical PERC, TOPCon, and SHJ solar cells. The cell area is assumed to be 210 × 210 mm²
Table 7 ITO and indium consumption per cell and per generated power for SHJ solar cells
"Engineering",
"Environmental Science",
"Materials Science"
] |
Using Virtual Reality for Teaching Kinematics
Simulations have been used for decades to teach physics concepts. Virtual Reality (VR) opens new avenues: the benefits of acting out physics (embodiment) can be combined with the affordances of a simulated environment. This paper aims to demonstrate how to create physics-education simulations in VR with comparatively small effort beyond 2D simulations, using the Unity game development environment in connection with consumer-grade VR gear.
Introduction
Simulations have proven their effectiveness for physics teaching and learning, as they provide opportunities for unrestricted, engaged exploration [1,2], without the risk of destroying equipment or, much worse, bodily harm. Otherwise unaffordable laboratory equipment can be made available virtually at scale, and immaterial concepts such as fields can be made tangible [3].
Moving beyond two-dimensional screen-based simulations, which can be developed using a variety of standard tools, the challenge of writing VR simulations can seem daunting. This challenge used to start with the availability of VR environments, once the domain of "caves" [4], and included the complications of interfacing with the VR equipment, but this changed with the advent of affordable gear for gaming purposes. This paper aims to show how to construct physics-education simulations in VR with comparatively small effort beyond 2D simulations, using the Unity game development environment [5,6] in connection with software libraries that allow for unified development across platforms and equipment, as well as consumer-grade VR equipment such as HTC Vive [7] or Oculus [8]. VR gear usually includes a headset and hand controllers; these sets currently cost a few hundred dollars, but admittedly also require an almost equally expensive graphics card in the computer running the simulation.
The left panel of Figure 1 shows a typical example of a setup: the student is wearing VR glasses and uses two hand controllers; in the background is one of the two so-called "lighthouses", which are used for positional triangulation (more modern setups no longer need this feature, and more modern glasses are less bulky and wireless). In this example, HTC Vive (right panel of Figure 1) is used in connection with an NVIDIA GeForce GTX 1070 (8 GB) card in a laptop, but the Unity development environment used here hides and automatically manages these hardware details. HTC Vive has the advantage that it does not need a Facebook account (the company is now called Meta, and more recent devices need to sign on to the "Metaverse", which might be problematic for privacy reasons). The small star-shaped device in Figure 1 is a so-called Trackable Object, which can be attached to a real-world object as a means of capturing its position and rotation, so that a real object can be interactively integrated into the virtual world.
Using VR in Connection with Unity
Unity is a leading game development platform, which has found its way into several other areas of industry and academia for simulation and visualization purposes. The platform is free for non-commercial purposes (however, authors should carefully study the license as it applies to publication beyond classroom use, even free of charge). Unity is a professional environment, also used for a large number of commercially available games, and learning to use it for any purpose entails a steep learning curve. At the same time, however, the environment hides a lot of intricacies that would otherwise slow down the development of immersive environments: rendering objects in the virtual world is abstracted away by consistently treating them as software objects (similar to VPython [9]) and by providing a "camera" object that "looks" at what the player sees on the screen. Lighting, obstruction, rendering of textures, as well as basic physics such as gravity, momentum conservation, friction, etc., are all taken care of by the underlying engine, so developers can focus on the objects themselves and their interactions. Figure 2 shows the Unity development environment.
Unity is best learned using the tutorials provided in the "Learning" section of its website [5]. The tutorials come with downloadable setups and assets; videos and text materials work through the essential steps of writing immersive, interactive content. Readers familiar with VPython [9] have a large advantage, but there are some differences worth pointing out. Unity is even more strongly object-oriented, in that there is usually no single script controlling all objects (this would be possible but bad style); instead, event-driven object orientation is consistently implemented by attaching small behaviour scripts directly to the objects, and these scripts usually get triggered when events such as collisions occur (see the right panel in Figure 2). These scripts are most frequently written in C#, but other languages are possible, such as JavaScript. The second large difference is the built-in physics engine, which automatically applies all of Newtonian physics to objects (if desired; this is the "RigidBody" component in the right panel of Figure 2). The third large difference is the graphical scene editor, which allows objects to be placed in their initial positions without calculating their coordinates (this is the upper left panel in Figure 2). While interfacing with graphics cards and standard input devices such as mice, keyboards, and joysticks is a given, one might still expect interfacing with the VR equipment to be the most complex task when moving to VR; fortunately, the SteamVR package (available for free from the Unity Asset Store) abstracts away the equipment (including differences between vendors). Thus, when writing VR applications, it turns out that learning Unity in the first place is the large step, while the addition of VR can be accomplished in two small steps.
The essential first step is replacing Unity's "Main Camera" (what a player would see on the computer screen) with SteamVR's "Player" (what the player will see in their headset; see the object list in the upper left panel of Figure 2), which in one fell swoop lets the user move around in the scene in a natural and intuitive way: essentially, the player's eyes become the "camera" in the virtual space. If virtual scenes simply need to be observed by looking and walking around, one would be finished now; in most cases, though, the second step is the literal manipulation of objects: grabbing them, moving them around, or throwing them.
As this second step, all game objects that the hand controllers should manipulate need to be linked to SteamVR's input system; this, however, is also fairly easily accomplished by adding a script component to those objects. The SteamVR library provides several ready-to-use scripts for this purpose that can be attached to objects, for example a "Throwable" behaviour (right panel of Figure 2): the user can intuitively grab the object, move it around, and throw it (with the Unity physics engine again taking over). From then on, Unity can be programmed in the same way as for non-VR games.
A Kinematics Simulation
Students frequently struggle with the fundamental kinematics concepts of position, velocity, and acceleration, as all three may point in different directions [10]. The concept of motion is readily accessible (arguably, hardly any physics concept is more easily embodied), but describing motion through vectors seems abstract to many learners, as these vectors have no physical manifestation. In this example, the idea is to give these vectors manifestations in a simulation, by showing the velocity and acceleration vectors of a "throwable" ball. These travel along with the ball and can be readily observed in the virtual world.
The ball object in this case is a sphere from the standard Unity library of simple geometric objects, covered with a metallic texture from the Unity Asset Store; the "Throwable" behaviour script comes from the SteamVR library and is attached to the sphere, and finally the ball has a physics ("RigidBody") component from the standard Unity library that makes it fall, bounce, and roll with friction. Figure 3 shows screenshots of the simulation on the computer screen, which displays a clipped-out section of what the user sees in the VR glasses.
Figure 3. Screenshots of the simulation on the computer screen, which shows a clipped-out section of the user's headset display [11].
The left panel of Figure 3 shows the "throwable" ball and the hand controllers, which appear as gloves (again, these "hands" come with SteamVR). The user can grab the ball and move it around; the middle panel shows the user holding the ball and swinging it around in a horizontal circle (while the ball is held, the gloves disappear, which may appear unintuitive but in practice does not impact immersion). The arrows, in this case, are standard Unity cylinders, but their length is calculated in a self-written C# script based on kinematics. The green arrow shows the instantaneous velocity (during this circular motion, in the tangential direction), and the red arrow shows the centripetal acceleration. The user can also throw the ball; the right panel of Figure 3 shows the ball in such a free-fall trajectory, still on the way up. The simulation includes a cage, off which the ball bounces, so it does not get lost out of reach of the user (without the cage, the ball could roll away in the virtual world to places where, in the real world, walls or furniture are in the way). During these collisions, the instantaneous acceleration vector can be observed.
A big challenge of Virtual Reality (besides finding enough open floor space) is that only one student at a time can wear the glasses. However, the clipped-out view of Figure 3 can be projected for all students, so they can observe what is happening in virtual space and give directions to the user. In another implementation, a Trackable Object could be attached to a real ball, which students could interact with directly: the student could see the real ball in the virtual world, but with the vectors attached to it. A remaining problem would be the integration of real surfaces (walls, floor, ceiling, furniture) into the virtual world.
Beyond Simulations
Embodiment is one possible application of VR systems; another is data collection for movement in three dimensions. In addition to the already tracked hand controllers and headset, most systems also provide various types of robust Trackable Objects like the one in Figure 1, which can be attached to real objects. By default, the HTC Vive system collects fifty position and rotation datapoints per second for all tracked objects, with an absolute spatial resolution in the centimetre range. Individual datapoints are a bit noisy, so for the simulation presented here, a 10-datapoint running average was implemented to provide a solid basis for the smooth rendering of the calculated velocities and accelerations. Finally, while it is instructional to have students interact with simulations, it can be even more instructional to have them write simulations [12,13]; having used the somewhat simpler VPython in introductory physics courses [14], the author has been using Unity and VR in more advanced seminar courses.
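Although the arrows in the simulation are computed in a C# behaviour script inside Unity, the underlying numerics are straightforward. The Python sketch below illustrates the approach described above: smoothing 50 Hz position samples with a 10-point running average and estimating velocity and acceleration by finite differences. It is an illustrative reimplementation of the idea, not the script used in the simulation.

```python
import numpy as np

DT = 1.0 / 50.0   # HTC Vive tracking rate: 50 samples per second
WINDOW = 10       # 10-datapoint running average to suppress tracking noise

def smooth(positions: np.ndarray, window: int = WINDOW) -> np.ndarray:
    """Running average over `window` consecutive samples, applied per coordinate."""
    kernel = np.ones(window) / window
    return np.column_stack(
        [np.convolve(positions[:, i], kernel, mode="valid") for i in range(positions.shape[1])]
    )

def velocity_and_acceleration(positions: np.ndarray, dt: float = DT):
    """Finite-difference estimates of velocity and acceleration from (N, 3) positions."""
    p = smooth(positions)
    v = np.gradient(p, dt, axis=0)
    a = np.gradient(v, dt, axis=0)
    return v, a

# Example: a ball swung in a horizontal circle of radius 0.5 m at 2 rad/s.
t = np.arange(0, 2, DT)
circle = np.column_stack([0.5 * np.cos(2 * t), np.full_like(t, 1.2), 0.5 * np.sin(2 * t)])
v, a = velocity_and_acceleration(circle)
print(np.linalg.norm(v[20]), np.linalg.norm(a[20]))   # ~1.0 m/s and ~2.0 m/s^2 (centripetal)
```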
Conclusion
Virtual Reality has become mainstream as a result of high-end gaming. While this particular market segment might have outlived its initial hype, VR headsets have become so widespread that they are available in supermarkets, and this development has made the technology affordable for other applications, including education, where its immersive and interactive nature allows for the embodiment of otherwise abstract concepts. The same gaming applications also gave rise to packages like SteamVR that make the development of VR simulations relatively easy in standard game development platforms like Unity; these packages abstract away the intricacies of VR development, so authors can focus on the physics functionality of their simulations.
Figure 1. Student wearing VR glasses (left panel) and a typical (but by now older) VR system (right panel) including glasses, hand controllers, and a Trackable Object.
Figure 2. Screenshot of the Unity development environment.
"Physics",
"Computer Science",
"Education"
] |
Existence of Nontrivial Solutions of p-Laplacian Equation with Sign-Changing Weight Functions
Ghanmi Abdeljabbar, Département de Mathématiques, Faculté des Sciences de Tunis, Campus Universitaire, 2092 Tunis, Tunisia. Correspondence should be addressed to Ghanmi Abdeljabbar (<EMAIL_ADDRESS>). Received 30 September 2013; Accepted 9 December 2013; Published 12 February 2014. Academic Editors: E. Colorado, L. Gasinski, and D. D. Hai. Copyright © 2014 Ghanmi Abdeljabbar. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. This paper shows the existence and multiplicity of nontrivial solutions of the p-Laplacian problem −Δ_p u = (1/σ)(∂F(x, u)/∂u) +
Inspired by the work of Brown and Zhang [10], Nyamouradi [11] treated the following problem: where is positively homogeneous of degree * − 1.
In this work, motivated by the above works, we give a very simple variational method to prove the existence of at least two nontrivial solutions of problem (1). In fact, we use the decomposition of the Nehari manifold as the parameter varies to prove our main result.
Before stating our main result, we need the following assumptions: We remark that, using assumption (H1), for all x ∈ Ω and u ∈ R, we have the so-called Euler identity: Our main result is the following.
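The display equations of this section did not survive extraction. For reference, the Euler identity invoked here is the standard one for positively homogeneous functions; the LaTeX block below is a reconstruction of that general fact, with the homogeneity degree written as a generic exponent k, since the paper's exact exponent is not recoverable from the surrounding text.

```latex
% Euler identity: if F(x,\cdot) is positively homogeneous of degree k,
% i.e. F(x, t u) = t^{k} F(x, u) for all t > 0, then
\[
  u \, \frac{\partial F}{\partial u}(x, u) \;=\; k \, F(x, u),
  \qquad \text{for all } x \in \Omega,\; u \in \mathbb{R}.
\]
```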
This paper is organized as follows.In Section 2, we give some notations and preliminaries and we present some technical lemmas which are crucial in the proof of Theorem 1. Theorem 1 is proved in Section 3.
Some Notations and Preliminaries
Throughout this paper, we denote by the best Sobolev constant for the operators 1, 0 (Ω) → (Ω), given by where 1 < ≤ *. In particular, we have with the standard norm Problem (1) is posed in the framework of the Sobolev space = 1, 0 (Ω). Moreover, a function in is said to be a weak solution of problem (1) if Thus, by (6) the corresponding energy functional of problem (1) is defined in by In order to verify ∈ 1 (, R), we need the following lemmas.
Lemma 4 (see Proposition 1 in [13]). Suppose that (, )/ ∈ (Ω × R, R) verifies condition (12). Then, the functional belongs to 1 (, R), and where ⟨⋅, ⋅⟩ denotes the usual duality between and * := −1, (Ω) (the dual space of the Sobolev space). As the energy functional is not bounded below in , it is useful to consider the functional on the Nehari manifold: Thus, ∈ if and only if Note that contains every nonzero solution of problem (1).
Moreover, one has the following result.
Lemma 5. The energy functional is coercive and bounded below on .
Proof. If ∈ , then by (16) and condition (A) we obtain So, it follows from (8) that Thus, is coercive and bounded below on . Define Then, by (16) it is easy to see that for ∈ , Now, we split into three parts Lemma 6. Assume that 0 is a local minimizer for on and that 0 ∉ 0 . Then, ( 0 ) = 0 in −1 (the dual space of the Sobolev space E). Proof. Our proof is the same as that in Brown-Zhang [10, Theorem 2.3].
From now on, we denote by 0 the constant defined by then we have the following.
Proposition 12. (i)
There exist minimizing sequences { + } in + such that (ii) There exist minimizing sequences Proof. The proof is almost the same as that in Wu [14, Proposition 9] and is omitted here.
Proof of Our Result
Throughout this section, the norm is denoted by ‖ ⋅ ‖ for 1 ≤ ≤ ∞ and the parameter satisfies 0 < || < 0 .
Theorem 13. If 0 < || < 0, then problem (1) has a positive solution + 0 in + such that Proof. By Proposition 12(i), there exists a minimizing sequence { + } for on + such that Then, by Lemma 5, there exists a subsequence { } and + 0 in such that Next, we will show that By Lemma 3, we have where = /( − 1).
"Mathematics"
] |
Role of Corticotropin Releasing Factor in the Neuroimmune Mechanisms of Depression: Examination of Current Pharmaceutical and Herbal Therapies
Approximately 3% of the world population suffers from depression, which is one of the most common forms of mental disorder. Recent findings suggest that an interaction between the nervous system and the immune system might underlie the pathophysiology of various neurological and psychiatric disorders, including depression. Neuropeptides have been shown to play a major role in mediating the response to stress and inducing immune activation or suppression. Corticotropin releasing factor (CRF) is a major regulator of the hypothalamic-pituitary-adrenal (HPA) axis response. CRF is a stress-related neuropeptide whose dysregulation has been associated with depression. In this review, we summarize the role of CRF in the neuroimmune mechanisms of depression, and the potential therapeutic effects of Chinese herbal medicines (CHM) as well as other agents. Studying the network of CRF and immune responses will help to enhance our understanding of the pathogenesis of depression. Additionally, targeting this important network may aid in developing novel treatments for this debilitating psychiatric disorder.
INTRODUCTION
Depression, also termed clinical depression or major depressive disorder (MDD), is a common but serious mental disorder affecting the quality of human life. Depression is characterized by discrete episodes of more than 2 weeks' duration with distinct changes in cognition and neurovegetative functions, and by inter-episode remissions (American Psychiatric Association, 2013). Depression is one of the most common mood disorders, currently affecting approximately three percent of the world's population (GBD 2015 Disease and injury incidence and prevalence collaborators, 2016), and is one of the leading contributors to the global burden of disease. Depression shows gender specificity, with women having a lifetime incidence of depression two times greater than men. Depression is also associated with an elevated risk of cardiovascular disease, cerebrovascular disease, and other forms of disease-related mortality (Steffens et al., 1999; Bradley and Rumsfeld, 2015). In addition, patients with depression have a higher suicidal tendency, which makes it a potentially life-threatening mental disorder.
Corticotropin releasing factor (CRF) was originally identified by Vale et al. (1981). CRF is a key regulator of the hypothalamic-pituitary-adrenal (HPA) axis, which is the most important neuroendocrine system mediating the stress response. Upon stress exposure, CRF is released from the hypothalamus and stimulates the production of a series of downstream stress hormones, including adrenocorticotropin (ACTH) and glucocorticoids (Belvederi Murri et al., 2014). Glucocorticoids in turn inhibit the endocrine activity of the hypothalamus and pituitary gland, forming a negative feedback loop. This feedback loop is vital for the regulation and homeostasis of the stress response system (Slominski et al., 2013). Dysregulation of the HPA axis has extensive effects on the body and triggers a series of behavioral, physiological, and metabolic responses (Bao and Swaab, 2010; Swaab et al., 2005). HPA axis hyperactivity is a common finding in the pathology of depression (Kinlein et al., 2015). In depression patients, overproduction of CRF was found in parallel with changes in other components of the HPA axis (Lightman, 2008). Therefore, CRF is believed to contribute to the symptoms of depression by regulating the activity of the HPA axis.
The immune system serves as the first line of defense against multiple harmful stimuli from the environment. In mammals, the immune system can be divided into two anatomically distinct components: the neuroimmune system and the peripheral immune system. The peripheral immune system consists of different immune cells mainly derived from multipotent hematopoietic stem cells in the bone marrow, such as lymphocytes, mast cells, phagocytes, macrophages, neutrophils, dendritic cells, and natural killer cells (Hashimoto et al., 2011; Hodes et al., 2015). The primary resident immune cells of the neuroimmune system are glial cells (Gimsa et al., 2013; Beardsley and Hauser, 2014). Disorders of the immune system are associated with several chronic diseases (O'Byrne and Dalgleish, 2001), and interactions between the nervous system and the immune system play an essential role in depression (Wohleb et al., 2016). Previous studies have shown that CRF receptors are widely expressed in T cells and glial cells (Stevens et al., 2003; Chatoo et al., 2018). Immune cell dysfunction has also been observed in depression, and chronic exposure to CRF and glucocorticoids inhibits T-cell proliferation (Oh et al., 2012; Jin et al., 2016). Additionally, the expression level of glial fibrillary acidic protein (GFAP), a marker of astrocytes, is decreased in patients suffering from depression (Miguel-Hidalgo et al., 2000). Cytokines, including interleukin-6 (IL-6), interleukin-1 beta (IL-1β), tumor necrosis factor alpha (TNFα) and interleukin-10 (IL-10), can induce the secretion of CRF upon exposure to stress, and CRF can in turn modulate the levels of these cytokines (Kariagina et al., 2004). Thus, CRF is suggested to be a key regulator of immune responses in depression. Novel antidepressants can be developed based on the regulatory role of CRF in depression. For example, a large number of Chinese herbal medicines (CHM) hold potential for treating depression because of their ability to suppress inflammation and normalize elevated CRF levels. Drugs that directly modulate CRF signaling and HPA axis activity, such as CRF1 antagonists, can also be potent antidepressants. This review summarizes the evidence highlighting the role of CRF in the neuroimmune regulation of depression and provides a biological basis for developing effective treatments for this psychiatric disorder.
NEUROBIOLOGY OF DEPRESSION
Depression is a disorder with a complex pathogenesis that is not well understood, owing to its highly variable pathophysiological course. Familial studies suggest that depression is a heterogeneous mental disease (Belmaker and Agam, 2008). Besides genetic factors, environmental adversities such as overall health status, emotional abuse, and social problems are also risk factors for depression. At present, there is no established mechanism for the interaction between the genetic and environmental factors involved in the onset and development of depression (Otte et al., 2016).
The mammalian stress response is a complex biological process driven by interactions between the brain and peripheral systems such as the immune and cardiovascular systems (McEwen, 2007). Preclinical and clinical studies have demonstrated that stress and depression are associated with altered neuroplasticity, that is, changes in the morphology of neurons and alterations in the connectivity and activation of neural networks in a regionally dependent manner (Duman, 2014). Atrophy and loss of neurons and glial cells are seen in the brains of depressed subjects, and reduced volumes of the hippocampus and cortical brain regions are observed in the pathogenesis of depression (Otte et al., 2016). Changes in dendritic spine density, dendritic length, and branching patterns have been described in the hippocampus, amygdala, and prefrontal cortex in response to stress (Davidson and McEwen, 2012). Besides impaired neuroplasticity, decreased neurogenesis in the dentate gyrus (DG) of the hippocampus has also been found in the brains of depressed patients (Samuels and Hen, 2011). Taken together, this evidence indicates that depression affects an individual by changing neural structures and networks.
CORTICOTROPIN-RELEASING FACTOR (CRF) AND HPA AXIS: AN OVERVIEW
Corticotropin-releasing factor (CRF), also termed corticotropin-releasing hormone (CRH), is a 41-amino acid polypeptide. Apart from CRF, the CRF family also includes three urocortins: urocortin 1, urocortin 2, and urocortin 3 (Keck, 2006). Members of the CRF family bind to two types of receptors, corticotropin-releasing factor receptor 1 (CRF1) and corticotropin-releasing factor receptor 2 (CRF2), which are expressed differently in the nervous system and peripheral tissues. CRF1 is highly expressed in the brain, cerebellum, and pituitary, with lower expression in peripheral tissues such as the skin and adrenal gland (Potter et al., 1994). The expression of CRF2 in the CNS is more limited, being restricted primarily to subcortical areas such as the hypothalamus and amygdala (Reul and Holsboer, 2002a). CRF2 is, however, widely expressed in peripheral tissues including the heart, lung, adrenal gland, ovaries, and testes (Naughton et al., 2014; Ketchesin et al., 2017).
Corticotropin releasing factor is a key component of the HPA axis. The HPA axis is composed of the hypothalamus, the pituitary gland and the adrenal glands and is a major regulator of endocrine stress response (Keck, 2006). Different brain regions are involved in the stress response system, such as amygdala, hippocampus and the prefrontal cortex (PFC) (Bao and Swaab, 2010). During the stress state, the neuronal activation in these regions converges on the hypothalamus and activates the endocrine stress response (Waters et al., 2015). Typically, CRF is secreted by the median paraventricular nucleus (PVN) in the hypothalamus (as a response to various stressors) and released from the terminals of secretory neurons. CRF is transported by the local vascular system, and stimulates the pro-opiomelanocortin (POMC) transcription and adrenocorticotropic hormone (ACTH) release (also named corticotropin) by binding to CRF1 in the anterior pituitary gland (Lightman, 2008;Slominski et al., 2013). ACTH acts on the adrenal cortex resulting in the synthesis and release of glucocorticoids (cortisol in humans and corticosterone in rodents), which have broad biological effects in the body (Arborelius et al., 1999;Slominski et al., 2013). Glucocorticoids are the main end effectors of HPA activation and also exert negative feedback effects on the hypothalamus and the pituitary gland to inhibit CRH and ACTH production, respectively. Two types of glucocorticoid receptors have been identified: the mineralocorticoid receptor (MR) and the glucocorticoid receptor (GR). Glucocorticoids act on these two kinds of receptors to terminate the stress response (Bao and Swaab, 2010).
DYSFUNCTION OF CRF AND HPA AXIS IN DEPRESSION
The HPA axis mediates the endocrine stress response in both basal and pathological conditions. Hyperactivity of the HPA axis has been observed as one of the most fundamental mechanisms in the pathophysiology of psychiatric disorders, including depression (Vreeburg et al., 2009). Increased concentrations of CRF in cerebrospinal fluid and increased CRF mRNA expression in the PVN have been observed in depression patients (Nemeroff et al., 1984; Raadsheer et al., 1995). ACTH and cortisol levels increase in parallel with the hypersecretion of CRF and result in adrenal hypertrophy (Lightman, 2008; Wang et al., 2017). The hyperactivity of the HPA axis is accompanied by impaired HPA negative feedback, resulting in hypercortisolemia (de Kloet et al., 2005). Long-lasting abnormal HPA axis activity disrupts endocrine system homeostasis, resulting in a series of physiological, behavioral, and mental consequences, and drives the pathogenesis of psychiatric disorders including depression (Bao et al., 2012).
The role of the HPA axis in depression is age-dependent. HPA axis hyperactivity is a common finding in younger patients (Murphy, 1991; Vreeburg et al., 2009). However, the results of studies focusing on older patients are mixed. Consistent with findings in younger adults, high cortisol levels were also found in some older depressed subjects (Gotthardt et al., 1995; O'Brien J. T. et al., 2004). Conversely, decreased serum and urinary cortisol levels were observed in other older patient samples (Morrison et al., 2000; Oldehinkel et al., 2001). These findings suggest that both hyper- and hypoactivity of the HPA axis are implicated in late-life depression (Ancelin et al., 2017). The hypocortisolemia could be due to chronic exhaustion of the HPA axis (Bremmer et al., 2007). With increasing age, patients with depression show a greater change in HPA axis activity compared to people without depression, especially in circulating cortisol and ACTH levels (Stetler and Miller, 2011; Belvederi Murri et al., 2014). The HPA axis becomes increasingly vulnerable to dysregulation with age (Ancelin et al., 2017). This change may be caused by age-related changes in different elements of the HPA axis, such as increasing instability of MRs and biosynthetic dissociation of adrenocortical secretion (Ferrari et al., 2001; Berardelli et al., 2013).
Interestingly, the prevalence of depression in women is several times greater than that in men (Kessler et al., 1993). Sex differences in CRF receptors have been found in almost all brain regions (Weathington et al., 2014). Given the association between CRF and depression, it has been hypothesized that CRF receptors may mediate the gender-dependent prevalence of depression (Waters et al., 2015). In adult rats, CRF1 binding in females is overall greater than that in males, with higher binding in the accumbens (ACC), dorsal CA3, and subregions of the basal forebrain such as the nucleus accumbens shell (AcbS) and olfactory tubercle (OT) (Weathington et al., 2014). Females also have higher CRF2 binding in the lateral septum, whereas in other brain regions, such as the posterior bed nucleus of the stria terminalis (BST) and the ventromedial hypothalamus, males have greater CRF binding (Weathington et al., 2014; Beery et al., 2016). The sex differences in CRF receptors may be the result of evolutionary adaptation to the different adult social behaviors that benefit reproductive success (Weathington et al., 2014). A similar gender bias has also been observed in key symptoms of depression such as hyperarousal and inability to concentrate, and this bias has been associated with gender differences in CRF regulation (Bangasser et al., 2016). Therefore, according to the published literature, gender differences in CRF regulation and in the symptoms of depression strongly support the involvement of CRF in depression.
The CRF system also has a vital role in both stress responses and depression. In depression, excess glucocorticoid levels, caused by hyperactivity of the HPA axis, result in neuronal damage and immune disturbances (Reul and Holsboer, 2002b; Koutmani et al., 2013). CRF stimulates neurogenesis and attenuates the glucocorticoid-induced damage to neural stem/progenitor cells in mice (Koutmani et al., 2013). Increased numbers of CRF-expressing neurons and elevated CRF mRNA expression were found in the PVN of the hypothalamus of patients with depression (Raadsheer et al., 1994, 1995). The dysregulation of CRF causes extensive negative effects on the body, such as reduction in appetite, stress-induced analgesia, sleep disturbances, and anxiety (Swaab et al., 2005; Bao and Swaab, 2010). These effects can be mimicked in experimental animals by intracerebroventricular injection of CRF (Holsboer et al., 1992; Holsboer, 2001). CRF overexpression in the CNS of mice caused stress-induced hypersecretion of stress hormones and depression-like behaviors (Lu et al., 2008). CRF acts through CRF1 and CRF2 receptors to regulate depressive-like behaviors, and these receptors play different roles in the stress-induced HPA response. Restraint stress induced a rapid and strong down-regulation of hippocampal CRF1 receptor mRNA, while CRF2 receptor mRNA was upregulated in the same region (Greetfeld et al., 2009). Mice lacking the CRF1 receptor showed an impaired stress-induced HPA response (Muller et al., 2000). In contrast, CRF2-deficient mice showed increased depression-like behaviors (Bale and Vale, 2003; Todorovic et al., 2009), and this effect may be due to elevated hippocampal CRF1 receptor activity caused by MEK/ERK pathway activation in the absence of CRF2 (Todorovic et al., 2009). The CRF1 receptor has an essential role in mediating the effect of CRF on the HPA axis. A study in rats showed that chronic forced swim stress-induced depressive-like behaviors required the activation of CRF/CRF1 signaling in the basolateral nucleus of the amygdala (Chen L. et al., 2018). Mice lacking the CRF2 receptor showed early termination of the HPA response, which indicates that the CRF2 receptor may be involved in the maintenance of HPA drive (Coste et al., 2000). CRF receptors are widely expressed in the CNS. Therefore, the CRF-driven regulation of stress-coping behaviors can be independent of HPA axis activity. Decreased anxiety was observed in a mouse model in which CRF1 was inactivated in the anterior forebrain and limbic structures while functioning normally in the pituitary (Muller et al., 2003). Taken together, these findings suggest a homeostatic role for CRF in the nervous system. Dysregulation of CRF may cause a series of stress-related diseases, including depression. The roles of CRF1 and CRF2 receptors in CRF regulation discussed above, which lead to the development of depression, might help in better understanding this stress-related psychiatric disorder. Therefore, normalizing abnormal CRF secretion or blocking CRF receptors could be effective strategies for the treatment of depression.
NEUROIMMUNE SYSTEM AND DEPRESSION
The earliest indication that depression may be associated with inflammation came from reports that patients treated with recombinant human interferon alpha developed psychiatric complications (Renault et al., 1987). Subsequently, immune variations have also been observed in depressed subjects. The degree of neutrophilia, monocytosis, and leukocytosis is positively related to the severity of depression, which indicates that an inflammatory cascade might be linked to depression (Maes et al., 1992). However, mitogen-induced lymphocyte proliferation and natural killer cell activity were found to be inhibited in patients with depression (Herbert and Cohen, 1993). Furthermore, elevated serum levels of several pro-inflammatory cytokines, such as TNFα, IL-1β, and IL-6, have also been detected in patients with depression (Howren et al., 2009; Dowlati et al., 2010). Therefore, the possibility that depression results from inflammatory processes cannot be ruled out.
The involvement of the immune system in the pathogenesis of depression is also indicated by the high comorbidity rates between depression and other diseases associated with chronic inflammation, such as diabetes, cardiovascular disease and cancer (Evans et al., 2005). The chronic inflammation underlying these disease states is a possible mediator or driver of the progression of depression (Wohleb et al., 2016). Besides systemic diseases, psychosocial or environmental stress is another important contributor to depression (Christoffel et al., 2011). A study on C57BL/6 mice demonstrated that social defeat stress can lead to depressive-like behavior (Iniguez et al., 2014). Cytokine profiles from different animal models of depression indicate that various forms of stress exposure induce the release of pro-inflammatory cytokines such as INF-γ, IL-1β, and IL-6 (Hodes et al., 2015), which implicates immune responses as an underlying mechanism of stress-induced depression.
CRF, CYTOKINES, AND IMMUNE CELLS IN DEPRESSION
The peripheral immune system and the neuroimmune system are two distinct compartments of the immune system. Bidirectional molecular pathways that enable immune communication between the peripheral immune system and the neuroimmune system have been described (Wohleb et al., 2016). The blood-brain barrier (BBB) mediates the trafficking of peripheral immune cells into the CNS and the exchange of cytokines between the blood and the CNS (Erickson et al., 2012). Cytokines produced by peripheral immune cells, like IL-6 and IL-1β, can act on glial cells and neurons in the CNS (Hodes et al., 2015).
Corticotropin releasing factor and HPA axis activity are known to be modulated by cytokines (Pan et al., 2006). Cytokines and their receptors are expressed in both the CNS and PNS (Hopkins and Rothwell, 1995). Lipopolysaccharide (LPS) injection into experimental animals induced the synthesis of peripheral pro-inflammatory cytokines such as IL-1, IL-6, and TNFα. These cytokines can cross the BBB and regulate the activity of the HPA axis through multiple cytokine receptors (Utsuyama and Hirokawa, 2002). Depression is associated with the pro-inflammatory cytokines IL-1, IL-6, and TNFα via regulation of CRF (O'Brien S. M. et al., 2004). IL-1 and TNFα stimulate the secretion of IL-6, which in turn exerts negative feedback regulation on the production of IL-1 and TNFα (O'Brien S. M. et al., 2004). IL-6, IL-1β, and TNFα stimulate the secretion of CRF and result in hyperactivity of the HPA axis (Dentino et al., 1999; Kariagina et al., 2004). A CRF1 antagonist (SSR125543) can block the effects of inflammatory cytokines on stress-related behaviors (Knapp et al., 2011). Moreover, CRF can induce the release of TNF-α in glial cells (Wang et al., 2003). Another study demonstrated that intraperitoneal injection of CRF increased the expression of TNF-α and IL-6. These results imply that during depression, proinflammatory cytokines stimulate the secretion of CRF, and CRF activation may in turn facilitate secretion of proinflammatory cytokines. Anti-inflammatory cytokines play different roles in CRF-driven regulation of depression. IL-10, an anti-inflammatory cytokine produced in lymphocytes and CNS structures such as the pituitary and hypothalamus, plays a key role in limiting immune responses and further inhibiting the production of cytokines (Smith et al., 1999; Kiecolt-Glaser and Glaser, 2002). IL-10 attenuates the proinflammatory state produced by LPS (Hennessy et al., 2011). In clinical studies, patients with depression treated with four antidepressants (venlafaxine, L-5-hydroxytryptophan, fluoxetine, and imipramine) showed an increase in the production of IL-10 (Kubera et al., 2000, 2001). Under conditions of stress, IL-10 production by lymphocytes or the hypothalamus is increased along with the levels of ACTH and CRF (Smith et al., 1999). IL-10 has been suggested to prevent the passive behavior caused by CRF injection (Hennessy et al., 2011). As IL-10 can stimulate the secretion of ACTH, this preventive effect may be partly due to ACTH-mediated short feedback loop inhibition of CRF (Smith et al., 1999). Another clinical study showed that CRF treatment suppresses IL-10 production in both Alzheimer's disease (AD) patients and healthy controls, and this process was regulated by T cells (Oh et al., 2012). Both proinflammatory and anti-inflammatory cytokines can enhance the production of CRF. However, the effects of CRF on proinflammatory and anti-inflammatory cytokines are opposite: CRF stimulates the secretion of proinflammatory cytokines while it suppresses the secretion of anti-inflammatory cytokines. Taken together, interactions between CRF and cytokines play a crucial role in the pathology of depression, and targeting the network of cytokines and CRF may be an effective therapeutic strategy for this mood disorder.
Peripheral immune cells such as T cells play an important role in the stress-induced immune response (Haczku and Panettieri, 2010). The immunomodulatory effect of CRF is not restricted to the nervous system, as CRF also exerts peripheral regulatory effects on the skin, the gastrointestinal tract and the cardiovascular system (Slominski et al., 2013). CRF receptors are expressed by a variety of immune cells, such as mast cells, dendritic cells, B cells, and T cells (Chatoo et al., 2018; Harle et al., 2018). Chronic exposure to CRF and glucocorticoids results in immune dysregulation such as a reduction in T-cell proliferation (Oh et al., 2012; Jin et al., 2016). One primary function of T cells in the immune system is to produce cytokines. CRF suppresses the anti-inflammatory cytokine IL-10 in regulatory T (Treg) cells, a subset of T cells that contributes to stress-related exacerbation in AD (Oh et al., 2012). A recent study demonstrated that CRF can disturb the immunosuppressive effect of Treg cells on CD4+ T cells by suppressing a protein named dedicator of cytokinesis 8 (DOCK8), and this effect may contribute to stress-induced aggravation of AD (Jin et al., 2016). Interestingly, lymphocytes such as T cells and B cells also have the ability to secrete CRF (Kravchenco and Furalev, 1994). The interactions between T cells and CRF in depression are yet to be explored.
Accumulating evidence suggests that glial cells, a major cellular component of the neuroimmune system, are also involved in the pathology of depression. Oligodendrocytes, astrocytes, and microglia are some of the most common types of glial cells in the CNS (Miller and O'Callaghan, 2005). Loss of glial cells in the amygdala and subgenual prefrontal cortex has been reported in depressed subjects (Ongur et al., 1998; Hamidi et al., 2004). A decrease in expression of GFAP, a marker of astrocytes, was observed in patients with depression (Miguel-Hidalgo et al., 2000). In addition, glial ablation in the prefrontal cortex induced depressive-like behaviors in rats (Banasr and Duman, 2008). These findings suggest a crucial role for glial cells in depression, and glial cell dysfunction may contribute to the progression of this disorder. Microglia belong to the macrophage population and play a key role in CNS homeostasis (Perry and Teeling, 2013). Microglia are in a resting state under basal conditions; once activated, they undergo morphological changes and become phagocytic cells (Vilhardt, 2005). Activated microglia and astrocytes produce pro-inflammatory cytokines such as TNFα, IL-1, and IL-6, resulting in neuroinflammation (Lee et al., 2000; Zhu et al., 2010). Intracerebroventricular administration of LPS induced an up-regulation of proinflammatory cytokines along with an increase in reactive glial markers, and resulted in depressive-like behaviors (Huang et al., 2008). In the CNS, inflammasomes regulate neuroinflammation by mediating the maturation and secretion of pro-inflammatory cytokines (Singhal et al., 2014). Activation of inflammasomes has been found in patients with depression (Alcocer-Gomez and Cordero, 2014). In depressed rats, proinflammatory cytokine-related inflammation is mediated by the nucleotide-binding oligomerization domain-like receptor family pyrin domain-containing 3 (NLRP3) inflammasome (Pan et al., 2014). Chronic stress failed to induce depressive behaviors in the absence of the NLRP3 inflammasome (Alcocer-Gomez et al., 2016). Activation of the NLRP3 inflammasome in glial cells could also induce depressive-like behaviors in rats (Yue et al., 2017). Furthermore, glial cells mediate the neuroinflammatory process and are involved in the pathogenesis of depression. Both CRF1 and CRF2 receptors are expressed in microglia and astrocytes (Stevens et al., 2003). The activation of microglia and astrocytes in neuroinflammation is mediated by CRF, and this process may be an underlying mechanism of several neurological diseases, including depression (Kritas et al., 2014). Abnormalities in oligodendrocytes have been described in several psychiatric disorders, such as schizophrenia, bipolar disorder, and depression (Aston et al., 2005). A reduction in total glial cells and oligodendrocytes has been found in the amygdala of the brains of depressed subjects, while no significant difference in astrocyte or microglia density was observed (Hamidi et al., 2004). There is no direct evidence for the presence of CRF receptors in oligodendrocytes, but CRF elevates cyclic adenosine monophosphate (cAMP) levels in these cells (Wiemelt et al., 2001). Thus, CRF receptors may also be expressed in oligodendrocytes, as CRF1 is the primary mediator of the increase in cAMP in response to CRF stimulation (Stevens et al., 2003). Further studies are needed to elucidate the relationship between oligodendrocytes and CRF.
Cumulatively, CRF regulates immune responses in the CNS by mediating cytokine production and the activation of peripheral immune cells and glial cells (Figure 1). CRF-mediated immune responses play a crucial role in the pathogenesis of a series of neurological diseases, including depression. However, a recent study reported that chronic high-dose captopril (CHC) administration can induce a specific form of depressive-like behavior. This effect is caused by Treg reduction and microglial activation with unaltered CRF levels and HPA axis activity (Park et al., 2017). This finding suggests that the activation of immune cells in depression can also be independent of CRF and HPA axis regulation.
POTENTIAL APPLICATION OF CHINESE HERBAL MEDICINES IN TREATING DEPRESSION
The links between immune responses and depression have inspired the application of anti-inflammatory therapies in the treatment of depression. In depressed patients who also suffer from coronary artery disease, statin treatment can downregulate IL-1β expression and function as an anti-inflammatory therapy for depression (Ma et al., 2016). Another study demonstrated that chronic treatment with the non-steroidal anti-inflammatory drug (NSAID) celecoxib reversed depressive-like behavior in stressed rats by inhibiting cyclooxygenase (COX)-2 expression (Guo et al., 2009). Ginseng total saponins (GTS) are effective in attenuating lipopolysaccharide (LPS)-induced depression-like behavior because of their peripheral anti-inflammatory activity (Kang et al., 2011). Ethyl-eicosapentaenoate (EPA) has been used to treat depression, and this activity likely originates from suppression of inflammation and upregulation of nerve growth factor (NGF) (Song et al., 2009). Besides drugs, other approaches that suppress inflammation may also be potential treatment strategies for depression. A recent clinical study suggests that transcutaneous auricular vagus nerve stimulation (taVNS) can alleviate multiple symptoms of depression, and one of the possible underlying mechanisms is that taVNS may inhibit inflammatory responses and relieve stress (Kong et al., 2018).
It is worth noting that many CHM have long been used for their anti-inflammatory properties. The biologically active components of CHM have been reported to inhibit proinflammatory pathways (Pan et al., 2011). Antidepressant effects have been found for a large number of CHM such as Tianshu capsule, Danggui-Shaoyao-San, and Kai-Xin-San (Xu et al., 2011; Zhu et al., 2016; Sun et al., 2018). Thus, these CHM hold potential as antidepressant medications. In rats, Tribulus terrestris saponins (TTS) treatment significantly reduced the chronic mild stress (CMS)-induced increase of serum CRF (and CORT) and depressive-like symptoms, which indicates that the antidepressant effects of TTS may be attributed to downregulation of HPA axis hyperactivity via CRF regulation (Wang et al., 2013). Salidroside (SA) showed antidepressant activities in olfactory bulbectomized rats by reversing the elevated CRH expression in the hypothalamus and the serum CORT level, and the normalization of HPA axis hyperactivity by SA may be due to its anti-inflammatory properties (Yang et al., 2014). Oral administration of saikosaponin A, one of the main constituents of Chai hu, restored the elevated pro-inflammatory cytokine levels and CRF level in depressed rats (Chen X. Q. et al., 2018). However, direct intracerebroventricular injection of saikosaponin A failed to affect CRF levels, while saikosaponin D, another major component of Chai hu, increased the CRF mRNA level in the hypothalamus in the same study (Dobashi et al., 1995). Therefore, instead of directly affecting CRF levels, saikosaponin A may regulate CRF levels by suppressing neuroinflammation (Chen X. Q. et al., 2018). These findings suggest that antidepressants, including CHM, can normalize HPA axis hyperactivity by decreasing CRF levels, and this effect may be due to direct regulation of CRF levels or indirect regulation of neuroimmune mechanisms.
Although many CHM have shown promising antidepressant-like effects, their exact mechanisms of action remain unclear. Future studies are needed to identify their direct targets in depression treatment. Depression is a multifactorial disease, and most CHM act through multiple mechanisms simultaneously. Therefore, they may have advantages over single-target drugs in depression treatment. In addition, combinations of CHM may have better therapeutic effects than a single drug in treating complex diseases such as depression. Developing novel plant-based medicines against depression is an important imperative to strengthen public health and enrich our knowledge about the potential use and value of CHM.
CRF1 ANTAGONISTS AND OTHER ANTIDEPRESSANTS
Corticotropin releasing factor exerts its effect on various tissues by acting on CRF receptors. As CRF/CRF1 signaling is involved in the pathogenesis of depression, blocking the CRF1 receptor may be an effective therapeutic approach. Several CRF1 receptor-specific antagonists with potent antidepressant-like effects have been developed (Zoumakis et al., 2006). For example, the selective CRF1 receptor antagonist E2508 shortened immobility time in the rat forced swim test (Taguchi et al., 2016). Besides treating depression, CRF1 receptor antagonists may have many other applications because of the multifaceted actions of the CRF/CRF1 system. For example, potential clinical applications of CRF1 receptor antagonists include the treatment of anxiety, allergy, autoimmune inflammatory disorders, and epilepsy (Grammatopoulos and Chrousos, 2002). In aged rats, two CRF1 receptor antagonists, R121919 and antalarmin, prevented chronic stress-induced anxiety-related behavioral and memory deficits (Dong et al., 2018).
FIGURE 1 | Schematic illustration of CRF regulation of the endocrine and immune system in depression. CRF mediates the activity of the HPA axis and the neuroimmune system. It also exerts regulatory effects on other peripheral tissues such as the skin, gastrointestinal tract and cardiovascular system. Chronic exposure to stress results in CRF hypersecretion and HPA axis hyperactivity. Elevated CRF levels stimulate the production of pro-inflammatory cytokines by peripheral immune cells; these peripheral cytokines can cross the blood-brain barrier and activate astrocytes and microglia in the CNS. CRF can also directly activate astrocytes and microglia. The activated astrocytes and microglia secrete more pro-inflammatory cytokines. These astrocyte- and microglia-derived cytokines have a broad effect on the CNS, drive neuroinflammation and produce depression-like behavioral alterations.
Although CRF1 receptor antagonists show promising effects in rodents, their clinical efficacy is mixed.
GSK561679, BMS-562086, and GW-876008 yielded negative results in clinical trials in patients with depression and anxiety disorders (Griebel and Holsboer, 2012; Dunlop et al., 2017). In contrast, two clinical trials with pexacerfont and verucerfont showed positive effects in treating withdrawal symptoms and stress-induced alcohol craving (Schwandt et al., 2016; Morabbi et al., 2018). One possible reason for these failures might be the heterogeneous response to CRF1 receptor antagonist treatment (Licinio et al., 2004). These individual differences may be caused by genetic variability of CRHR1, the gene encoding the CRF1 receptor, or by different activity of CRF-CRF1 systems (Spierling and Zorrilla, 2017; Davis et al., 2018). Further studies could focus on developing personalized treatment plans for depression. Evaluating genetic or non-genetic markers may aid in developing specific CRF1 antagonists for specific patient subgroups. Besides CRF1 receptor antagonists, activation of CRF2 with two selective agonists, urocortin 2 (UCN 2) and urocortin 3 (UCN 3), reversed depression- and anxiety-like behaviors (Bagosi et al., 2016). The development of selective CRF1 receptor antagonists or CRF2 receptor agonists may aid in developing novel treatments for a wide array of stress-related diseases, including depression.
Several other classes of antidepressants have been used for the treatment of depression, such as triple uptake inhibitors, monoamine oxidase inhibitors and selective monoamine reuptake inhibitors (de Oliveira et al., 2018). Overall, the efficacy and therapeutic window of antidepressants are limited; only about 50% of all patients receiving antidepressants achieve complete remission (Nestler et al., 2002). Moreover, the mechanism of action of antidepressants is usually much more complex than expected, and as a result antidepressant medications generally cause a variety of side effects. Therefore, it is extremely important to develop novel antidepressants with high efficacy and fewer side effects.
CONCLUSION
Depression is a very complex neurological disorder. The normal functioning of the brain relies on intricate interactions between the CNS and peripheral systems such as the gastrointestinal tract, cardiovascular system, and immune system. Dysregulation of any key mediator in these systems may break this homeostasis and subsequently result in neurological diseases. CRF affects various biological processes in the human body, and an increasing volume of data suggests a crucial role for CRF in the immune regulation of depression. CRF is a key regulator of the HPA axis, a common pathway of the stress response involved in the pathogenesis of a variety of neurological diseases, and it can also regulate the neuroimmune system by mediating cytokine production and neuroinflammation. CRF receptors are expressed in peripheral immune cells, glial cells and neurons. Dysregulation of CRF caused by external and internal factors can have neuronal and endocrine consequences and drive depressive behaviors. It is notable that bidirectional regulation is a common feature of the interactions between CRF, immune cells and cytokines. Further studies are required to establish a deeper understanding of the complex network of CRF-mediated immune crosstalk in depression.
In conclusion, this review provides a basis for the crucial role of CRF in the neuroimmune regulation of depression. Studying the interaction of CRF and immune responses can help enhance our understanding of the pathogenesis of depression. Furthermore, targeting this network may facilitate new therapeutic approaches to counteract depression, and other stress-related diseases.
AUTHOR CONTRIBUTIONS
YJ and TP drafted and wrote the manuscript. YJ, TP, UG, MS, PL, WQ, ZC, YZ, and WZ did the critical revision of the manuscript.
"Psychology",
"Biology",
"Medicine"
] |
Mechanical Tension of Biomembranes Can Be Measured by Super Resolution (STED) Microscopy of Force-Induced Nanotubes
Membrane tension modulates the morphology of plasma-membrane tubular protrusions in cells but is difficult to measure. Here, we propose to use microscopy imaging to assess the membrane tension. We report direct measurement of membrane nanotube diameters with unprecedented resolution using stimulated emission depletion (STED) microscopy. For this purpose, we integrated an optical tweezers setup in a commercial microscope equipped for STED imaging and established micropipette aspiration of giant vesicles. Membrane nanotubes were pulled from the vesicles at specific membrane tension imposed by the aspiration pipet. Tube diameters calculated from the applied tension using the membrane curvature elasticity model are in excellent agreement with data measured directly with STED. Our approach can be extended to cellular membranes and will then allow us to estimate the mechanical membrane tension within the force-induced nanotubes.
■ INTRODUCTION
Cellular membranes are found to attain a multitude of morphologies and often exhibit highly curved segments with certain functionality. In particular, highly curved membrane nanotubes are involved in several cellular functions such as cell migration, 1 signaling, 2 remote communication and motility, 3 and cell spreading. 4 Tunneling membrane nanotubes also play an important role in transfer of cellular content (small molecules, proteins, prions, viral particles, vesicles, and organelles) in a variety of cell types 5−9 as well as electrical signals. 10 During migration, tubular membrane protrusions (also referred to as retracting fibers) are formed behind the migrating cell and are responsible for releasing cellular content. 11 In all of these examples, when not supported by the underlying substrate, membrane shape is modulated by membrane tension, which affects the membrane surface area and morphology. 12,13 Membrane tension thus provides a link between membrane mechanics, morphology, and mechanical transduction in the cell, for example, via tension-sensitive membrane channels. However, how cellular tension is regulated and how mechanobiological cues are perceived by the cell is poorly understood. 14 In principle, plasma membrane tension can be indirectly inferred from nanotube pulling experiments, where the membrane diameter and pulling force could be used to extract membrane mechanical parameters such as tension, bending rigidity, and spontaneous curvature. 15−19 Tension-sensitive probes with fluorescence decay lifetimes depending on tension have also been recently introduced. 20 However, it is unclear how curvature and the increase in local probe concentration due to sorting mechanisms affect the dye performance. Apart from providing the means to assess the membrane bending rigidity and tension, tube pulling experiments also allow the study of cellular processes that take place at highly curved membranes. 21−26 In these experiments, the cell or vesicle is immobilized or, more often, aspirated by a micropipette setting the membrane tension, and a tube is typically pulled by means of optical-tweezer manipulation of a bead attached to the membrane. For a fixed bending rigidity of the membrane, the tube radius depends on membrane tension, and thus measuring the radius allows assessing this mechanical parameter. However, membrane nanotube diameters are not directly accessible via diffraction-limited microscopy imaging, and these limitations obstruct progress in the field. 27 Here, we measure for the first time the diameter of membrane nanotubes directly, using stimulated emission depletion (STED) nanoscopy, as a function of membrane tension in a controlled reconstituted system. To form membrane nanotubes, we employ giant unilamellar lipid vesicles (GUVs). 28 GUVs represent a popular model system of cellular membranes as their response to external factors as well as their thermodynamic state can be visualized under the optical microscope. 29,30 In addition, they are amenable to micromanipulation (see Chapters 11 and 16 of ref 28). Pulling a membrane nanotube (also referred to as a tether) provides such a micromanipulation protocol, in which a cylindrical membrane segment with a diameter ranging from 20 nm to a few hundred nanometers is extruded from the GUV.
Controlled membrane nanotubes can be generated by hydrodynamic flow 31−34 (both inward and outward tubes with respect to the vesicle body can be pulled 34 ), gravity, 35 micromanipulation, 31,36,37 and magnetic or optical tweezers 15,38−40 (see also overview in ref 41), whereby tube formation is enforced by a localized pulling force.
We use micropipettes to aspirate and hold the vesicle in place and to modulate the membrane tension by adjusting the aspiration pressure. The imposed tension mimics cellular conditions corresponding to the cortical tension. 42 To pull the nanotube from the vesicle, a sticky microsphere, trapped by the optical tweezers, is used as a handle. For a GUV aspirated at a suction pressure at which the GUV tongue inside the micropipette is longer than the micropipette radius, the total membrane tension is given by eq 1, 41 where R v and R p are the respective radii of the vesicle and micropipette, ΔP asp is the aspiration pressure of the micropipette, m is the membrane spontaneous curvature, and κ is the bending rigidity of the membrane. The first term in eq 1 is the aspiration tension, 43 for which we will use the notation Σ asp . The trapped bead, located at the terminal of an outward nanotube pulled from the aspirated GUV, experiences a force given by eq 2, 13,41 which contains two terms that depend on the spontaneous curvature m, because the total membrane tension Σ̂ depends on the spontaneous curvature as well (see eq 1). In our experiments, the spontaneous curvature is so small that we can ignore these two m-dependent terms; further below, we justify this condition for the system we explore. As a consequence, the total membrane tension Σ̂ reduces to the first term in eq 1, which represents the aspiration tension Σ asp . Furthermore, for negligible spontaneous curvature, the aspiration tension becomes equal to the mechanical tension, which can now be deduced from the aspiration geometry and the aspiration pressure. In addition, the last term in eq 2 can be ignored because of the small mean curvature of the vesicle.
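Because the displayed equations did not survive text extraction, a minimal sketch of the relations referred to above as eqs 1 and 2 is written out below for the special case of negligible spontaneous curvature (m ≈ 0) assumed in this work; these are the standard micropipette-aspiration and tube-pulling expressions, and the m-dependent terms mentioned in the text are deliberately omitted rather than guessed.

% Assumed forms for m ≈ 0 (a sketch, not the authors' typeset equations)
% eq 1 (first term only): aspiration tension set by the micropipette
\Sigma_{\mathrm{asp}} = \frac{\Delta P_{\mathrm{asp}}\, R_{\mathrm{p}}}{2\left(1 - R_{\mathrm{p}}/R_{\mathrm{v}}\right)}, \qquad \hat{\Sigma} \approx \Sigma_{\mathrm{asp}}
% eq 2 (m = 0 limit): pulling force on the trapped bead
f \approx 2\pi \sqrt{2\,\kappa\,\hat{\Sigma}}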
To perform tube pulling experiments, we switch between confocal fluorescence and bright-field imaging: the membrane nanotube is visualized in confocal mode and the force on the trapped object is obtained from the bead position recorded in bright-field mode. At a fixed tube length, the force f t acting on the trapped bead can be estimated from the bead off-center displacement from the trap axis, Δx, as f t = κ tr Δx, where κ tr is the trap constant determined independently, see Experimental Section. From the dependence of f t on Σ asp , one can deduce the bending rigidity. Alternatively, this material property could be assessed from the membrane nanotube radius, if the latter could be measured: the radius of a cylindrical tube R t depends on the aspiration tension through the relation given in eq 3. 41,44 In general, the tube radius is smaller than the resolution of the optical microscope (∼200 nm), which makes it impossible to measure it directly. AFM imaging could be used when the tubes adhere to a substrate, 45 but, as a result of this adhesion, the tube morphology will be deformed into a noncylindrical shape which can no longer be described by the tube radius alone. In other studies, the tube radius is estimated indirectly from the fluorescence intensity count. 46−48 However, it is unclear how curvature influences the dye performance. Moreover, dye sorting taking place in membrane nanotubes 16,49 necessarily affects the tube fluorescence intensity and thus the measurement of the nanotube diameter. The radii of spontaneously formed tubes (not pulled by tweezers) can also be roughly inferred from spontaneous curvature measurements 17,50,51 (see pages 9−11 in ref 52 for a review of approaches to measure the membrane spontaneous curvature), but an assumption has to be made for the shape of the tube (cylindrical or necklace-like).
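As a numerical illustration of eq 3, i.e. R_t = sqrt(κ/(2Σ_asp)), the short sketch below estimates the tube diameters expected for a POPC membrane with κ = 23 k_BT over the range of aspiration tensions explored in this work; this is an illustrative calculation only, not part of the original analysis.

import numpy as np

kBT = 4.11e-21                      # thermal energy at ~22 degC, J
kappa = 23 * kBT                    # POPC bending rigidity measured in this work, J

# aspiration tensions spanning the range explored here, N/m
sigma_asp = np.array([15e-6, 50e-6, 140e-6])

# eq 3: radius of a cylindrical tube pulled at tension Sigma_asp
R_t = np.sqrt(kappa / (2.0 * sigma_asp))          # m

for s, r in zip(sigma_asp, R_t):
    print(f"tension {s * 1e6:5.0f} uN/m  ->  tube diameter {2 * r * 1e9:5.0f} nm")
# expected output: diameters of roughly 110, 60 and 40 nm, respectively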
With the advent of super resolution microscopy, the optical microscope resolution has been improved to a few tens of nanometers 53,54 and thus, in principle, can be used to measure membrane nanotube diameters. Previously, STED microscopy has been used to study the membrane heterogeneity 55−57 and to assess dimensions of endoplasmic reticulum structures using point spread function fitting. 58 It has advantages over other super resolution microscopic techniques such as photoactivated localization microscopy (PALM) 59 and stochastic optical reconstruction microscopy (STORM), 60 which require the acquisition of a high number of images (typically a few thousand frames) 61 and are thus slower imaging techniques. In this Letter, we report direct measurement of membrane nanotube diameters with unprecedented resolution using STED microscopy. For this purpose, we have integrated an optical tweezers setup in a microscope equipped for STED imaging.
■ EXPERIMENTAL SECTION
Vesicle Preparation and Characterization. GUVs were grown from 1-palmitoyl-2-oleoyl-sn-glycero-3-phosphatidylcholine (POPC) doped with 0.1 mol % biotinyl cap phosphatidylethanolamine (PE) (both from Avanti Polar Lipids) and 0.5 mol % ATTO 647N dye (AttoTech) using electroformation in 100 mOsm/kg sucrose solution; for details see Section S1 in the Supporting Information (SI). Occasionally, we also explored vesicle membranes containing cholesterol (Chol), namely at POPC/Chol 9:1 molar ratio. An optically trapped streptavidin-coated bead of diameter ∼2 μm adhered to the vesicles due to biotin−streptavidin bonding. The vesicles were diluted in an isotonic medium of 40 mM glucose and 30 mM sodium chloride solution. This external solution was chosen (i) to enhance the optical contrast of the vesicle in phase-contrast observation but avoid vesicle deformation by gravity, (ii) to ensure strong biotin−streptavidin binding, which requires the presence of sodium chloride, 40 and (iii) to establish conditions of low asymmetry across the membrane so that the spontaneous curvature is negligible 51 (at these conditions the spontaneous curvature is comparable to the mean curvature of the GUVs and the last two terms in eq 2 can be ignored). The bending rigidity of the membrane was measured from fluctuation analysis according to a previously published protocol, 62 see Section S2 in the Supporting Information. All experiments were performed at ∼22 °C. MATLAB (2014a) and Origin 2015 were used for the image and data analysis. Experimental Setup. In our experiment, a membrane nanotube is extruded from an aspirated GUV using an optically trapped microsphere (Figure 1). The setup includes three parts (see Section S3 in the SI): a micropipette system to hold and aspirate GUVs, optical tweezers to extract the membrane nanotube, and confocal and STED scanning for fluorescence imaging. The setup is based on an inverted microscope (IX83, Olympus Inc., Japan), which is part of a commercial STED system (Abberior GmbH, Germany). For optical trapping, we established home-built tweezers by introducing a continuous-wave TEM 00 mode 1064 nm laser beam (YLR-10-LP, IPG Photonics Corp.) through the microscope back port (SI Figure S1). The laser beam is tightly focused using a 1.2 numerical aperture (NA) water immersion objective (UPLSA-PO60, Olympus Inc., Japan, with a working distance of 0.28 mm) to form the optical trap. The objective is also used for the fluorescence imaging. Bright-field images were collected using a CCD camera positioned at the back port of the microscope (SI Figure S1). To quantify the trap stiffness, we employed the viscous drag method: the sample was displaced at a constant velocity while trapping a bead and monitoring its off-center displacement, see SI Section S3.1. To hold and aspirate the GUVs, a micropipette was inserted into the sample chamber using a three-dimensional micromanipulator system (Narishige Corp., Japan) clamped on the microscope (SI Section S3.2). For fluorescence imaging, a 640 nm pulsed laser was used for excitation and another pulsed 775 nm laser beam was used for emission depletion. A spatial light modulator placed in the STED beam path enabled 3D STED (for a comparison between 2D and 3D STED, see SI Section S3.3).
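To make the viscous-drag calibration of the trap stiffness concrete, the sketch below computes κ tr from Stokes drag on the bead; the viscosity, velocities, and displacements used here are hypothetical placeholders chosen only to illustrate the procedure described in SI Section S3.1, not the measured calibration data.

import numpy as np

eta = 0.95e-3                    # viscosity of the aqueous medium, Pa*s (assumed value)
r_bead = 1.0e-6                  # bead radius, m (~2 um streptavidin-coated bead)

# stage velocities and resulting bead off-center displacements (hypothetical values)
v = np.array([100e-6, 200e-6, 300e-6])    # m/s
dx = np.array([24e-9, 48e-9, 73e-9])      # m

F_drag = 6 * np.pi * eta * r_bead * v     # Stokes drag force on the trapped bead, N
k_trap = np.polyfit(dx, F_drag, 1)[0]     # slope of force vs displacement = trap stiffness, N/m

print(f"trap stiffness ~ {k_trap * 1e6:.0f} pN/um")   # on the order of the ~74 pN/um reported here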
■ RESULTS AND DISCUSSION
GUVs with diameters typically between ∼20 and 25 μm were aspirated via micropipettes with diameters of 3 to 5 μm. A floppy GUV was chosen (see Movie S1 for an example), aspirated by the micropipette at a low aspiration pressure and brought into contact with a streptavidin-coated bead trapped by the optical tweezers. The low vesicle tension allowed us to achieve a larger contact area of the bead with the membrane (occasionally the vesicle was displaced so that the position of the bead was well inside the GUV interior but still engulfed by the membrane). After waiting for a few seconds, the aspirated GUV was moved away from the trapped bead and a membrane nanotube was extruded from the vesicle due to the strong biotin−streptavidin noncovalent bonding. In all experiments, we kept the length of the enforced nanotube between 8 and 10 μm. By doing so, the hydrodynamic contribution arising from the vesicle wall is minimized. 63 If the tube is shorter, not only can the noncylindrical part of the vesicle and the fluorescence from it affect the measurements, but the high-power STED beam can also destabilize the trapped bead and affect the trapping efficiency. Much longer tubes were also avoided as the whole GUV with its aspirated tongue would be out of the field of view. In addition, the trapping potential in the outer region of the imaging field could be affected by spherical aberration of the microscope objective.
The membrane nanotube was visualized under the confocal microscope (Figure 1b). For the ease of the experiments, the center of the GUV spherical portion outside the pipet and the center of the trapped bead (and thus also the membrane nanotube) were kept in the same plane, which was fixed at 20 μm above the cover glass surface; this condition ensured no hydrodynamic effects on the trapped bead and a constant trapping efficiency. 63 The micropipette and GUV diameters were measured from the confocal image.
To measure the tube diameter, we recorded a kymograph of a line scan perpendicular to the nanotube axis (y-axis in Figure 1b). Even though the signal-to-noise (S/N) ratio of 2D STED images was higher than that of 3D STED images (see Figure S5 in the Supporting Information), the out-of-focus signal arising from the nanotube practically reduces the resolution in the former images. This effect is even more pronounced when comparing confocal and 3D STED scans, see Figure 1c−h. The pixel dwell time was adjusted to 20 μs to obtain 3D STED images with significant S/N ratio and higher effective resolution compared to 2D STED imaging. The STED resolution was measured using 20 nm beads and found to be <40 nm in both the x- and y-axes (SI Section S3.3 and Figures S3 and S4). Therefore, in a STED line scan across the tube, the two wall-crossings of a membrane nanotube with radius larger than 20 nm should be, in theory, resolvable under these system settings. However, due to the inherent vibrations of the micropipette (∼31 nm, over six measurements of the positional fluctuations of the micropipette tip), the thermal motion of the trapped beads (∼15 nm, obtained from the trap stiffness, which was measured to be 74 ± 2 pN/μm, see SI Section S3.1), and the GUV itself, the membrane nanotubes were found to laterally fluctuate with an amplitude on the order of a few hundred nanometers in the y-direction (see Figure 2a,b), which is in the range of the expected tube diameter. As a result, in a major fraction of the line scans in a kymograph, instead of two clearly defined peaks (as sketched in Figure 1g), we detect several noisy maxima. The appearance of multiple peaks was reduced to some extent by adjusting the pixel size to 20 nm (at lower pixel size, the scans were significantly noisier, see SI Figure S6). Larger pixel sizes were not explored as the resolution of the STED microscope was found to be <40 nm in both the x- and y-axes, while the pixel size is typically kept at about half of the STED resolution as a rule of oversampling. 64−66 To reduce contributions from nanotube fluctuations, the line scans were aligned (Figure 2c, see also Section S4 in the Supporting Information). Subsequent averaging allowed identifying two clearly resolved fluorescence maxima arising from the two tube wall-crossings of the line scan; see Figure 2d. STED line scans that did not show two clear peaks after applying all of the above-mentioned steps were discarded from the analysis; these discarded line scans represented approximately 56% of all scans collected and resulted from out-of-focus displacement, micropipette vibration, and membrane fluctuations. We denote the tube diameter determined from this interpeak distance as 2R t, STED and measured it for different aspiration tensions ranging between 15 and 140 × 10 −6 N/m; see Figure S7 in the Supporting Information.
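A minimal sketch of this alignment-and-averaging step is given below; it is not the analysis code used for this work (which is described in SI Section S4), but it illustrates the idea: each STED line scan is shifted to maximize its cross-correlation with a reference scan, the aligned scans are averaged, and the distance between the two brightest peaks of the averaged profile gives the apparent tube diameter. The function names and thresholds are illustrative choices.

import numpy as np
from scipy.signal import find_peaks

PIXEL_NM = 20.0   # pixel size of the STED line scans, nm

def align_and_average(kymograph):
    # kymograph: 2D array of shape (n_scans, n_pixels)
    # shift every scan so that it best overlaps the first scan (wrap-around ignored)
    ref = kymograph[0] - kymograph[0].mean()
    aligned = []
    for scan in kymograph:
        s = scan - scan.mean()
        corr = np.correlate(s, ref, mode="full")
        shift = np.argmax(corr) - (len(ref) - 1)
        aligned.append(np.roll(scan, -shift))
    return np.mean(aligned, axis=0)

def tube_diameter_nm(profile):
    # distance between the two brightest maxima (the two tube wall-crossings)
    peaks, props = find_peaks(profile, height=0.3 * profile.max(), distance=2)
    if len(peaks) < 2:
        return None                     # scan discarded, as for ~56% of the raw scans
    top2 = np.sort(peaks[np.argsort(props["peak_heights"])[-2:]])
    return (top2[1] - top2[0]) * PIXEL_NM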
To avoid excessive photobleaching and decrease in fluorescence signal of the pulled nanotubes, each measurement at a given tension was performed only once. To estimate the precision of the image processing, we performed three repeat measurements on a single tube and found the standard deviation to be 11 nm.
We then aimed at comparing the tube diameter measured from the STED images, 2R t, STED , with the tube diameter, 2R t , indirectly assessed from the applied aspiration pressure following eq 3. For this, precise knowledge of the membrane bending rigidity is required. The bending rigidity was measured with two independent methods. Analysis of the thermal fluctuations of free GUVs using a previously established protocol 62 yielded a bending rigidity of 23 ± 2 k B T as assessed on five different vesicles (see Section S2 in the Supporting Information). We also measured this membrane elastic property from tube pulling experiments using eq 2, which gave a bending rigidity value of 23 ± 5 k B T as assessed from measurements on different vesicles (see Section S5 in the Supporting Information). The results from the two approaches are in excellent agreement and are consistent with previous data. 67 Using the obtained value for the bending rigidity (κ = 23 k B T), we compared the diameters of tubes directly measured from the STED images, 2R t, STED , with the respective tube diameters, 2R t , independently assessed from the imposed membrane tension following eq 3. For the vesicles made of POPC/Chol we took κ = 32.5 k B T, corresponding to the linearly proportional increase in the bending rigidity upon the incorporation of 10 mol % cholesterol in POPC membranes as reported in ref 68. The comparison shown in Figure 3 demonstrates that the experimental STED data and the estimates from the elastic sheet model (with independently measured bending rigidity) are in excellent agreement. Presumably, for tube diameters approaching 50 nm and below, the accuracy of the STED measurements does not allow precise determination because the tube diameter reaches the size of a couple of pixels.
■ CONCLUSIONS
We have shown for the first time that super-resolution microscopy like STED can be used to directly measure the membrane nanotube diameter. For membrane tubes pulled in a controlled fashion from GUVs aspirated in micropipettes, the tube diameter measured microscopically is in excellent agreement with estimates inferred from knowledge of the membrane tension and membrane rigidity. Thus, we provide the first direct evidence for the validity of the widely used curvature elasticity model for nanotubes down to tube diameters of 50 nm. STED imaging of tubes pulled from vesicles of known bending rigidity offers a means of assessing the membrane tension without the need of operating micromanipulation setups such as micropipette aspiration.
In the current paper, we were able to measure the three quantities that enter eq 3 independently: the tube radius by STED, the aspiration tension Σ asp from the aspiration pressure, and the bending rigidity by tube pulling. In this way, we were able to confirm eq 3 directly, see Figure 3. As it stands, eq 3 is based on the implicit assumption that the mechanical membrane tension is laterally uniform and that the mechanical tension Σ t within the tube is equal to the aspiration tension. The latter assumption is, however, unnecessary. In fact, the mechanical balance within the nanotube leads to a slightly modified and more general form of eq 3 for which the aspiration tension is replaced by the tube tension Σ t . As a consequence, the tube tension is given by Σ t = κ/(2R t 2 ). The latter relation can be used to obtain the mechanical tube tension from the measured values of the bending rigidity and the tube radius. It will be interesting to use this more general form of eq 3 to estimate the tube tension of plasma membranes, combining the previously obtained bending rigidity of these membranes 69 with the tube radius as measured by STED.
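As a small worked example of this tension estimate (again a sketch with an illustrative radius, not a measurement), converting a STED-measured tube radius into the mechanical tube tension Σ t = κ/(2R t 2) amounts to:

kBT = 4.11e-21   # J at ~22 degC

def tube_tension(kappa_in_kBT, radius_nm):
    # Sigma_t = kappa / (2 R_t^2), returned in N/m
    kappa = kappa_in_kBT * kBT
    R = radius_nm * 1e-9
    return kappa / (2.0 * R ** 2)

# example: a 40 nm tube radius on a membrane with kappa = 23 kBT gives ~30 uN/m
print(f"{tube_tension(23, 40) * 1e6:.0f} uN/m")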
Furthermore, assessing the tube tension from super-resolution imaging as introduced here could be applied, for example, to study (i) the dynamics of migratory cells leaving behind membrane nanotubes from which migratosomes with signaling material will be released, 11 (ii) curvature coupling of proteins to highly bent membranes, 46 as well as (iii) flows and tension propagation in cells 48 (it has already been shown that, under the right conditions, STED microscopy can be applied to live-cell imaging without inducing substantial photodamage 70 ). The analysis described here was limited to symmetric membranes with zero spontaneous curvature. Comparing the minimal forces needed to pull out tubes with the second term in eq 2 suggests that our approach can be extended to asymmetric membranes for which the magnitude of the spontaneous curvature is comparable to or larger than 1/(240 nm). The imaging methodology developed here, based on measuring tube diameters with STED, offers access to direct measurements of material characteristics such as tension and rigidity of cell membranes.
Supporting Information: vesicle preparation and observation; fluctuation spectroscopy; setup; optical trapping and calibration; micropipette manipulation; STED imaging and image analysis; bending rigidity data from tube pulling (PDF). Movie S1: fluctuating floppy vesicle suitable for micropipette aspiration (real time, 10 s at 24 frames per second) (AVI).
Figure 3. Plot of membrane nanotube diameters as directly measured using STED (R t, STED ) versus tube diameter estimated using eq 3 (R t ). Different colors correspond to measurements on different vesicles. Solid symbols represent data measured on POPC vesicles and open symbols are data collected on vesicles made of POPC/Chol 9:1 (molar ratio). The green line is a linear fit, y = a + bx, with a = 8.86 nm and b = 0.91, and the light green band represents a 95% confidence interval. The orange line with slope 1 is included for comparison.
"Biology",
"Engineering"
] |
Genome editing abrogates angiogenesis in vivo
Angiogenesis, in which vascular endothelial growth factor receptor (VEGFR) 2 plays an essential role, is associated with a variety of human diseases including proliferative diabetic retinopathy and wet age-related macular degeneration. Here we report an adeno-associated virus (AAV)-mediated clustered regularly interspaced short palindromic repeats (CRISPR)-associated endonuclease (Cas)9 from Streptococcus pyogenes (SpCas9) system used to deplete VEGFR2 in vascular endothelial cells (ECs), whereby the expression of SpCas9 is driven by an endothelial-specific promoter of intercellular adhesion molecule 2. We further show that recombinant AAV serotype 1 (rAAV1) transduces ECs of pathologic vessels, and that editing of the genomic VEGFR2 locus using rAAV1-mediated CRISPR/Cas9 abrogates angiogenesis in the mouse models of oxygen-induced retinopathy and laser-induced choroid neovascularization. This work establishes a strong foundation for genome editing as a strategy to treat angiogenesis-associated diseases.
Vascular endothelial growth factor (VEGF) plays a critical role in angiogenesis, the process by which new blood vessels grow from pre-existing vessels [1][2][3]. Among the VEGF receptors 1, 2, and 3 (VEGFR1, 2, and 3), VEGFR2 mediates nearly all known VEGF-induced output, including microvascular permeability and neovascularization (NV) 4. NV is critical for supporting the rapid growth of solid tumors beyond 1-2 mm 3 and for tumor metastasis 5. Abnormal angiogenesis is also associated with a variety of other human diseases such as proliferative diabetic retinopathy (PDR) 6,7, retinopathy of prematurity (ROP) 8, and wet age-related macular degeneration (AMD) 9,10. PDR accounts for the highest incidence of acquired blindness in the working-age population 6,7; ROP is a major cause of acquired blindness in children 8; AMD represents the leading cause of blindness in people over the age of 65, afflicting 30-50 million people globally 10. Preventing VEGF-stimulated activation of its receptors with neutralizing VEGF antibodies (ranibizumab and bevacizumab) and the extracellular domains of VEGFR1 and 2 (aflibercept) is currently an important therapeutic approach to angiogenesis in these eye diseases but requires chronic treatment 8,10. Although these anti-VEGF agents can reduce neovascular growth and lessen vascular leakage, there are still therapeutic challenges for a significant number of patients with these eye diseases 11.
Adeno-associated viruses (AAVs) are small viruses that are not currently known to cause any disease, and their derived vectors show promise in human gene therapy 12,13. The clustered regularly interspaced short palindromic repeats (CRISPR)-associated DNA endonuclease (Cas)9 from Streptococcus pyogenes (SpCas9) processes pre-crRNA transcribed from the repeat spacers into CRISPR RNAs (crRNAs) and cleaves invading nucleic acids under the guidance of crRNA and trans-activating crRNA (tracrRNA) 14,15. A single guide RNA (sgRNA), engineered as a crRNA-tracrRNA chimeric RNA, can direct sequence-specific SpCas9 cleavage of double-strand DNA containing an adjacent "NGG" protospacer-adjacent motif (PAM) 14. This CRISPR/Cas9 system is a powerful tool for the targeted introduction of mutations into eukaryotic genomes and subsequent protein depletion 16,17.
In this study, we employed the AAV-mediated CRISPR/Cas9 system to edit genomic VEGFR2 in vivo and showed that editing of VEGFR2 abrogated angiogenesis in two mouse models of oxygen-induced retinopathy (OIR) and laser-induced choroid NV (CNV).
Results
CRISPR/Cas9-mediated depletion of VEGFR2 in vascular ECs in vitro. Recombinant AAV (rAAV) vectors are at present the leading candidates for virus-based gene therapy thanks to their broad tissue tropism, non-pathogenic nature, and low immunogenicity 13. In this study, we adapted a dual-AAV vector system packaging SpCas9 and SpGuide 16. To identify an appropriate AAV serotype that could transduce vascular endothelial cells (ECs), we replaced the promoter driving GFP (phSyn) in the AAV-SpGuide vector 16 with a cytomegalovirus (CMV) promoter (Fig. 1a) 15.
A major goal of gene therapy is the introduction of genes of interest into desired cell types. To circumvent targeting VEGFR2 in photoreceptors of eye tissues 18, an endothelial-specific promoter was designed to drive the expression of SpCas9. Thus, we replaced the Mecp2 promoter in the AAV-pMecp2-SpCas9 vector 16 with an endothelial-specific promoter of intercellular adhesion molecule 2 (pICAM2) 19 (Fig. 1b).
Recombinant adeno-associated virus serotype 1 (rAAV1) has been shown to transduce vascular ECs with high efficiency 20. We next examined whether rAAV1 was able to deliver the CRISPR/Cas9 system into ECs 20,21. As shown in Fig. 1c, rAAV1 was able to infect human primary retinal microvascular ECs (HRECs), human primary umbilical vein ECs (HUVECs) as well as human primary retinal pigment epithelial cells (hPRPE). Subsequently, we transduced these cells with rAAV1-pICAM2-SpCas9 (rAAV1-SpCas9) to test whether the ICAM2 promoter was able to drive SpCas9 expression specifically in ECs. Western blot analysis of the transduced cell lysates indicated that SpCas9 was expressed in HRECs and HUVECs, but not in hPRPE cells (Fig. 1d), demonstrating that the dual vectors of AAV-SpCas9 and AAV-SpGuide are able to specifically target genomic loci of ECs. Then, a mouse genomic target sequence named mK22 (Fig. 1a), corresponding to K12, the most efficient of the four sgRNAs targeting human VEGFR2 exon 3 22, was cloned into the SpGuide vector.
To assess the editing efficiency of our dual-vector system in vitro, we infected C57BL/6 mouse primary brain microvascular ECs (MVECs) using rAAV1-SpCas9 with rAAV1-mK22 or rAAV1-lacZ. At 4 days post infection, the genomic DNA was isolated for PCR. Sanger DNA sequencing results showed that there were mutations around the PAM sequence in PCR products from MVECs transduced with rAAV1-SpCas9 plus -mK22 but not from those with rAAV1-SpCas9 plus -lacZ (Fig. 1e), suggesting that the mK22-guided SpCas9 cleaved the VEGFR2 locus at the expected site in MVECs. To find potential off-targets of mK22, the "CRISPR Design Tool" (http://crispr.mit.edu/) was used. NGS analysis indicated that mK22 did not affect the most likely off-target sequence in MVECs. Western blot analysis of the transduced cell lysates indicated that there was an 80% decrease in VEGFR2 in the MVECs transduced with SpCas9/mK22 compared with those with SpCas9/lacZ (Fig. 1f), demonstrating that the AAV-CRISPR/Cas9 system with mK22 efficiently and specifically induced mutations within the VEGFR2 locus and subsequent protein depletion in MVECs in vitro.
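Editing efficiency at the target locus is typically quantified by counting sequencing reads that no longer contain an intact target window around the cut site. The sketch below is a deliberately simplified heuristic for such a count, not the sequencing pipeline used in this study, and the 23-nt protospacer + PAM sequence shown is a placeholder rather than the actual mK22 target.

def revcomp(seq):
    # reverse complement of a DNA sequence
    return seq.translate(str.maketrans("ACGTN", "TGCAN"))[::-1]

# placeholder 20-nt protospacer followed by an NGG PAM (NOT the real mK22 sequence)
TARGET = "GACGTTGGAGCATTGAACTCAGG"

def indel_fraction(reads, target=TARGET):
    # a read is called "edited" if the intact target window is absent on either strand
    edited = usable = 0
    for read in reads:
        read = read.upper()
        if len(read) < len(target):
            continue                                   # too short to call
        usable += 1
        if target not in read and revcomp(target) not in read:
            edited += 1
    return edited / usable if usable else float("nan")

# toy example: the second read carries a 2-bp deletion inside the target window
reads = ["AAGACGTTGGAGCATTGAACTCAGGTT", "AAGACGTTGGAGCATTACTCAGGTT"]
print(indel_fraction(reads))   # 0.5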
Transduction of ECs with rAAV1 in vivo. Gene delivery to the vasculature has significant potential as a therapeutic strategy for several cardiovascular disorders including atherosclerosis and angiogenesis. However, achieving successful gene transfer in vascular ECs in vivo remains a pronounced challenge. To determine whether rAAV1 was capable of transducing vascular ECs of NV in the C57BL/6 mouse models of OIR 23 and laser-induced CNV 24, we intravitreally injected rAAV1-CMV-GFP into mouse eyes at postnatal day 12 (P12), with or without exposure to the OIR model, and immediately after the laser injury to Bruch's membranes of six-week-old mice in the CNV model, respectively. Whole-mount retinas of the P17 mice from the OIR model and whole-mount choroids of the mice at day 7 after injection from the CNV model were stained with the mouse endothelial-specific marker isolectin B4 (IB4)-Alexa 594. The merged images of IB4 with GFP indicated that rAAV1 was able to transduce normal vascular ECs in the retina (Supplementary Fig. 1) and preferentially transduced vascular ECs of NV induced by hypoxia and laser injury in the OIR (Fig. 2 and Supplementary Figs. 2 and 3) and CNV models (Fig. 2 and Supplementary Fig. 4), respectively.
Editing genomic VEGFR2 abrogated hypoxia-induced angiogenesis. To investigate whether the dual AAV system of AAV-SpCas9 and AAV-SpGuide (mK22) was able to edit VEGFR2 and inhibit pathological angiogenesis in vivo, we intravitreally injected equal amount of rAAV1-SpCas9 and rAAV1-mK22 or rAAV1-lacZ into P12 mouse eyes in the OIR mouse model 23 . In this model, P7 mouse pups with nursing mothers are subjected to hyperoxia (75% oxygen) for 5 days, which inhibits retinal vessel growth and causes significant vessel loss. On P12, mice are returned to room air and the hypoxic avascular retina triggers both normal vessel regrowth and retinal NV named as preretinal tufts, which is maximal at P17 23 . Thus, on P17, the whole-mount retinas were stained with IB4. The results (Fig. 3a-c and Supplementary Fig. 5) showed that there was a dramatic decrease in the number of preretinal tufts and significantly more avascular areas from mice injected with rAAV1-SpCas9/mK22 than those with rAAV1-SpCas9/lacZ, suggesting that genome editing of VEGFR2 by SpCas9/mK22 inhibits retinal NV in this OIR mouse model. Next-generation sequencing results (Fig. 3d) confirmed that there was about 2% insertion/deletions (indels) around the PAM from genomic DNA of the retinas treated with AAV-SpCas9/mK22, but none with AAV-SpCas9/lacZ. In addition, western blot analysis of the retinal lysates showed that there was an about 30% reduction in VEGFR2 from mice treated with rAAV1-SpCas9/mK22 compared with controls ( Fig. 3e, f). Taken together, these data demonstrate that editing genomic VEGFR2 locus with SpCas9/mK22 abrogates hypoxia-induced angiogenesis in this OIR mouse model. In addition, the intravitreal injection of SpCas9/mK22 did not cause detectable damage to the retina morphology and function examined by optical coherence tomography (OCT), electroretinography (ERG), fluorescein fundus angiography (FFA), and whole-mounted retina staining by IB4 at the time point of 4 weeks ( Supplementary Fig. 6). 15 . Graphical representation of the mouse VEGFR2targeted locus. The oligos of mK22 and its compliment were annealed and cloned into the V1 vector by SapI. The PAM is marked in blue. ITR inverted terminal repeat, U6 a promoter of polymerase III, CMV a promoter of cytomegalovirus, GFP green fluorescent protein. b Schematic of AAV-SpCas9 (V3). pMecp2: a neuron-specific promoter for methyl CpG-binding protein in V0 was substituted for pICAM2 19 by XbaI/AgeI. c Transduction of cultured cells with rAAV1. HRECs, HUVECs, and hPRPE cells in a 48-well plate to 50% confluence were infected with rAAV1-CMV-GFP (2 μl/well, 3.75 × 10 12 viral genome-containing particles (vg)/ml). Three days later, the cells were photographed under an immunofluorescence microscope. Three independent experiments showed rAAV1 transduction efficiency in HRECs, HUVECs and hPRPE cells of 85.6 ± 2.2, 88.5 ± 2.3 and 86.8 ± 2.6%, respectively. Scale bar: 200 μm. d pICAM2-driven expression of SpCas9 in ECs. After transduction with rAAV1-CMV-GFP (GFP) or rAAV1-pICAM2-SpCas9 (SpCas9) (2 μl/well, 3.75 × 10 12 vg/ml) in a 48-well plate for 4 days, cell lysates were subjected to western blot analysis with antibodies against Cas9 and β-actin. Data shown are representative of three independent experiments. e Sanger DNA sequencing was conducted on PCR products amplified from the genomic VEGFR2 loci of MVECs, which were transduced by rAAV1-SpCas9 plus rAAV1-lacZ (lacZ) or rAAV1-mK22 (mK22). f Depletion of VEGFR2 expression using AAV-CRISPR/Cas9. 
Total cell lysates from the transduced MVECs were subjected to western blot analysis with antibodies against VEGFR2 and β-actin. The bar graphs are mean ± SD of three independent experiments. "*" indicates a significant difference between the two compared groups using an unpaired t-test, p < 0.05.

Editing genomic VEGFR2 suppressed NV in laser-induced CNV in mice. We also assessed whether rAAV1-SpCas9/mK22 could inhibit NV in the laser-injury-induced CNV mouse model, which has been used extensively in studies of the exudative form of human AMD 24. First, we intravitreally injected rAAV1-SpCas9 with rAAV1-mK22 or rAAV1-lacZ into mouse eyes immediately following the laser injury. In this model, NV grows from choroidal vessels after laser injury to Bruch's membrane; CNV is maximal on day 7 and begins to regress spontaneously after 14-21 days 24. Hence, on day 7, fluorescein was injected intraperitoneally into the mice, and fluorescein angiography (FA) images were taken. Subsequently, the flat-mount choroids were stained with IB4 for analysis of laser-injury-induced CNV. As shown in Fig. 4a-c, there was less NV in the eyes injected with rAAV1-SpCas9/mK22 than in those with rAAV1-SpCas9/lacZ on day 7. To examine whether editing genomic VEGFR2 could promote regression of CNV, rAAV1s were intravitreally injected on day 7 in the mouse CNV model. On day 14, the FA images and IB4 staining showed that there was less CNV in the mice injected with rAAV1-SpCas9/mK22 than in those with rAAV1-SpCas9/lacZ (Fig. 4d-f). These data indicate that editing the genomic VEGFR2 locus with SpCas9/mK22 suppresses NV in this laser-injury-induced CNV model. Taken together, our data establish a strong foundation for genome editing as a novel therapeutic approach to angiogenesis-associated diseases.
Discussion
We report that rAAV1 preferentially transduced vascular ECs of pathological vessels in both mouse models of OIR and laser-injury-induced CNV (Fig. 2 and Supplementary Fig. 3), while also transducing normal vascular ECs in the retina (Supplementary Fig. 1). The preferential transduction of ECs in pathological vessels may be due to the fact that neovessels are less mature than normal vessels and have an incomplete basement membrane and weaker intercellular junctions. To date, AAV vectors have been used in a number of clinical trials, such as for Leber's congenital amaurosis [25][26][27] and congestive heart failure 28, and have been approved for treatment of lipoprotein lipase deficiency in Europe 29,30. While anti-VEGF agents (e.g., ranibizumab and aflibercept) can reduce NV growth and vascular leakage in associated eye diseases (e.g., PDR and wet AMD), therapeutic challenges remain, including the need for chronic treatment and a significant number of patients who do not respond 11; gene therapy targeting genomic VEGFR2 using AAV-CRISPR/Cas9 may provide a novel alternative approach. While other genes, such as MMP9 31,32, have been linked to various proliferative retinopathies, none has been shown to drive new vessel disease to the extent seen with VEGFR2.
Successful translation of genome editing technologies to the clinic must address some major obstacles, primarily in terms of safety and efficacy; genetic modifications are permanent, and deleterious off-target mutations could create cells with oncogenic potential, reduced cellular integrity, and/or functional impairment 33,34. Our results demonstrate that expression of VEGFR2 was depleted by 80% in vitro (MVECs) (Fig. 1) and by 30% in vivo (retina) (Fig. 3) by the AAV-CRISPR/Cas9 (mK22) system, in which SpCas9 was driven by the endothelial cell-specific promoter pICAM2 (Fig. 1). In addition, NGS analysis indicated that there were only about 2% indels around the PAM in the PCR products amplified from the treated P17 mouse retinas, and there was a significant decrease in NV in both mouse models of OIR (Fig. 3) and CNV (Fig. 4) after treatment with AAV-CRISPR-Cas9 targeting genomic VEGFR2 in comparison to the lacZ-targeting control. In summary, our studies show that precise and efficient gene editing of VEGFR2 using CRISPR-Cas9 systems has the potential to treat angiogenesis-associated diseases.
Both synthesis of primers and oligos and sequencing of PCR products and clones were done by Massachusetts General Hospital (MGH) DNA Core Facility (Cambridge, MA).
Production of AAVs. The recombinant AAV2/1 (rAAV1) vectors were produced as described previously 17 in the Gene Transfer Vector Core at the Schepens Eye Research Institute of Massachusetts Eye and Ear (Boston, MA). Briefly, triple transfection of the AAV packaging plasmid (AAV2/1), a transgene plasmid (pAAV-pICAM2-SpCas9: AAV-SpCas9, pAAV-U6-mK22-CMV-GFP: AAV-mK22, or pAAV-U6-lacZ-CMV-GFP: AAV-lacZ), and the adenovirus helper plasmid was performed in a 10-layer hyper flask containing confluent HEK 293 cells. At day 3 post transfection, the cells and culture medium were collected and enzymatically treated with Benzonase (EMD Millipore). After high-speed centrifugation and filtration, the cell debris was cleared. The viral solution was concentrated by tangential flow filtration and then loaded onto an iodixanol gradient column. After one round of ultracentrifugation, the pure vectors were separated and extracted, then run through an Amicon Ultra centrifugal filter device (EMD Millipore) for desalting. The vectors were titrated by TaqMan PCR amplification (Applied Biosystems 7500, Life Technologies), with the primers and probes detecting the transgene. Sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE) was performed to check the purity of the vectors, which were named rAAV1-SpCas9, rAAV1-mK22, and rAAV1-lacZ.
Transduction of cultured cells. MVECs, HRECs, HUVECs, and hPRPE cells grown to 50% confluence in a 48-well plate were changed into fresh culture media, and rAAV1-mK22, rAAV1-lacZ, or rAAV1-SpCas9 individually, or rAAV1-SpCas9 together with rAAV1-mK22 or rAAV1-lacZ, was added (2 μl/well for each rAAV1, 3.75 × 10 12 viral genome-containing particles (vg)/ml). Three days later, the cells were photographed under an immunofluorescence microscope to determine the rAAV1 transduction efficiency. After 4 days, the cells were lysed with 1× sample buffer for western blot analysis or collected for genomic DNA isolation.

Fig. 3 Editing genomic VEGFR2 abrogated hypoxia-induced angiogenesis. a Litters of P12 mice that had been exposed to 75% oxygen for 5 days were injected intravitreally with 1 μl (3.75 × 10 12 vg/ml) containing equal amounts of rAAV1-SpCas9 and rAAV1-lacZ (lacZ) or rAAV1-mK22 (mK22). On P17, whole-mount retinas were stained with IB4. lacZ and mK22 indicate retinas from the rAAV1-SpCas9/lacZ- and mK22-injected mice, respectively. b Analysis of avascular areas from the IB4-stained retinas (n = 6). c Analysis of NV areas from the IB4-stained retinas (n = 6). d NGS analysis of indels. The DNA fragments around the PAM sequences were PCR amplified from genomic DNA of the rAAV1-SpCas9/lacZ- or -mK22-injected retinas, and then subjected to NGS. e, f The lysates of the rAAV1-SpCas9/lacZ- or -mK22-injected retinas were subjected to western blot analysis using the indicated antibodies. The bar graph data are mean ± SD of three retinas. "*" indicates a significant difference using an unpaired t-test, p < 0.05.
DNA sequencing. Cells were collected for genomic DNA extraction using the QuickExtract DNA Extraction Solution (Epicentre, Chicago, IL), following the manufacturer's protocol. In brief, the pelleted cells were re-suspended in the QuickExtract solution, vortexed for 15 s, incubated at 65°C for 6 min, vortexed for 15 s, and then incubated at 98°C for 10 min. The genomic region around the PAM was PCR amplified with high-fidelity Herculase II DNA polymerase. The PCR primers were forward 5′-GCTCCTGTCGGGTCCCAAGG-3′ and reverse 5′-ACCTGGACTGGCTTTGGCCC-3′. The PCR products were separated on a 2% agarose gel and purified with a gel extraction kit (Thermo Scientific) for Sanger DNA sequencing and NGS 15. DNA sequencing was performed by the MGH DNA core facility.

Fig. 4 After laser injury of Bruch's membrane, fundus images (day 0) were taken using the Micron III system, and the mice were injected intravitreally with 1 μl (3.75 × 10 12 vg/ml) containing equal amounts of rAAV1-SpCas9 and rAAV1-lacZ or -mK22 immediately after the laser injury (a) or 7 days after the laser injury (d). Seven days after AAV1 injection, the mice were injected intraperitoneally with fluorescein, and the FA images were taken using the Micron III system. Subsequently, whole mounts of choroids were stained with IB4, and the images were taken under an immunofluorescence microscope. Areas of NV were analyzed based on the images of FA (b, e) and IB4 staining (c, f) (n = 6). "*" indicates a significant difference between the two compared groups using an unpaired t-test, p < 0.05.
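The NGS-based indel quantification used above (about 2% indels around the PAM) can be sketched in code. The snippet below is only an illustration of the counting idea, not the authors' analysis pipeline: the reference sequence, cut-site position, and comparison window are hypothetical placeholders, and a real analysis would align reads with a dedicated amplicon-analysis tool rather than rely on exact string comparison.

```python
# Illustrative sketch only (not the authors' pipeline): estimate the indel
# fraction around an expected Cas9 cut site from amplicon reads. The reference
# sequence, cut-site position, and window size below are hypothetical.
REFERENCE = "ACGT" * 25          # 100-bp synthetic amplicon placeholder
CUT_SITE = 50                    # hypothetical cut-site position (0-based)
WINDOW = 10                      # compare +/- 10 bp around the cut site

def has_indel(read, ref=REFERENCE, cut=CUT_SITE, window=WINDOW):
    """Crude proxy: a read counts as edited if its length differs from the
    reference or it mismatches the reference inside the cut-site window."""
    if len(read) != len(ref):
        return True
    lo, hi = max(0, cut - window), min(len(ref), cut + window)
    return read[lo:hi] != ref[lo:hi]

def indel_fraction(reads):
    return sum(has_indel(r) for r in reads) / len(reads) if reads else 0.0

if __name__ == "__main__":
    # One unedited read and one read carrying a 2-bp deletion at the cut site.
    toy_reads = [REFERENCE, REFERENCE[:CUT_SITE] + REFERENCE[CUT_SITE + 2:]]
    print(f"indel fraction: {indel_fraction(toy_reads):.1%}")   # -> 50.0%
```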
A mouse model of OIR. C57BL/6J litters on postnatal day (P)7 were exposed to 75% oxygen until P12 in an oxygen chamber (Biospherix). The oxygen concentration was monitored daily using an oxygen sensor (Advanced Instruments, GPR-20F) 23,37. On P12, the pups were anesthetized by intraperitoneal injection of 50 mg/kg ketamine hydrochloride and 10 mg/kg xylazine. During intravitreal injections, the eyelids of P12 pups were separated by incision. Pupils were dilated using a drop of 1% tropicamide, and the eyes were treated with topical proparacaine anesthesia. Intravitreous injections were performed under a microsurgical microscope using glass pipettes with a tip diameter of ~150 μm, after the eye was punctured at the upper nasal limbus using a BD insulin syringe with a BD Ultra-Fine needle. One μl of rAAV1-CMV-GFP, or of rAAV1-SpCas9 together with rAAV1-mK22 or rAAV1-lacZ (1 μl, 3.75 × 10 12 vg/ml), was injected. After the intravitreal injection, the eyes were treated with a triple antibiotic (Neo/Poly/Bac) ointment and the pups were kept in room air (21% oxygen). On P17, the mice were killed and the retinas were carefully removed and fixed in 3.7% paraformaldehyde (PFA); mice under 6 g were excluded from the experiments. In total, three experiments were performed in this OIR model. Retinal whole mounts were stained overnight at 4°C with the murine-specific EC marker isolectin 4 (IB4)-Alexa 594 (red) 23,38,39. The images were taken with an EVOS FL Auto microscope (Life Technologies).
Quantification of vaso-obliteration and NV. This was performed as described previously 23. Briefly, the retinal image was imported into Adobe Photoshop CS4, and the Polygonal Lasso tool was used to trace the vascular area of the entire retina. Once the vascular area was highlighted, the number of pixels was obtained. After selecting the total retinal area, the Lasso tool and the "subtract from selection" icon were used to selectively remove the vascularized retina, leaving behind only the avascular area. Once the avascular region was selected, the refresh icon was clicked again to obtain the number of pixels in the avascular area.
When analyzing NV, the original image was reopened. The magic wand tool was selected from the tool panel on the left side of the screen. On the top tool panel, the tolerance was set to a level that picked up NV while excluding normal vessels (beginning at 50), since the areas of NV fluoresced more intensely than the surrounding normal vessels. Regions of NV were selected by clicking on them with the magic wand tool. When neovessels were selected, the area of interest was zoomed in by holding the "Alt" key on the keyboard and scrolling up. When all NV had been selected and checked, the refresh icon was clicked to record the total number of pixels in the NV area.
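A minimal sketch of the same pixel-counting quantification in code, rather than Photoshop, is given below. The file name and the intensity thresholds are hypothetical placeholders, and the whole image is treated as retinal area for simplicity; it only mirrors the idea that NV fluoresces more intensely than normal vessels.

```python
# Minimal sketch of pixel-based quantification of avascular and NV areas.
# The file name and thresholds are placeholders, not the authors' settings.
import numpy as np
from PIL import Image

img = np.asarray(Image.open("retina_IB4.png").convert("L"), dtype=float)  # hypothetical file

vessel_mask = img > 20    # any IB4 signal counts as vascularized retina
nv_mask = img > 180       # bright preretinal tufts only (NV fluoresces more
                          # intensely than normal vessels; analogue of the
                          # magic-wand tolerance setting)

total_px = img.size
avascular_px = total_px - int(vessel_mask.sum())
nv_px = int(nv_mask.sum())

print(f"avascular fraction: {avascular_px / total_px:.1%}")
print(f"NV fraction:        {nv_px / total_px:.1%}")
```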
Laser-induced choroid NV in mice. Ten mice (stock number 664, C57BL/6J, male and female, 17-22 g, 6-8 weeks old, Jackson Laboratories, Bar Harbor, ME) were deeply anesthetized with an intraperitoneal injection of ketamine/xylazine (120 mg/kg ketamine/20 mg/kg xylazine). Their pupils were dilated using a drop of 1% tropicamide, and the eyes were treated with topical proparacaine anesthesia drops. The mice were placed on a specialized stage with the Micron III retina imaging system (Phoenix Research Labs, Pleasanton, CA) using Genteal gel (Novartis, Basel, Switzerland). Under real-time observation, laser photocoagulation was applied to the eyes using a Streampix5 laser system (Meridian AG, Zürich, Switzerland) at 532 nm wavelength (100 μm spot diameter, 0.1 s duration, and 100 mW power). Four lesions located at the 3, 6, 9, and 12 o'clock meridians around the optic nerve were induced. Laser-induced disruption of Bruch's membrane was identified by the appearance of a bubble at the site of photocoagulation. Fundus images of the anesthetized mice were taken using the Micron III retina imaging system with illumination light. Laser spots that did not result in the formation of a bubble were excluded from the studies. Laser spots were also confirmed by OCT 24,40. rAAV1 (1 μl, 3.75 × 10 12 vg/ml) was injected into the vitreous using glass pipettes with fine tips after puncturing the sclera 1 mm from the limbus with a 30-gauge needle under a surgical microscope. On day 7 or 14, animals were anesthetized as described above. Fundus images were taken using the Micron III retina imaging system with illumination light. Then 0.01 ml of 25% sodium fluorescein (pharmaceutical-grade sodium fluorescein; Akorn Inc) per 5 g body weight was injected intraperitoneally. The retinal vasculature filled with dye in <1 min following injection. Images of FA were taken with UV light sequentially at 2 and 5 min post-fluorescein injection. Seven days after rAAV1 injection, the mice were killed, and the mouse eyes were carefully removed and fixed in 3.7% PFA. Whole-mount choroids were stained overnight at 4°C with IB4 23,38,39. The images were taken with an EVOS FL Auto microscope.
Examination of toxicity of the dual AAV-CRISPR/Cas9 in mouse eyes. On P12, five pups were anesthetized and underwent intravitreal injections as described above. One μl of rAAV1-SpCas9 plus rAAV1-mK22 was injected.
After 4 weeks, OCT was performed using a spectral domain (SD-) OCT system (Bioptigen Inc., Durham, NC). Briefly, mice were deeply anesthetized with an intraperitoneal injection of ketamine/xylazine (100-200 mg/kg ketamine/20 mg/kg xylazine). The pupils were dilated with topical 1% tropicamide to view the fundus. After anesthesia, Genteal gel was applied to both eyes to prevent drying of the cornea. The fundus camera in the optical head of the apparatus provided initial alignment for the sample light, to ensure that it was delivered through the dilated pupil. Final alignment was guided by monitoring and optimizing the real-time OCT image of the retina, with the whole set-up procedure taking ~5 min for each mouse eye.
At week 4, after OCT, ERG (by light/dark adaptation, using a DIAGNOSYS ColorDome containing an interior stimulator) was performed as follows. Following overnight dark adaptation, the animals were prepared for ERG recording under dim red light. While under anesthesia with a mixture of ketamine (100-200 mg/kg i.p.) and xylazine (20 mg/kg i.p.), their pupils were dilated using a drop of 1% tropicamide followed by a drop of 1% cyclopentolate hydrochloride applied on the corneal surface. One drop of Genteal (corneal lubricant) was applied to the cornea of the untreated eye to prevent dehydration. A drop of 0.9% sterile saline was applied on the cornea of the treated eye to prevent dehydration and to allow electrical contact with the recording electrode (gold wire loop). A 25-gauge platinum needle, inserted subcutaneously in the forehead, served as the reference electrode, while a needle inserted subcutaneously near the tail served as the ground electrode. A series of flash intensities was produced by a Ganzfeld controlled by the Diagnosys Espion 3 to test both scotopic and photopic responses.
The day after ERG, FFA was performed on the mice. Animals were anesthetized with a mixture of ketamine (100-200 mg/kg i.p.) and xylazine (20 mg/kg i.p.), their pupils were dilated using a drop of 1% tropicamide, and the eyes were treated with topical anesthesia (proparacaine drops). A drop of sterile saline was placed on the experimental eye to remove any debris, followed by Genteal. Genteal was placed on both eyes to prevent corneal drying. Then 0.01 ml of 25% sodium fluorescein (pharmaceutical-grade sodium fluorescein; Akorn Inc) per 5 g body weight was injected i.p. The retinal vasculature was filled with dye in <1 min following injection. Photos were taken sequentially at 1, 2, 3, 4, and 5 min post-fluorescein injection. A Micron III (Phoenix Research) system was used for taking fundus photographs according to the manufacturer's instructions. The mice were placed in front of the fundus camera, and pictures of the retina were taken for monitoring retinal function.
After the mice were killed, retinas were carefully removed and fixed in 3.7% PFA. Retinal whole mounts were stained overnight at 4°C with murine-specific EC marker isolectin 4 (IB4)-Alexa 594 (red) 23,38,39 . The images were taken with an EVOS FL Auto microscope (Life Technologies).
Statistics. The data from three independent experiments, in which the variance was similar between the groups, were analyzed using an unpaired, two-tailed t-test. For animal experiments, data from at least six mice were used for the statistical analysis. p-values of <0.05 were considered statistically significant. All relevant data are available from the authors.
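As a brief illustration of the comparison described above, an unpaired two-tailed t-test can be run as in the sketch below; the numbers are made-up placeholders, not study data.

```python
# Sketch of the unpaired, two-tailed t-test used for group comparisons.
from scipy import stats

nv_area_lacz = [4.1, 3.8, 4.5, 4.0, 4.3, 3.9]   # hypothetical NV areas, n = 6
nv_area_mk22 = [2.1, 2.4, 1.9, 2.6, 2.2, 2.0]   # hypothetical NV areas, n = 6

t_stat, p_value = stats.ttest_ind(nv_area_lacz, nv_area_mk22)  # unpaired, two-tailed by default
print(f"t = {t_stat:.2f}, p = {p_value:.4f}, significant: {p_value < 0.05}")
```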
Data availability. The data supporting the findings of this study are available from the corresponding author on reasonable request.
KM3NeT Time Calibration
The KM3NeT detectors are large three-dimensional arrays of several thousand digital optical modules under construction in the Mediterranean Sea. The basic detection element of the neutrino telescope is the digital optical module containing 31 three-inch photomultiplier tubes. Each detection unit, composed of 18 digital optical modules, is a mechanical structure anchored to the sea floor and held vertical by a submerged buoy, for the detection of Cherenkov light emitted by charged secondary particles emerging from neutrino interactions. Detector calibration, i.e. timing, positioning and sea-water properties, is overviewed in this talk and discussed in detail in this conference.
The KM3NeT Detector
KM3NeT belongs to a novel generation of underwater neutrino telescopes [1]. The infrastructure will consist of three so-called building blocks, each made of 115 lines (or detection units, DUs) of 18 optical modules that have 31 photomultiplier tubes each. KM3NeT is made of KM3NeT/ORCA (Oscillation Research with Cosmics in the Abyss) in Toulon, France, and KM3NeT/ARCA (Astroparticle Research with Cosmics in the Abyss) in Capo Passero, Sicily. The main objectives of KM3NeT are the discovery and subsequent observation of high-energy neutrino sources in the Universe and the determination of the neutrino mass hierarchy. Neutrinos can interact with matter inside or in the vicinity of the detector, producing secondary particles that can be detected through the Cherenkov light that they produce. Due to their long range in water, the conventional detection channel is given by muons produced in charged-current interactions of muon neutrinos. Furthermore, KM3NeT will have significant sensitivity to all neutrino interactions. The basic detection element of the neutrino telescope is the digital optical module (DOM), a 17-inch pressure-resistant glass sphere containing 31 3-inch photomultiplier tubes (PMTs), a number of calibration devices, and the read-out electronics. The multi-PMT design provides a large photocathode area, good separation between single-photon and multiple-photon hits, and information on the photon direction. Eighteen DOMs are arranged along each flexible line attached to the sea bottom. Each line has a base module close to the anchor, which performs the line power control and the optical link signal amplification. The detector is connected to the shore via the main electro-optical cable. The single fiber in the cable (about 40 km long in Toulon and 100 km in Capo Passero) is shared by the base and the DOMs. Each DOM is an IP node in an Ethernet network. The information recorded from a PMT consists of the start time and the Time over Threshold (ToT). The start time is defined as the time at which the pulse passes beyond a 0.3 p.e. threshold, and the ToT is the time the pulse remains above this threshold. Onshore, the physics events are filtered from the background by an online trigger algorithm and stored on disk. Data collected by the PMTs are digitised in the DOMs and sent to shore, where they are filtered by appropriate triggering algorithms. Accurate measurements of the light arrival times and charges, and precise real-time knowledge of the positions and orientations of the PMTs, are required for the accurate reconstruction of the direction of the secondary particles. DOMs move under the effect of underwater currents; these movements are continuously monitored through an underwater positioning system based on acoustic emitters and receivers. In fact, each DOM is equipped with an internal acoustic piezo detector for the acoustic positioning system [2], while each line base is equipped with an external hydrophone, which has better sensitivity than the piezo detectors, to reference the line position on the sea bed. The acoustic signal transit time is measured periodically, and with three or more emitters the position of the receiver can be determined by trilateration. The acoustic emitters are installed in the calibration units that will be placed at known positions on the sea bed.
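To make the trilateration step concrete, the sketch below recovers a receiver (DOM) position from acoustic transit times to emitters at known positions with a simple least-squares fit; the emitter coordinates, sound speed, and transit times are made-up placeholders, not detector data.

```python
# Illustrative trilateration sketch: estimate a receiver position from
# acoustic transit times to emitters at known positions (placeholder values).
import numpy as np
from scipy.optimize import least_squares

SOUND_SPEED = 1500.0  # m/s, nominal sea-water value (assumption)

emitters = np.array([[0.0, 0.0, 0.0],       # known emitter positions (m)
                     [200.0, 0.0, 0.0],
                     [0.0, 200.0, 0.0],
                     [100.0, 100.0, -50.0]])
true_pos = np.array([80.0, 60.0, 150.0])    # used only to fake the measurements
transit_times = np.linalg.norm(emitters - true_pos, axis=1) / SOUND_SPEED

def residuals(pos):
    predicted = np.linalg.norm(emitters - pos, axis=1) / SOUND_SPEED
    return predicted - transit_times

fit = least_squares(residuals, x0=np.zeros(3))
print("estimated receiver position (m):", np.round(fit.x, 2))   # ~ [80, 60, 150]
```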
In addition, each calibration unit contains a laser beacon (composed of a piezo-ceramic transducer and an integrated electronics board) for interline time calibration and monitoring of the water optical properties, a long-baseline acoustic beacon for DOM positioning, a hydrophone for its own positioning, and environmental monitoring instruments (conductivity, salinity, temperature, sound velocity, sea currents). The DOM orientation is measured thanks to the internal attitude and heading reference system board.
Time Calibration
For neutrino event reconstruction with a precision better than 1°, the optical modules need to be synchronized with sub-nanosecond precision and their positions determined with better than meter precision. Each DOM and the base have identical Control Logic Boards, which perform the signal processing and transfer, time synchronization, and control of the instruments. Time synchronization between different detector components is monitored in situ by light propagation time measurements between light emitters (LED nanobeacons and lasers) and PMTs.
Time calibration at the nanosecond level is necessary to achieve the envisaged muon track angular resolution for a neutrino telescope. For this, the following time offsets have to be determined: 1) Intra-DOM; 2) Inter-DOM; 3) Inter-DU; 4) nanobeacons. A prerequisite for the time calibration is the PMT characterization through High-Voltage (HV) tuning: all the PMTs in a DOM have to be set to the same gain value. The high-voltage tuning is based on the estimation of the ToT duration of a single detected photon. For this purpose, background runs are used. For one PMT of one DOM, all hits detected during the run are read. A hit is composed of a time stamp, for timing purposes, and a ToT duration, for light-intensity purposes. The tuning is done in 25 V steps of HV, and the ToT is calculated by means of a Gaussian fit to the ToT distribution during 1 min of data taking for each step. The proper HV, corresponding to a ToT of 26.4 ns, is found from a Gaussian fit to the ToT vs. HV plot, see Figure 1. This ToT value corresponds to the average ToT value estimated on a subset of PMTs with properly calibrated gain (3 × 10 6 ).
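A rough sketch of this tuning logic is shown below: at each HV step a Gaussian is fitted to the single-photon ToT distribution, and the HV giving the 26.4 ns target is then read off the ToT-vs-HV curve (here by simple interpolation rather than the Gaussian fit to that curve mentioned above); all data are synthetic placeholders, not PMT output.

```python
# Illustrative HV-tuning sketch with synthetic data: fit a Gaussian to the ToT
# distribution at each HV step, then find the HV whose mean ToT hits the target.
import numpy as np
from scipy.optimize import curve_fit

TARGET_TOT = 26.4  # ns

def gaussian(x, amplitude, mean, sigma):
    return amplitude * np.exp(-0.5 * ((x - mean) / sigma) ** 2)

def mean_tot(tot_samples, bins=100):
    """Gaussian fit to a ToT distribution; returns the fitted mean (ns)."""
    counts, edges = np.histogram(tot_samples, bins=bins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    p0 = [counts.max(), np.mean(tot_samples), np.std(tot_samples)]
    popt, _ = curve_fit(gaussian, centers, counts, p0=p0)
    return popt[1]

rng = np.random.default_rng(0)
hv_steps = np.arange(1000, 1201, 25)       # volts, placeholder scan range
# Fake one minute of single-photon ToT data per step (ToT grows with HV here).
tot_means = [mean_tot(rng.normal(20 + 0.04 * (hv - 1000), 3.0, 10_000))
             for hv in hv_steps]

proper_hv = np.interp(TARGET_TOT, tot_means, hv_steps)
print(f"proper HV for ToT = {TARGET_TOT} ns: {proper_hv:.0f} V")
```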
Intra-DOM calibration
The intra-DOM time offsets between PMTs in the same DOM primarily depend on the PMT transit time. Radioactive potassium decays in the glass and the gel can be seen simultaneously by neighbouring PMTs in the DOM and can produce up to 150 Cherenkov photons per decay [3]. These decays are the main source of the single-PMT rate. A single decay occurring in the vicinity of the DOM has a chance to produce a genuine coincidence between signals of different PMTs, which can be exploited for time calibration of the DOM. This feature is used to verify the PMT mapping and to perform inter-PMT time calibration. The distributions of time differences between signals detected in different PMTs in the same DOM are studied as a function of the angular separation of the PMTs involved. The distributions of hit time differences between all possible combinations of PMT pairs are assumed to follow a Gaussian shape. For each DOM with N = 31 PMTs, a total of N(N − 1)/2 distributions are produced, shown in Figure 2 (left) for DOM 1. In the figure, the PMT pairs are ordered according to their angular separation. The correlation peak decreases as the angular separation increases, due to the limited field of view of each PMT, see Figure 2 (right). These distributions are well fitted by a Gaussian function. The mean values, heights, and widths of the Gaussian peaks are related to the time offsets, detection efficiencies, and intrinsic time spreads of all the PMTs. Typically, a FWHM of 7−10 ns is found for all PMT pairs, mostly reflecting the intrinsic PMT transit time spread of up to 5 ns at FWHM.
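The toy sketch below illustrates the pairwise procedure on synthetic data: hit-time differences are collected within a coincidence window for each of the N(N − 1)/2 PMT pairs, and the peak position (here simply the mean of the differences) is taken as the relative offset; the window, rates, and offsets are hypothetical.

```python
# Toy intra-DOM sketch: per-pair hit-time differences in coincidence, with the
# mean difference used as the relative time offset (synthetic placeholder data).
import itertools
import numpy as np

N_PMTS = 31
WINDOW = 20.0  # ns, coincidence window (assumption)

def pair_offset(times_i, times_j, window=WINDOW):
    """Mean of t_i - t_j over coincident hits of two PMTs (sorted inputs)."""
    diffs, j = [], 0
    for t in times_i:
        while j < len(times_j) and times_j[j] < t - window:
            j += 1
        k = j
        while k < len(times_j) and times_j[k] <= t + window:
            diffs.append(t - times_j[k])
            k += 1
    return float(np.mean(diffs)) if diffs else float("nan")

# Fake 40K-like coincidences: common decay times plus per-PMT offsets and jitter.
rng = np.random.default_rng(1)
true_offsets = rng.normal(0.0, 3.0, N_PMTS)                    # ns, unknown in reality
decays = np.sort(rng.uniform(0, 1e6, 2_000))                   # ns
hits = [np.sort(decays + true_offsets[i] + rng.normal(0, 2.0, decays.size))
        for i in range(N_PMTS)]

offsets = {(i, j): pair_offset(hits[i], hits[j])
           for i, j in itertools.combinations(range(N_PMTS), 2)}  # 31*30/2 = 465 pairs
print(f"{len(offsets)} pairs; offset(0,1) = {offsets[(0, 1)]:.2f} ns "
      f"(true {true_offsets[0] - true_offsets[1]:.2f} ns)")
```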
Inter-DOM calibration
The inter-DOM time offsets between DOMs primarily depend on the cable lengths. The delay of the laser signal is measured for each DOM to perform the line time calibration. The delay of about 200 ns between DOMs corresponds to about 40 m of the optical fiber connecting the DOMs, while a delay of about 750 ns is the time difference between the base and DOM 1, due to the length of the optical fiber and the accumulated sampling time delays. The total delay for the light detection by the reference PMTs of each DOM is calculated using the known laser-system propagation time and the length of the cable connecting the line to the test shore station. After the deployment, this delay will be increased by the known delay introduced by the underwater infrastructure. No time offset shifts were observed within constantly powered periods. The values obtained with this procedure are stable to within a few nanoseconds over a stable period of data taking.
Inter-DU calibration
The inter-DU calibration is based on the measurement of the Round-Trip-Time (RTT) delay of the laser signal between the reference clock (called master) and the DU base, due to the length of the optical fiber (∼100 km). We found a constant time difference between DU1 and DU2, due to the length difference between the DU inter-link cables (∼100 m × 2).
Calibration after deployment
Several instruments will be used to perform the verification and adjustment of the time calibration in situ. 40 K decays in water will be used for time-delay calibration between PMTs in the same DOM. The nanobeacons installed in each DOM provide light detection from one DOM to the neighbouring DOMs for their inter-calibration. Vertical atmospheric muons provide an alternative inter-DOM calibration. Laser beacons installed on the calibration units provide light detection by DOMs of different lines for the time calibration between the lines. Positioning of the DOMs will be performed thanks to the acoustic instruments installed in the DOMs, DU bases, and calibration units. To monitor the time calibration after deployment, we will use atmospheric muons, atmospheric neutrinos, laser beacons, and nanobeacons.
Nanobeacons
Each DOM is equipped with a remotely controlled flashable LED (nanobeacon), installed on the top of the DOM and pointing upward to the DOMs higher in the DU. The difference between the detection time of the LED light and its emission time by the LED is monitored, using the PMT time offsets obtained from the dark room and taking into account the travel time of the light in the medium. For more details, see [4].
Summary
The KM3NeT detector is under construction. The procedure for time calibration exploiting potassium decays and LED beacons was demonstrated successfully, with nanosecond stability, during the first DU line integration.
Deductive Verification Method of Real-Time Safety Properties for Embedded Assembly Programs
It is important to verify both the correctness and real-time properties of embedded systems. However, as practical computer programs are represented by infinite state transition systems, specifying and verifying a computer program is difficult. Real-time properties are also important for embedded programs, but verifying the real-time properties of an embedded program is difficult. In this paper, we focus on verifying an embedded assembly program in order to verify its real-time safety properties. We propose a deductive verification method to verify real-time safety properties, based on discrete time, as follows: (1) First, we construct a timed computational model including the execution time from the assembly program. We can specify an infinite state transition system including the execution time by the timed computational model. (2) Next, we verify whether a timed computational model satisfies RTLTL (Real-Time Linear Temporal Logic) formulas by deductive verification. We can specify real-time properties by RTLTL. With our proposed methods, we are able to achieve verification of the real-time safety properties of an embedded program.
Introduction
Conventional formal verification is mainly applied to computer hardware and communication protocols. The specifications of these systems are easy to describe using finite state transition systems. On the other hand, as a practical computer program is represented by an infinite state transition system, specifying and verifying a computer program is difficult. As large-scale computer hardware for verification, such as GPUs and supercomputers, has recently become cheap, and as progress in both abstraction technologies and theorem-proving technologies has been remarkable, program verification has become feasible [1]. Verifying embedded systems is important, and embedded program verification is thus also important. Furthermore, real-time properties are important in an embedded program, but verifying the real-time properties of an embedded program is difficult. In this paper, we propose a formal verification method for the real-time safety properties of an embedded assembly program using deductive verification, as follows: 1. First, we construct a timed computational model including the execution time from the assembly program. We can specify an infinite state transition system including the execution time by the timed computational model. 2. Next, we verify whether a timed computational model satisfies RTLTL (Real-Time Linear Temporal Logic) formulas by deductive verification. We can specify real-time properties by RTLTL.
Using our proposed methods, we were able to achieve verification of the real-time safety properties of the embedded program. 1. We have implemented our proposed axiom on the theorem prover Princess [3]. 2. We have demonstrated experiments with real examples, such as the Linetrace program written for the Wheel-type robot nuvo WHEEL, controlled by an H8/3687 microcontroller [4]. This robot is very old, but it has the important features of embedded software.
Henzinger, Manna, and Pnueli pointed out, in their famous paper, that two important classes of real-time requirements for embedded systems are bounded response properties and bounded invariance properties, specified using RTLTL (Real-Time Linear Temporal Logic) [5]. We can check the correctness of systems for all input data by formal verification. On the other hand, we can only check the correctness of systems for some input data by testing. For this reason, in this paper, our approach is formal verification. Additionally, in order to correctly compute the execution times of systems, we verify an assembly program [6].
Outline of This Paper
When we develop embedded software, we may first specify it using hybrid automata or timed automata and then verify it using model checking. Next, we implement it as a C program and verify it using software model checking. In this paper, in order to verify the real-time properties, including hardware-dependent information, we deductively verify the real-time safety properties of an embedded assembly program, as shown in Figure 1.
The embedded software is described in the C language, but the hardware-dependent part is described as an assembly program. Assembly programs are suitable for predicting the execution time. Moreover, the syntax of assembly programs is simple, and their analysis is easy. Therefore, in this paper, we verify an assembly program instead of a C program. The advantages of verifying assembly programs, as pointed out by Schlich [7], are as follows: 1. The assembly code is the outcome at the end of the development process. Hence, all errors introduced during the complete development process can possibly be found. These errors include errors not visible in intermediate representations (e.g., re-entrance errors), compiler errors, post-compilation errors (e.g., errors introduced by instrumentation code), and hardware-dependent errors (e.g., stack overflows, arithmetic overflows, interrupt handling errors, and writing reserved registers). 2. Assembly language usually has a clean and well-documented semantics. Vendors of microcontrollers provide documentation describing the semantics of the provided assembly constructs. This makes assembly constructs easier to handle than certain C constructs, such as pointer arithmetic or function calls by pointers. 3. When model checking assembly code, the model checker does not have to exploit the compiler behavior, hardware-dependent constructs can be handled, and the source code (C code) of the software is not required. Hence, even programs that use libraries not available in the source code can be analyzed. 4. Programs consisting of components written in different programming languages can be verified.
When model-checking the source code, only single components can be verified and, for each programming language used, a specific model checker has to be utilized.
As shown in Figure 2, we first encode the assembly program into our proposed timed computational model. Secondly, we propose a deductive verification method for real-time safety properties using SMT (Satisfiability Modulo Theories) [8]. A deductive verification method consists of verification rules; each verification rule derives a temporal formula from premises consisting of first-order formulas. Following [9], we briefly explain the deductive verification method as follows: when we verify whether an assembly program satisfies a safety property, we first encode the assembly program into a timed computational model. Then, we derive first-order formulas from the timed computational model, according to a verification rule. Finally, we check the validity of the first-order formulas using an SMT solver. If all first-order formulas are valid, the safety property is satisfied. We propose a timed computational model by assigning the execution time of each instruction to each state.
Using the timed computational model, we can verify the real-time properties of an assembly program. To our knowledge, using a timed computational model that assigns the execution time of each instruction to each state is the first effort to verify the real-time properties of an assembly program.
This study is intended for general embedded software, and any RTOS (Real-Time Operating System) will do. We have only adopted the H8 as an example. In addition, this study is not intended for Java programs, as a Java program is not an embedded program.
In Section 4, we verify bounded invariance properties. This example is simple, but it cannot be verified by existing methods. Verifying properties related to registers and execution time has been enabled for the first time through our proposed method. We limit this paper to suggesting verification techniques; more complicated examples are left to future work.
Related Work
In this section, we present related work regarding assembly program verification and formal verification.
Execution time of program
(a) As both logical correctness and real-time properties are important in embedded systems, a large number of studies regarding the execution time, such as estimating WCET (Worst Case Execution Time), have been explored by researchers [10]. However, in general, due to the behavior of the components which influence the execution time (such as memory, caches, pipelines, and branch prediction), the predicted execution time from program analysis becomes slightly longer than the real execution time. Therefore, it is important and meaningful to verify whether a formula always holds true within a certain time. As the certain time becomes slightly longer than the real execution time, the formula holds true within a real execution time if it holds true within the certain time. This fact makes verifying liveness properties difficult. In this paper, we verify the real-time properties of an assembly program, based on program analysis.
Model checking of program
(a) Model checkers of C programs using abstraction and refinement methodologies, such as SLAM, BLAST, and MAGIC, have been explored by a large number of researchers [11][12][13]. However, they have not explored formal verification of the real-time properties. (b) Schlich's [mc]square is famous for the study of the verification of assembly programs [7,14].
The [mc]square system utilizes ELF format execution code information, related C language implementation code, static analysis (CFG) of the target assembly code, specification description using CTL, and a model checker. However, [mc]square cannot verify real-time properties. On the other hand, our study can verify real-time properties using RTLTL. We compute timing information using the execution times of assembly instructions. To the best of our knowledge, our paper is the first study of verifying the timing properties of programs. (c) The importance of the verification of real-time properties has been pointed out in the model checking of a timed automaton [15]. Campos and Clarke have explored symbolic model checking of discrete real-time systems using RTCTL [16]. All transitions of their timed transition graph happened in one time unit, but the times of timed automata and discrete real-time systems are specified by virtual clocks. However, their model is quite different from our model. A study considering the verification of program execution time has not been carried out, so far, to our knowledge.
3. Verification of real-time properties of specification (a) A. Emerson has explored model checking of discrete real-time systems using RTCTL [17].
On the other hand, Henzinger, Manna, and Pnueli have explored a deductive verification methodology of discrete real-time systems using RTLTL [5]; however, they did not explore real-time verification of real-time programs.
Theoretical Background of Program Verification Problem
In general, program verification problems are theoretically undecidable [18]. In short, no algorithms exist for program verification. However, this problem is partially decidable. If the answer to the problem is "yes", the algorithm will eventually halt with a "yes" answer; if the answer is "no", the algorithm may supply no answer at all. In meaningful cases of real program verification problems, then, the algorithm will eventually halt with a "yes" answer [11][12][13].
On the other hand, in this paper, we give a deductive temporal verification system based on a Hoare-style axiom system for deductively verifying assembly programs. Cook proved a relatively complete Hoare-style axiom system for program verification [19]; in other words, he proved the relative completeness of a Hoare-style axiom system. Furthermore, Manna and Pnueli have proved a relatively complete proof system for proving the validity of temporal properties of reactive programs [20].
Our proof rule is the extension of Manna's proof system. If we add SMT (Satisfiability Modulo Theories) [8] into our proof rule, we can completely verify assembly programs.
Embedded Hardware
We show the register set of an H8/3687 processor [22] in Figure 3. In an H8/3687 processor, all the general-purpose registers are 32 bits wide. However, each register can be treated as the concatenation of two 16-bit registers, such as E0 and R0. The 16-bit registers can also be treated as the concatenation of two 8-bit registers, such as RH0 and RL0. On the other hand, the control registers consist of the PC (Program Counter), CCR (Condition Code Register), IRR2 (Interrupt Request Register 2), and IENR2 (Interrupt ENable Register 2). In IRR2, when timer B1 overflows, IRRTB1 is set to 1. In IENR2, when IENTB1 is set to 1, the overflow interrupt request of timer B1 is admitted.
Computational Model
We propose a timed computational model by assigning the execution time of each instruction to each state. The timed computational model is defined as follows: 1. V = {u0, . . . , un−1} is a finite set of variables. The set V consists of program variables, a location, the execution time, registers, and a stack. 2. S is an infinite set of states. Each state s ∈ S assigns a value to each variable ui ∈ V (i = 0, . . . , n − 1). 3. T is a finite set of transitions. Each transition τ ∈ T is a function τ : S → 2^S, which can also be represented by a first-order formula ρτ(V, V′). Here, the variables in V are the present-state variables, and the variables in V′ are the next-state variables. 4. Θ is a satisfiable assertion characterizing all the initial states. 5. TM : S → N is a function assigning to each state s ∈ S a natural-number execution time. TM is determined from the hardware manual [22], where the execution time of each instruction is described. 6. LAB : S → Label is a function assigning to each state s ∈ S an instruction label ∈ Label, where Label is a set of assembly instructions. If no instruction is executed in a state, the label is omitted in that state. Due to the behavior of the components which influence the execution time (such as memory, caches, pipelines, and branch prediction), the execution time in this paper becomes slightly longer than the real execution time. However, it is meaningful, from the safety point of view, to verify whether a certain property always holds true within a certain time.
In Section 2.3, we will explain a timed computational model with an example.
Encoding from Assembly Program to Timed Computational Model
The encoding from a program to a state transition system follows a standard manual technique [23]; in particular, we refer to pages 14-16 in [23]. Let V be the set of variables. We think of the variables in V as the present-state variables and the variables in V′ as the next-state variables. We define a state s to be an interpretation of V, assigning to each variable v a value s[v]. Furthermore, we regard a state s as an interpretation of V, the present-state variables, and a state s′ as an interpretation of V′, the next-state variables. We denote by S the set of all states, and by T the finite set of transitions. Each transition τ ∈ T is a function τ : S → 2^S, which is also represented by a first-order formula ρτ(V, V′).
In this paper, we add both the function TM(s) and a variable time into the set V of variables, where TM(s) expresses the execution time of the assembly instruction in state s, and a variable time expresses the total execution time from the initial state. 4. TM is determined by the hardware manual [22], in which the execution time of each instruction is described. 5. LAB is defined by a function assigning to each state s ∈ S an instruction label ∈ Label, where Label is a set of assembly instructions. 6. time is defined by the total execution time from the initial state.
We show a simple example of part of a timed computational model derived from an assembly program in Figure 4. Each state is defined by the values of the variables, registers, individual execution times, stack, and total execution time. As shown in Figure 4, following [23] and other well-known papers, when we derive and specify a state transition system we omit representing the next state; in particular, we refer to pages 24-26 in [23].
Example 1 (Example of a timed computational model). An example of a timed computational model generated from an assembly program is shown in Figure 4.
Here, we define C = (V, S, T, Θ, TM, LAB, time) as in Figure 4. For example, the states s0 and s1, and the transition τ1 from s0 to s1, are specified as shown in Figure 4.
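As a rough illustration only (not the actual contents of Figure 4), such a timed computational model could be represented in code as below; the instruction labels, execution times, and register values are hypothetical.

```python
# Hypothetical sketch of a timed computational model C = (V, S, T, Theta, TM, LAB, time).
from dataclasses import dataclass, field

@dataclass
class State:
    variables: dict                 # values of program variables and registers
    label: str = ""                 # LAB(s): assembly instruction executed in s
    exec_time: int = 0              # TM(s): execution time of that instruction
    time: int = 0                   # total execution time from the initial state
    stack: list = field(default_factory=list)

def step(state: State, label: str, exec_time: int, updates: dict) -> State:
    """One transition tau: apply variable updates and accumulate execution time."""
    new_vars = {**state.variables, **updates}
    return State(new_vars, label, exec_time, state.time + exec_time, list(state.stack))

# Hypothetical fragment: an initial state s0 and one transition tau1 to s1.
s0 = State({"E1": 0, "R1": 0}, label="MOV.W #0,R1", exec_time=2, time=2)
s1 = step(s0, label="MOV.W R1,E1", exec_time=2, updates={"E1": 0})
print(s1.time)   # 4: the variable `time` accumulates TM over the transitions
```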
Real-Time Linear Time Temporal Logic RTLTL
In 1991, Henzinger, Manna, and Pnueli explored RTLTL [5]. RTLTL formulas are constructed from state formulas by Boolean connectives and time-bounded temporal operators.
Definition 3. (Syntax of RTLTL)
We inductively define LTL formulae as follows: 1. Each atomic proposition AP is an LTL formula. In this paper, atomic propositions are propositions about registers, the stack, and the execution time; for example, "the value of a register is 6". 2. If p and q are LTL formulae, p ∧ q and ¬p are LTL formulae. 3. If p and q are LTL formulae, pUq and ◯p are LTL formulae, where ◯p holds at the current step iff p holds at the next moment, and pUq asserts that q does eventually hold and that p holds everywhere prior to q.
The temporal connective ♦p abbreviates trueUp, and □p abbreviates ¬♦¬p. The temporal connectives ◯, U, ♦, and □ of LTL are extended by timing constraints, and the temporal connectives ◯≤TIME, U≤TIME, ♦≤TIME, and □≤TIME of RTLTL are defined, where TIME denotes the constant of execution time.
Next, we define bounded invariance and bounded response properties. In this paper, we focus on bounded invariance.
Deductive Verification Using RTLTL
In this paper, we extend the deductive verification method explored by Manna and Pnueli [9]. In the axioms of the deductive verification of temporal logic, a temporal logic formula is derived from premises consisting of predicate logic formulas [9]. This is the most important work of Amir Pnueli, who won the ACM A.M. Turing Award in 1996 [24]. This study expands on A. Pnueli's work by adding execution time, and develops an axiom for deductive verification using RTLTL.
In this paper, as shown in Figure 5, we construct a part of our verification axiom for □≤TIME q as □(q ∧ (time ≤ TIME)) over a timed computational model. In Figure 5, we introduce a variable time to measure the execution time.
In consideration of Figure 5, we define the verification axiom for □≤TIME q as shown in Figure 6. In Figure 6, if Premises B1 and B2 are valid, □≤TIME q is obviously valid. Therefore, this axiom is sound. Furthermore, our timed transition model is the same as Henzinger's timed transition system [5] when the minimal delay is equal to the maximal delay. Therefore, we can prove that our verification axiom is relatively complete by Henzinger's proof technique [5].
When we verify an assembly program using our verification axiom, a set of first-order formulae is constructed. We can verify whether each formula is valid or not using an SMT solver [3]. Premise B1 requires that time is set to TM(s0) in an initial state and that the initial condition Θ implies (q ∧ (time ≤ TIME)). Premise B2 requires that time is set to TM(si) + time and that all transitions preserve (q ∧ (time ≤ TIME)).
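The paper discharges these premises with the Princess prover; purely as a hedged illustration, the same style of check can be posed to another SMT solver, such as Z3 through its Python API. The sketch below encodes a toy one-transition system, with q = (E1 = R1), TIME = 75, placeholder execution times, and a fully specified initial state (not the actual _int_tim_b1 model), and checks the validity of B1 and B2 by asserting their negations.

```python
# Hedged illustration with Z3 (the paper itself uses Princess): check premises
# B1 and B2 for a toy one-transition timed model; all values are placeholders.
from z3 import Ints, And, Not, Implies, Solver, unsat

E1, R1, time = Ints("E1 R1 time")
E1p, R1p, timep = Ints("E1p R1p timep")       # primed (next-state) variables
TIME = 75                                      # bound of the bounded invariance
TM0, TM1 = 2, 4                                # placeholder instruction times

q, qp = E1 == R1, E1p == R1p
theta = And(E1 == 0, R1 == 0, time == TM0)     # initial condition Theta
# Transition tau1 from the (fully specified) initial state s0 to s1:
rho1 = And(E1 == 0, R1 == 0, time == TM0,
           E1p == R1, R1p == R1, timep == time + TM1)

def valid(formula):
    solver = Solver()
    solver.add(Not(formula))                   # valid iff the negation is unsat
    return solver.check() == unsat

b1 = Implies(theta, And(q, time <= TIME))
b2 = Implies(And(q, time <= TIME, rho1), And(qp, timep <= TIME))
print("Premise B1 valid:", valid(b1))          # expected: True
print("Premise B2 valid:", valid(b2))          # expected: True
```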
Experiments of Deductive Verification of Real-Time Properties
We deductively verify an embedded assembly program. We used the Linetrace program written for the Wheel-type robot nuvo WHEEL, controlled by an H8/3687 microcontroller [4]. The Linetrace program acquires values from a sensor and operates the robot according to those values. The robot has three sensors and a motor: the sensors can distinguish black from white and output either 0 or 1 depending on the color; the motor is controlled by PID control. When a timer overflow interrupt of timer B1 occurs, the H8/3687 acquires the value from a sensor and sets the new current targeted value from that value. When a timer overflow interrupt of timer V occurs, the H8/3687 performs PID control from the current targeted value and the current value, and outputs the result to the motor.
In this section, we verify a timer interrupt function _int_tim_b1. If _int_tim_b1 is executed, it acquires the value of the sensor and decides the current targeted value. It returns to processing before the interrupt.
An assembly program of a timer interrupt function _int_tim_b1 is shown in Figure 7. We show a timed computational model of a timer interrupt function _int_tim_b1 in Figure 8. In Figure 8, we describe only the values that have changed from the previous state in the current state.
1. First, a state is defined by the values of stacks, flags, variables, timers, and execution times.
The execution time is the number of states, and one state is 0.05 microseconds. 2. Next, we verify whether □≤75 (E1 = R1) holds true. When □≤75 (E1 = R1) holds true, we also check whether the program has reached an error state (time = 76 and E1 = R1).
We checked whether the above first-order formula is valid using the SAT/SMT solver Princess [3], as shown in the Appendix A. The above first-order formula is valid.
We checked whether the above first-order formula is valid using the SAT/SMT solver Princess [3]. The above first-order formula is valid.
Finally, when □≤75 (E1 = R1) holds true, the function does not arrive at an error state, in which time = 76 and E1 = R1 hold true.
Conclusions and Future Work
We have proposed a deductive verification method in order to verify the real-time safety properties of an embedded assembly program, in the following manner: We input the program codes shown in Figure A1 into Princess. From lines 17-30, the current state is specified. From lines 38-52, the state transition is specified. Following this, Princess proves the formula and outputs the verification result shown in Figure A2. Here, Sat is the output. If the quantifier-free formula, such as stack = {} ∧ CCR.I = 1 ∧ CCR.N = 0 . . ., is satisfiable, the formula is valid. Therefore, the first-order formula is valid.
Nanopore-Based DNA Analysis via Graphene Electrodes
We propose an improvement for nanopore-based DNA analysis via transverse transport using graphene as transverse electrodes. Our simulation results show conspicuous distinction of tunneling current during translocation of different nucleotides through nanopore. Applying the single-atom thickness property of graphene, our findings demonstrate the feasibility of using graphene as transverse electrodes in future rapid and low-cost genome sequencing.
Introduction
Nanopore-based DNA analysis is a fast-rising star among single-molecule techniques and is most apt to become the next generation of DNA sequencing [1]. With a longitudinal electric field, the DNA molecule is forced to pass through a pore of nanometer scale, presenting an obvious current-blockage signal by blocking the ion flux through the pore. Pioneering work has been done with biological pores and channels, such as α-hemolysin [2], which provide a predefined, repeatable, and atomically precise pore structure, but also a series of limitations, such as a short lifetime and sensitivity to environmental conditions. To overcome these disadvantages, solid-state nanopores [3] were introduced with the developing technology of nanofabrication [4], showing great durability, the possibility of geometry control, and compatibility with semiconductor industries [5]. Real-time DNA sequencing is still a big challenge nowadays because the method of longitudinal current detection only exploits the distinctive geometry and structure of the four kinds of nucleotides, and great challenges remain: the nanopore is too thick to realize single-base resolution, and the translocation of a single base is too fast for recording (microseconds per base) [6]. An alternative approach, measuring the transverse current to obtain single-base resolution, was put forward by Zwolak and Di Ventra in 2005 [7]. The basic idea is that when the bases pass one by one through a voltage-biased tunnel gap inside a solid-state nanopore, they will alternately change the tunneling current according to how the localized base states contribute to the tunneling current, since different bases have different local electronic densities of states with different spatial extents owing to their different chemical compositions [8]. Intensive calculation work has been reported based on the transverse current of different electrode-nucleotide couplings, spreading from the influence of noise [9] and environment [10] to the modification of electrodes [11]. However, most of these studies encountered the problem of interference from adjacent nucleotides when they are in the nanopore simultaneously. Since the length of a DNA molecule is 0.32 nm per nucleotide, much smaller than the thickness of most available materials for transverse electrodes, it is hard to distinguish the neighboring nucleotides, even using electrodes composed of 3 × 3 gold atoms arranged as a (111) surface [8,12].
Graphene is a two-dimensional hexagonal carbon lattice that was discovered relatively recently [13] and has attracted intensive research attention due to its unique mechanical and electric properties [14]. Its single-atom thickness, ability to survive large transmembrane pressures, and intrinsic conducting properties [15] make it particularly attractive in the DNA sequencing field, since it holds hope for enabling transverse conductance measurements with single-base resolution. DNA translocation experiments through graphene nanopores have been reported recently by three independent groups [16][17][18]; however, in their studies, graphene only acts as a supporting membrane for the nanopores, instead of as the transverse electrode to measure the tunnelling current. Using graphene as a transverse electrode to resolve DNA conductance with single-base resolution experimentally is still very challenging nowadays.
A numerical simulation has been presented for obtaining DNA transverse conductance via a graphene nanogap [19], making graphene-based transverse conductance a promising candidate for robust DNA sequencing. However, a nanogap may allow several DNA strands to pass through simultaneously, which may lead to interference problems. Herein, we propose and theoretically demonstrate a quantum transport simulation using graphene as the transverse electrodes embedded in an SiN nanopore, which avoids the problem of simultaneous translocation of several DNA strands that affects the nanogap, since a nanopore has a smaller cross-section than a nanogap. SiN is used here as the supporting membrane for graphene. The simulation results show a clear exponential relation between the transport current and the transverse voltage applied to the electrodes within an appropriate range, and, importantly, a significant distinction between different nucleotides has been demonstrated, implying the feasibility of this method for rapid genome sequencing.
Model and Method
Figure 1(a) presents the working principle of nanopore-based DNA analysis: ssDNA is forced to translocate through a nanopore by a longitudinal electric field (E_L), and the transverse current is measured via graphene transverse electrodes held at a defined voltage. Figure 1(b) shows the cross-section view of our simulation system: single-layer graphene electrodes are embedded in a 6 nm SiN layer and covered with a 1∼2 nm insulating layer, which are hidden in the graph. The graphene electrodes are 7.4 Å wide and 12.1 Å apart, which is also the diameter of the nanopore in the SiN membrane. An SiN membrane is used because it is the most commonly used membrane in solid-state nanopore DNA translocation experiments. The diameter is wide enough for a ssDNA to pass through and narrow enough to prevent two ssDNA strands from translocating simultaneously, while still allowing a measurable tunneling current. Figure 1(b) only shows part of the graphene electrodes and one nucleotide of a DNA sequence, since the remaining parts of the nanopore device do not affect the tunneling effect and the thickness of the electrodes is much smaller than the scale of the nucleotide molecule.
The simulation process is implemented as follows. First, the positions of the nucleotides were set by manual control in order to obtain an appropriate coupling, giving the maximum tunneling current, with the electrodes' atoms. The relative positions of the four kinds of nucleotides to the graphene electrodes were set up identically, located by the same part, the deoxyribose, of the nucleotides. Then the system at this time section, in which the base is in the plane of the graphene electrodes and provides the strongest and most characteristic tunneling-current signal of the entire translocation process, was calculated with Atomistix ToolKit (ATK), a quantum transport simulation package, within the extended Hückel method, a semiempirical quantum chemistry method developed by Hoffmann in 1963 that considers both pi and sigma orbitals [20]. Using this method, we can obtain the electronic distribution of the system and deduce the corresponding electric properties, such as the transmission spectra. Finally, the transverse currents were calculated from the transmission spectra via the nonequilibrium Green's function method [21], and the current curves were compared to draw our final conclusion. Regarding the issue of strand orientation as it passes through the nanopore, previous calculations showed that the proposed graphene nanopore device is essentially insensitive to strand orientation [22]. Therefore, we did not pay special attention to the strand orientation issue during simulation. A schematic of this workflow is sketched below.
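As a rough illustration of this workflow, the sketch below loops over the four nucleotides and a set of bias voltages and collects I–V curves. The helper names (build_device, transmission_spectrum, current_from_spectrum) are hypothetical placeholders standing in for the ATK steps described above; they are not actual ATK API calls.

```python
# Schematic of the simulation loop described in the text; the three callables
# are hypothetical stand-ins for the ATK steps (geometry setup, extended
# Hückel + NEGF transmission, current evaluation).
NUCLEOTIDES = ["adenosine", "thymidine", "cytidine", "guanosine"]
BIASES = [0.2 * k for k in range(16)]  # 0.0 V ... 3.0 V in 0.2 V steps


def run_sequencing_simulation(build_device, transmission_spectrum,
                              current_from_spectrum):
    """Collect an I(V) curve for each nucleotide placed between the electrodes."""
    curves = {}
    for base in NUCLEOTIDES:
        # Base aligned by its deoxyribose group, as described in the text.
        device = build_device(base)
        curves[base] = [
            current_from_spectrum(*transmission_spectrum(device, v), v)
            for v in BIASES
        ]
    return curves
```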
Figures 2(a)-2(d) show the transmission spectra of adenosine, thymidine, cytidine, and guanosine under the same bias voltage of 2.4 V (±1.2 V on each graphene transverse electrode), respectively. Because the transmission coefficient of each nucleotide, as well as the tunneling current, differs from the others by nearly one or two orders of magnitude, a different scale is used in each panel so that the maxima of the four diagrams appear the same. In Figure 2 we can easily pick out the unique and characteristic resonance levels of each kind of nucleotide, which result from their distinct base types. This could be the foundation of real sequencing in the future. Here, the "windows" of the spectra are determined by ε_R and ε_L, the bias voltages applied on the graphene electrodes. Consider the well-known formula for the transverse current [23], I = (2e/h) ∫ T_LR(ε) [f(ε − ε_L) − f(ε − ε_R)] dε, in which L, R denote the left and right graphene electrodes of our system, V = ε_L − ε_R is the bias voltage, T_LR is the transmission coefficient shown in Figure 2, which is calculated from the Hamiltonian matrices of the system, and f(ε − ε_(L/R)) is the Fermi distribution function.
The equation shows that the transverse electrical current is proportional to the sum of the transmission peaks inside the bias voltage range, which is called the "window." By this method, the tunneling current can be calculated easily with a Python script in ATK, using the extended Hückel method.
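A minimal numerical sketch of this step is given below, assuming the transmission spectrum T_LR(E) has already been exported on an energy grid. The thermal energy, the mock resonance, and the symmetric splitting of the bias window are illustrative assumptions, not values taken from the text.

```python
import numpy as np

KT = 0.025  # eV, assumed room-temperature thermal energy


def fermi(e, mu, kt=KT):
    """Fermi-Dirac occupation at energy e (eV) for chemical potential mu (eV)."""
    return 1.0 / (np.exp((e - mu) / kt) + 1.0)


def landauer_current(energies, transmission, bias):
    """Tunneling current from a tabulated transmission spectrum T_LR(E).

    energies     -- 1D array of energies in eV
    transmission -- 1D array with T_LR(E) on the same grid
    bias         -- bias voltage V in volts, split symmetrically (+-V/2)
    """
    e_charge = 1.602e-19   # C
    h_planck = 4.136e-15   # eV s
    window = fermi(energies, +bias / 2.0) - fermi(energies, -bias / 2.0)
    return (2.0 * e_charge / h_planck) * np.trapz(transmission * window, energies)


# Mock spectrum with a single resonance; in practice T_LR(E) comes from ATK.
E = np.linspace(-3.0, 3.0, 2001)
T = 1e-4 / (1.0 + ((E - 0.8) / 0.05) ** 2)
for V in (1.0, 1.6, 2.4):
    print(f"V = {V:.1f} V  ->  I = {landauer_current(E, T, V):.3e} A")
```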
Simulation Results and Discussion
Figure 3(a) is the V–ln I curve of adenosine with bias from 0.0 V to 3.0 V in steps of 0.2 V. Here, when the voltage is below 1.0 V, the tunneling current is very small, almost too small to detect, as shown by the ln I values. A ln I value from −45 to −42 corresponds to a null current, which is verified by a simulation carried out on the unloaded electrode system, where no DNA molecule is present: the current curve of the unloaded electrodes, shown in Figure 3(a), gives all points below 10^−19 A. The V–ln I curve of adenosine clearly shows that, under 1.0 V, the current approximates zero, suggesting that the tunneling channel is closed. The current rises quickly from 1.0 V to 1.4 V, and, when the voltage is above 1.4 V, the current exhibits an essentially exponential relation with the bias voltage. This reveals a great deal of useful information: first, there exists a threshold voltage at about 1.2 V, above which the tunneling effect starts to show; second, the conclusions above hold only within an appropriate bias range, which is from 1.4 V to 3.0 V in our case. Our next simulation with the four different nucleotides was carried out in this voltage range. Figure 3(b) presents the transverse-current ln I–V curves of the four nucleotides, ranging from 2.0 V to 2.6 V with the same step of 0.2 V.
Figure 3: (a) The V–ln I curve of adenosine from 0 V to 3.0 V with a step of 0.2 V, and the current curve of the unloaded electrode system from 0.0 V to 2.0 V with a step of 0.1 V. (b) Four distinguishable V–ln I curves of the four nucleotides A, C, G, T, respectively, all of which express a linear relationship from 2.0 V to 2.6 V with the same step of 0.2 V; at most voltages they differ from each other by one or two orders of magnitude.
All four curves exhibit a linear relationship within this voltage range and show a conspicuous distinction between different nucleotides, nearly one or two orders of magnitude apart from each other, which agrees with the results of Zwolak and Di Ventra [7], although with a different ordering of the current intensities. This may result from the different coupling of the nucleotides with graphene compared with Au, which Zwolak used as transverse electrodes in his simulation. The distinct difference in ln I at the same bias voltage for each nucleotide indicates a potential sequencing approach using a graphene transverse-electrode-based nanopore: we can use the transverse current to characterize the electronic signature of each nucleotide.
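As a toy illustration of how such a ln I separation could be used for base calling, the snippet below assigns a base label by nearest reference value. The reference ln I numbers are invented placeholders for illustration only and are not results from the simulations above.

```python
import math

# Placeholder reference ln(I) values at a fixed bias (e.g. 2.4 V); purely
# illustrative numbers, roughly separated as in the simulated curves.
REFERENCE_LN_I = {"A": -34.0, "T": -32.5, "C": -30.0, "G": -28.0}


def call_base(measured_current_amperes):
    """Return the base whose reference ln(I) is closest to the measurement."""
    ln_i = math.log(measured_current_amperes)
    return min(REFERENCE_LN_I, key=lambda b: abs(REFERENCE_LN_I[b] - ln_i))


print(call_base(1.0e-13))  # -> "C" for these placeholder values
```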
Such transverse-current distinctions between different nucleotides may result from the geometric dimensions of the different base types. We can see clearly that the current follows the same ordering as the sizes of the four base types (i.e., G, T, C, A). Because the nanopore is only slightly bigger than the nucleotides, the tunneling effect is very sensitive to the distance between the transverse electrodes, so a small difference in base size leads to a large discrepancy in the transverse current. A second reason may be the different interaction between the graphene electrodes and the base atoms. For instance, guanosine and cytidine, with three hydrogen bonds in their bases, have much higher currents than thymidine and adenosine, with two. The tunneling current may pass through the hydrogen bonds on the base, as has been discussed before [24]. Because graphene is only a single atom thick, the distance between the electrode and the translocating nucleotide (less than 1 Å) is much smaller than that to the neighboring nucleotides (more than 3 Å), and so is the interaction between them. Thus we need not consider interference from the neighboring nucleotides in our calculations.
Finally, we can conceive of an experimental realization of the proposed method. First, transfer and locate a graphene ribbon, narrow enough to be comparable with the diameter of the intended nanopore, on the SiN membrane. Then use a transmission electron microscope to fabricate a nanopore in the membrane [25] at the location of the graphene ribbon. When the pore is fabricated, the graphene ribbon is cut into two segments, one on each side of the nanopore, which can act as the transverse electrodes. After being covered with an insulating layer, the nanopore device with graphene transverse electrodes is complete. Here, the biggest challenge may be that the graphene ribbon is very difficult to manipulate at such a narrow scale.
Summary
In summary, a quantum transport simulation using graphene as transverse electrodes was carried out for nanopore-based DNA analysis, and the obtained ln I–V curves show an exponential relation between the current and the voltage. Moreover, clear distinctions in ln I between different nucleotides under the same bias voltage were demonstrated, indicating the feasibility of nanopore-based DNA analysis via graphene electrodes. Such findings are fundamentally useful toward the ultimate goal of inexpensive and fast DNA sequencing.
Figure 1 :
Figure 1: Principal diagram of nanopore-based DNA analysis: ssDNA is driven through a nanopore with two graphene electrodes by a longitudinal electric field; the transverse graphene electrodes are held at a defined voltage and used to measure the transverse current. The graphene electrodes are 8.6 Å wide and 12.1 Å apart. The SiN membrane and insulating cover layer are hidden in the graph.
Figure 2 :
Figure 2: ((a)-(d)) The transmission spectra of adenosine, cytidine, guanosine, and thymidine under the same bias voltage of 2.4 V (±1.2 V on each electrode, which determines the "window" of the transmission), respectively. Because the transmission coefficient of each nucleotide, as well as the tunneling current, differs from the others by nearly one or two orders of magnitude, a different scale is used in each panel so that the maxima of the four diagrams appear the same. | 3,241 | 2012-01-01T00:00:00.000 | [
"Chemistry"
] |
Scalar CFTs and Their Large N Limits
We study scalar conformal field theories whose large $N$ spectrum is fixed by the operator dimensions of either the Ising model or the Lee-Yang edge singularity. Using the numerical bootstrap to study CFTs with $S_N\otimes Z_2$ symmetry, we find a series of kinks whose locations approach $(\Delta^{\text{Ising}}_{\sigma},\Delta^{\text{Ising}}_{\epsilon})$ at $N\rightarrow \infty$. Setting $N=4$, we study the cubic anisotropic fixed point with three spin components. As byproducts of our numerical bootstrap work, we discover another series of kinks whose identification with previously known CFTs remains a mystery. We also show that "minimal models" of the $\mathcal{W}_3$ algebra saturate the numerical bootstrap bounds of CFTs with $S_3$ symmetry.
Introduction
Scalar field theories are useful for studying phase transitions and critical phenomena. A large number of these models have been applied to different condensed-matter systems to extract the critical exponents [1,2]. The simplest example among them, the φ^4 theory, can be used to study phase transitions involving Z_2 symmetry breaking, which includes the Ising model [3].
The critical exponents calculated in field theories are usually based on certain perturbative methods, such as the ε-expansion [3] or the large N expansion (see [9] and references therein). As a non-perturbative method, the conformal bootstrap program [4,5] has been proven to be useful in studying two-dimensional conformal field theories. It has played an important role in the classification of two-dimensional "minimal models" [6]. In higher dimensions, significant progress was made in the seminal work of [7]. There has been a revival of this program since then. An incomplete list of work on conformal bootstrap and related topics is [8].
The numerical bootstrap is applicable even in regions where neither the ε-expansion nor the large N expansion works very well. For the three-dimensional Ising model, it has provided the most precise critical exponents so far [10,11,12]. In the perturbative regions, the bootstrap results were also shown to agree with the field theory results. For example, the numerical bootstrap for the scaling dimensions of operators in the critical O(N) vector model [13,14] agrees perfectly with the large N calculation based on scalar theories [9]. The Borel resummation of the ε-expansion series for the scaling dimensions of operators in the critical Ising model also agrees with the bootstrap result [15].
We will study scalar field theories admitting conformal fixed points whose large N behaviour is controlled by another CFT with central charge of order one. The specific models that we study are closely related to the continuum limit of the Potts model [16]. We first consider a scalar theory with quartic interaction in 4 − ε dimensions. The model was referred to as the "restricted Potts model", and was used as an intermediate step to study the continuum limit of the Potts model [17]. This model was recently revisited in [18]. Its Lagrangian is given in (1.1). The scalars φ_i transform in the n = N − 1 dimensional representation of the symmetric group S_N. The totally symmetric tensor d_ijk is invariant under the action of S_N. The name "restricted Potts model" is due to the fact that besides S_N, it also preserves an extra Z_2 symmetry under which all the scalars change sign. Its symmetry group is therefore slightly bigger than the S_N symmetry of the original Potts model. If one turns on the trilinear interaction 1/3! d_ijk φ_i φ_j φ_k, the Z_2 symmetry is broken and one gets the model which describes the continuum limit of the Potts model. The second model that we consider is a φ^3 theory in 6 − 2ε dimensions, given in (1.2). It can also be used to study the Potts model, since close to six dimensions quartic interactions of scalars are irrelevant, and the φ^4 terms in (1.1) can be neglected. The N-state Potts model is known to undergo first-order phase transitions for large enough N. In accord with this fact, this φ^3 theory is known to have a non-unitary fixed point at imaginary coupling g. The model (1.1) is known to have two extra fixed points other than the free theory point and an O(N) invariant point where the symmetry is enhanced [17]. In section 2, we look at their operator spectrum to set up the background for the later numerical bootstrap study. Taking the large N limit of the ε-expansion series for the anomalous dimensions and comparing with the corresponding series in the Ising model, it can be seen that the scaling dimensions of all the operators we have studied approach a limit fixed by the scaling dimensions of operators in the critical Ising model. The non-unitary fixed point of (1.2), on the other hand, has a large N limit whose operator spectrum is fixed by the Lee-Yang edge singularity.
We then employ the numerical bootstrap method to study CFTs with S_N ⊗ Z_2 global symmetry. We observe that in three dimensions there indeed exists a series of kinks whose locations at large N approach a point given by the scaling dimensions of the spin operator σ and the thermal operator ε in the critical Ising model. This confirms the large N behaviour predicted by the ε-expansion. Setting N = 4, we were able to observe the famous cubic anisotropic fixed point [19,20,21,22] with three spin components. Interestingly, the scaling dimension ∆_φ agrees with its corresponding value in the O(3) invariant Heisenberg model, consistent with the prediction in [23]. As a byproduct of our numerical bootstrap study, we also discover a series of new kinks. We were, however, not able to identify them with any CFTs with Lagrangian descriptions. By doing numerical bootstrap with S_3 symmetry in two dimensions, we have also shown that the "minimal models" of the W_3 algebra saturate the numerical bootstrap bound. These results are presented in section 3.
2 Renormalization of scalar theories
2.1 "Restricted Potts Model" → Ising Model
For the restricted Potts model (1.1), the invariant tensor d_ijk can be constructed explicitly: according to [17], it is possible to define a set of "vielbeins" e^α_i with α = 1 . . . N and i = 1 . . . N − 1 through a recursion relation. These vielbeins tell us how a hypertetrahedron with N vertices can be embedded in N − 1 dimensional space. From the group theory point of view, the N-dimensional representation is reducible, N = 1 ⊕ n. Take N = 3 as an example: the three vielbeins form an equilateral triangle, and the symmetric group S_3 consists of all O(2) rotations that keep this triangle invariant. Using e^α_i, the totally symmetric tensor can be defined as in (2.2). The details of the two-loop calculation for (1.1) are summarised in Appendix A, which is based on the general formula in [24]. It is in principle easy to extend the result to three loops using the result of [18]; we will, however, only focus on the two-loop results. The beta functions of this model have in total four fixed points: the free theory (g_1 = 0, g_2 = 0), the critical O(n) invariant point, and two further fixed points P_1 and P_2. We will focus on the two extra new fixed points P_1 and P_2. The scaling dimensions of the operators we have studied are given in Table 1 and Table 2. The quadratic operators fall into various irreps of the symmetry group S_N (they are clearly Z_2 even). The irreducible representation n appears as a result of the existence of the invariant tensor d_ijk. It is interesting to observe that for both of the fixed points, the scaling dimensions of the low-lying operators in the large N limit can be expressed in terms of the Ising model spectrum.
Table 1: Scaling dimensions of low-lying operators at the fixed point P_1. Table 2: Scaling dimensions of low-lying operators at the fixed point P_2.
The spectrum of (de)coupled CFTs
We should mention that the large N behaviour could already be partially inferred from combining the result of [18] and much earlier work of [25,26] on cubic anisotropic systems. We will explain this point in the present section, and try to better understand the large N limit.
In [18], another φ^4 theory was studied; the model is obtained by replacing the tensor d_ijm d_klm in (1.1) with a different invariant tensor, giving the cubic model (2.5). The model has a long history of being studied [19,20,21,22,27,28,29], and certain critical exponents are known up to six loops [30]. This model preserves a symmetry group which is the generalized symmetric group S(2, N) = S_N ⊗ Z_2^N. Like (1.1), it also has four fixed points (2.6): the free theory (g_1 = 0, g_2 = 0), the critical O(N) point, the cubic anisotropic point, and N copies of decoupled Ising models. It was shown in [18] that certain numbers that appear in the renormalization calculation of both models have the same large N limit (see Section 5.1.2), and therefore the two models approach the same limit at N → ∞. The fixed point P_2 approaches N copies of decoupled Ising models; it is therefore not surprising that its spectrum is given by the scaling dimensions of operators in the Ising model.
It is straightforward to work out the spectrum of the decoupled CFTs. Suppose a certain CFT preserves a symmetry group G; then N decoupled copies of this CFT preserve the symmetry group G ≀ S_N = S_N ⊗ G^N. The symbol "≀" stands for the wreath product, which can be viewed as a shorthand notation for the symmetry group. The group G^N acts independently on each copy of the CFT, while S_N interchanges them. We will consider only operators which are invariant under the full group G ≀ S_N. Suppose the component CFT has a set of conformal primary operators which are invariant under the action of G. The decoupled model then has operators, built from these, which are also invariant under S_N permutations (space-time indices are suppressed for simplicity); the index i denotes which copy of the CFT the operator O_i belongs to. Picking two operators of the component CFT, say O_1 and O_2, one can easily build S_N invariant operators by summing products of the two over pairs of copies. The coefficient in front of the operators is due to normalization. For operators with spin, the space-time indices need to be arranged properly for them to have definite spin. The condition i ≠ j makes sure that the composite operator is made of two operators from different copies of the CFT, so that it is not renormalised, and the summation over i ≠ j pairs makes it S_N invariant. If O_1 and O_2 are scalars, we can also construct the tower of operators in (2.10), for which we have borrowed the notation [O_1 O_2]_{n,l} for double trace operators from the AdS/CFT context [31,32,33,34]. The scaling dimensions of these operators are simply ∆ = ∆_1 + ∆_2 + n + l. The derivatives acting on the operators are arranged so as to ensure that they are conformal primaries. The procedure of choosing an appropriate derivative structure is exactly the same as constructing conformal primaries for "generalized free fields", as studied in [33]. One could also follow it to construct "double trace" conformal primary operators with higher spin and twist. Even though we are not aware of it appearing anywhere in the literature, a similar procedure should exist for constructing double trace operators made of operators with non-zero spin. It is also interesting to look at the four-point function of identical scalar operators. The condition i = j ≠ k = l in the second line makes sure that O_i and O_j come from the same copy of the CFT, while O_k and O_l come from a different copy; its contribution to the four-point function therefore reduces to two-point functions. The leading term in the 1/N expansion clearly factorises into disconnected two-point functions, which give the four-point function of "generalized free fields". It is equivalent to the dual boundary four-point function of a free massive scalar with AdS mass m²L² = −∆_Ising(D − ∆_Ising) [31,33]. The sub-leading behaviour receives contributions from both a disconnected piece and a connected piece given by the four-point function of the component CFT, denoted ⟨OOOO⟩.
Specialising to the Ising model, the first three spin-0 operators with the lowest scaling dimensions have the same scaling dimensions as the S-channel operators at the fixed point P_2; see Table 2. At the cubic anisotropic fixed point of (2.5), the coupling constants take the values given in [20], and the action of the model becomes N copies of the Ising model action plus certain O(1/N) corrections. It can be shown that this is also true for the fixed point P_1 of model (1.1). We will sometimes refer to these large N CFTs as coupled Ising models, for obvious reasons. At large N, the renormalization is clearly dominated by the Ising model coupling, which explains why the scaling dimensions of Ising model operators appear in the spectrum. These fixed points fit into the class of models studied by Victor Emery in [26]. Their critical exponents are related to the Ising critical exponents by (2.14) [26,20,35]. Translated into operator dimensions, this gives (2.15), agreeing exactly with Table 1. What's more, such operators self-average as in the critical O(N) vector model [9]. Their four-point functions are expected to factorise in the large-N limit as in (2.11). The spectrum of S-channel operators should be exactly the same as at the decoupled Ising point, and also fall into the categories of "single trace operators", "double trace operators" and so on. The only modification one needs to make is the replacement ∆_Ising → D − ∆_Ising (2.17). This is again supported by the calculation in Table 1, where an operator with dimension D − ∆_Ising is found to be accompanied by a "double trace" operator with scaling dimension 2 × (D − ∆_Ising).
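The replacement (2.17) can be made concrete with the value of ∆_ε^Ising quoted later in the text (≈ 1.4126 in D = 3); the trivial sketch below reproduces the corresponding S-channel dimension D − ∆ and its expected "double trace" companion.

```python
# Shadow-type replacement of (2.17) with the 3d Ising value quoted in the text.
D = 3.0
delta_eps_ising = 1.4126

single = D - delta_eps_ising   # ~ 1.5874, first S-channel scalar at large N
double = 2.0 * single          # expected "double trace" companion in Table 1
print(single, double)
```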
Potts Model → Lee-Yang Singularity
Before closing this section, we briefly mention the large N behaviour of the scalar model (1.2), the continuum limit of the N-state Potts model. The theory has a non-unitary fixed point at generic N. It was pointed out in [36] that the N = 1 limit of the N-state Potts model gives the percolation model; therefore people have been using (1.2) to calculate the critical exponents of the percolation problem [37,38]. The three-loop renormalization of the operator dimensions is summarised in Table 3 (see Appendix A.2 for more details). Taking the N → ∞ limit, it is clear that the scaling dimensions of the operators are fixed by the spectrum of the Lee-Yang edge singularity CFT. The large N behaviour of the coupling constant can also be worked out. By the same argument as in the previous section, operators that are invariant under S_N should fall into the categories of "single trace operators", "double trace operators" and so on. The single trace spectrum is given by the spectrum of the Lee-Yang edge singularity, with the replacement ∆^Lee-Yang → D − ∆^Lee-Yang. The operator next to the ones listed in Table 3 should have scaling dimension 2 × (D − ∆^Lee-Yang_φ).
3 Numerical bootstrap for CFTs with S_N symmetry
3.1 The fixed point P_1 from numerical bootstrap
In this section, we show that the fixed point P_1 studied in the previous section can be observed in the numerical bootstrap. The conformal bootstrap is based on crossing symmetry and unitarity. Crossing symmetry means that the two ways of computing a four-point function, by performing the operator product expansion (OPE) in different channels, must lead to equivalent results; this is true for any conformal field theory. Unitarity, on the other hand, requires all the OPE coefficients λ_{O_1 O_2 O_3} to be real. By assuming certain conditions on the spectrum of operators that appear in the OPE and testing the positivity of λ²_{φφO}, one can then check whether such an assumption is consistent with unitarity and crossing symmetry. We leave the details of how this method is implemented to Appendix B. The conditions that we have assumed for the spectrum are:
• the external operator φ_i has scaling dimension ∆_φ,
• the first spin-0 operator in the n-channel has scaling dimension greater than or equal to ∆_n,
• all the other operators that appear in φ_i × φ_j have scaling dimensions greater than or equal to the unitarity bound.
We have scanned a certain region of the (∆_φ, ∆_n) plane and the result is presented in Figure 1. The result is obtained by setting Λ = 19, with the range of spins chosen to be l ∈ {1, . . . , 25} ∪ {49, 50}. The region above the curves is excluded, which means there is no unitary CFT with the assumed spectrum. For large enough N, a clear kink can be observed in the numerical bootstrap curve. The appearance of kinks in the numerical bootstrap is a strong indication of the existence of a conformal field theory. More interestingly, as N increases, the location of the kink approaches the point (∆^Ising_σ, ∆^Ising_ε), denoted by the black cross in Figure 1. This confirms the prediction from the previous section. From Tables 1 and 2, it is clear that (∆_φ, ∆_n) should approach (∆^Ising_σ, ∆^Ising_ε) for both fixed points P_1 and P_2. We therefore need to determine which one of them corresponds to the kink in Figure 1. This can be achieved by introducing one extra condition on the assumed spectrum:
• the first spin-0 operator in the S-channel has scaling dimension greater than or equal to ∆_n + 0.1.
At large enough N, this assumption clearly excludes the point P_2 while preserving P_1; recall D − ∆^Ising_ε ≈ 1.5874, while ∆^Ising_ε ≈ 1.4126. We have checked that the S_100 curve does not change after introducing this condition, which proves that the kink corresponds to the fixed point P_1. The N = 4 case deserves some special attention: the symmetry group S_4 ⊗ Z_2 is isomorphic to Z_2 ≀ S_3 = S_3 ⊗ Z_2^3 [17]. The two groups clearly have the same order, 4! × 2 = 3! × 2^3 = 48. This means that the "restricted Potts model" (1.1) with N = 4 is equivalent to the cubic anisotropic model (2.5) at N = 3. From Figure 1 itself, it is not clear whether there is a CFT saturating the bootstrap bound, since there is no clear kink on the N = 4 curve. One can study this case more carefully by changing the assumptions on the spectrum to:
• the external operator φ_i has scaling dimension ∆_φ,
• the first spin-0 operator in the n-channel has scaling dimension greater than or equal to ∆_n = ∆^Max_n − 0.002,
• the second spin-2 operator in the S-channel has scaling dimension greater than or equal to ∆_{S,l=2} (note that the first spin-2 operator needs to be the energy-momentum tensor),
• all other operators that appear in the φ_i × φ_j OPE have scaling dimensions greater than or equal to the unitarity bound.
Notice that ∆_n is chosen to be slightly below the maximal allowed bound from Figure 1. This method was introduced in [10] to study the scaling dimensions of operators in the critical Ising model (see Figure 6). We can similarly carve out the allowed region of (∆_φ, ∆_{S,l=2}); this is presented in Figure 2. The dashed lines mark the scaling dimension of φ_i in the O(3) invariant Heisenberg model from Monte Carlo simulation [39]. The reason that we can compare the scaling dimensions of operators in the cubic anisotropic model with operators in the O(3) invariant Heisenberg model is that an analysis of the six-loop calculation of both models shows surprising cancellations in the differences between their critical exponents [23], for example ν_Cubic − ν_Heisenberg = −0.0003(3) [39].
At the location of the dashed lines, the bound curve resembles the bounds for ∆ obtained in [10] for the Ising model. We therefore conclude that the cubic fixed point is located at around ∆_φ ≈ 0.5187 and saturates the numerical bootstrap bound in Figure 1.
A long-standing question concerning the cubic fixed point and the Heisenberg fixed point is their relative height along the renormalization group flow. This is important experimentally, since the IR CFT is the one that governs the phase transition. Because of the cancellation mentioned above, it is very hard to distinguish the two CFTs by measuring either η or ν. However, the critical exponent corresponding to ∆_n, if it could be measured, is probably a good candidate. Notice that in our case ∆_n ≈ 1.292, while at the Heisenberg point ∆_T ≤ 1.22 is required by the numerical bootstrap [14].
Other bootstrap results: unidentified kinks
The study in the previous section focused on the region where ∆_φ is close to the unitarity bound; it is straightforward to extend the result to the region with much higher ∆_φ. This is presented in Figure 3. Surprisingly, for large enough N, we can again observe kinks in the numerical bootstrap curve. Unlike the CFTs in the previous section, we were not able to find a Lagrangian description for them. Instead, we show that these kinks pass some consistency checks required for them to actually be CFTs. Any full-fledged conformal field theory necessarily contains the energy-momentum tensor in its spectrum, so there should be a spin-2 operator saturating the unitarity bound. If the kinks we observed correspond to actual CFTs, they should not survive when a gap is introduced for the spin-2 operators in the S-channel. This is tested by adding the following condition to the assumed spectrum:
• the first spin-2 operator in the S-channel has scaling dimension greater than or equal to 3.05.
Taking the N = 10 curve as an example, the allowed region for (∆_φ, ∆_n) is presented in Figure 4. The solid line corresponds to the result without the above condition, while for the dashed line the above condition is included. Clearly, when the gap for the spin-2 operator is imposed, the curve moves downward, showing that the energy-momentum tensor is present in the spectrum.
Other bootstrap results: "minimal models" of W 3 algebra
The crossing equations we derived in Appendix B apply to CFTs with S_N ⊗ Z_2 symmetry. They can also be easily generalized to study CFTs with S_N symmetry; this is achieved simply by changing the assumed spectrum to:
• the external operator φ_i has scaling dimension ∆_φ,
• the first spin-0 operator in the n-channel has scaling dimension ∆_φ, while the second spin-0 operator in the n-channel has scaling dimension greater than or equal to ∆_n,
• all other operators that appear in φ_i × φ_j have scaling dimensions greater than or equal to the unitarity bound.
Notice that, since d_ijm is an invariant tensor of the S_N group (which is not invariant under S_N ⊗ Z_2), the scalar operator φ_i appears in its own OPE, φ_i × φ_j ∼ d_ijk φ_k. We have studied the allowed region of (∆_φ, ∆_n) for CFTs with S_3 symmetry in two space-time dimensions. The result is presented in Figure 5. We found that the "minimal models" of the W_3 algebra, as classified in [40], saturate the numerical bootstrap bound. The W_3 algebra is an extension of the Virasoro algebra introduced by Zamolodchikov in [41]; it contains the Virasoro algebra as a subalgebra. Besides the usual spin-2 operators L_n, the W_3 algebra contains spin-3 operators W_n which satisfy non-trivial commutation relations with the L_n and among themselves. As for the Virasoro algebra, "minimal models" here means that the fusion rules of the models involve a finite number of irreducible representations of W_3. It was shown in [40] that all these models have a global Z_3 symmetry; taking into account the complex conjugation of the complex scalars, one gets the symmetric group S_3. The central charges of these models and the scaling dimensions of their W_3 irreducible representations are parametrised by positive integers m, n, m′, n′ and p whose ranges are n + n′ ≤ p − 1, m + m′ ≤ p and p ≥ 4. The horizontal and vertical axes in Figure 5 correspond to the operators with the dimensions given in (3.6), respectively; these dimensions saturate the numerical bootstrap bound. It was discovered in [42] that minimal models of the Virasoro algebra also saturate the numerical bootstrap bound for CFTs with Z_2 symmetry. It is interesting to observe that the W_3 algebra shares the same feature. It would be interesting to extend this result to other W-algebras.
Figure 5: Numerical bootstrap bound on the scaling dimension of the second n-channel scalar operator in CFTs with S_3 symmetry. The crosses correspond to minimal models of the W_3 algebra. The first cross to the left is the 3-state Potts model.
Discussion
We have shown that there exist two series of conformal fixed points which approach the (de)coupled Ising model and the Lee-Yang edge singularity, respectively, in the large N limit. It would be interesting to understand whether it is possible to replace the large N limit by other CFTs such as the XY model, the Heisenberg model, etc. A naive guess is the following. The CFTs that approach the Lee-Yang edge singularity have the symmetry group S_N ⊗ 1, while the CFTs that approach the Ising model have S_N ⊗ Z_2. It is therefore natural to consider scalar models with symmetry group S_N ⊗ G as candidates for large N CFTs that approach a CFT with symmetry group G. We leave this for future investigation. In section 3.1, we have shown that the fixed point P_1 can be observed in the numerical bootstrap curve; it would be interesting to study its spectrum more carefully. The best way to do this is probably by first studying the possibility of isolating this fixed point using the mixed-correlator bootstrap, along the line of research in [43,14,44,12]. For the N = 4 special case, a further comparison with experiments or Monte Carlo studies would also be interesting. It is also desirable to try to extract the O(1/N) corrections to the operator dimensions and compare them with our numerical bootstrap results. Since the O(1/N) effect receives contributions from all orders in the ε-expansion, a proper resummation is necessary. What's more, it would be even more interesting to investigate the possibility of performing a proper large-N calculation as in the O(N) vector model (see [9] for a review).
Before we close, let's think about the large N (de)coupled CFTs in the context of the AdS/CFT correspondence. As explained in section 2.2, the large N spectrum of (de)coupled CFTs naturally falls into the categories of "single trace operators", "double trace operators" and so on. The replacement (2.17), as famously pointed out by Witten [45], corresponds to a change of boundary conditions for the dual AdS scalar, and does not change the dual AdS mass, since M²_AdS L² = −∆(D − ∆). The exact same phenomenon happens for O(N) vector models: in the free theory limit, the scaling dimension of the first O(N) singlet operator is ∆[Σ_i φ_i φ_i] = 1, while at the critical O(N) point its dimension is given by D − 1 = 2 plus 1/N corrections. It is not yet clear what the necessary and sufficient conditions are for a CFT to have a weakly coupled dual description in AdS [31,32,34,46,47,48]. As conjectured in [31], besides large N factorization, any CFT with an Einstein-like local bulk dual description must also have a large gap for all single trace operators with spin higher than 2. This is clearly not the case for the large N limit of decoupled CFTs. As shown in Section 2.2, the operators that can be interpreted as "single trace" operators are simply the S-channel operators of the component CFT, which clearly include operators with arbitrary spin. If the dual theory indeed exists, it should be more similar to Vasiliev's higher spin theory [49,50]. However, since the CFT operators do not saturate the unitarity bound, higher spin symmetry is clearly broken in this case.
A.2 3-Loop Renormalization of the generic φ^3 theory in 6 − 2ε Dimensions
Three-loop renormalization of the generic φ^3 theory in D = 6 − 2ε was studied in [51,37]. The four-loop result was obtained more recently in [38,52], where the renormalization of the Potts model and of the Lee-Yang edge singularity was also studied. The authors did not present the result for the N-state Potts model with generic N, but rather focused on the N → 1 limit to study the percolation problem. For the reader's convenience, we record the generic N result here: plugging (A.4) into the formulas in [38], one easily obtains it. We also record the renormalization for the Lee-Yang edge singularity for comparison. It is not necessary to present the dimension of φ², since φ² is a conformal descendant of φ: as a result of the equation of motion □φ ∼ φ², its dimension is fixed to be ∆_{φ²} = ∆_φ + 2.
B Bootstrap with S N symmetry
Using the "vielbeins" e^α_i, besides d_ijk defined in (2.2), one can also define an invariant tensor carrying four indices, and these tensors satisfy a set of algebraic relations. The product of two n-dimensional representations can be decomposed into irreducible representations of S_N. Compared with the product rule for the rotation group O(n), n ⊗ n → S ⊕ A ⊕ T, the T representation of the O(n) group is further decomposed into n ⊕ T, due to the existence of d_ijk. One can also define the following linearly independent invariant tensors: suppose v^1_i and v^2_i are two vectors carrying indices in the n-dimensional representation of S_N; the tensors P^I project their product onto the irreducible representation I, where dim I stands for the dimension of the representation I. A four-point function in CFTs with S_N global symmetry can then be written as a sum over channels, where I^± denotes operators with even (odd) spin transforming in the irreducible representation I of S_N. See [53] for the reason behind the spin choice. g_{∆_O,l_O}(u, v) is the conformal block, which encodes all the kinematics of conformal field theories and is universal for any CFT. The dynamical information specific to each CFT, on the other hand, is widely believed to be encoded in the OPE coefficients and the spectrum. An analytical expression for the conformal block in even dimensions was calculated in [54,55]. The operator product expansion is convergent for conformal field theories, and four-point functions should not depend on how the OPE is performed; from this equality we get the crossing equations (B.11). Here F and H are short for F_{∆,l} and H_{∆,l}. The logic of the numerical bootstrap is to look for a linear functional α such that α(V_{∆,l}) ≥ 0 for ∆ ≥ l + D − 2, (l = 1, 3, 5, 7, 9 . . .) (B.13). This realises the conditions imposed on the operator spectrum in section 3.1 to study conformal field theories with S_N ⊗ Z_2 symmetry. If such a functional can be found, then there is no way for (B.16) to be satisfied with all the λ²_O being positive. Therefore we conclude that a unitary CFT with S_N ⊗ Z_2 symmetry and external dimension ∆_φ must have at least one scalar operator whose dimension is less than ∆_n. For readers interested in the implementation of the numerical bootstrap, we refer them to [56] and references therein. The numerical computations in this work were performed using the SDPB package [56]. For the approximation of the conformal blocks, we partially used the code from JuliBoot [57].
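The search for such a functional α can be illustrated with a toy linear program. In the sketch below the components of V_{∆,l} are replaced by a dummy placeholder (real bootstrap applications use derivatives of the conformal blocks and semidefinite programming as in SDPB), so the code only demonstrates the feasibility logic, not an actual bound.

```python
import numpy as np
from scipy.optimize import linprog

NCOMP = 6  # number of functional components (derivatives) kept


def crossing_vector(delta, ncomp=NCOMP):
    """Dummy stand-in for the components of V_{Delta,l}.

    In a real bootstrap these are derivatives of conformal blocks evaluated
    at the crossing-symmetric point; a simple smooth function of Delta is
    used here only so that the linear program below can run.
    """
    ks = np.arange(0, ncomp)
    return np.array([delta**k * np.exp(-delta) for k in ks])


identity = crossing_vector(0.0)            # identity-operator contribution
trial_deltas = np.arange(1.0, 8.0, 0.05)   # assumed gap: scalars above 1.0

# Find alpha with alpha.identity = 1 and alpha.V_Delta >= 0 for all trial Deltas.
A_ub = -np.array([crossing_vector(d) for d in trial_deltas])  # -alpha.V <= 0
res = linprog(c=np.zeros(NCOMP),
              A_ub=A_ub, b_ub=np.zeros(len(trial_deltas)),
              A_eq=identity.reshape(1, -1), b_eq=np.array([1.0]),
              bounds=[(None, None)] * NCOMP, method="highs")
print("functional found: assumption excluded" if res.success
      else "no functional: assumption allowed")
```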
Before proceeding, let us recall the dimensions of the representations: dim S = 1, dim n = n, dim A = n(n − 1)/2, dim T = n(n + 1)/2 − 1 − n (B.14). For n = 2, hence the S_3 group, dim T = 0, and one can check that the crossing equations reduce to exactly those used for bootstrapping O(2) invariant CFTs in [13]. However, when studying conformal field theories with S_3 symmetry, since d_ijm is an invariant tensor of the S_3 group (which is not invariant under the SO(2) group), the scalar φ_i appears in its own OPE, φ_i × φ_j ∼ d_ijk φ_k. We then need to search for a linear functional α satisfying (B.13) plus one extra condition, α(V^{(n^+)}_{∆_φ,0}) ≥ 0 (B.18). This is the numerical bootstrap program used in section 3.3.
"Mathematics"
] |
Laboratory and Numerical Analysis of Steel Cold-Formed Sigma Beams Retrofitted by Bonded CFRP Tapes
In this paper, a retrofitting method for thin-walled, cold-formed sigma beams using bonded carbon fibre reinforced polymer (CFRP) tapes is proposed. The effectiveness of the presented strengthening method is investigated by means of laboratory tests and numerical analysis conducted on simply supported, single-span beams made of the ∑200 × 70 × 2 profile by "Blachy Pruszyński" subjected to a four-point bending scheme. Special attention is paid to evaluating the possibility of increasing the load capacity while simultaneously limiting beam displacements through an appropriate location of the CFRP tapes. For this purpose, three beams were reinforced with CFRP tape placed on the inner surface of the upper flange, three with CFRP tape on the inner surface of the web, three beams with reinforcement located on the inner surface of the bottom flange, and two beams were tested as reference beams without reinforcement. CFRP tape with a width of 50 mm and a thickness of 1.2 mm was used as the reinforcement and was bonded to the beams with SikaDur®-30 adhesive. Precise strain measurement was made using electrofusion strain gauges, and displacement measurement was performed using two coupled Aramis devices in combination with the Tritop machine. Numerical models of the considered beams were developed in the Finite Element Method (FEM) program Abaqus®. The experimental and numerical analyses showed very good agreement of results. Based on the conducted research, the significance of the applied reinforcement (CFRP tapes) in thin-walled steel structures was demonstrated, with respect to the classic methods of strengthening steel building structures.
Introduction
The development of thin-walled steel structures is related primarily to the technical progress in manufacturing and assembly as well as efforts to minimize material consumption. According to the definition of Vlasov, the creator of the theory of thin-walled open cross-sections, a bar can be considered thin-walled if the wall thickness is at least eight times smaller than the longest distance measured along the centre line between two extreme points located on the bar cross-section contour, and this, in turn, is at least eight times smaller than the bar length.
Compared to traditional design solutions, cold-formed elements have one of the highest strength-to-weight ratios of the material used for their production. The increasing use of thin-walled steel elements as the main structural elements simultaneously necessitates the development
Laboratory Tests
The first step of this work was to determine the appropriate length of the CFRP tapes. The issue is quite complex because there is no strict and universal recommendation describing the procedure for adopting the CFRP effective anchorage length. Therefore, based on the thorough literature review conducted in this study, the effective anchorage length was assumed in accordance with [19] and shown in Figure 1, where L_CFRP is the CFRP tape length and L_z is the effective anchorage length of the CFRP tape. The length of the CFRP tape was 175 cm and the effective anchorage length was 15 cm. More information on the anchorage length tests is given in [20].
A four-point bending laboratory stand was developed in order to perform full-scale tests. Experiments were carried out on a steel thin-walled, cold-formed ∑200 × 70 × 2 profile made by the "Blachy Pruszyński" company. A load spacing of 135 cm and a support spacing of 270 cm were assumed (Figure 2). In order to determine the strength properties of the steel material, laboratory coupon tests were carried out on five samples cut out from sigma steel profiles. The shape and size of the samples used for the laboratory tests complied with the requirements of PN-EN ISO 6892-1:2009 [21]. Measurements of the longitudinal and transverse deformation of the sample were carried out using a biaxial extensometer. Based on the obtained results, material characteristics of the ∑ profile such as the Young's modulus E = 201.8 GPa, Poisson's ratio ν = 0.28, and the yield strength of steel fy = 418.5 MPa were specified. Sika CarboDur S carbon fibre tapes (CFRP), 1.2 mm thick and 50 mm wide, were used in the tests. The composite CFRP tapes used in the study consist of a unidirectional arrangement of carbon fibres embedded in an epoxy matrix. One of the inherent features of this structure is anisotropy: the composite tape is characterized by different stiffness and strength in different directions. In the longitudinal direction, the stiffness and strength are very high, while transversely they are much weaker. The transverse modulus of a unidirectional laminate is only two to three times greater than that of the adhesive matrix itself, as is the strength, which in some cases is even lower. On the basis of material tests of the CFRP tapes, the Poisson's ratio ν = 0.308 and Young's modulus E = 165 GPa were determined. More information on the tape strength parameters is described in [22]. SikaDur®-30 adhesive was used in order to bond the CFRP tapes to the beams. This adhesive is characterised by a minimum compressive strength of 75 MPa after 7 days, a modulus of elasticity under compression of 9600 MPa, a minimum tensile strength after 7 days of 26 MPa, a debonding strength from steel after 7 days of minimum 21 MPa, a shear strength of minimum 16 MPa, and shrinkage of 0.04%. Based on the tests presented in [19], the adhesive thickness was chosen to be equal to 1.3 mm. The author of [19], concerning the reinforcement of I-section steel beams with CFRP tape, examined three thicknesses of the adhesive layer: 0.65 mm, 1.3 mm, and 1.75 mm. During those tests, beams reinforced with CFRP tape with an applied adhesive layer of 1.3 mm achieved the highest value of the debonding force. Beams reinforced with CFRP tape with an adhesive layer of 1.75 mm achieved a lower value of the debonding force, which is due to the fact that in all samples the tape detached.
The first step of the samples' preparation consisted of degreasing and matting with sandpaper and cleaning the places where the CFRP tapes were to be bonded. A detailed description of the preparation of the samples for testing is provided in [18].
So-called fork boundary conditions were obtained in the laboratory by using a bolted hinge connection at the support, which enables free rotation in the beam plane. Additionally, special washers were applied to the beam at the points where concentrated forces occur. They were introduced in order to prevent local damage of the tested sigma beams. Such points were identified at the support region (Figure 3a), where a hot-rolled C profile was used, and at the point of application of the external concentrated force, where a hot-rolled C100 profile with a length of 200 mm was applied to ensure load distribution over the entire flange area (Figure 3b). The process of sample preparation is shown in Figure 4. The ZwickRoell (ZwickRoell GmbH & Co. KG, Ulm, Germany) testing machine at the Construction Laboratory of the Lublin University of Technology was used to perform the experimental tests. The load increase was controlled by means of the extending piston press at a speed of 1 mm/min, recording the force every 0.01 s. During the laboratory tests, strain measurement was carried out using three electrofusion strain gauges of type TENMEX TFs-10 with 120 Ω ± 0.2% resistance. In each sample, the electrofusion strain gauges (T1, T2, T3) were located in the middle of its span. The arrangement of these gauges is presented in Figure 2 (Section B-B) and in Figure 5b. On the other hand, the displacements were measured using the Aramis system and GOM Correlate software. For this purpose, a unique combination of two coupled Aramis devices and a Tritop machine was used, which enabled precise measurement of the displacement of a beam subject to significant rotation. The displacements of the specimens were measured at six measuring points (P1, P2, P3, P4, P5, P6) located in the middle of the beam span and marked with special markers, as shown in Figure 5b.
So-called fork boundary conditions at laboratory circumstances was obtained by usage of a bolted hinge connection at the support, which enables free rotation in the beam plane. Additionally, special washers were applied to the beam at the points of concentrate forces occurrence. They were introduced in order to prevent local damage of the tested sigma beams. Such points were detected at the support region (Figure 3a), where the hot-rolled C profile was used, and at the point of application of external concentrated force, where the hot-rolled C100 profile with length of 200 mm was applied to ensure the load distribution over the entire flange area (Figure 3b). The process of sample preparation were shown in Figure 4. The Zwick and Roel (ZwickRoell GmbH & Co. KG, Ulm, Germany) testing machine at the Construction Laboratory of the Lublin University of Technology was used in order to perform experimental tests. The load increase was controlled by means of the extending piston press at a speed of 1 mm/min, recording the force every 0.01 s. During laboratory tests, strain measurement was carried out using three electrofusion strain gauges type TENMEX TFs-10 with 120 Ω ± 0.2% resistance. In each sample, the electrofusion strain gauges (T1, T2, T3) were located in the middle of its span. The arrangement of these gauges is presented in Figure 2-Section B-B and in Figure 5b. On the other hand the displacements were measured using the Aramis system and GOM Correlate software. For this purpose, a unique combination of two Aramis coupled devices and a Tritop machine was used, which enabled precise measurement of the displacement of the beam subject to significant rotation. The displacements of the specimens were measured at six measuring points (P1, P2, P3, P4, P5, P6) located in the middle of the beam span and affixed with special markings shown in Figure 5b. Measuring points with a diameter of 5 mm were placed on the beam walls, (Figure 4a), which allowed for preliminary measurements of geometric imperfections [23] using the Tritop system. Moreover, they were used to create a common coordinate system for the two measuring lenses in the Aramis system [24]. These points were also used to analyse the displacements of the tested beams in the GOM Correlate program (GOM, GmbH, Braunschweig, Germany). As it was mentioned before, the conducted research used optical 3D coordinate measuring machine-Tritop. It is a portable system enabling precise and quick measurements of 3D coordinates of objects. A series of accurate beams ware reinforced with CFRP tape placed on the inner surface of the upper flange (B1G, B2G, B3G), three with CFRP tape bonded on the inside surface of the web (B1S, B2S, B3S), three beams with reinforcement located on the inside surface of the bottom flange (B1D, B2D, B3D). To obtain information on the effectiveness of the applied reinforcement, two unreinforced beams (B1R, B2R) were taken as a reference beams (Figure 5a). The use of the Tritop (GOM, GmbH, Braunschweig, Germany) and Aramis (GOM, GmbH, Braunschweig, Germany) optical measuring systems requires that the steel surfaces of the tested beams do not reflect light. Consequently, all samples were painted with matt white spray paint. Then, the surface was additionally matted by spraying white chalk on it (Figure 4b).
Measuring points with a diameter of 5 mm were placed on the beam walls, (Figure 4a), which allowed for preliminary measurements of geometric imperfections [23] using the Tritop system. Moreover, they were used to create a common coordinate system for the two measuring lenses in the Aramis system [24]. These points were also used to analyse the displacements of the tested beams in the GOM Correlate program (GOM, GmbH, Braunschweig, Germany). As it was mentioned before, the conducted research used optical 3D coordinate measuring machine-Tritop. It is a portable system enabling precise and quick measurements of 3D coordinates of objects. A series of accurate photos of each of the tested beams was taken using a camera. The photos were transferred to the GOM Correlate program and measurements of individual beam dimensions were made. Section height, top flange width, and bottom flange width were monitored. All measured values were within dimensional tolerances (1 mm for height and 0.5 mm for width). Therefore, in the numerical study the impact of imperfection was neglected. Rules of application and use of the Aramis and Tritop system are in details described in [18].
The scope of this part of the laboratory research included eleven steel beams made of 200 × 70 × 2 profile with the span of 270 cm. In order to investigate the influence of CFRP tape location, three beams ware reinforced with CFRP tape placed on the inner surface of the upper flange (B1G, B2G, B3G), three with CFRP tape bonded on the inside surface of the web (B1S, B2S, B3S), three beams with reinforcement located on the inside surface of the bottom flange (B1D, B2D, B3D). To obtain information on the effectiveness of the applied reinforcement, two unreinforced beams (B1R, B2R) were taken as a reference beams (Figure 5a).
Based on the laboratory tests, it was observed that the failure mode of all beams was related to debonding of the CFRP tapes in the adhesive-steel contact plane in the load range of 25-26 kN (Table 1). Moreover, it was noted that a correct reading of all strain gauges was possible only up to a load of 25 kN, due to debonding of the CFRP tape. Therefore, this load level was considered the destructive force and, in the further analysis, was adopted as the limit load, at which the final performance of the CFRP tapes in reducing strain and displacement of the tested beams was described. At the same time, a debonding failure mode and deformation in the form of an opening of the beam cross-section were observed: the top flange lifted between the load application points, causing a fairly large displacement out of the vertical beam plane. Moreover, local upper flange deformations were observed at the points where the concentrated loads were applied, while the bottom flange was not damaged. The nature of the beam deformation and the debonding of the CFRP tapes are shown in Figure 6.

An exemplary load-strain relationship obtained during the laboratory tests is shown in Figure 7. To enable analysis of the obtained results, bar graphs were prepared, which are presented in Figure 8. The given values were determined on the basis of Equation (1):

ρ_εi = (ε_i − ε_ref) / ε_ref · 100%,  (1)

where: ρ_εi is the reduction or increase in strain of a given sample expressed as a percentage, ε_i is the strain of a given sample, and ε_ref is the arithmetic mean value of the strain of the two reference beams (B1R and B2R).

The displacement of selected points located in the mid-span of an example sample under the load of 25 kN, obtained with the GOM Correlate software, is shown in Figure 9. Using Equation (2), the percentage change in displacement for the different locations of the CFRP tape reinforcement versus the reference bare beams at the load level of 25 kN was determined:

ρ_ui = (u_i − u_ref) / u_ref · 100%,  (2)

where: ρ_ui is the percent change in displacement of the ith sample, u_i is the displacement of the ith sample, and u_ref is the displacement of the reference beam.

The percent change in vertical displacement is presented in Figure 10a and that in horizontal displacement in Figure 10b. Positive values indicate an increase in displacement, while negative values indicate a reduction.
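To make the normalisation in Equations (1) and (2) concrete, the short sketch below computes the percentage change of a sample reading against the mean of the two reference beams. The numerical values are illustrative assumptions only, not measured data from the tests.

```python
# Minimal sketch of Equations (1) and (2): percentage change in strain or
# displacement of a reinforced beam relative to the mean of the two
# reference beams (B1R, B2R). All numbers are illustrative placeholders.

def percent_change(value_i: float, ref_values: list[float]) -> float:
    """Percentage change of a sample reading versus the arithmetic mean
    of the reference-beam readings (negative = reduction)."""
    ref_mean = sum(ref_values) / len(ref_values)
    return (value_i - ref_mean) / ref_mean * 100.0

# Hypothetical mid-span strains at 25 kN (micro-strain):
strain_refs = [1200.0, 1180.0]   # B1R, B2R (assumed values)
strain_b1g = 1020.0              # B1G (assumed value)

print(f"rho_eps for B1G: {percent_change(strain_b1g, strain_refs):.1f} %")
```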
Numerical Study

The numerical model was developed in Abaqus (Abaqus 2019, Dassault Systemes Simulia Corporation, Velizy-Villacoublay, France). The experimentally analysed sigma-shaped beam was described with a discrete model, in which the tested profile was subjected to numerical analysis of bending under the applied loads [7,25]. The finite element model of the sigma steel beam, corresponding to the laboratory tests, is made of shell finite elements with a linear shape function. Within the numerical model, both the steel washers and the steel support clamps cooperating with the steel beam reinforced with composite tapes were modelled as non-deformable shell finite elements. Inside the beam profile, at its opposite ends, steel channel profiles were placed to reduce local deformation of the beam; the C-profile was modelled as a shell finite element with a defined thickness. The material model within the discrete model was prepared on the basis of information from experimental tests [9]. On the basis of the σ-ε relationship obtained in the laboratory coupon test, a bilinear elastic-plastic material model with strain hardening was adopted in the FEM numerical model. The material properties were as follows: Young's modulus 201.8 GPa, Poisson's ratio 0.282 and yield strength 418.5 MPa (Figure 11). The value of Young's modulus lower than that of the S350 GD steel grade results from the fact that the samples are made of galvanised steel.

The model mapped as accurately as possible the boundary conditions resulting from the experimental studies. For both the non-deformable steel support clamps and the steel washers, appropriate reference points were defined in order to allow the declaration of the necessary boundary conditions. At the reference points assigned to the steel washers, a load of equal value was defined for each of the non-deformable steel washers (as shown in Figure 12), while for the non-deformable steel support clamps the necessary boundary conditions were also defined at the reference points (Figure 12). The numerical model took into account contact interactions in the normal and tangential directions, without taking into account the friction coefficient. In order to map the experimental studies appropriately, contact relations were defined between the beam and the support clamps, channel sections and washers (Figure 12). The finite element model included 18,312 computational nodes and 17,713 finite elements. The number of deformable shell elements was 16,273 (for the beam and the channel bar), and the number of non-deformable linear shell elements was 1440 (for the supports). In the numerical tests, a bilinear elastic-plastic material model was used for the beam specimen. The CFRP tapes were modelled as shell finite elements connected to the beam by TIE constraints. The material model used to describe the CFRP tapes had orthotropic properties (E1 = 142 GPa, E2 = 8 GPa, ν12 = 0.308, and G12 = G23 = G13 = 4.5 GPa).

The preliminary research focused on the preparation of a bare-beam model, in order to validate the model against tests on the actual samples. After agreement of results was obtained for this basic case, three subsequent numerical models were developed: BGa with CFRP tape placed in the upper flange (analogously to B1G, B2G, B3G tested in the experimental study), BDa with CFRP tape in the bottom flange (like B1D, B2D, B3D), and BSa with CFRP tape in the web (corresponding to B1S, B2S, B3S).

A detailed analysis of the obtained test results was carried out mainly at the places where the strain gauges T2 and T3 were placed in the laboratory tests (the locations of the strain gauges are shown in Figure 13b). The strain level read in the laboratory tests was directly compared with the results of the numerical calculations, specifically with Max. In-Plane Principal (Abs) from Abaqus; these are components illustrating strains in the longitudinal direction of the beam in the plane of the individual walls of the section. In the numerical analysis, the vertical and horizontal displacements of the beams were reported at the points where measurements were made during the laboratory tests. Examples of strain and deformation comparisons of selected beams are presented in Figure 13a,b.

In the case of strains, the maximum discrepancies between the results obtained in the Abaqus program and the average value of measurements for a given group of beams in the laboratory tests reach 4.1% at a load level of 25 kN, and in the case of displacements, 4.6-6.8%. The form of beam deformation observed during the laboratory tests is also consistent with the shape of deformation obtained in the Abaqus program (Figure 14). On the basis of further research stages, it was found that the described model is also applicable for various sigma cross-section heights.
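For illustration, the following minimal sketch evaluates the bilinear elastic-plastic stress-strain law described above, using the quoted Young's modulus and yield strength; the hardening (tangent) modulus is an assumed placeholder, since its value is not given here.

```python
# Sketch of the bilinear elastic-plastic material law with strain hardening
# used for the steel in the FEM model. E and the yield strength follow the
# values quoted in the text; E_T is an assumed illustrative value.

E = 201.8e3      # Young's modulus [MPa]
F_Y = 418.5      # yield strength [MPa]
E_T = 2.0e3      # assumed tangent (hardening) modulus [MPa]

def stress(eps: float) -> float:
    """Uniaxial stress [MPa] for a given total strain (tension side)."""
    eps_y = F_Y / E                      # strain at first yield
    if eps <= eps_y:
        return E * eps                   # elastic branch
    return F_Y + E_T * (eps - eps_y)     # linear hardening branch

for eps in (0.001, 0.002, 0.005, 0.010):
    print(f"strain = {eps:.3f} -> stress = {stress(eps):6.1f} MPa")
```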
Results and Discussion

Regarding the obtained results, quite good agreement between the FEM analysis and the experimental tests was obtained. In the presented work, a retrofitting method for thin-walled, cold-formed sigma beams using bonded CFRP tapes is proposed. The study investigated the effectiveness of the applied reinforcement, and satisfactory results were obtained.

As a consequence of bonding the CFRP tape to the lower or upper flange of the beam, an increase in the moment of inertia with respect to the y axis by 18% and with respect to the z axis by nearly 9% is observed, together with a change in the position of the centre of gravity of the cross-section by 1.26 mm in the horizontal direction and by 13.6 mm in the vertical direction. It is worth noting that placing the tape on the lower flange allows the vertical displacement of the beam to be reduced by 23% and the horizontal displacement by 8%, while the use of CFRP tape on the upper flange allows the vertical displacement of the beam to be reduced by 14% and the horizontal displacement by up to 50%. One can notice that the increase in the percentage of reinforcement of the analysed beams is not only the result of an increase in the geometric characteristics of the beam and CFRP tape system, but also a beneficial influence of the proposed method.
It is not surprising that the use of a reinforcement in the web of the beam limits vertical displacements to the least extent. It is worth emphasizing, however, that the use of web reinforcement, which changes the moment of inertia by no more than 0.5%, reduces vertical displacements by 9% and horizontal displacements by up to 21%.
It should also be mentioned that the proposed reinforcing method is characterized by a significant deviation from the classic methods of strengthening steel building structures. Namely, because of technological limitations resulting from access to the reinforced element "in situ", the proposed method consciously departs from the principle of coincidence of the centres of gravity of the basic section and the reinforcement. This also results from the specific geometry of the sigma section itself, which has only one axis of symmetry.
As noted in [19], the strength of steel is higher than that of conventional adhesives used in structural strengthening applications, which results in a variety of possible forms of failure. Cohesive failure in the adhesive layer, adhesive detachment along the surface of the adhesive-composite or steel-adhesive interface, Fiber Reinforced Polymer (FRP) delamination are possible forms to be considered when designing a steel reinforcement with such a structure.
In this study, all the steel beams were reinforced with one thickness of the adhesive layer (1.3 mm), and in each of the samples the tape debonded at the glue-steel interface. Therefore, the authors cannot conclude whether the use of a different thickness of the adhesive layer would change the debonding effect. It is suspected that debonding was due to the significant local deformation of the beams and not to the thickness of the adhesive. Due to the many possible forms of failure and incomplete knowledge of the behaviour of composite materials bonded to steel, it will be desirable to carry out more research before wide adoption of the proposed method into engineering practice.
Conclusions
Based on the obtained laboratory and numerical results, the beneficial influence of CFRP tapes on the displacement and strain reduction of thin-walled cold-formed beams made of the 200 × 70 × 2 profile is observed. This is confirmed by the detailed conclusions that follow from the individual analyses, which can be expressed as follows:
• The average percentage reduction in strain in the upper flange (14%) and in the web (36%) is achieved for CFRP tape located on the upper (compressed) flange.
• A strain reduction of 18-20% in the bottom flange is observed when the CFRP tape is bonded to the bottom flange.
• A decrease in vertical displacements with an average value of 11-23% is obtained when the CFRP tape is placed on the bottom flange.
• A decrease in horizontal displacement perpendicular to the longitudinal axis of the beam in the upper flange by 50% is achieved for CFRP tape located on the upper (compressed) flange.
• Locating the CFRP tape on the bottom flange did not reduce the horizontal displacement in any case.
• Web reinforcement results in a reduction in horizontal displacement in the upper flange by 18-21%, and in vertical displacement by 9%.
Summing up, it was found that locating the CFRP tape at the upper flange and at the web can be very advantageous in the case of a beam subjected to large torsion. The innovative solution proposed in this paper is the placement of the CFRP tape on the inside surface of the flange, which is easier from a technological point of view during construction works. In addition, the authors used an innovative method of displacement measurement using two lenses of the Aramis system, positioned on both sides of the tested beam, in combination with the Tritop system, which enabled the study of the displacement of mono-symmetrical beams in 3D.
Finally, it can be stated that the traditional, well-known engineering strengthening approach, which recommends bonding the CFRP tapes to the bottom (tensioned) flange, cannot be considered universally favourable for the beams under consideration. | 8,761.2 | 2020-09-29T00:00:00.000 | ["Engineering", "Materials Science"] |
Modelling transport energy demand: A socio-technical approach
Despite an emerging consensus that societal energy consumption and related emissions are not only influenced by technical efficiency but also by lifestyles and socio-cultural factors, few attempts have been made to operationalise these insights in models of energy demand. This paper addresses that gap by presenting a scenario exercise using an integrated suite of sectoral and whole systems models to explore potential energy pathways in the UK transport sector. Techno-economic driven scenarios are contrasted with one in which social change is strongly influenced by concerns about energy use, the environment and well-being. The ‘what if’ Lifestyle scenario reveals a future in which distance travelled by car is reduced by 74% by 2050 and final energy demand from transport is halved compared to the reference case. Despite the more rapid uptake of electric vehicles and the larger share of electricity in final energy demand, it shows a future where electricity decarbonisation could be delayed. The paper illustrates the key trade-off between the more aggressive pursuit of purely technological fixes and demand reduction in the transport sector and concludes there are strong arguments for pursuing both demand and supply side solutions in the pursuit of emissions reduction and energy security.
Introduction
Despite a widely agreed consensus that societal energy consumption and related emissions are not only influenced by technical efficiency but also by lifestyles and socio-cultural factors (e.g. household size and composition, expenditure patterns, social norms, habits and the ageing population), there is a methodological gap between the perceived importance of these factors for energy demand and quantitative modelling frameworks or even scenario analysis. Indeed, there is much less consensus as to the character and extent of these influences, particularly when broadened out from societal changes to include individual psycho-social factors such as well-being, social norms and values. In particular, very few attempts have been made to operationalise these insights into models of future energy demand.
This paper addresses this gap in research and practice by presenting a quantitative scenario exercise using an integrated suite of sectoral and whole systems models to explore potential energy pathways in the UK transport sector. Presenting results in part from the UK Energy Research Centre's Energy 2050 project (Skea and Ekins, 2009), techno-economic driven scenarios are contrasted with one in which social change is strongly influenced by concerns about energy use, the environment and well-being so that transport energy service demand is at a significantly lower level by 2050 than in the 'business as usual' assumptions of other pathways.
Empirical evidence of the potential for travel patterns to change incrementally in response to policy and normative shifts was combined with the development of a plausible 'what if' qualitative storyline about attitudinal, cultural and behavioural change to 2050. The associated transport energy service demands were modelled using MARKAL elastic demand (MED) to assess the implications for fuel demand, emissions and the wider energy sector in the UK. This involved the novel intermediate step of soft-linking MED with a newly developed strategic transport, energy, emissions and environmental impacts model, the UK transport carbon model (UKTCM) (see Brand et al., this issue). UKTCM is a highly disaggregated, bottom-up model of transport energy use in the UK. It allows us to model the energy service demands and vehicle choices resulting from different assumptions about transport service demand, modal choice and trip patterns.
This paper demonstrates how sectoral and energy system models can be soft-linked to explore future scenarios in which the potential contribution of demand-side behaviour change is explored alongside technological change to meet a stringent 80% emissions reduction target in the UK.
Background
At the global level, transport currently accounts for more than half the oil used and nearly 25% of energy-related carbon dioxide (CO 2 ) emissions (IEA, 2008). From a 2005 baseline, transport energy use and related CO 2 emissions are expected to increase by more than 50% by 2030 and more than double by 2050, with the fastest growth from light-duty vehicles (i.e. passenger cars, small vans and sport utility vehicles), air travel and road freight (IEA, 2008).
Transport is invariably deemed to be the most difficult and expensive sector in which to reduce energy demand and greenhouse gas emissions (Enkvist et al., 2007; HM Treasury, 2006; IPCC, 2007). The analysis on which such conclusions are based tends to rely on forecasting and modelling frameworks which accentuate technical solutions and economically optimal and rational behaviour of individual consumers and markets, often based on historic consumer preferences. The conventional transport policy response to this issue reflects this dominant techno-economic analytical paradigm and focuses on supply-side vehicle technology efficiency gains and fuel switching as the central mitigation pathway for the sector. Typically, the diffusion of advanced vehicle technologies is perceived as the central means to decarbonise transport. Since many of these technologies are not yet commercially mature, or require major infrastructure investment, this focus has reinforced the notion that the transport sector can only make a limited contribution to total CO 2 emissions reduction, particularly in the short term (HM Treasury, 2006; Koehler, 2009). In the UK, for example, electrification of the passenger vehicle fleet is a key strategy and viewed as necessary to achieve the government's stated 2050 target to cut the CO 2 equivalent of Kyoto GHG emissions by 80% from 1990 levels (Ekins et al., 2009; CCC, 2009). The UK policy focus on vehicle technology reflects other global transport modelling exercises that depend upon between 40% and 90% market penetrations of technologies such as plug-in hybrids and full battery electric vehicles between 2030 and 2050 (IEA, 2008; McKinsey and Company, 2009; WBCSD, 2004; WEC, 2007).
Although scenario exercises such as these are used to explore the potential CO 2 emissions reduction from rapid uptake of vehicle technologies, the central danger is that the full potential and necessary contribution of human behaviour, lifestyle change and the important role of individual attitudes and perceptions are often overlooked by policy makers. Other than the changes in preference required to facilitate the uptake of low carbon vehicles, many of these scenario exercises treat other societal developments of significance to transport as external to policy. In addition, where societal developments are included, individual behaviour change is invariably treated in abstract terms as part of broader societal changes (Weber and Perrels, 2000).
An alternative would be to conduct scenario planning exercises which underline the role that policy can play in working with attitudes, opportunities and impacts to exert positive influence on the type of society that is developing and the nature of the transport system that thus co-evolves with it (Marsden et al., 2010). In particular, such approaches pay attention to the interaction between society and technology (Elzen et al., 2002). The lifestyle approach is usually juxtaposed with one with a purely technological focus, as it tries to provide a wider picture of the consumer (and of the production processes required to satisfy his or her needs and wants) by depicting him or her in a socio-economic context (Baiocchi et al., 2006). Individual attitudes and values are seen as influential in shaping society's engagement with technological opportunities in the face of environmental impacts that will likely force a direction of response from policy makers and society.
To support this approach, there is a growing evidence base, or even just a renewed appreciation of existing evidence, of the potential for behaviour to alter in ways which mean that reductions in the demand for travel activity and associated energy are both plausible and cost effective (Sloman et al., 2010; Cairns et al., 2008; Goodwin, 2008; see also Gross et al. (2009) for a comprehensive overview of the literature). These behaviour changes encompass a whole variety of different types of choice related to travel demand which include much more than simply 'retrofitting' more efficient transport modes on to current journey patterns. In other words, a reduction in energy service demand from transport will be achieved through a myriad of individual and societal level shifts in preference for the amount of time spent travelling, the choice of destinations and where to live, attitudes towards health, the environment and the local community, different models of car ownership, driving behaviour, as well as more 'standard' decisions about mode and car choice. The 'Lifestyle' storyline of the Energy 2050 project looks at all of these travel behavioural choices and speculates about the nature and extent of plausible shifts before using an integrated modelling framework to examine the implications for the UK energy system and carbon reduction targets.
Methodology
The UKERC Energy 2050 project aimed to show how the UK can move towards a resilient and low carbon energy system over the period to 2050 (Skea and Ekins, 2009). The project focuses on two primary goals of UK energy policy-achieving deep cuts in CO 2 emissions by 2050, taking the current UK 80% reduction goal as a starting point, and developing a 'resilient' energy system that ensures consumers' energy service needs are met reliably. In addition, other policy goals are taken into account, namely managing environmental impacts other than those related to climate change and ensuring that everyone has access to affordable energy services.
The core analysis used a combination of sectoral and 'whole systems' models of the UK energy system to investigate key uncertainties in low carbon and resilient energy systems through a systematic comparison of scenarios. The system level models captured interrelationships and choices across the energy system and consisted of MARKAL (MARKet ALlocation), a widely applied bottom-up, dynamic, linear programming optimisation model (Loulou et al., 2004), and an elastic demand version (MARKAL elastic demand (MED)). MED is a technology-rich, multi-time period optimisation model and portrays the entire energy system from imports and domestic production of fuel sources through to fuel processing and supply, explicit representation of infrastructures, conversion of fuels to secondary energy carriers (including electricity, heat and hydrogen), end use technologies and energy service demands of the entire economy. The model accounts for the response of energy service demands (ESDs) to prices, which, in this exercise, could themselves increase as a result of carbon constraints (Anandarajah et al., 2008).
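As a toy illustration of the price-responsive demand mechanism just described, the sketch below applies a constant own-price elasticity to an energy service demand; the functional form and all numbers are assumptions for illustration, not the actual MED formulation.

```python
# Minimal sketch of an elastic energy service demand (ESD): demand responds
# to a change in its effective price via a constant own-price elasticity.

def elastic_demand(esd_ref: float, price_ref: float, price: float,
                   elasticity: float) -> float:
    """Iso-elastic demand response: ESD = ESD_ref * (P / P_ref) ** e."""
    return esd_ref * (price / price_ref) ** elasticity

# Hypothetical car travel demand (billion vehicle-km) when a carbon
# constraint raises the effective price of motoring by 30%:
base_demand, base_price = 500.0, 1.0
print(elastic_demand(base_demand, base_price, 1.3, elasticity=-0.3))

# In the Lifestyle runs the transport demand elasticities were set to zero,
# i.e. the exogenous 'Lifestyle' projections are not price-adjusted:
print(elastic_demand(base_demand, base_price, 1.3, elasticity=0.0))
```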
In the 'Lifestyles' sub-project of Energy 2050, two out of the four core scenarios developed in Energy 2050 were used as a starting point to be contrasted with two Lifestyle 'variants' of these core scenarios (hereafter referred to as the 'Lifestyle variants'). Before the Lifestyle variants could be run in MED, an alternative set of direct inputs in the form of energy service demands, vehicle load factors, downsizing of the car fleet, low carbon technology take-up and changes in on-road fuel efficiencies needed to be generated. This was undertaken for both the residential and the transport sectors, but this paper concentrates on the development of the transport inputs only (see Anable et al. (2010) for a detailed overview of the methodology). The modelling of mobility energy demands for these variants involved:
1. Framing and development of a new 'Lifestyle' storyline and translating this, using spreadsheet modelling, into projections of travel patterns as an alternative to the official UK government projections used in the core Energy 2050 scenarios.
2. Detailed sectoral modelling using the newly developed UK transport carbon model (UKTCM) in order to simulate the impacts of lifestyle changes on vehicle ownership, vehicle technology choice and vehicle use.
3. Soft-linking of UKTCM and MED by aggregating and converting UKTCM outputs into MED inputs before MED was run.
Each of these stages will now be described in turn.
The lifestyle storyline and spreadsheet modelling
Transport energy demand is a function of mode, technology and fuel choice, total distance travelled, driving style and vehicle occupancy. Distance travelled is itself a function of land use patterns, destination, route choice and trip frequency. Most travel behaviour modelling and forecasting is based on principles of utility maximisation of discrete choices and on the principle that travel-time budgets are fixed (Metz, 2002). However, based on the literature on socio-technical transitions, socio-psychological models of behaviour change, and evidence relating to actual travel choices in response to policy interventions, the Lifestyle variant explored a world in which travel behaviour is strongly influenced by concerns relating to health, quality of life, energy use and environmental implications. As such, non-price driven behaviour, which has already been found to play a significant role in transport choices (Anable, 2005; Steg, 2004; Turrentine and Kurani, 2007), was deemed to be a dominant driver of energy service demand from transport. It should be noted that this paper does not review the literature pertaining to behaviour change theory and the detailed combination of ingredients (motivational and external) required for travel patterns to shift dramatically, nor does it review the policy evidence in detail. We refer readers to Anable et al. (2010) for the detail behind the Lifestyle storyline.
Making assumptions in this way, albeit based on uncertain evidence, is akin to the treatment of the technical potential of various solutions relating to vehicle technologies and fuels which, as discussed, normally comprise the bulk of the future developments in transport energy scenario modelling exercises, despite also being highly uncertain. In judging what rate and scale of change seems plausible we have given most weight to the existing variation in lifestyle observed in societies like our own, i.e. technologically advanced, liberal democracies. Subject to some obvious constraints imposed by age, wealth and location, for example, it seems reasonable to suppose that if a significant fraction of the population (say 5-10%) somewhere in the OECD already behave in a particular way, then it is plausible for this to become a majority behaviour in the UK within the timeframe to 2050. This implies neither incremental nor step changes in behaviour. There are increasing suggestions that incremental changes in efficiency and behaviour will not be effective enough to deliver sustainable energy systems on their own in the absence of restrictions in consumption (Darby, 2007;Crompton, 2008). In addition to incremental change, there is considerable interest in the possibility of a 'cultural shift' affecting people's lifestyles (Elzen et al., 2002;Evans and Jackson, 2007;Koehler, 2009;Crompton, 2008). Consequently, this Lifestyle variant outlines radical change leading to relatively fast transformations and new demand trajectories.
In the Lifestyle variant, travellers are more aware of the whole cost of travel and the energy and emissions implications of travel choices and are sensitive to the rapid normative shifts which alter the bounds of socially acceptable behaviour. Consequently, the variant assumed the focus would shift away from mobility towards accessibility. In other words, the quality of the journey experience rather than the quantity and speed of travel would become more important. Social norms elevate active modes and low-carbon vehicles in status and demote large cars, single-occupancy car travel, speeding and air travel.

Efficient, low-energy and zero-energy (non-motorised) transport systems will replace current petrol and diesel car-based systems. The increased uptake of slower, active modes reduces average distances travelled as distance horizons change. Localism means people work, shop and relax closer to home, and long-distance travel will move from fast modes (primarily air and the car) to slow-speed modes covering shorter distances overall (local rail, walking and cycling). The novelty of air travel wanes: not only does it become socially unacceptable to fly short distances, but airport capacity constraints also make it less convenient. Weekends abroad are replaced by more domestic leisure travel, but this is increasingly carried out by low-carbon hired vehicles, rail and luxury coach, and walking and cycling trips closer to home. It also becomes socially unacceptable to drive children to school. However, capacity constraints limit the pace of change, so that mode shift to buses and rail will be moderated. New models of car ownership are embraced. This includes car clubs and the tendency to own smaller vehicles for everyday family use and to hire vehicles for longer distance travel. These are niche markets in which new technology is fostered. Lower car ownership is correlated with lower car use.
The new modes, in turn, will result in a new spatial order towards compact cities, mixed land uses and self-contained cities and regions. Some services return to rural areas, but it becomes more common to carry out personal business by internet. Small-scale technology facilitates relatively rapid behavioural change. Information and Communication Technology (ICT: telematics, in-car instrumentation, video conferencing, smartcards and e-commerce) makes cost and energy use transparent to users and changes everything from destination choice, car choice and driving style to paying for travel, including in the freight sector. A more radical change takes place through changes in work patterns and business travel. The impacts of teleworking and video conferencing are known to be complex, but potentially important (Gross et al., 2009). Teleworking particularly affects the longer commute trips and thus has a disproportionately large impact on average trip lengths. Increased internet shopping and restrictions on heavy goods vehicles, particularly in town centres, increase the use of vans. There is some shift towards rail freight.
There is increasing acceptance of restrictive policies in the context of more choice for local travel as the alternatives are improved. These restrictions include the general phasing out of petrol/diesel vehicles in town/city centres through low emission zones, increased parking charges and strict speed enforcement. Generally, however, the policy environment is one of 'push and pull' as fiscal and regulatory sticks are combined with the carrot of infrastructure investment (e.g. in car clubs, public transport, cycle infrastructure and railway capacity).
Combined with the shifts towards active modes and different models of car ownership, this amounts to significant lifestyle shift. The consequences for travel patterns of these shifts were first analysed using a spreadsheet model which took as its starting point the figures for individual travel patterns in 2007 based on the UK National Travel Survey (DfT, 2008). Figures for each journey purpose (commuting, travel in the course of work, shopping, education, local leisure, distance leisure and other) in terms of average number of trips, average distance (together producing average journey length), mode share and average occupancy were altered based on an evidence review relating to the impact of transport policies and current variation in travel patterns within and outside the UK (see Anable et al. (2010) for a detailed overview of the calculations).
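A minimal sketch of this kind of spreadsheet calculation is given below: per-capita trips, trip lengths, mode shares and occupancies are combined into passenger-km and vehicle-km by mode. All input figures are hypothetical placeholders, not the National Travel Survey values used in the study.

```python
# Sketch of a spreadsheet-style travel demand calculation: for each journey
# purpose, trips per person, average trip length, mode share and occupancy
# are combined into vehicle-km per mode. All numbers are illustrative only.

population = 60e6  # assumed UK population for the illustration

# purpose -> (trips per person per year, average trip length in km)
purposes = {
    "commuting": (220, 15.0),
    "shopping": (180, 6.5),
    "local leisure": (150, 8.0),
}

# purpose -> mode -> (share of distance, average occupancy)
mode_split = {
    "commuting": {"car": (0.60, 1.2), "bus": (0.15, 9.0), "cycle": (0.10, 1.0)},
    "shopping": {"car": (0.55, 1.5), "bus": (0.20, 9.0), "cycle": (0.08, 1.0)},
    "local leisure": {"car": (0.50, 1.8), "bus": (0.15, 9.0), "cycle": (0.12, 1.0)},
}

vehicle_km = {}
for purpose, (trips, length) in purposes.items():
    passenger_km = population * trips * length
    for mode, (share, occupancy) in mode_split[purpose].items():
        vehicle_km[mode] = vehicle_km.get(mode, 0.0) + passenger_km * share / occupancy

for mode, vkm in vehicle_km.items():
    print(f"{mode}: {vkm / 1e9:.1f} billion vehicle-km per year")
```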
The underlying principle of the derived projections of 'lifestyle' travel patterns is that they should be internally consistent and plausible. The method of how they were derived implies that they do not present a forecast using an econometric transport demand model, or a 4-stage transport demand network model. Specifically, the lifestyle projections of travel demand are not the result of changes in income or price elasticities of demand, GDP or population growth. The derived 'Lifestyle' travel demand projections actually imply gradually lower income (and population) elasticities of demand as incomes and population continue to grow in all four scenarios considered in this paper (see Table 1). Notably, in order to avoid double counting once these projections were eventually fed into MED, the transport demand elasticities in the Lifestyle MED runs were set to zero.
Modelling lifestyle using the UK transport carbon model
The UKTCM is a strategic transport-energy-environment simulation model designed to model a wide range of policies and policy 'packages' (or 'bundles') including demand management policies, measures affecting vehicle ownership and use, fiscal and pricing policies, eco-driving programmes, fuel obligations, speed enforcement and targeted technology investment incentives. It provides annual projections of transport supply and demand, and calculates the corresponding energy use, lifecycle emissions and environmental impacts year-by-year up to 2050. It simulates passenger and freight transport across all transport modes, built around exogenous scenarios of socio-economic and political developments. It integrates simulation and forecasting models of elastic demand, vehicle ownership, technology choice (using a discrete choice modelling framework), stock turnover, energy use and emissions, lifecycle inventory and impacts, and valuation of external costs. An introduction to the model has been published in Brand et al. (this issue); further details can be obtained from the Reference Guide (Brand, 2010a) and User Guide (Brand, 2010b), published by the UK Energy Research Centre.
The set of 'Lifestyle' transport energy service demands (distance travelled, mode split and vehicle occupancy) developed above was entered into UKTCM as exogenous transport demands. In addition, lower multiple car ownership was simulated by lowering the car ownership saturation levels for households owning 2 or more cars. It was further assumed that by 2020 no 'large cars' (above a certain engine size and gross vehicle weight) are being sold. The changes in social norms, consumer preferences, and improved performance and market presence of low carbon road vehicles (essentially efficient hybrid electric vehicles (HEV), battery electric vehicles (BEV) and plug-in hybrid electric vehicles (PHEV)) were modelled by assuming low carbon road vehicles have gradually increasing consumer preferences, performance and market availability up to the point where they are comparable to (or even better than) their conventional counterparts of a certain reference technology (e.g. a medium size gasoline internal combustion engine car of vintage 2015-2019). The scale and timing of these changes have been modelled on the assumptions behind the high- to extreme-range technology scenarios of the recent scoping exercise commissioned by UK Government Departments (BERR & DfT, 2008), and further informed by low carbon transport scenario work such as that reported in Hickman and Banister (2007). Within the UKTCM discrete choice modelling framework, equal preference implies equality in perceived market potentials (availability of infrastructure), perceived risk (fuel type, 'proven' vs. 'new' technology) and performance (range, speed, acceleration, etc.). No changes in investment and fixed Operation and Maintenance (O&M) costs were assumed, as consumers of tomorrow choose to buy greener vehicles not on the basis of reduced purchase prices but on the basis of changed preferences for and perceived risk of a low-carbon vehicle. Finally, the on-road fuel efficiency programme and general adherence to speed limits were modelled by assuming an alternative set of speed profiles for motorways and dual carriageways, with direct effects on on-road fuel consumption.
In the results sections below, the UKTCM Lifestyle outputs (TCM LS) are contrasted to a reference case (TCM REF). These two model runs represented an intermediate stage in the methodology and are not to be confused with the MED scenarios outlined in Table 1 which ultimately generated the system level energy demands and progress towards UK carbon emissions targets.
MARKAL elastic demand
UKTCM outputs (fuel consumption, vehicle fleet evolution by vehicle technology) were translated into MED inputs (technical energy efficiency, technology deployment constraints and bounds).
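The sketch below illustrates, under assumed numbers and names, the kind of aggregation such a translation step involves: disaggregated fuel use and distance by vehicle technology are collapsed into the average energy intensities a systems model expects. It is not the actual UKTCM-to-MED conversion code.

```python
# Illustrative soft-linking step: aggregate disaggregated fleet outputs
# (fuel use and distance by technology class) into average energy
# intensities. Technology names and numbers are placeholder assumptions.

uktcm_output = [
    # (technology class, fuel use in PJ, distance in billion vehicle-km)
    ("car_ICE", 820.0, 310.0),
    ("car_HEV", 95.0, 55.0),
    ("car_PHEV", 60.0, 48.0),
]

med_inputs = {}
for tech, fuel_pj, bvkm in uktcm_output:
    # average energy intensity in MJ per vehicle-km (1 PJ = 1e9 MJ)
    med_inputs[tech] = (fuel_pj * 1e9) / (bvkm * 1e9)

for tech, intensity in med_inputs.items():
    print(f"{tech}: {intensity:.2f} MJ/vkm")
```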
MED was then run to produce four contrasting scenarios: two core Energy 2050 scenarios and two Lifestyle 'variants'. In each case, they were distinguished by whether they were unconstrained (REF) or constrained (LC) to guarantee the achievement of an 80% fall in UK carbon emissions relative to 1990 levels by 2050. In the cases of the LC and LS LC scenarios, the carbon emissions constraint is binding before any system costs are optimised, resulting in higher overall system costs; the MED model provides marginal costs of reducing carbon emissions further than the constraint, thus providing a shadow price of carbon (as an output, not an input). Thus, four scenarios resulted: REF, LC, LS REF and LS LC. The detailed assumptions embedded in the core scenarios are available in Skea and Ekins (2009). Table 1 provides an overview of the key elements of each.
The following 3 sections outline the results relating to the impacts on: travel patterns and vehicle technology, energy use and fuel demand in the transport sector and carbon emissions and the wider energy system.
Impact on travel patterns and vehicle technology
The impact on travel patterns was modelled using the spreadsheet model and UKTCM as described above, and the results can be divided into the following key areas of energy service demands from transport:
• Changing surface passenger travel patterns.
• Domestic air travel and surface freight transport.
• Driving style and on-road fuel efficiency.
• Vehicle ownership and technology choice.
Changing surface passenger travel patterns
The initial spreadsheet modelling exercise, which altered the average number of trips, total distance, mode share and vehicle occupancy for each journey purpose, resulted in a 74% reduction in distance travelled by car (as a driver and a passenger) by 2050. The use of all other surface transport modes increases, apart from a 12% fall in distance travelled by Heavy Goods Vehicles (HGVs or trucks). The reduction in car travel comes about as a result of significant mode shifts, particularly to bus travel towards the latter half of the period (a 184% increase in vehicle kilometres) and to cycling and walking. Mode shift is combined with a fall in average trip lengths, due to destination shifting as a result of localisation, and a fall in the total number of trips per capita as some journeys are replaced with 'virtual' means. Fig. 1 shows how people become progressively more 'multimodal' by the end of the period in the LS REF variant. In 2020, the car is still used for the majority of distance travelled as a driver or passenger (67%), but this drops to 28% by 2050. However, 'other private' (which includes taxis, hire cars and car club cars) increases from 2.4% of distance in 2007 to 7.5%, so that, combined with being a car passenger, 36% of all distance is still undertaken by car in 2050. At the same time, cycling goes from accounting for less than 1% to almost 13% of distance travelled. This surpasses levels seen today in countries regarded as demonstrating best practice in this area. For example, in 2006 an average Dutch person cycled 850 km/year, corresponding to around 8% of total distance travelled (SWOV, 2006). We chose to push this further over 40 years on the basis that the Dutch have achieved this level so far without comprehensively restricting cars from urban centres and increasing the cost of motoring, which our Lifestyle variant assumes. If cycling and walking are added together, 'active modes' account for 28% of travel in 2050. Implicit in the assumptions made here is the fact that cars are increasingly banned or priced out of city/town centres.
Domestic air travel and surface freight transport
With regard to air travel, growth in domestic flights is assumed to slow and eventually saturate. This is primarily due to three factors. Firstly, it becomes increasingly uncompetitive on the basis of cost as oil price increases and carbon taxation bring an end to budget airlines and cheap flights. Secondly, domestic flying also becomes uncompetitive in terms of time as rail is improved. Thirdly, flying becomes increasingly unacceptable, particularly for short distances, and thus returns to being a luxury activity. Average load factors in the LS REF and LS LC variants are assumed to stay unchanged compared to the REF and LC core scenarios. As a result, any changes in air passenger-km translate directly into air vehicle-km (domestic only). The resulting domestic air vehicle-km in the LS REF and LS LC variants are 2% and 16% lower in 2020 and 2050, respectively, than in the REF and LC core scenarios.
With regard to van traffic, van ownership and use continues to increase as it did in the decade prior to 2007, growing by 138% by 2050 over the 2005 levels. The move towards a service economy and more teleshopping contribute to this trend. As van technology improves and their cost of ownership and use declines, this further encourages their use. Town/city centres increasingly ban heavy goods vehicles but allow electric vans and local traffic regulations will give priority to professional home delivery and coordinated urban distribution with clean vehicles. As a result, the overall distance travelled by vans increases by 5% by 2050 in the LS REF and LS LC variants when compared to the REF and LC core cases.
With regard to heavy goods vehicles, we assume their use is still set to grow (by 36% between 2005 and 2050) but, as a result of increased load factors, the overall distance travelled by these vehicles will fall by 3% (2020) and 12% (2050) in the LS REF and LS LC variants when compared to the REF and LC core scenarios. Changes in consumer demands (including through origin/carbon labelling and dematerialisation, the substitution of products with services) may lead to reductions in freight movements, but the greatest savings will come from more efficient logistics. The 'lorry intensity' of the UK economy (the ratio of lorry-km to GDP) declined by almost 20% between 1990 and 2004, partly as a result of companies using vehicle capacity more efficiently (McKinnon, 2007). There nevertheless remains considerable potential for improving 'vehicle fill'. Companies can adopt a range of vehicle utilisation measures which would lead to reduced lorry-km and CO 2 emissions. In some cases this will require changes to current business practice, utilising integrated logistics services pertaining to several steps of production and distribution and based on complex information systems. These changes will require policy support for the development of technologies and standards for automatic flexible freight handling and tracing, together with the implementation of consolidation centres and the introduction of CO 2 related taxes for freight vehicles to effectively raise road transport costs (Hickman and Banister, 2007). Together, these changes will mean that the growth in heavy freight is substantially reduced, particularly by road. Rail and waterborne freight play a bigger role, mainly due to mode shift from roads.
Driving style and on-road fuel efficiency
Eco-driving reduces fuel consumption through more efficient driving style, reducing speeds, proper engine maintenance, maintaining optimal tyre pressure, and reducing unnecessary loads (King, 2007;TNO, 2006). Policy measures can include information campaigns and encouraging or requiring driver training. Potential savings appear to be significant and costs low, with the biggest obstacles being securing driver participation and ensuring that efficient driving habits are sustained over time (Gross et al., 2009). This suggests that if the potential benefits of more efficient driving styles are to be secured, an ongoing programme of training, and reinforcement through advertising and other awareness raising mechanisms is likely to be needed.
In the Lifestyle variants, the high cost of motoring and the social pressure to improve driving standards for both safety and environmental reasons, mean that efficiency, quality and reliability overtake speed as the priority for travel. Speeding becomes socially unacceptable as it is seen as wasteful. Eco-driving is reinforced with strict speed enforcement, high penalties and tax incentives for in-car instrumentation such as speed limiters, fuel economy meters and tyre pressure indicators.
Initial calculations were made in the spreadsheet model by assuming how many drivers in any given year would be practicing eco-driving and what proportion of their miles would be affected at what level of efficiency improvement. In any given year, new drivers will start to practice these techniques, and for others the effectiveness will begin to 'trail off', although it is assumed that the behaviour is reinforced by repeat training programmes and campaigns so that it becomes more or less habitual. Even for those who are practicing it, not every mile they drive will be affected. For those miles affected, an 8% efficiency improvement is assumed. This is at the lower end of the evidence base (Gross et al., 2009). Business uptake of eco-driving is expected to be quicker as it is easier to integrate training programmes and instrumentation.
Eco-driving will also be practiced by van and truck drivers. Penetration through the van fleet is expected to mirror that of car business travel, and penetration through the truck fleet is the same as for vans. However, the savings per mile are lower (4%) as these vehicles are already speed limited.
In each case (for cars, vans and trucks) the savings only apply to petrol/diesel vehicles. The potential to save fuel and emissions for alternative propulsion vehicles such as electric and plug-in hybrid electric vehicles is lower, as the propulsion system is already technically optimised, leaving less room for improvement by the driver (Gross et al., 2009). These assumptions were then combined to derive a time series of aggregate fuel consumption for the conventional car, van and truck fleets. This was then transferred to UKTCM by scaling the vehicle emissions factors used. For cars, for example, the vehicle distance (km) affected reaches 60% by about 2025. Multiplying this by the 8% saving per km travelled gives a 4.8% saving in fuel consumption and emissions, or a scaling factor of 0.952 applied to 'without eco-driving' fuel consumption and emissions. A summary of the effects of on-road fuel efficiency improvements from eco-driving is shown in Table 2.
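As a rough illustration of this scaling step, the following sketch (in Python) reproduces the arithmetic described above; the uptake shares and per-mile savings are the illustrative figures quoted in the text, not model outputs.

```python
# Minimal sketch of the eco-driving scaling-factor calculation described above.
# The shares of vehicle-km affected and the 8%/4% per-mile savings are the
# illustrative numbers quoted in the text, not UKTCM/MED inputs.

def ecodriving_scaling_factor(share_of_km_affected: float, saving_per_km: float) -> float:
    """Return the factor applied to 'without eco-driving' fuel use and emissions."""
    fleet_saving = share_of_km_affected * saving_per_km   # e.g. 0.60 * 0.08 = 0.048
    return 1.0 - fleet_saving                              # e.g. 0.952

# Cars around 2025: 60% of vehicle-km affected, 8% saving per affected km
print(ecodriving_scaling_factor(0.60, 0.08))   # -> 0.952
# Trucks: already speed-limited, so only a 4% saving per affected km
print(ecodriving_scaling_factor(0.60, 0.04))   # -> 0.976
```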
Vehicle ownership and technology choice
In the LS REF and LS LC variants, private, fleet and commercial buyers prefer advanced conventional and electric vehicles over conventional internal combustion vehicles (ICV). The market responds by increasing availability and performance of lower carbon vehicles. This was first modelled in UKTCM before being translated into MED inputs and further modelling using MED.
Vehicle ownership and technology choice is modelled in much more detail in UKTCM than in MED; hence it is appropriate to present the intermediate results of vehicle technology choice by comparing UKTCM Lifestyle outputs (TCM LS) with reference case outputs (TCM REF). In the TCM LS model run, ultra-efficient ICV and hybrid electric vehicles (HEV) are the main focus in the short term (up to 2020). Battery electric vehicles (BEV) fulfil market niche roles in the medium term (2015-2030), especially electric buses, cars and vans in urban areas. Plug-in hybrid electric vehicles (PHEV) dominate sales in the medium to long term (from 2025). This 'Lifestyle' purchasing behaviour is illustrated for cars in Fig. 2, showing historic and projected new car sales (not total fleet) by propulsion type for the TCM LS model run. While in 2007 more than 99% of new cars are conventional ICV, the TCM LS scenario suggests that by 2020 28% of new cars will be ultra-efficient HEV, 16% small BEV, and 8% PHEV. By 2050, nearly half (46%) of new cars will be PHEV, 18% HEV and 9% small BEV. Fig. 2 also illustrates the limited role of high-blend bio-fuels (mainly 100% second generation biodiesel) in the TCM LS model run, reflecting low consumer preference and limited market deployment due to sustainability and availability concerns.
Two further lifestyle changes were simulated for cars. First, car buyers - whether private, fleet or business - are assumed to choose smaller cars instead of larger ones. This is simulated in UKTCM by phasing out the sale of new large cars (engine size above 2.0 l) by 2020, starting in 2010 with linear interpolation between 2010 and 2020. Secondly, the tendency towards less overall car use and the increased membership of car clubs for use of a variety of types of cars for longer distance journeys is modelled endogenously in UKTCM by assuming significantly lower levels of maximum car ownership per household in urban and non-urban areas, about half of the reference value (TCM REF) for households owning 'at least 2 cars' and 'at least 3 cars'. The TCM REF levels are based on assumptions contained in the car ownership module of the UK government's National Transport Model (for details see ITS Leeds, 2001; Whelan, 2007), which imply continued growth due to changes in income, household structure and licence holding. By lowering the maximum levels for second, third or more cars per household we effectively limit overall ownership levels for households with multiple cars.
The changes in overall traffic levels, modal shares and the increased demand for lower carbon vehicles modelled in UKTCM are further illustrated in Fig. 3, which shows how car traffic by conventional ICV is gradually replaced by HEV, BEV and PHEV technology. While in 2007 less than 0.1% of car traffic is by cars other than conventional ICV, the TCM LS model run projects that by 2020 24% of car traffic will be by ultra-efficient HEV, 7% small BEV, and 2% PHEV. By 2030, 27% of car traffic will be HEV, 22% PHEV and 13% small BEV. In the long term (2050), nearly half (45%) of car traffic will be PHEV, 20% HEV and 9% small BEV. Cars running on bio-fuels do not account for more than 2% of total car traffic over the period considered. As for other vehicle types, the increase in motorcycle traffic is mainly for electric motorcycles. Penetrating the mass market from around 2012, HEV account for 18% of truck vehicle-km by 2020, increasing to 28% by 2040. PHEV trucks penetrate the market later and reach a plateau of about 22% by roughly 2035. BEV trucks never really take off, with BEV vans penetrating niches (about 3%) in the urban delivery market.
How does all this disaggregate modelling translate into MED inputs? The outputs of the two UKTCM model runs (TCM REF and TCM LS) were aggregated (e.g. over car sizes), integrated with the assumptions on car downsizing (see above) and translated into MED inputs for the Lifestyle variants (LS REF and LS LC) as energy service demands (in billion vehicle-km) and average specific energy use figures (PJ/billion vehicle-km) for the MED vehicle types (NB: there is only one car size in MED). The inputs were essentially technology deployment bounds for minimum uptake and average vehicle fuel efficiencies (in combination with on-road fuel efficiency improvements). So as to avoid double counting, the transport demand elasticities in the Lifestyle MED runs were set to zero (while agriculture, service and industry demand elasticities were untouched) (see Table 1). In addition, the general shift in consumer preference was modelled in MED by assuming lower 'hurdle rates' (discount rates for capital expenditure) for energy-efficient and lower-carbon vehicles such as PHEV cars (12.5% instead of 15%, from 2020) and BEV motorcycles (15% instead of 25%, from 2015) when compared to the core REF scenario.
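To illustrate why a lower hurdle rate favours these vehicles in the optimisation, the sketch below applies the standard annuity formula to a purely hypothetical capital cost and lifetime; only the 12.5% and 15% rates come from the text.

```python
# Sketch of how a lower 'hurdle rate' (discount rate on capital expenditure) lowers
# the annuitized cost of a vehicle in an energy-system model. The 15-year lifetime
# and the capital cost are hypothetical; the 12.5% vs 15% rates are from the text.

def annuitized_cost(capex: float, rate: float, lifetime_years: int) -> float:
    """Standard annuity formula: capex * r / (1 - (1 + r)**-n)."""
    return capex * rate / (1.0 - (1.0 + rate) ** -lifetime_years)

capex = 25_000.0                               # hypothetical PHEV purchase price
print(annuitized_cost(capex, 0.150, 15))       # core REF hurdle rate
print(annuitized_cost(capex, 0.125, 15))       # Lifestyle variant hurdle rate (cheaper per year)
```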
The final MED outputs show the significant variance in the traffic levels, modal shares and technology mixes between the two core Energy 2050 scenarios and the two Lifestyle variants. Fig. 4 shows the car vehicle types in each of these 4 MED runs in 2020 and 2050. In the Lifestyle reference variants (LS REF & LS LC), by 2020 market shares (in terms of vehicle km, not energy use) for HEV and BEV cars reach 21% and 9%, respectively, compared to zero penetration in the core REF and LC scenarios. From 2020 gasoline PHEV cars become more popular in all but the core REF scenario, reaching market shares in 2050 of around 50%. In total, in the Lifestyle variants, HEV, BEV and PHEV cars have a 77-81% market share in 2050 albeit of a significantly smaller market overall (car use is 74% less than in the REF case). While diesel PHEV, hydrogen fuel-cell vehicles (FCV) and bio-methanol FCV cars do not appear in any of the scenarios, the core carbon constraint scenario (LC) sees a massive uptake of a high-level blend of bio-ethanol and petrol (E85) cars (33% of total car traffic in 2050), while the Lifestyle carbon constrained variant (LS LC) sees none. This suggests that at reduced demand levels in the Lifestyle variant, the shift to electric cars (BEV, PHEV) combined with a decarbonised electricity system is sufficiently costeffective to avoid deployment of more costly bio-fuel technology.
The MED results suggest that in the unconstrained Lifestyle variant (LS REF) neither BEV nor PHEV road vehicles were taken up earlier or at higher levels than the rates laid down by UKTCM outputs. In contrast, HEV buses, trucks and vans were taken up at much higher levels (100% in some cases) when compared to the UKTCM outputs. This can be explained by lower annuitized cost for HEV than for their ICV counterparts and the use of only lower (no upper) limits for technology take-up as prescribed by the UKTCM outputs.
In all four scenarios modelled in MED, nearly all of van and HGV traffic in 2020 will be by ultra-efficient diesel/biodiesel HEV. This essentially means a complete hybridisation of the existing ICV road freight fleet over the next 10 years. However, in the low carbon Lifestyle variant (LS LC), the market is more mixed as the carbon constraint results in higher take-up rates for biodiesel PHEV. BEV vans only appear in the Lifestyle variants as they penetrate niche markets in urban areas (7-8% of total road freight).
Impact on energy use and fuel demand in the transport sector
The higher uptake of lower and zero carbon vehicles, combined with efficiency gains, downsizing of cars, mode shifts and significant alterations to work, shopping and leisure travel patterns, results in final energy demand from this sector being halved in the unconstrained Lifestyle variant (LS REF) by 2050 compared to the unconstrained reference case (REF) (Fig. 5). However, in all scenarios, conventional fuel still dominates use in 2020, never falling below 89% of total demand. By comparison, electricity demand grows steeply, particularly in the second half of the period, accounting for 18% of total fuel demand in the unconstrained Lifestyle variants by 2050 (Fig. 5). This demand is 67% higher than in the unconstrained reference case (REF), where HEVs and BEVs have zero market share even by 2050, although there is some increase in electricity use later in the period from rail, some battery operated buses and plug-in electric vans. In the constrained reference case (LC), however, the uptake of gasoline PHEVs is very high, although BEV uptake remains zero. Altogether, taking cars (PHEV), vans (PHEV and BEV) and buses (BEV), a third of road transport energy demand is met by plug-in electric vehicle technology in 2050. Use of electrified rail also increases by over 200% over present use by 2050. Bio-fuels only play a major role in the carbon constrained cases (LC & LS LC). This is a result of the availability of unconstrained blending of second generation biodiesel and the assumption within MED that bio-fuels have zero net carbon emissions (an assumption favoured by a scenario modelling an 80% cap on emissions), while in the reference cases (REF and LS REF) demands decrease in line with petrol and diesel demands. A high-level blend of bio-ethanol and petrol (E85) used in flex-fuel cars only appears from about 2035 in the core constrained case (LC), accounting for 26% of total transport fuel demand in 2050, 12 times more than in the reference case (REF). In the related Lifestyle variant (LS LC), lower demand and a greater preference for efficient vehicles mean that biodiesel hybrids are preferred (see also Fig. 4).
Hydrogen also only plays a major role in the constrained cases (LC & LS LC) in the long term (2050), which sees three-quarters of the truck fleet switching from diesel/biodiesel ICE to hydrogen fuel cell powertrains. There is a minor role for hydrogen fuel cell trains from 2030 in the unconstrained cases, where hydrogen powers a third of rail energy demand by 2050.
Implications for carbon emissions and the wider energy system
Overall, the unconstrained Lifestyle variant (LS REF) resulted in a 26% and 58% reduction in transport CO 2 emissions at source (or direct, tailpipe) by 2020 and 2050 compared to the core reference scenario (REF) levels (Fig. 6). Importantly, the reduction in these emissions happens early on and mostly before 2030, after which transport CO 2 emissions stabilise. CO 2 emissions at source notably exclude upstream emissions from power generation, which for the whole economy are also 7% and 17% lower by 2020 and 2050 than baseline (REF) levels. This suggests that the higher uptake, use and associated electricity demand of 'plugged-in' vehicles is more than offset by the decreasing demand for (car) travel overall.
As for the carbon constrained scenarios, the results shown in Fig. 6 suggest that in the core constrained reference case (LC) the transport sector only starts to pull its weight from around 2030, mainly as a result of the widespread use of second generation biofuels. The gradually tightening decarbonisation targets prior to 2030 are met by other sectors. In contrast, the constrained Lifestyle case (LS LC) follows the same carbon emissions trajectory as the unconstrained Lifestyle case (LS REF) up to about 2040, after which transport CO 2 emissions fall sharply to levels that are even 37% lower than in the core carbon constrained (LC) scenario.
Across the whole economy in 2050, carbon emissions are 30% lower in the unconstrained Lifestyle case (LS REF) compared to REF. As can also be seen from the late divergence of the LS REF and LS LC lines in Fig. 6, this in turn makes the achievement of radical carbon reductions such as the 80% target easier, with fewer changes required to the transport or energy systems. Indeed, total energy demand in the two Lifestyle variants is comparable (between 3800 and 4400 PJ); it is mainly the virtual elimination of diesel in favour of biodiesel that takes place in order to meet the carbon constraint, although the use of bio-fuels is still only half that taken up in the core constrained (LC) scenario. In addition, the Lifestyle constrained case (LS LC) requires around 25% less electricity than the LC scenario, thus requiring a lower rate of growth in the construction of large scale centralised zero carbon electricity technologies such as carbon capture and storage, nuclear and wind capacity.
This has important implications for climate mitigation policy. A scenario that involves voluntary lifestyle change will place much less pressure on policy to require rapid (and potentially disruptive) technical change, including technologies at the point of energy use. The assumption that encouraging lifestyle change presents more problematic issues for policy makers than a 'top down' technical solution is therefore challenged by these findings.
The most significant impact of lifestyle change on the wider energy system, compared to the core scenarios, is due to reductions in the overall demand for final energy, particularly for oil derived fuels in transport. When changes in both the residential and the transport sectors are added, total final energy demand is 15% lower than the REF scenario by 2020 and 30% by 2050, with beneficial effects for energy system costs, carbon emissions and energy import requirements. Lifestyle change alone (without a carbon constraint) has an effect on total final energy demand akin to an 80% carbon constraint with no lifestyle change. The effects are most strong for the fuels where import dependence is most likely. In the unconstrained Lifestyle case (LS REF), by 2050, gas use is 34% lower and oil use 54% lower than in REF.
The implications for energy security are therefore very substantial. This compares interestingly to findings reported elsewhere that explicit concerns about energy security would lead to greater attention to reducing demand (Skea and Ekins, 2009), i.e. the same correlation but with opposite causality. The implications of concerns about a combination of climate change and energy security merit further research.
Summary of results
Modelling of radical changes in lifestyle led to a 74% reduction in distance travelled by car by 2050. The use of all other surface transport modes increases, apart from a 12% fall in distance travelled by trucks. The reduction in car travel comes about as a result of significant mode shifts, particularly to bus travel towards the latter half of the period (184% increase in vehicle kilometres) and cycling and walking. The take-up of cycling as a mode of transport reaches the same level in terms of mode split by 2050 as is the norm in the Netherlands today (40% of all trips). However, mode shift is combined with destination shifting as trips are either totally abstracted from the system through virtual travel or shorter as a result of localisation.
UK road vehicles are getting 'plugged-in' as PHEV cars reach nearly 50% market share of total car fleet by 2050 and 26% of road transport energy demand is met by PHEV by 2050. An unconstrained Lifestyle case (LS REF) implies that 10% of the UK car park will be able to connect to the grid by 2020 and 36% by 2030. There is no change compared to the REF scenario in the short term, as the numbers remain constrained by the lack of vehicle and infrastructure availability. For road freight, all of the scenarios imply that nearly half of the UK van and HGV fleets will be able to connect to the grid by 2030. To achieve the level of production and sales demanded by the scenarios, market conditions and necessary infrastructure to support the rollout of grid-connected vehicles, particularly PHEV, beyond urban areas will need to be in place. The period after 2020 will need to see an increase in the range of vehicles available to consumers and freight operators in order to sustain the growth momentum. In addition, car owners downsize and drivers respond to the on-road fuel efficiency programme and speed limit enforcement as the car fleet alone uses 5-6% (2020) and 11-12% (2050) less energy per km driven.
Overall, the LS REF scenario results in a 26% and 58% reduction in transport CO 2 emissions by 2020 and 2050 from the levels in the unconstrained core reference scenario (REF). The key outputs are summarised in Table 3.
Discussion and conclusions
This paper has investigated the role of pro-environmental lifestyle change for the UK energy system to 2050 by concentrating on changes in transport activity. It starts with the premise that society and human behaviour change over time, sometimes in unpredictable directions, and therefore there is a wide variety of possible future levels of energy service demand and end use technology choice. We also assume that energy using behaviours are the result of the interaction between personal decisions and the social and economic context including the available technologies, physical infrastructures and public policy. Our analysis is therefore socio-technical.
Our analysis contrasts the techno-economic driven approach to carbon emissions reduction with the results of a novel and integrated modelling approach which characterised patterns of travel behaviour consistent with a more sustainable, low energy service demand society. This necessarily involves 'what if' scenario planning, which is not intended to allow the emergence of a single vision for the future but rather to challenge policy makers to consider how to formulate policies that can be robust in the face of such future uncertainty and thus positively contribute to society's evolution. In particular, this analysis implies that the role of policy is not restricted to influencing pricing and technological change but also has a role in shaping lifestyles and energy-using behaviours.
We have used an innovative methodology to combine the strengths of detailed bottom-up modelling and a sectoral modelling approach with an optimisation model of the whole UK energy system. By using a structured 'storyline' approach and breaking down current travel choices into their constituent journey purposes, lengths and modes, we reflected the potential impact that long term structural changes in society and concurrent changes in individual priorities and preferences might have on the volume and composition of travel activity. This incorporated non-price determinants of behaviour (values, norms, fashion; trust; knowledge) and non-consumptive factors (time use; mobility; social networking; policy acceptance). We have assumed changes to behaviour that we judge reasonable in an advanced economy, based on observation of energy-using activities across the developed world today. And we have assumed rates of change that seem feasible taking into account the need for both technologies and energy-using practices to diffuse and the external constraints to this, e.g. the need to change existing infrastructure.
Our results revealed a different future in which final energy demand in the transport sector would be halved by 2050 compared to the reference case. This implies rates of change (energy demand decreases) of just below 2% annually. Moreover, despite the more rapid uptake of plug-in and battery electric vehicles and the larger share of electricity in final energy demand, the Lifestyle variants of our core scenarios showed a future in which the need for massive electrification to meet carbon targets would be significantly reduced. Thus, under a scenario where energy demand is reduced, electricity sector decarbonisation could be delayed. Impacts on carbon emissions from transport sector at source are similar to those on final energy, i.e. a 58% reduction without a carbon constraint, and with more early progress. We conclude that lifestyle change can make a significant contribution to delivering UK carbon emission goals, and assist early action, but that alone it is insufficient to deliver an 80% reduction goal, as this requires a wider transformation of the energy system. The emphasis on patterns of activity and changes to individual preferences and societal norms outlines the key trade-off to be made between the more aggressive pursuit of energy efficiency and demand reduction (Skea and Ekins, 2009). Given the many uncertainties and risks involved in decarbonising our energy supply, there are strong arguments for pursuing both demand and supply side solutions in order to make the path to an 80% reduction more sustainable and potentially more certain.
Yet current market and regulatory arrangements sit uneasily with the requirement to tackle behaviour change, let alone the transformational change required. Even though the role of behaviour change in carbon emissions reduction is already established in policy analysis, the dominance of techno-economic analysis leads to a favouring of carbon pricing and technical solutions. Analyses based in these disciplines therefore sometimes give the impression (and in some cases even assume explicitly) that 'policy cannot change behaviour'. However, there is no substance, theoretically or empirically, for such an assumption.
The policy agenda for lifestyle change is less well developed than the equivalents for pricing and technological change. But the broad principles of what works are increasingly well-understood. For instance, studies of individual travel behaviour demonstrate that behaviour is constantly changing in both sustainable and unsustainable directions (Goodwin, 1999). These changes are not equal and result in a net change in aggregate travel behaviour which, so far, has led to unsustainable patterns. It is important therefore to recognise the existence of churns in travel behaviour and to attempt to develop appropriate policies to target different groups of travellers with the relevant transport policies in order to improve the transport system. Other evidence points to the potential malleability of travel behaviour. For instance, Cairns et al. (1998) examined over 70 case studies in 11 different studies where road space had been reallocated due to sudden shocks (e.g. earthquakes) or planned (e.g. pedestrianisation schemes) and found across all case studies, the average traffic reduction in the total local network soon after the change was 22%, with a median of 11%. Similarly, we know that the traffic reduction after the London congestion charge was in the order of 15% immediately after its introduction (TfL, 2007), car traffic reduced by 39% on motorways overnight after the 'fuel protests' in the UK in 2000 (Hathaway, 2004), and cycling increased dramatically after the terrorist attacks in London in 2005 (although the trend was already upward).
However, despite mounting evidence of the key role that behaviour change can play in decarbonising the transport sector (see Anable, 2005; Hickman and Banister, 2007; Cairns et al., 2008; Koehler, 2009), UK policy gives far less attention to demand-side measures to reduce total kilometres travelled or shift to less carbon intensive modes of transport. But, if we cannot define sustainable lifestyles and incorporate non-price driven behavioural motivations into our analytical frameworks, it will continue to be hard to assess the effectiveness of policy measures taken to move towards them, and the reluctance to adopt them will continue. This paper goes some way to fulfilling this role. It is, however, acknowledged that a scenario that relies on such fundamental shifts in activity patterns, preferences and price signals will have far reaching implications for many other aspects of society and the economy, such as wider consumption practices (e.g. leisure, food consumption and work practices), preferences for business and residential location and knock-on effects on land values. Therefore, in addition to understanding the behavioural and public policy processes to bring about lifestyle shifts, there is much more that needs to be done to understand the system-wide energy implications of fundamental socio-technical transitions in transport as well as other sectors.
"Economics",
"Engineering",
"Environmental Science",
"Sociology"
] |
A new method based on Taylor expansion and nearest-node strategy to impose Dirichlet and Neumann boundary conditions in ordinary state-based Peridynamics
Peridynamics is a non-local continuum theory which is able to model discontinuities in the displacement field, such as crack initiation and propagation in solid bodies. However, the non-local nature of the theory generates an undesired stiffness fluctuation near the boundary of the bodies, phenomenon known as “surface effect”. Moreover, a standard method to impose the boundary conditions in a non-local model is not currently available. We analyze the entity of the surface effect in ordinary state-based peridynamics by employing an innovative numerical algorithm to compute the peridynamic stress tensor. In order to mitigate the surface effect and impose Dirichlet and Neumann boundary conditions in a peridynamic way, we introduce a layer of fictitious nodes around the body, the displacements of which are determined by multiple Taylor series expansions based on the nearest-node strategy. Several numerical examples are presented to demonstrate the effectiveness and accuracy of the proposed method.
Introduction
The propagation of cracks in solids and structures is one of the most common problems in structural engineering. In recent years, a new non-local continuum theory able to simulate crack propagation, named peridynamic theory, has attracted the attention of many researchers. Each point in a body modelled with peridynamics interacts with all the neighboring points within a distance δ. The non-locality of the peridynamic theory is essential to describe fracture phenomena in solid bodies without ad hoc criteria. Firstly, the so-called "bond-based peridynamics" was developed [40], which however has a limited capability of prescribing the Poisson's ratio. This shortcoming is avoided by the second formulation of the theory, named "state-based peridynamics" [42].
The non-local nature of the theory leads to two interrelated problems near the boundary of the body: the "surface effect" and the difficulty to impose the boundary conditions [9]. The surface effect, sometimes also called "skin effect", is due to the fact that peridynamic points near the boundary lack some neighboring points, leading to an undesired variation of the stiffness properties in the most external layer of the body [16,19]. Bond-based and state-based peridynamic models exhibit respectively a softening and a hardening-softening behavior near the boundary [2,20,35].
Imposing boundary conditions in a peridynamic model is not a trivial task to accomplish. The application of the boundary conditions to the points on the boundary, as one would do in a local model, leads to large fluctuations of the solution near the boundary [16]. In [21] it is suggested that external loads and constraints should be imposed on a layer of finite thickness respectively inside and outside the body. This strategy is surely closer to a non-local concept, but the proper procedure to "distribute" the boundary conditions over the finite layers is not really clear. In the following, we present some of the most commonly used methods to mitigate the surface effect and impose the boundary conditions in a peridynamic model. A possible approach is to couple peridynamics with classical continuum mechanics: peridynamics is employed only in the interior of the body and the layer of material near the boundary is modelled, for instance, with the Finite Element Method [11,25,38,45,47,48], with the Carrera Unified Formulation [30], with the Extended Finite Element Method [13,14] or with the Meshless Local Exponential Basis Functions method [39]. In this way, the surface effect and the problem of the imposition of the boundary conditions in peridynamics are avoided. However, if cracks initiate or propagate near the boundary, those regions must inevitably be modelled with peridynamics and the coupling approach is not suitable to avoid the boundary issues. Furthermore, there are some spurious effects at the interface of the coupling region due to the different formulations of peridynamics and classical continuum mechanics (see the computation of out-of-balance forces in [26]).
The maximum distance of interaction, namely δ, is a measure of the non-locality of the theory. Therefore, the external layer of the body which is affected by the surface effect becomes thinner as δ approaches 0. Similarly, the imposition of the boundary conditions in a local way (constraints and loads applied only to the points closest to the boundary) becomes a better approximation as δ tends to 0. Since the number of nodes is bound to increase as δ decreases, the computational effort may become excessive. In this case, the variable horizon method can be employed to decrease the value of δ near the boundaries [2,3,6,31-33,44]. However, this approach of reducing the non-local nature of the peridynamic theory is solely capable of confining the solution fluctuation to a smaller region, never of completely correcting it.
The approach of modifying the stiffness properties of the bonds near the boundary has been proposed in many methods: the force normalization method [20], the force density method [15], the energy method [21,27], the volume method [1] and the position-aware linear solid constitutive model [23]. The comparison of these methods, carried out in [16], highlights that there are still some residual fluctuations of the solution near the boundary because they do not cope with the problem of the imposition of the boundary conditions in a non-local way. Another recently devised approach consists in modifying the peridynamic formulation in points which are affected by the surface effect in order to recover the classical mechanics solution for δ → 0 [4,46]. Nevertheless, the treatment of the boundaries becomes much more complex.
The method of the "fictitious nodes" consists in adding around the body some nodes which provide the previously lacking interactions near the boundary, mitigating in this way the surface effect [12,15,34]. The fictitious nodes have been employed also to impose the boundary conditions: the displacements of the fictitious nodes are extrapolated by means of various types of functions, such as constant, linear, polynomial, sinusoidal or odd functions, in order to obtain the desired value of the constraint or load [6][7][8]16,21,22,28,29,31,49]. Moreover, the displacements of the fictitious nodes can be determined also by means of the formulae of classical continuum mechanics to enforce the desired load at the boundary [16,22,28,31,49]. However, these procedures to impose the boundary conditions are casedependent and are applicable only for simple geometries and boundary conditions.
We proposed a new version of the "Taylor-based extrapolation method" adopting the nearest-node strategy [35]: the displacements of the fictitious nodes are determined as functions of the displacements of their closest real nodes by means of multiple Taylor series expansions truncated at a general order n_max. The surface effect is considerably reduced by this effective method. Moreover, the boundary of the body is discretized by a new type of nodes, named "boundary nodes". Like the fictitious nodes, the boundary nodes do not constitute new degrees of freedom in the model because their displacements are determined by means of the Taylor-based extrapolation method. Dirichlet boundary conditions are included in the Taylor series expansion of the displacements of the boundary nodes about their closest real nodes, whereas Neumann boundary conditions are imposed through the peridynamic concept of force flux.
The paper is organized as follows: Sect. 2 presents a brief review of the ordinary state-based peridynamic theory, particularly focusing on the peridynamic stress tensor, the force flux, the surface effect and the imposition of boundary conditions; Sect. 3 illustrates the Taylor-based extrapolation method and the imposition of boundary conditions in a peridynamic model; Sect. 4 shows the discretization of the peridynamic model, the numerical evaluation of the peridynamic stress tensor and of the force flux, and the numerical implementation of the proposed method; Sect. 5 compares the numerical results of several meaningful 2-dimensional examples obtained without any corrections at the boundary and by using the proposed method; Sect. 6 shows the differences that may arise in crack propagation near the boundaries between corrected and uncorrected models; Sect. 7 draws the conclusions.
Review of peridynamic theory
Peridynamic points interact with each other, even across finite distances, through entities named "bonds". A bond is identified by the relative position vector in the reference configuration,

$$\boldsymbol{\xi} = \mathbf{x}' - \mathbf{x}$$

where $\mathbf{x}$ and $\mathbf{x}'$ are the position vectors of two points in a body $\mathcal{B}$ modelled with peridynamics. (Fig. 1: body modelled with ordinary state-based peridynamics in the reference configuration $\mathcal{B}_r$ and deformed configuration $\mathcal{B}_d$; a pairwise force density $\mathbf{f}$ arises in the bond due to the deformation of the body.) The bond vanishes if the distance between the interacting points exceeds the value δ, called "horizon". A point $\mathbf{x}$ therefore interacts with all the points $\mathbf{x}'$ inside its neighborhood, which is defined as

$$\mathcal{H}_{\mathbf{x}} = \{\mathbf{x}' \in \mathcal{B}_r : \|\mathbf{x}' - \mathbf{x}\| \le \delta\}$$

where $\mathcal{B}_r$ is the body in the reference configuration. Point $\mathbf{x}$ is named "source point" and the points within $\mathcal{H}_{\mathbf{x}}$ are named "family points". In the deformed body configuration $\mathcal{B}_d$ at time $t$, the relative displacement vector is defined as

$$\boldsymbol{\eta} = \mathbf{u}(\mathbf{x}', t) - \mathbf{u}(\mathbf{x}, t)$$

where $\mathbf{u}$ is the displacement field. Note that $\boldsymbol{\xi} + \boldsymbol{\eta}$ is the relative position of points $\mathbf{x}$ and $\mathbf{x}'$ in the deformed configuration. The peridynamic equation of motion of a point $\mathbf{x}$ within the body $\mathcal{B}$ is given by [40,42]:

$$\rho(\mathbf{x})\, \ddot{\mathbf{u}}(\mathbf{x}, t) = \int_{\mathcal{H}_{\mathbf{x}}} \mathbf{f}(\mathbf{x}, \mathbf{x}', t)\, \mathrm{d}V_{\mathbf{x}'} + \mathbf{b}(\mathbf{x}, t) \qquad (4)$$

where ρ is the material density, $\ddot{\mathbf{u}}$ is the acceleration field, $\mathbf{f}$ is the pairwise force density, $\mathrm{d}V_{\mathbf{x}'}$ is the differential volume of a point $\mathbf{x}'$ within the neighborhood $\mathcal{H}_{\mathbf{x}}$ and $\mathbf{b}$ is the external force density field. The pairwise force density represents the force (per unit volume squared) in a bond. The peridynamic equilibrium equation is derived from Eq. 4 by dropping the dependence on time:

$$\int_{\mathcal{H}_{\mathbf{x}}} \mathbf{f}(\mathbf{x}, \mathbf{x}')\, \mathrm{d}V_{\mathbf{x}'} + \mathbf{b}_{\mathbf{x}} = \mathbf{0} \qquad (5)$$

where $\mathbf{b}_{\mathbf{x}} = \mathbf{b}(\mathbf{x})$. $\mathbf{f}(\mathbf{x}, \mathbf{x}')$ is the force density applied to point $\mathbf{x}$ due to the interaction with a point $\mathbf{x}'$ inside its neighborhood. Conversely, point $\mathbf{x}$ belongs to the neighborhood $\mathcal{H}_{\mathbf{x}'}$, thus a force density $\mathbf{f}(\mathbf{x}', \mathbf{x}) = -\mathbf{f}(\mathbf{x}, \mathbf{x}')$ is applied to point $\mathbf{x}'$ (see Fig. 1). The formulae to compute the pairwise force density as a function of the deformation of the body are shown in the following section.
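As a minimal illustration of how the integral in the equilibrium equation is handled numerically in a mesh-free discretization (anticipating Sect. 4), the sketch below sums pairwise force densities over a node's family; the constitutive routine is left as a placeholder and is not the ordinary state-based law derived in the next section.

```python
import numpy as np

# Sketch of evaluating the internal force term of the peridynamic equilibrium equation
# at one node by mesh-free quadrature: the integral over the neighbourhood is replaced
# by a sum of pairwise force densities times nodal volumes.
# 'pairwise_force' is a placeholder for any constitutive routine returning f(x_i, x_j).

def internal_force(i, coords, volumes, family, pairwise_force):
    """Sum_j f(x_i, x_j) * V_j over the family (neighbourhood) of node i."""
    L = np.zeros(coords.shape[1])
    for j in family[i]:
        L += pairwise_force(coords[i], coords[j]) * volumes[j]
    return L
```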
Ordinary state-based peridynamics
In state-based peridynamics, the pairwise force density is defined as [42]

$$\mathbf{f}(\mathbf{x}, \mathbf{x}') = \mathbf{T}[\mathbf{x}]\langle \boldsymbol{\xi} \rangle - \mathbf{T}[\mathbf{x}']\langle -\boldsymbol{\xi} \rangle$$

where $\mathbf{T}$ is the force density vector state. $\mathbf{T}[\mathbf{x}]$ and $\mathbf{T}[\mathbf{x}']$ depend respectively on points $\mathbf{x}$ and $\mathbf{x}'$, and they respectively operate on bonds $\boldsymbol{\xi}$ and $-\boldsymbol{\xi}$.
In an ordinary peridynamic material, the force density vector state is aligned with the corresponding bond for any deformation, as depicted in Fig. 1, and it can be written as

$$\mathbf{T} = t\, \mathbf{M}$$

where $t$ is the force density scalar state (magnitude of $\mathbf{T}$) and $\mathbf{M}$ is the deformed direction vector state (unit vector in the direction of $\mathbf{T}$), defined as

$$\mathbf{M}\langle \boldsymbol{\xi} \rangle = \frac{\boldsymbol{\xi} + \boldsymbol{\eta}}{\|\boldsymbol{\xi} + \boldsymbol{\eta}\|}$$

Note that $\mathbf{M}\langle \boldsymbol{\xi} \rangle = -\mathbf{M}\langle -\boldsymbol{\xi} \rangle$. Furthermore, under the assumption of small deformation ($\|\boldsymbol{\eta}\| \ll \|\boldsymbol{\xi}\|$), the deformed direction vector state can be approximated with the bond direction unit vector in the reference configuration:

$$\mathbf{M}\langle \boldsymbol{\xi} \rangle \approx \frac{\boldsymbol{\xi}}{\|\boldsymbol{\xi}\|}$$

Therefore, the pairwise force density can be rewritten as

$$\mathbf{f}(\mathbf{x}, \mathbf{x}') = \big( t[\mathbf{x}]\langle \boldsymbol{\xi} \rangle + t[\mathbf{x}']\langle -\boldsymbol{\xi} \rangle \big)\, \frac{\boldsymbol{\xi}}{\|\boldsymbol{\xi}\|} \qquad (10)$$

The reference position scalar state $x$, representing the bond length in the reference configuration, and the extension scalar state $e$, describing the elongation (or contraction) of the bond in the deformed body configuration, are respectively defined as

$$x = \|\boldsymbol{\xi}\| \qquad (11)$$

$$e = \|\boldsymbol{\xi} + \boldsymbol{\eta}\| - x \qquad (12)$$

The influence of the neighborhood $\mathcal{H}_{\mathbf{x}}$ on a source point $\mathbf{x}$ is expressed by two non-local properties of that point, the weighted volume $m$ and the dilatation θ, which are defined as

$$m = \int_{\mathcal{H}_{\mathbf{x}}} \omega\, x^{2}\, \mathrm{d}V_{\mathbf{x}'} \qquad (13)$$

$$\theta = \frac{c_{\theta}}{m} \int_{\mathcal{H}_{\mathbf{x}}} \omega\, x\, e\, \mathrm{d}V_{\mathbf{x}'} \qquad (14)$$

where ω is a prescribed spherical influence function and $c_{\theta}$ is a peridynamic constant. We adopt the Gaussian influence function since it assures a smooth convergence of the numerical integration [36]. The weighted volume describes the "fullness" of the neighborhood: a neighborhood completely full of peridynamic points results in the maximum value of $m$, whereas the weighted volume of an incomplete neighborhood has a lower value. This lack of neighboring points is the origin of the stiffness fluctuations, the so-called "surface effect" [16], near the boundary of the body. The surface effect is further analyzed in Sect. 2.4.
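A short numerical sketch of these two neighborhood properties follows; the Gaussian influence function is the one named in the text, while the discretized family, nodal volumes and the value of the constant c_theta are illustrative assumptions.

```python
import numpy as np

# Sketch of computing the weighted volume m and the dilatation theta of a node in a
# discretized body, using a Gaussian influence function. The exact form of omega and
# the constant c_theta are illustrative assumptions consistent with the text.

def influence(xi_len, delta):
    """Gaussian influence function omega(|xi|)."""
    return np.exp(-(xi_len / delta) ** 2)

def weighted_volume(i, coords, volumes, family, delta):
    m = 0.0
    for j in family[i]:
        x = np.linalg.norm(coords[j] - coords[i])      # reference bond length
        m += influence(x, delta) * x**2 * volumes[j]
    return m

def dilatation(i, coords, disp, volumes, family, delta, m_i, c_theta):
    th = 0.0
    for j in family[i]:
        xi  = coords[j] - coords[i]
        eta = disp[j] - disp[i]
        x   = np.linalg.norm(xi)
        e   = np.linalg.norm(xi + eta) - x             # extension scalar state
        th += influence(x, delta) * x * e * volumes[j]
    return c_theta * th / m_i
```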
On the other hand, the dilatation represents the volumetric deformation of the neighborhood. Consider a point $\mathbf{x}$ subjected to a homogeneous, isotropic and small deformation ε, so that $e = \varepsilon\, x$ for each bond. The peridynamic dilatation θ of point $\mathbf{x}$ corresponds to the dilatation $\theta_{cl}$ in classical continuum mechanics under the same deformation if the constant $c_{\theta}$ is chosen appropriately, as shown in [17,35,42]; the resulting expression for $c_{\theta}$ depends on the Poisson's ratio ν.
The force density scalar state $t$ can be computed as in [35] (Eq. 17), where $k_{\theta}$ and $k_{e}$ are the peridynamic stiffness constants. These constants are derived by equating the peridynamic strain energy density in a point $\mathbf{x}$ with a complete neighborhood under homogeneous deformation with the classical continuum mechanics strain energy density in a point subjected to the same deformation [17,35,42] (Eqs. 18 and 19), where $E$ is the Young's modulus. By substituting Eq. 17 in Eq. 10, the pairwise force density is obtained (Eq. 20). Note that the magnitude of the pairwise force density in ordinary state-based peridynamics depends on the neighborhood properties ($m$ and θ) of the two points $\mathbf{x}$ and $\mathbf{x}'$ connected by the bond. Hence, the resultant of all the bond forces in a point $\mathbf{x}$, obtained with the integral of the peridynamic equilibrium equation (Eq. 5), depends on the deformation of the points within a 2δ-distance from $\mathbf{x}$.
Peridynamic stress tensor
The peridynamic stress tensor, introduced in [18], is defined in a point x with a complete neighborhood as where 훺 is a unit sphere centered in x and d훺 m is the differential solid angle on 훺 in any bond direction m. The points x − 푠m and x + (푟 − 푠)m are connected by a bond passing through point x, and we respectively name them x ′ and x ′′ . Therefore, 푠 = x ′ − x and 푟 = x ′′ − x ′ , as shown in Fig. 2. 푠 is the distance between points x and x ′ , whereas 푟 is the length of the bond between x ′ and x ′′ . The definition of the integration domain allows to take into account all the bonds passing through point x. Note that each bond passing through x (between x ′ and x ′′ ) has a corresponding bond in the opposite direction (between x ′′ and x ′ ), so that the same pairwise force density is integrated twice in Eq. 21. This is the reason why the factor 1/2 appears at the beginning of the formula. The integral over the unit sphere 훺 is not affected by the variables 푠 and 푟, but it depends only on the bond direction m. On the other hand, the integrals related to d푠 and d푟 are interdependent, as shown by the integration domain depicted in Fig. 3. For later use, the peridynamic stress tensor in a point x with a complete neighborhood is rewritten by changing the order of the integrals: where d푉 x ′′ = 푟 2 d푟 d훺 m . Under the assumption of homogeneous deformation, the bonds with the same length and direction have the same pairwise force density in any position of the body. This means that, for each bond of length 푟 and direction m, its pairwise force density does not depend on 푠 anymore. Therefore, x can be simplified from Eq. 22 as follows: Note that, since the value of 푠 does not affect f (x ′ , x ′′ ) in a body under homogeneous deformation, we conveniently choose the pairwise force density for 푠 = 0, i.e., f (x, x ′′ ). We want to compare the peridynamic stress tensor with the stress tensor in classical continuum mechanics for the same deformation conditions. For simplicity sake, we choose the where 휀 푖푠표 and 휀 푠ℎ are the values of the imposed deformations and 휎 푖푠표 and 휎 푠ℎ are the corresponding stresses.
In the following analysis of the peridynamic stress tensor, only points with complete neighborhoods are considered. The inclination of a bond with respect to the 푥-axis is called 휙. Therefore, the bond direction in a 2-dimensional model can be written as m = {cos 휙, sin 휙} ⊤ . Furthermore, the weighted volume of a point x with a complete neighborhood is given from Eq. 13 by where ℎ is the thickness of the 2-dimensional body.
In the case of a body subjected to a small isotropic deformation, any extension scalar state is 푒 푖푠표 = 휀 푖푠표 푥 and the corresponding dilatation in a point x with a complete neighborhood is 휃 푖푠표 x = 푐 휃 휀 푖푠표 . The peridynamic stress tensor under this condition is given from Eq. 23 as The obtained peridynamic stress tensor yields the same result of the stress tensor computed with classical continuum mechanics in a point under isotropic deformation. Note that only a tensile stress 휏 11 = 휏 22 = 휎 푖푠표 arises from the imposed deformation 휀 푖푠표 and there is no shear stress (휏 12 = 0). In the case of a body subjected to a small shear deformation 휀 푠ℎ , the extension scalar state can be computed by substituting = { cos 휙, sin 휙} ⊤ and = {휀 푠ℎ sin 휙, 휀 푠ℎ cos 휙} ⊤ in Eq. 12: where the formula is simplified under the assumption of sufficiently small deformation by dropping the second order terms and employing the Taylor series expansion for the square root. The corresponding dilatation in a point x with a complete neighborhood is 휃 푠ℎ x = 0 given the anti-symmetry of the integrand and the symmetry of the integration domain. The peridynamic stress tensor under this condition is given from Eq. 23 as The obtained peridynamic stress tensor yields the same result of the stress tensor computed with classical continuum mechanics in a point under simple shear deformation. Note that only a shear stress 휏 12 = 휎 푠ℎ arises from the imposed deformation 휀 푠ℎ and there is no tensile stress (휏 11 = 휏 22 = 0). We showed that the peridynamic solution for the stress tensor corresponds to that of the classical continuum mechanics for homogeneous and small deformations, as shown also in [26] for bond-based peridynamic models. However, this statement is not valid near the boundaries of the body due to the surface effect.
Force flux
The force flux (x, n) at point x in the direction of the unit vector n (see Fig. 4) is derived from Eq. 21 as [18]: where x ′ = x − 푠m and x ′′ = x + (푟 − 푠)m. As in the definition of the peridynamic stress tensor, a factor 1/2 is required since the integration domain takes into account the magnitude of the pairwise force density of each bond twice (for both direction m and −m). We briefly recall the mechanical interpretation of the force flux [18]. Consider a plane P with normal n passing through point x, as shown in Fig. 5. Points x ′ and x ′′ respectively lie in the different half-spaces generated by plane P. The differential volumes of those points are d푉 x ′ = 푟 2 d푠 d훺 m and d푉 x ′′ = 푟 2 d푟 d훺 m . The differential area of point x ′ , which is perpendicular to the bond direction m, is the portion of a sphere centered in x ′′ with a radius 푟 which subtends the differential solid angle d훺 m , namely d퐴 x ′ = 푟 2 d훺 m . By the same token, the differential area d퐴 x ′′ on a sphere centered in x ′ with a radius 푟 is equal to d퐴 x ′ . As shown in Fig. 5, the The differential pairwise force acting through the bond between points x ′ and x ′′ is f (x ′ , x ′′ ) d푉 x ′ d푉 x ′′ . Therefore, the differential pairwise force per unit area on plane P is given by Note that the integrand in Eq. 30 is the pairwise force per unit area on plane P. This provides the mechanical interpretation of the force flux as the resultant of the pairwise forces per unit area of all the bonds intersecting P in x.
Surface effect
The non-local formulation of the peridynamic theory exhibits some issues near the boundaries due to the incomplete neighborhoods of points close to free surfaces. The peridynamic constants 푘 휃 and 푘 푒 in Eqs. 18 and 19 are derived for points with a complete neighborhood. Therefore, the points near the boundaries, whose neighborhood is lacking some bonds, have different stiffness properties with respect to the points in the bulk. This phenomenon is called "surface effect" [16].
(Fig. 5: differential variables involved in the computation of the force flux τ(x, n); the differential volumes of points $\mathbf{x}'$ and $\mathbf{x}''$ and the projections of $\mathrm{d}A_{\mathbf{x}'}$ and $\mathrm{d}A_{\mathbf{x}''}$ on the plane P.) In ordinary state-based peridynamics, there are two non-local properties of a point which may contribute to the surface effect: the weighted volume $m$ and the dilatation θ. The latter (Eq. 14) is independent from the neighborhood "fullness" because it is normalized by the value of the weighted volume. Therefore, we focus on the value of $m$. We define the value $d_b$ as the minimum distance of a peridynamic point from any boundary of the body. The weighted volume has its maximum value when $d_b \ge \delta$, and it decreases gradually from points with $d_b = \delta$ towards points with $d_b = 0$ on the boundary. Moreover, points approaching corners, with respect to those approaching edges or surfaces, exhibit a steeper reduction in the weighted volume and a lower minimum value at the boundary.
The equilibrium of a peridynamic point $\mathbf{x}$ (Eq. 5) is determined by the sum of the pairwise forces of all its bonds. Therefore, $\mathbf{x}$ primarily interacts with the points inside its neighborhood $\mathcal{H}_{\mathbf{x}}$. However, the magnitude of the pairwise force density (Eq. 20) depends on the weighted volumes and dilatations of both point $\mathbf{x}$ and point $\mathbf{x}'$ within $\mathcal{H}_{\mathbf{x}}$. This means that $\mathbf{x}$ secondarily interacts with points up to a distance of 2δ from itself. Thus, as shown in Fig. 6, we can discriminate 3 types of points depending on $d_b$: type-I points with $d_b \ge 2\delta$, type-II points with $\delta \le d_b < 2\delta$ and type-III points with $d_b < \delta$. (Fig. 6: types of state-based peridynamic points depending on the distance $d_b$ from the closest boundary; the source point $\mathbf{x}$ interacts primarily with the family points within the neighborhood $\mathcal{H}_{\mathbf{x}}$, dashed line, and secondarily with all the points in the neighborhoods of the family points, dotted line.) Type-I points are said to be in the "bulk" of the body and they are the only ones which are not affected by the surface effect.
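A trivial sketch of this classification, assuming the thresholds stated above, is the following.

```python
def point_type(d_b: float, delta: float) -> str:
    """Classify a node by its distance d_b from the closest boundary (see Fig. 6).
    Thresholds follow the three point types described in the text."""
    if d_b >= 2 * delta:
        return "type-I (bulk)"
    if d_b >= delta:
        return "type-II"
    return "type-III"
```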
As the weighted volume of one or both the points of a bond decreases, the pairwise force density of that bond increases according to Eq. 20. As shown in Fig. 6, type-II points interact with at least one point with a partial neighborhood, so that the peridynamic forces applied to those points increase. Therefore, in the layer of the body where $\delta \le d_b < 2\delta$ the peridynamic material is stiffer and exhibits a hardening behavior. The pairwise forces applied to type-III points increase even more. However, a type-III point is affected by fewer bonds than type-I or type-II points due to the lack of at least one family point. Therefore, the most external layer of the body ($d_b < \delta$) exhibits a hardening-softening behavior towards the boundary. This hardening-softening behavior can also be observed in the analytical solution of a 1-dimensional state-based body subjected to a homogeneous small deformation [35]. However, we expect that the stiffness fluctuation would be amplified near the corners of the body because the points in those regions have the smallest partial neighborhood. Figures 7 and 8 show the components of the peridynamic stress tensor in a 2-dimensional body subjected respectively to an isotropic deformation $\varepsilon_{iso}$ and to a simple shear deformation $\varepsilon_{sh}$. The stress tensor is computed numerically with a relatively high density of nodes within each neighborhood ($m = 10$) and normalized with the analytical solutions derived in Sect. 2.2. Please refer to Sect. 4.3 for the numerical procedure to compute the peridynamic stress tensor. The numerical result for the points in the bulk of the body is really close to the analytical solution, whereas there are large differences for the points near the boundary, especially near the corners. (Fig. 7: components of the peridynamic stress tensor for every point in a 2-dimensional body subjected to an isotropic deformation $\varepsilon_{iso}$; the plots are normalized with the analytical solution of the tensile stress $\sigma_{iso}$ for a peridynamic point with a complete neighborhood.) Moreover, the points near the corners, due to the asymmetry of their neighborhood with respect to both the x- and y-axis, have a non-zero value of the peridynamic stress even without the corresponding deformation: $\tau_{12} \neq 0$ in the case of isotropic deformation $\varepsilon_{iso}$ and $\tau_{11} = \tau_{22} \neq 0$ in the case of simple shear deformation $\varepsilon_{sh}$.
Imposition of the boundary conditions
Another issue in peridynamics, which is related to the surface effect, is the proper definition of the boundary conditions. The easiest method to impose the peridynamic boundary conditions would be to assign the desired value to the boundary points, as in classical continuum mechanics. However, this method does not consider the non-local nature of the theory and results in additional fluctuations of the solution near the application of the boundary conditions.
(Fig. 8: components of the peridynamic stress tensor for a simple shear deformation; the plots are normalized with the analytical solution of the shear stress $\sigma_{sh}$ for a peridynamic point with a complete neighborhood.) A widely used method suggests that external loads and constraints should be imposed on a layer of finite thickness respectively inside and outside the body [21]. The finite thickness is defined to be 2δ in state-based peridynamics [37]. Since this method involves type-II and type-III points in the boundary conditions, it is undoubtedly more accurate than the previous one. However, the exact procedure of "distributing" the boundary conditions over the finite layer is not really clear.
We propose in the next section a novel method capable of reducing considerably the surface effect and of imposing the boundary conditions in a "peridynamic way".
Taylor-based extrapolation method
A fictitious layer $\mathcal{F}$ of thickness δ is added around the body $\mathcal{B}$ [12], as shown in Fig. 9. The neighborhoods of the family points of type-II points are completed thanks to the additional fictitious points, so that type-II points can be considered as points in the bulk (type-I points). Similarly, the neighborhoods of type-III points are completed by the fictitious layer, but some of the neighborhoods of their family points are not. (Fig. 9: types of state-based peridynamic points depending on the distance $d_b$ from the closest boundary in a body with a fictitious layer of thickness δ; there is no difference between type-I and type-II points anymore, whereas type-III points lack some secondary interactions in the neighborhoods of the family points.) However, we assign to the fictitious points the value of the full weighted volume. In this way, all the points inside the body $\mathcal{B}$ behave as points in the bulk. The next section shows a procedure to evaluate the displacement and dilatation fields over the fictitious layer.
Extrapolation procedure to mitigate the surface effect
The displacements and the dilatations of the fictitious points are determined by means of the Taylor-based extrapolation method [35]. Consider the displacement $\mathbf{u}_f = \mathbf{u}(\mathbf{x}_f)$ of a fictitious point $\mathbf{x}_f = \{x_f, y_f, z_f\}^{\top}$. We name $\mathbf{u}_b = \mathbf{u}(\mathbf{x}_b)$ the displacement of the boundary point with the minimum distance from that fictitious point (nearest-point strategy). The Taylor series expansion of $\mathbf{u}_f$ about $\mathbf{x}_b = \{x_b, y_b, z_b\}^{\top}$ truncated at the maximum order $n_{max} \ge 1$ is given by

$$\mathbf{u}_f = \mathbf{u}_b + \sum_{n=1}^{n_{max}} \sum_{\substack{n_1+n_2+n_3=n \\ n_1,n_2,n_3 \ge 0}} \frac{1}{n_1!\, n_2!\, n_3!}\, \frac{\partial^{n}\mathbf{u}}{\partial x^{n_1} \partial y^{n_2} \partial z^{n_3}}\bigg|_{\mathbf{x}_b} (x_f-x_b)^{n_1}(y_f-y_b)^{n_2}(z_f-z_b)^{n_3} \qquad (33)$$

where $n$ is the global order ($n = 1$ is related to the gradient, $n = 2$ to the Hessian matrix, etc.) and $n_1$, $n_2$ and $n_3$ are the orders respectively in the x, y and z directions. Similarly, consider the dilatation $\theta_f = \theta(\mathbf{x}_f)$ of a fictitious point and the dilatation $\theta_b = \theta(\mathbf{x}_b)$ of the boundary point closest to $\mathbf{x}_f$. The Taylor series expansion of $\theta_f$ about $\mathbf{x}_b$ truncated at the maximum order $n_{max} - 1$ is given by

$$\theta_f = \theta_b + \sum_{n=1}^{n_{max}-1} \sum_{\substack{n_1+n_2+n_3=n \\ n_1,n_2,n_3 \ge 0}} \frac{1}{n_1!\, n_2!\, n_3!}\, \frac{\partial^{n}\theta}{\partial x^{n_1} \partial y^{n_2} \partial z^{n_3}}\bigg|_{\mathbf{x}_b} (x_f-x_b)^{n_1}(y_f-y_b)^{n_2}(z_f-z_b)^{n_3} \qquad (34)$$

Note that the dilatation is a measure of the strain, thus the truncation of its Taylor expansion occurs with one order less than that of the displacement.
This method allows to determine the displacement field and the dilatation field in the fictitious layer F as a function of the respective fields in the body B. Since the displacement and dilatation fields in F are approximated by means of a Taylor series expansion, more accurate results are obtained by increasing the truncation order 푛 max or by reducing the thickness of the fictitious layer.
The new bonds between real and fictitious points, called "fictitious bonds", are the interactions that are lacking in the peridynamic models without fictitious layer. The Taylorbased extrapolation method provides the displacement and dilatation values of the fictitious points, which are required to compute the pairwise forces of the fictitious bonds. In this way, the proposed method is able to mitigate the surface effect.
Peridynamic boundary conditions
We propose hereinafter a novel method to impose the boundary conditions in a peridynamic way when using the previously described fictitious layer method [35]. The desired boundary conditions are applied solely on the boundary points, exactly as in classical continuum mechanics. However, the influence of the boundary conditions on the body is non-local thanks to the Taylor-based extrapolation method. This concept is explained for Dirichlet and Neumann boundary conditions in the following.
A constraint $\bar{\mathbf{u}}$ imposed in a boundary point $\mathbf{x}_b$ is simply given as

$$\mathbf{u}(\mathbf{x}_b) = \bar{\mathbf{u}} \qquad (35)$$

This boundary condition determines the displacement field in the fictitious layer through the Taylor-based extrapolation method (by substituting Eq. 35 in Eq. 33). Therefore, the influence of the constraint can be seen as distributed over the whole thickness of the fictitious layer, as suggested in [21, pp. 29-30]. An external load per unit area $\mathbf{p}$ applied to a boundary point $\mathbf{x}_b$ is expressed by means of the peridynamic concept of force flux (see Eq. 30):

$$\boldsymbol{\tau}(\mathbf{x}_b, \mathbf{n}) = \mathbf{p} \qquad (36)$$

where $\mathbf{n}$ is the unit vector perpendicular to the boundary in $\mathbf{x}_b$. By definition of force flux, $\boldsymbol{\tau}(\mathbf{x}_b, \mathbf{n})$ is the sum of the pairwise forces (per unit area) of all the bonds passing through $\mathbf{x}_b$. Since point $\mathbf{x}_b$ lies on the boundary, all the bonds involved in Eq. 36 are fictitious bonds. On the one hand, the pairwise forces of those bonds applied to the fictitious points, which do not constitute new degrees of freedom, are ignored. On the other hand, the corresponding pairwise forces applied to the real points are the only ones "perceived" by the body. Because of how the magnitude of those forces is computed (see Eq. 20), the boundary condition in point $\mathbf{x}_b$ affects the displacements in a sphere of radius 2δ centered in $\mathbf{x}_b$. Therefore, the external load, expressed by means of the definition of the force flux, is distributed over the points in a layer of thickness 2δ within the body, as suggested in [21, pp. 30-32].
The proposed method for imposing the boundary conditions, which makes use of the Taylor-based extrapolation on the fictitious layer, defines a peridynamic way to distribute the constraints or the loads over the non-local region near the boundary.
Remark 1
The reaction force acting on a boundary point x_b, due to a constraint imposed as in Eq. 35, can be computed as the force flux at x_b in the direction of the unit vector n perpendicular to the boundary.
Remark 2
In the literature, the zero-traction boundary condition is sometimes applied by removing the fictitious layer [22]. However, in order to maintain the correction of the surface effect, we suggest keeping the fictitious layer and imposing a zero force flux in the direction n at all the points of that boundary.
Numerical implementation
In order to discretize the domain, a mesh-free method is adopted [36,41]. For the sake of simplicity, the peridynamic grid consists of a finite number of equally-spaced nodes, as shown in Fig. 10. Each peridynamic node is representative of a finite volume ΔV = hΔx², where Δx = Δy is the grid spacing and h is the thickness of the body. Note that the most external real nodes do not lie exactly on the boundary of the body, since the nodes are positioned at the center of the volume ΔV. The ratio between the horizon and the grid spacing is defined as the m-ratio: m = δ/Δx. The value of this parameter determines the density of peridynamic nodes within a neighborhood. Furthermore, the fictitious layer (empty dots in Fig. 10) is added to the real body to complete the neighborhoods of the nodes near the boundary.
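As a concrete illustration of this discretization, the following Python sketch builds a uniform grid of cell-centered nodes surrounded by a fictitious layer of thickness δ = mΔx. It is only a minimal sketch: the rectangular domain, the function name and the variable names are our own assumptions and are not taken from any reference implementation.

```python
import numpy as np

def build_grid(lx, ly, dx, m):
    """Uniform 2-D peridynamic grid: real nodes at the cell centers of an lx-by-ly
    rectangle plus a surrounding fictitious layer of thickness delta = m*dx.
    Hypothetical helper; names and domain shape are illustrative only."""
    delta = m * dx
    n_layers = int(np.ceil(delta / dx))                  # fictitious rows per side
    ix = np.arange(-n_layers, int(round(lx / dx)) + n_layers)
    iy = np.arange(-n_layers, int(round(ly / dx)) + n_layers)
    X, Y = np.meshgrid(ix * dx + dx / 2, iy * dx + dx / 2, indexing="ij")
    coords = np.column_stack([X.ravel(), Y.ravel()])
    is_real = (coords[:, 0] > 0) & (coords[:, 0] < lx) & \
              (coords[:, 1] > 0) & (coords[:, 1] < ly)   # True -> real node
    return coords, is_real, delta

coords, is_real, delta = build_grid(lx=1.0, ly=0.5, dx=0.025, m=3)
print(is_real.sum(), "real nodes,", (~is_real).sum(), "fictitious nodes, horizon =", delta)
```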
Numerical Taylor-based extrapolation method
The Taylor-based extrapolation procedure in the discretized model aims to determine the values of the variables of the fictitious nodes (the displacements u and v in the x and y directions, respectively, and the dilatation θ in the case of a 2-dimensional peridynamic body). The numerical procedure to determine, for instance, the displacement u_i of a fictitious node i with coordinates (x_i, y_i) is carried out as follows: -find the real node of index j closest to node i; -perform a Taylor series expansion of u_i about node j with coordinates (x_j, y_j):

$u_i = u_j + \sum_{n=1}^{n_{max}} \sum_{n_1=0}^{n} \frac{1}{n_1!\,n_2!} \frac{\partial^{n_1+n_2} u_j}{\partial x^{n_1}\partial y^{n_2}} (x_i-x_j)^{n_1}(y_i-y_j)^{n_2},$

where u_j and ∂^{n_1+n_2} u_j / ∂x^{n_1}∂y^{n_2} are the displacement of node j and its derivatives, n_max is the maximum order of the truncated Taylor series, n_1 and n_2 are the orders in the x and y directions, respectively, and n is the global order, so that n_2 = n − n_1.
Since the coordinates of nodes i and j are known, the displacement u_i in Eq. 37 is written as a function of the displacement u_j and its derivatives. However, we aim to express u_i as a function solely of the displacements of the real nodes. The total number of derivatives of u_j considered before truncating the Taylor series is n_d = (n_max + 1)(n_max + 2)/2 − 1.
These derivatives can be determined as functions of the displacements of the n_d real nodes near node j by following another Taylor-based extrapolation procedure: -find the n_d real nodes with indices j_k closest to node j, where k = 1, ..., n_d (see Remark 3 below for the conditions on the node search); -for each of those nodes with coordinates (x_{j_k}, y_{j_k}), perform a Taylor series expansion of their displacements u_{j_k} about node j; -solve the system of equations in Eq. 38 to obtain the derivatives of u_j as functions of the displacements u_j and u_{j_k}. Therefore, by combining Eqs. 37 and 39, the displacement of a fictitious node is a function only of the displacements of some real nodes. Note that the adopted nearest-node strategy is very simple to implement, even for complex geometries. This procedure can be applied to determine the displacements u and v and the dilatations θ of all the fictitious nodes.
[Fig. 11: Taylor-based extrapolation method for a fictitious node i near a corner of the body; node j is the real node closest to node i and nodes j_k, with k = 1, ..., 5, are the real nodes closest to node j.]
In the case of the dilatations, the truncation order n_max must be replaced by n_max − 1.
Remark 3
There might be some cases in which the system of equations (Eq. 38) is not solvable for the nodes j_k that are closest to node j. For instance, if we want to determine the second derivative in the x direction, the nodes j_k must include at least two x_{j_k} coordinates different from each other and from x_j (see the example in Sect. 4.5). However, given the adoption of a uniform grid in which nodes on the same lines share the same coordinates, this condition is not always met when searching for the nodes j_k via the closest-node strategy without any additional condition. Therefore, in order for the system of equations to be solvable, the nodes j_k should comprise at least n_1 different x_{j_k} coordinates and n_2 different y_{j_k} coordinates (excluding x_j and y_j) for each derivative of order n_1 in the x direction and n_2 in the y direction.
In the following we present an example of determining the displacement u_i of a fictitious node i near a corner of the body by means of the Taylor-based extrapolation method with n_max = 2. As shown in Fig. 11, node j near the corner is the real node closest to node i. Thus, the displacement u_i can be given via a Taylor series expansion about node j (see Eq. 37) as

$u_i = u_j + l_x \frac{\partial u_j}{\partial x} + l_y \frac{\partial u_j}{\partial y} + \frac{l_x^2}{2} \frac{\partial^2 u_j}{\partial x^2} + l_x l_y \frac{\partial^2 u_j}{\partial x \partial y} + \frac{l_y^2}{2} \frac{\partial^2 u_j}{\partial y^2},$

where l_x = x_i − x_j and l_y = y_i − y_j. Note that the number of derivatives of the displacement u_j is n_d = 5. As shown in Fig. 11, j_k with k = 1, ..., 5 are the 5 indices of the real nodes closest to node j. Note that, in order to comply with the condition given in Remark 3, the search for the closest nodes should be carried out in terms of the Manhattan distance. A system of 5 equations is written by performing a Taylor series expansion of each u_{j_k} about node j (see Eq. 39); the factors of the Taylor series expansions, which multiply the derivatives of u_j, are easily derived from Fig. 11. After some manipulations, the system in Eq. 41 yields the derivatives of u_j as functions of the displacements u_j and u_{j_k} (Eq. 42). Therefore, by substituting Eq. 42 into Eq. 40, the displacement u_i of the fictitious node is expressed as a function solely of the displacements of the real nodes. This procedure can be repeated for the required variables of all the fictitious nodes.
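The worked example above can be condensed into a short numerical routine. The sketch below, written under our own assumptions, estimates the displacement of a fictitious node with n_max = 2 by expanding about the nearest real node; for simplicity it selects the neighbors by Euclidean distance and solves the resulting system in a least-squares sense, rather than enforcing the exact Manhattan-distance selection rule of Remark 3.

```python
import numpy as np

def taylor_extrapolate(u, coords, i_fict, real_ids, n_neighbors=5):
    """Estimate the displacement of fictitious node i_fict from the real-node
    displacements via a 2nd-order Taylor expansion about the nearest real node.
    Minimal sketch: the neighbor system is solved in a least-squares sense."""
    xi = coords[i_fict]
    d = np.linalg.norm(coords[real_ids] - xi, axis=1)
    j = real_ids[np.argmin(d)]                     # nearest real node
    xj = coords[j]
    dj = np.linalg.norm(coords[real_ids] - xj, axis=1)
    nbrs = real_ids[np.argsort(dj)][1:1 + n_neighbors]   # skip j itself
    rows, rhs = [], []
    for k in nbrs:                                 # Taylor expansion of u_k about node j
        hx, hy = coords[k] - xj
        rows.append([hx, hy, 0.5 * hx**2, hx * hy, 0.5 * hy**2])
        rhs.append(u[k] - u[j])
    derivs, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    lx, ly = xi - xj
    basis = np.array([lx, ly, 0.5 * lx**2, lx * ly, 0.5 * ly**2])
    return u[j] + basis @ derivs                   # extrapolated displacement of node i
```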
Numerical formulation of peridynamics
Consider a real node i, as shown in Fig. 12. The neighborhood H_i of node i embeds the complete volume of the nearest nodes and the partial volume of the nodes near the horizon limit. Therefore, all the nodes with at least a portion of their own volume within the horizon limit are considered part of the neighborhood H_i. For each family node j, the volume correction coefficient β_ij ≤ 1 is computed as the fraction of volume actually contained in the neighborhood [36]. If ΔV of node j is completely inside the neighborhood, then β_ij = 1.
[Fig. 12: The neighborhood H_i of a node i is constituted by the nodes (black dots) with at least a part of their volume inside the horizon (gray line); the volume correction coefficient β is the fraction of the volume of the family nodes j within the horizon limit.]
The bond ij, which connects node i to node j, can be either a real bond or a fictitious bond. In both cases, its reference scalar state x_ij, Gaussian influence function ω_ij and inclination φ_ij with respect to the x-axis can be computed from the coordinates of the two nodes. Under the assumption of small displacements, the extension scalar state of bond ij is given as

e_ij = (u_j − u_i) cos φ_ij + (v_j − v_i) sin φ_ij,

where u_i and u_j are the displacements in the x direction of nodes i and j, respectively, and v_i and v_j are the corresponding displacements in the y direction. If the family node j is fictitious, Eq. 43 still holds, and u_j and v_j are determined as functions of the displacements of the real nodes by the Taylor-based extrapolation method presented in Sect. 4.1.
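For reference, a minimal routine for the bond quantities just introduced could look as follows; it assumes the standard small-displacement form reconstructed above for the extension scalar state (relative displacement projected onto the reference bond direction).

```python
import numpy as np

def bond_state(xi, xj, ui, uj):
    """Reference length, direction and small-displacement extension of bond i-j.
    xi, xj are nodal coordinates, ui, uj nodal displacement vectors (2-D)."""
    x_ij = np.linalg.norm(xj - xi)          # reference scalar state
    m_ij = (xj - xi) / x_ij                 # (cos(phi_ij), sin(phi_ij))
    e_ij = float(np.dot(uj - ui, m_ij))     # extension scalar state (Eq. 43)
    return x_ij, m_ij, e_ij
```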
The weighted volume of node i is evaluated by performing a mid-point Gauss quadrature of Eq. 13:

$m_i = \sum_{j \in H_i} \omega_{ij}\, x_{ij}^2\, \beta_{ij}\, \Delta V.$

Since the neighborhoods of all the real nodes are complete thanks to the presence of the fictitious nodes, the weighted volume is constant throughout the whole body. Furthermore, the value of the weighted volume of the real nodes is also assigned to the fictitious nodes, as dictated by the Taylor-based extrapolation method.
Similarly, the dilatation of a real node i is computed by applying the same mid-point quadrature to Eq. 14. On the other hand, the dilatations of the fictitious nodes are determined as functions of the dilatations of the real nodes by means of another Taylor-based extrapolation, as illustrated in Sect. 4.1.
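The two nodal quantities above can be accumulated in a single loop over the family of a node, as sketched below. The dimensional factor multiplying the dilatation sum is prescribed by the paper's Eq. 14, which is not reproduced here, so it is left as an input with an assumed placeholder value; the sketch only shows the structure of the mid-point quadrature with the volume correction β.

```python
import numpy as np

def weighted_volume_and_dilatation(node, family, coords, disp, omega, beta, dV, c_theta=2.0):
    """Mid-point quadrature for the weighted volume m_i and the dilatation theta_i
    of a real node. c_theta stands in for the dimensional factor fixed by Eq. 14
    (the value 2.0 is an assumption, not taken from the paper)."""
    m_i, acc = 0.0, 0.0
    for j in family[node]:
        xij_vec = coords[j] - coords[node]
        xij = np.linalg.norm(xij_vec)
        e_ij = float(np.dot(disp[j] - disp[node], xij_vec / xij))  # bond extension
        w = omega[node, j] * beta[node, j]
        m_i += w * xij**2 * dV                                     # weighted volume term
        acc += w * xij * e_ij * dV                                 # dilatation term
    return m_i, c_theta * acc / m_i
```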
The magnitude of the pairwise force density in bond ij is given by the discretized form of Eq. 20 (Eq. 46). Note that the constants k_θ and k_e are determined by the constitutive modelling of the peridynamic theory, the parameters m_i, m_j, ω_ij, x_ij and β_ij depend only on the geometric coordinates of the nodes in the reference configuration, and the variables e_ij, θ_i and θ_j can be written as functions of the displacements of the real nodes (by using the proposed Taylor-based extrapolation method for the variables of the fictitious nodes). Therefore, by combining Eqs. 43–46, one can write an equation for each bond ij, either real or fictitious, that relates the magnitude f_ij of its pairwise force density to the displacements of the real nodes.
Finally, under the assumption of small deformations, the peridynamic equilibrium equation (multiplied by the node volume ΔV) is written for every real node i (Eq. 47), where m_ij = {cos φ_ij, sin φ_ij}^⊤ is the bond direction in the reference configuration and b_i is the external force density vector applied to node i. The system of equations in Eq. 47 can be rewritten in the standard form

K u = f,

where K is the peridynamic stiffness matrix (size: 2N × 2N), u is the displacement vector (size: 2N × 1) and f is the force vector (size: 2N × 1), with N the number of real nodes. The stiffness matrix K includes the contributions of the fictitious bonds; thus, it embeds the correction of the surface effect provided by the Taylor-based extrapolation method.
Numerical evaluation of the peridynamic stress tensor
This section deals with the numerical procedure to compute the peridynamic stress tensor. The theoretical background can be found in Sect. 2.2.
In the discretized peridynamic model, we name the nodes corresponding to the points x, x′ and x″ (see Fig. 2) as i, j and k, respectively. Under the assumption that point i lies in the bulk of a body subjected to a homogeneous deformation (see Eq. 23), the peridynamic stress tensor can be computed as in Eq. 49, where r_ik is the length of the bond ik. Equation 49 provides a good approximation also in the case of non-homogeneous deformations if the horizon δ is sufficiently small, as shown in [10] for bond-based peridynamics. However, Eq. 49 is not valid for nodes near the boundary, which are affected by the surface effect (if no fictitious layer is employed).
In order to compute numerically the peridynamic stress tensor at a general node i, possibly not in the bulk of the body, the integrand of Eq. 22 should be evaluated for each bond jk between node j and node k. The differential volume dV_x″ corresponds simply to the finite volume of node k, i.e., ΔV. On the other hand, we must distinguish two types of bonds, shown in Fig. 13, to determine Δs as the discrete counterpart of the differential length ds of point x′ in the direction of the bond m; the trigonometric functions appear within an absolute value because Δs > 0. In the former case bond jk is a type-A bond and, in the latter, a type-B bond, shown in Fig. 13a and b, respectively. A type-A bond contributes to the stress tensor at node i if it intersects the area ΔA^A_i = hΔy passing through node i perpendicular to the x direction, where h is the thickness of the 2-dimensional body. Similarly, a type-B bond contributes if it intersects the area ΔA^B_i = hΔx passing through node i perpendicular to the y direction. Therefore, the peridynamic stress tensor at a node i is given by Eq. 51, where f_jk is the magnitude of the pairwise force density of bond jk obtained with Eq. 46, m_jk = {cos φ_jk, sin φ_jk}^⊤ is the bond direction and α_jk is a correction coefficient, given by Eq. 52, in which ∂A_i is the boundary of the area ΔA_i, referring either to type-A bonds (ΔA^A_i) or to type-B bonds (ΔA^B_i). The different cases of Eq. 52 are illustrated in Fig. 14: -in the case j = i (see Fig. 14a), since only half of the length Δs related to node j is on the opposite side of ΔA_i with respect to node k, then α_jk = 1/2; -similarly, in the case k = i (see Fig. 14b), since only half of the volume ΔV of node k is on the opposite side of ΔA_i with respect to node j, then α_jk = 1/2; -in the case that the bond intersects the boundary of ΔA_i, i.e., jk ∩ ∂A_i ≠ ∅, since ∂A_i overlaps the boundary of the area of another node (node q in Fig. 14c), the magnitude of the pairwise force density of bond jk is equally shared between those nodes and, therefore, α_jk = 1/2; -in the case that the bond intersects ΔA_i but not on its boundary, i.e., jk ∩ (ΔA_i \ ∂A_i) ≠ ∅ (see Fig. 14d), the magnitude of the pairwise force density of bond jk contributes entirely to the stress at node i and, therefore, α_jk = 1. Moreover, to improve the computational efficiency, one might remove the factors 1/2 in Eq. 51 and consider each bond just once (for example, consider only bond jk and not bond kj). The proposed numerical procedure to compute the peridynamic stress tensor is used to highlight the surface effect in a 2-dimensional body in Sect. 2.4. Figures 7 and 8 show that the numerical computation of the peridynamic stress tensor is very close to the analytical solutions obtained in Sect. 2.2 for nodes with a complete neighborhood.
Numerical evaluation of the force flux
Consider a finite area ΔA which constitutes one of the sides of the volume cell of a node, as shown in Fig. 15. This section presents the numerical procedure to compute the peridynamic force flux through the finite area ΔA.
The force flux of a point x in a direction n is interpreted in Sect. 2.3 as the sum of the pairwise forces per unit area of all the bonds intersecting the differential area dA_x on the plane P through x, where P is the plane passing through x perpendicular to n (see Figs. 4 and 5).
[Fig. 13: Examples of type-A and type-B bonds for the computation of the peridynamic stress tensor at node i; the bonds contribute to the stress only if they intersect the corresponding area ΔA_i.]
Therefore, the force flux through ΔA can be discretized from Eqs. 30 and 32, where x and n are the centroid and the normal of ΔA, respectively, f_jk m_jk ΔV² is the pairwise force of any bond jk intersecting ΔA and α_jk is the correction coefficient given by Eq. 52 (see also Fig. 15a, b for the possible cases in the computation of the force flux). Note that, in order to improve the computational efficiency, the factor 1/2 is removed because only the bonds satisfying the condition m_jk · n > 0 are considered. Since we are mostly interested in the force flux computed at the boundary of the body to impose Neumann boundary conditions, this concept is further analyzed in the next section.
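A possible discrete implementation of the force flux through one cell face is sketched below. The geometric test that selects the bonds crossing the face, and the correction coefficients α of Eq. 52, are assumed to be available from a pre-processing step; only the summation of the pairwise forces with the single-counting condition m · n > 0 is shown.

```python
import numpy as np

def force_flux(n, dA, crossing_bonds, coords, f_mag, alpha, dV):
    """Force flux (per unit area) through a cell face with normal n and area dA.
    crossing_bonds: (j, k) pairs already known to intersect the face;
    f_mag[(j, k)]: pairwise force density magnitude; alpha[(j, k)]: Eq. 52 correction."""
    tau = np.zeros(2)
    for (j, k) in crossing_bonds:
        m = coords[k] - coords[j]
        m /= np.linalg.norm(m)                 # bond direction
        if np.dot(m, n) > 0:                   # count each bond only once
            tau += alpha[(j, k)] * f_mag[(j, k)] * m * dV**2
    return tau / dA
```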
Numerical implementation of the peridynamic boundary conditions
The real nodes closest to the boundary do not lie exactly on the boundary of the body (see Fig. 10). Therefore, following the concepts presented in Sect. 3.2, the boundary conditions in the discretized model should be imposed on the sides of the volume cells which overlap the boundary. We introduce a new category of nodes, called "boundary nodes", at which the boundary conditions are imposed. Each boundary node is positioned at the centroid of the side of the volume cell of the nodes closest to the boundary and is representative of the finite area ΔA_b of that side, as shown in Fig. 16.
[Fig. 16: Boundary nodes at the boundary of the body; each node b is representative of a finite area ΔA_b and is associated with the normal n_b external to the body.]
Like the fictitious nodes, the boundary nodes do not constitute new degrees of freedom because their displacements are determined as functions of the displacements of the real nodes by means of the Taylor-based extrapolation method. Suppose that the problem requires a constraint ū for the displacement u_b in the x direction of a boundary node b, given as u_b = ū. The Taylor-based extrapolation method is applied to the boundary node exactly as done for the fictitious nodes in Sect. 4.1, and the following procedure is valid also for ū = 0. The displacement u_b of the boundary node b with coordinates (x_b, y_b) is determined by a Taylor series expansion about node j with coordinates (x_j, y_j), where node j is the real node closest to node b. The n_d derivatives of u_j can be expressed as functions of the displacements of the n_d real nodes close to node j. Therefore, the Dirichlet boundary condition can be written as a function of the displacements of some real nodes. For example, we consider the case shown in Fig. 17 with a truncation order n_max = 2. Since the boundary node shares its y coordinate with node j, the Taylor series expansion of the displacement u_b about node j reduces to

$u_b = u_j + l_x \frac{\partial u_j}{\partial x} + \frac{l_x^2}{2} \frac{\partial^2 u_j}{\partial x^2},$

with l_x = x_b − x_j. In order to determine the two derivatives in the x direction in Eq. 56, we find nodes j_1 and j_2, shown in Fig. 17, as the nodes closest to node j having x coordinates different from each other and from x_j (see Remark 3).
[Fig. 17: Example of the Taylor-based extrapolation method used on a boundary node b.]
We write the system of equations given by the Taylor series expansions of u_{j_1} and u_{j_2} about node j, as in Eq. 38, and solve it to obtain the needed derivatives (Eq. 57). By substituting Eq. 57 into Eq. 56, the constraint condition u_b = ū is expressed as a function of the displacements of some real nodes (Eq. 58). More in general, the proposed method allows the Dirichlet boundary conditions to be written as functions of the displacement vector u. Suppose now that an external force per unit area p is applied to a boundary node b, as shown in Fig. 18. The Neumann boundary condition is written in terms of the force flux through the area ΔA_b associated with node b, where x_b is the position of node b and n_b is the unit vector perpendicular to ΔA_b and external to the body. Since the pairwise force density of any bond can be expressed as a function of the displacements of the real nodes (see Sect. 4.2), the force flux at x_b is also a function of those displacements. Thus, the Neumann boundary conditions can likewise be written as functions of the displacement vector u. Collecting the conditions of all the boundary nodes yields B u = c, where B is the matrix of the boundary conditions (size: 2N_b × 2N), u is the displacement vector (size: 2N × 1) and c is the vector of the known terms (size: 2N_b × 1), with N_b the number of boundary nodes.
In order to include the boundary conditions (B u = c) in the system of equations derived from the equilibrium of the real nodes (K u = f), we conveniently use the technique of Lagrange multipliers [5]. The vector λ of the Lagrange multipliers (size: 2N_b × 1) is introduced, leading to the augmented system

$\begin{bmatrix} \mathbf{K} & \mathbf{B}^\top \\ \mathbf{B} & \mathbf{0} \end{bmatrix} \begin{Bmatrix} \mathbf{u} \\ \boldsymbol{\lambda} \end{Bmatrix} = \begin{Bmatrix} \mathbf{f} \\ \mathbf{c} \end{Bmatrix}.$

The displacement vector u, extracted from the vector {u, λ}^⊤, is the solution of the system of equilibrium equations which satisfies the imposed boundary conditions.
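Numerically, the augmented system can be assembled and solved directly, as in the following sketch (the block structure shown is the standard Lagrange-multiplier form and is assumed to coincide with the one used in the paper).

```python
import numpy as np

def solve_with_constraints(K, f, B, c):
    """Augment K u = f with the boundary conditions B u = c via Lagrange multipliers
    and solve the resulting saddle-point system."""
    n, nb = K.shape[0], B.shape[0]
    A = np.block([[K, B.T], [B, np.zeros((nb, nb))]])
    rhs = np.concatenate([f, c])
    sol = np.linalg.solve(A, rhs)
    return sol[:n], sol[n:]                  # displacements, Lagrange multipliers

# toy usage with an arbitrary 2x2 stiffness matrix and one constrained dof
K = np.array([[2.0, -1.0], [-1.0, 2.0]])
f = np.array([0.0, 1.0])
B = np.array([[1.0, 0.0]])                   # constrain the first dof
c = np.array([0.0])
u, lam = solve_with_constraints(K, f, B, c)
print(u, lam)
```

In the toy usage, the first degree of freedom is constrained to zero and the returned multiplier corresponds to the reaction force associated with that constraint.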
Numerical examples
Several examples are presented to verify the reliability and accuracy of the proposed method. Whenever possible, the numerical peridynamic results are compared with the reference solutions derived from classical continuum mechanics. The reference solution coincides with the peridynamic solution only in the limit of the horizon δ approaching 0 [26,43]. Therefore, the "difference" between these solutions includes two components: a discrepancy due to the different (local and non-local) formulations of the theories and the actual error introduced by the discretization and the implementation of the peridynamic model (either with or without the proposed method). The difference (in percentage) of the displacements between the peridynamic numerical results and the reference solution is computed at each node i in terms of u_i and v_i, the displacements of node i, of u_ref(x_i) and v_ref(x_i), the displacements obtained with the reference solution at the position x_i of node i, and of u_ref, the displacement vector obtained with the reference solution at all the nodes. The reference solution is defined for each example by providing either the analytical solution (if available) or the results obtained with the Finite Element Method. For the sake of simplicity, we consider a plate under plane stress conditions with different boundary conditions. The parameters adopted for the simulations of the plate are reported in Table 1. Firstly, we solve each example without adding the fictitious nodes. In this case, the boundary conditions are implemented by assigning the desired value of the constraints or loads to the most external nodes of the plate. In particular, Dirichlet boundary conditions are imposed by assigning to the nodes closest to the boundary the value of the displacement computed with the reference solution. Then, the same examples are solved by adopting the proposed Taylor-based extrapolation method and by implementing the boundary conditions as described in Sect. 4.5.
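A simple routine for this nodal comparison is given below; the normalization by the largest reference displacement magnitude is our assumption, since the exact norm used in the paper's definition is not reproduced here.

```python
import numpy as np

def percent_difference(u, v, u_ref, v_ref):
    """Nodal difference (in %) between the peridynamic displacements (u, v) and the
    reference displacements (u_ref, v_ref), all given as arrays over the nodes.
    Normalization by the largest reference displacement magnitude is an assumption."""
    num = np.sqrt((u - u_ref)**2 + (v - v_ref)**2)
    den = np.max(np.sqrt(u_ref**2 + v_ref**2))
    return 100.0 * num / den
```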
Plate under traction
The boundary conditions of the first example are shown in Fig. 19. The analytical solution is given by classical continuum mechanics in terms of the traction load p.
[Fig. 19: Boundary conditions for the plate under the traction p = 1 MPa.]
The plots in Fig. 20 show the difference of the displacement field, computed without adopting any correction to the peridynamic model, with respect to the reference solution. The surface effect and the approximate way of imposing the boundary conditions lead to large errors near the boundary of the plate, especially near the corners. On the other hand, there are no fluctuations in the displacement field when the proposed Taylor-based extrapolation method with n_max = 1 is employed. The differences of the displacement field of the corrected model (see Fig. 21) decrease considerably with respect to those obtained without corrections at the boundary. Similar results are obtained by choosing higher orders for the Taylor-based extrapolation.
When the proposed method is adopted, the error can be further reduced by increasing the accuracy of the integration over the neighborhoods, i.e., by increasing m [36]. In order to show this, we compute the relative difference (in percentage) at a node i (Eq. 66). When the Taylor-based extrapolation is employed on the fictitious nodes, the relative difference is the same at every node inside the body. Therefore, we gather in Table 2 the relative errors for different values of m. We can observe that the relative differences decrease significantly as the value of m increases.
Plate under shear load
We present another example considering a plate under a shear load p. Figure 22 shows the boundary conditions of this case. Classical continuum mechanics yields the analytical solution in Eq. 67. The differences of the displacement field, computed without corrections at the boundary, with respect to the reference solution are shown in Fig. 23. We observe that the differences are lower than in the case of the plate under traction in Sect. 5.1. However, as highlighted in Eq. 67, one would expect the displacements v in the y direction to be 0 in the whole body. This is not verified in the numerical simulation because of the surface effect (see the plot of the component τ_22 of the peridynamic stress tensor in Fig. 8). This problem is completely solved by implementing the proposed Taylor-based extrapolation method with n_max = 1, as shown in Fig. 24. Also, the differences of the displacements u in the x direction decrease with respect to those obtained without corrections at the boundary. Similar results are obtained by choosing higher orders for the Taylor-based extrapolation.
As in the case of the plate under traction, we compute the relative differences ε^rel_u with Eq. 66 for different values of m when the proposed method is implemented for the plate under shear load. For each m, the relative differences of the displacements u are again constant in the whole body, and they are reported in Table 3. The numerical results show a significant reduction of the differences when the numerical integration is improved.
Plate under sinusoidal load
In the previous examples, the linear variation of the displacement field was properly captured by the Taylor-based extrapolation with order n_max = 1 (or higher). We now investigate an example in which the order of the Taylor-based extrapolation does not match the order of variation of the displacement field. The boundary conditions of this example are shown in Fig. 25. The force density applied throughout the plate is defined with b = 10^6 N/m³, where the origin of the reference system is at the center of the plate. Given the symmetry of the boundary conditions, the displacements v in the y direction on the x-axis are fixed to 0. The peridynamic nodes share their coordinates with some of the FEM nodes, so that the peridynamic results can be compared with the reference solution. We expect that, by increasing n_max, the Taylor-based extrapolation approximates the displacements of the fictitious layer better. Figures 27 and 28 show the differences between the numerical results and the reference solution for the numerical models either without employing any correction for the surface effect or with the proposed method.
Crack propagation near the boundaries
A qualitative study is now conducted to investigate the behavior of crack growth near the boundaries of the body. We compare the results provided by the proposed method with the solution obtained with peridynamics when no corrections for the surface effect are adopted and the boundary conditions are imposed in a local way, i.e., constraints and loads are applied only at the nodes closest to the boundary. The surface effect near the new boundaries generated by the crack growth is not corrected in the present paper, but it will be dealt with in future works. In order to model fracture phenomena, we introduce the scalar μ which encodes the status of the bond (unbroken or broken) [41]:

$\mu = \begin{cases} 1 & \text{if } s \le s_c, \\ 0 & \text{otherwise,} \end{cases}$

where s is the stretch of the bond and s_c is the critical stretch for plane stress conditions; these quantities are computed as in [25], where G_0 is the energy release rate. The scalar μ is history-dependent, since a broken bond cannot be restored. The equilibrium equation is modified accordingly, and the damage at each node can be evaluated as the broken fraction of the bond volume in its neighborhood [41]. For the quasi-static crack propagation, the sequentially linear analysis used in [24] is employed: not only must the stiffness matrix be modified by removing the contributions of the broken bonds, but the external tension must also be applied solely through the remaining unbroken fictitious bonds.
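The fracture bookkeeping described above can be summarized by the following sketch, written under our own assumptions: bonds whose stretch exceeds the critical stretch are deactivated irreversibly, and the nodal damage is taken as the broken fraction of the bonds attached to a node (the partial-volume weighting and the stiffness-matrix update required by the sequentially linear analysis are omitted).

```python
import numpy as np

def update_bonds_and_damage(stretch, mu, node_bonds, s_c):
    """Minimal fracture bookkeeping: break bonds irreversibly when their stretch
    exceeds s_c, then evaluate the nodal damage as the broken bond fraction.
    stretch, mu: arrays over the bonds; node_bonds[i]: bond indices attached to node i."""
    for b in range(len(mu)):
        if mu[b] == 1.0 and stretch[b] > s_c:
            mu[b] = 0.0                       # history dependent: broken stays broken
    damage = np.zeros(len(node_bonds))
    for i, blist in enumerate(node_bonds):
        damage[i] = 1.0 - sum(mu[b] for b in blist) / len(blist)
    return mu, damage
```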
Crack propagation due to Dirichlet boundary conditions
The geometry and the boundary conditions of a plate with a pre-existing crack are shown in Fig. 29. The properties of the plate under plane stress conditions are: thickness h = 0.005 m, Young's modulus E = 1 GPa, Poisson's ratio ν = 0.2 and energy release rate G_0 = 196 J/m². The constraint is ū = 0.001 m. A grid spacing Δx = 0.0025 m is used and m = 3 is chosen. The results obtained by means of the peridynamic model with no corrections at the boundary and with the proposed method are compared in Fig. 30, in which the difference of the displacements at a node i is computed with Eq. 74 from the corrected and uncorrected displacement fields in the x and y directions. The superscript "uncorr" stands for "uncorrected" (peridynamic model with no corrections at the boundaries) and "corr" for "corrected" (peridynamic model with the Taylor-based extrapolation method with n_max = 1). It can be noticed that there are non-negligible differences near the crack tip that may lead to different crack paths. As shown by the damage of the nodes in the final configuration of the model with no corrections (see Fig. 31), the crack branches and reaches the upper edge of the plate along two separate paths. On the other hand, when the Taylor-based extrapolation method is adopted, the crack propagates to the upper edge along a unique path and there is no branching phenomenon. Hence, the crack path may change near the boundaries if the surface effect is mitigated and the boundary conditions are imposed in a "peridynamic way".
Crack propagation due to Neumann boundary conditions
The geometry and the boundary conditions of a plate with a pre-existing crack are shown in Fig. 33.
[Fig. 36: Crack propagation in the pre-cracked plate for the case with Neumann boundary conditions when the Taylor-based extrapolation method is used.]
As in Sect. 6.1, the differences between the displacements obtained with and without the proposed method are computed as in Eq. 74. The differences near the tip of the pre-existing crack (see Fig. 34) are non-negligible and may lead to different crack behaviors. The crack paths, shown in Figs. 35 and 36 for the cases without and with boundary corrections, respectively, are indeed different from each other. Therefore, the crack behavior may be modified by the mitigation of the surface effect and the proper imposition of the Neumann boundary conditions.
Conclusions
Two issues arising near the boundary of a body modelled with ordinary state-based peridynamics are addressed: -the surface effect, i.e., the stiffness fluctuation near the boundary; -the current lack of standard strategies to impose the boundary conditions.
The surface effect has been studied numerically by evaluating the peridynamic stress tensor with a novel discretization method (see Sect. 4.3) and the characteristic hardening/softening behavior towards the boundary has been highlighted (see Figs. 7 and 8 ). This issue has been addressed by introducing a fictitious layer that completes the partial neighborhoods of the nodes near the boundary. We proposed a new version of the Taylor-based extrapolation method adopting the nearest-node strategy: the displacements of the fictitious nodes are expressed as functions of the displacements of the closest real nodes by means of multiple Taylor series expansions. In this way, the surface effect is mitigated.
The fictitious layer is also exploited to impose the boundary conditions in a peridynamic way. The boundary of the body is discretized by the so-called "boundary nodes". Like the fictitious nodes, the boundary nodes do not constitute new degrees of freedom because their displacements are obtained with the Taylor-based extrapolation method. On the one hand, Dirichlet boundary conditions are implemented by constraining the boundary node and, accordingly, the fictitious layer mitigates the surface effect. On the other hand, Neumann boundary conditions are applied, via the numerical computation of the force flux at the boundary, to the bonds involving the fictitious nodes. Therefore, the boundary conditions are imposed in a "peridynamic way".
Several numerical examples were presented to verify the accuracy of the proposed method. The numerical results obtained with the Taylor-based extrapolation method show a great improvement with respect to the peridynamic models without corrections at the boundary. Furthermore, the order of the Taylor-based extrapolation can be increased until the undesired fluctuations of the numerical results become negligible for the application of interest. It is also shown that the numerical integration of the peridynamic equilibrium equation plays a fundamental role, so that the numerical results are improved even further by increasing the value of the 푚-ratio.
Moreover, we carried out a qualitative study on crack propagation near the boundaries by comparing the results obtained by means of the proposed method with those of the peridynamic model without boundary corrections. We presented two numerical examples in which the crack paths are different because of the difference in the displacement fields. This highlights the importance of mitigating the surface effect and of imposing properly the peridynamic boundary conditions.
Declarations
Conflicts of interest The authors declare that they have no conflict of interest.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Enhanced Gain Difference Power Allocation for NOMA-Based Visible Light Communications
With the escalating demand for high-data-rate wireless services, visible light communication (VLC) technology has emerged as a promising complement to traditional radio frequency wireless networks. To further enhance the achievable rate and error performance in non-orthogonal multiple access-based VLC downlinks, an efficient power allocation scheme named enhanced gain difference power allocation (EGDPA) is proposed for a multiple-input multiple-output VLC system. The power factors are determined by considering users’ channel gains and utilizing the residual allocation principle, which focuses on the remaining power available after allocating it to the previous users. In addition, the impacts of the user distribution and transmission power are investigated, and the performance metrics in terms of achievable data rate, energy efficiency, and bit error rate are also analytically presented. Simulation results demonstrate that energy efficiency can be significantly improved and the achievable data rate gain can be enhanced by at least 6.25% with the proposed EGDPA scheme as compared with other traditional methods, confirming its superiority and validity for efficient multi-user accessing.
Introduction
Wireless data traffic has grown exponentially with the increase in mobile applications and emerging services [1]. However, the limited spectrum of existing radio frequency (RF) is progressively becoming congested, and the available RF resources cannot fully satisfy the specific communication requirements for high spectral and energy efficiency scenarios [2]. Recently, visible light communication (VLC) has been regarded as a potential supplementary technology to traditional RF wireless networks [3] due to its many advantages, such as an abundant and unlicensed spectrum, low cost, low power consumption, and enhanced security characteristics [4]. One of the main disadvantages of VLC is the limited modulation bandwidth of the employed light-emitting diodes (LEDs) [5], whose 3 dB bandwidth is only 5–10 MHz. To improve the achievable data rate, extensive studies on methods such as advanced optical modulation [6,7], channel equalization [8], multiple access schemes [9], and multiple-input multiple-output (MIMO) [10] have been carried out based on the intensity modulation and direct detection (IM/DD) architecture.
Non-orthogonal multiple access (NOMA) is one of the key-enabler technologies in 5G networks and has attracted increasing attention from the academic and industrial communities owing to its high spectrum efficiency, user fairness, strong reliability, and massive connectivity [11]. Unlike orthogonal multiple access (OMA) techniques, multiple users can be simultaneously served with the same time-frequency resources by using NOMA, which is more suitable for massive connectivity. Briefly, the power domain resources are used at the transmitter to distinguish and superimpose transmission for different users, while successive interference cancellation (SIC) is performed at the receiver to detect the signals for each user. NOMA can be integrated with VLC since it performs well at a high signal-to-noise ratio (SNR), which is a typical feature guaranteed by VLC systems [12]. By pairing users and employing appropriate LEDs, the performance of NOMA over traditional OMA can be enhanced accordingly [13]. Closed-form expressions for the bit error rate (BER) of NOMA-VLC systems with on-off keying (OOK) and L-ary pulse position modulation have been derived, considering perfect and imperfect channel state information [14]. The results of [15] demonstrated that the performance metrics (BER, sum rate, and outage probability) in a multi-user NOMA-VLC system can be affected by the number of users, signal type, and shadowing under different half-angles of LEDs and signal reflection path conditions.
Due to the intercluster interference in the SIC procedure, users who have poor channel conditions should employ higher power to decode their useful information.How to reasonably allocate the limited power to each user plays a significant role in NOMA [16].Several studies on efficient power allocation including fixed allocation, fractional transmit allocation, strategy design [17][18][19][20], heuristics [21,22], and indirect methods based on mathematical theory [23,24] have been proposed in NOMA-VLC systems.In [17], a gain ratio power allocation (GRPA) strategy was proposed, which was reliant on the user's gain in comparison to that of the first sorted user based on the decoding order.The corresponding BER performance of this strategy was found to outperform the fixed power allocation method.For MIMO-VLC networks, a normalized gain difference power allocation (NGDPA) method that relies on the channel gain difference to determine the power allocation coefficients was proposed to increase the total rate in [18].In addition, multiple LEDs were utilized to enhance the performance of the communication system and improve the data transfer rate, capacity, and robustness.An improved fractional strategy (IFS) revising the power factors within GRPA was proposed in [19], where the constraints of the proposed strategy were rigorously explained through the proposed asymptotic and compact throughput bound.In [22], the nonlinear marine predator algorithm was applied to solve the fair power allocation problem, optimizing the sum rate efficiently and allowing for quick convergence.By utilizing the derived lower bound of the achievable rate and semidefinite relaxation technology, optimal power allocation schemes for static and mobile users were derived in [24].
However, the negative impact of residual user interference on system performance during SIC implementation has not been fully considered in the above literature.In [25], adjustable superposition coding and SIC decoding schemes were proposed to alleviate the influence of error propagation by adjusting the relative bit rate of each user.A convolutional neural network-based demodulator for NOMA-VLC was presented in [26], aiming to achieve joint signal compensation and recovery.The experimental results demonstrated that this receiver exhibited improved robustness against linear and non-linear distortions compared to receivers using SIC and joint detection.A modified SIC decoder was proposed to improve the symbol error rate performance of the three-user uplink/downlink NOMA by assuming channel gains, and the joint influence of the SNR and channel gains on the symbol error rate was also analyzed in [27].Based on the above analysis, we can see that most existing works have focused primarily on designing signal detection algorithms at the receiver to improve error performance.However, the residual interference is not well mitigated, which may lead to significant performance degradation in the achievable data rate and BER performance.
In this paper, a new multi-user power allocation scheme named enhanced gain difference power allocation (EGDPA) is proposed to mitigate the adverse effect of residual interference and thereby improve the achievable data rate and detection performance. The allocation factors are determined by considering users' channel gains and utilizing the residual allocation principle, which focuses on the remaining power available after allocating to the previous user rather than the initially assigned power. Moreover, the corresponding achievable data rate, energy efficiency, and error probability are analyzed to characterize the impact of power allocation factors on NOMA design. Simulation results show that the proposed scheme can provide a satisfactory sum rate, energy efficiency, and error performance compared with the OMA scheme or traditional NOMA power allocation strategies, which validates the effectiveness of the proposed scheme.
The rest of this paper is organized as follows. In Section 2, the system model for MIMO-NOMA-VLC is presented. In Section 3, the EGDPA scheme is proposed. The system sum achievable data rate, energy efficiency, and error probability are also evaluated in Section 3. Simulation results are included in Section 4, followed by conclusions summarized in Section 5.
System Model
As illustrated in Figure 1, we consider a MIMO-VLC system comprising I LEDs and K users, where each user is equipped with J photodetectors (PDs). The coverage area radius of the cell is R and U_1 is positioned at the center of the cell. We assume that all users are arranged in straight lines; the distance between the central user and the edge user is denoted as r, while the distance between the central user and U_k is denoted as r_k. We define L_i as the i-th LED transmitter and D_j as the j-th PD. In addition, a DC bias I_DC is always added to the signal to obtain the non-negative waveform x_i that drives L_i; in this superposition, P_t is the electrical power of the emitter, μ_i,k represents the normalized power allocation factor at the i-th LED transmitter for U_k, and s_i,k denotes the zero-mean OOK modulated signal prepared for U_k at L_i. To maintain a constant total electrical power, the power allocation factors at each LED should satisfy ∑_{k=1}^{K} μ_i,k = 1.
After VLC channel transmission, the optical signal is captured by the PDs at U_k and then converted to an electrical current through optical-electrical conversion. Since the DC signal does not convey any useful information, it is always eliminated from the received signal before demodulation. The received signal for U_k is therefore expressed in terms of γ_oe, the optical-electrical responsivity of the PD, P_o, the output optical power of the emitter, ζ, the modulation index, the channel matrix H_k ∈ C^{J×M} for U_k, and the vector x of the signals transmitted from all LEDs. Additionally, n_k denotes the additive noise vector with zero mean and variance σ²_noise, which comprises shot noise and thermal noise [13,28]; in the corresponding expression, q is the electronic charge, I_bg = 5100 µA represents the background noise current, B denotes the equivalent noise bandwidth, k is the Boltzmann constant, G = 10 denotes the open-loop voltage gain, η = 112 pF/cm² denotes the input capacitance of the PD per unit area, Γ = 1.5 represents the field-effect transistor (FET) channel noise factor, and g_m captures the FET transconductance [14].
In this paper, we only focus on the LOS component in VLC systems because the power of the NLOS links is relatively low compared to that of the LOS components. Considering the Lambertian radiation of LEDs, the LOS channel gain between L_i and D_j for U_k can be formulated as in (4), where the Lambertian order of the LED is m_0 = −1/log_2(cos(ϕ_1/2)) and ϕ_1/2 denotes the semi-angle of the LED. A_PD denotes the detection area of the PD, and d_ji,k represents the distance between L_i and D_j for U_k. φ_ji,k and ψ_ji,k are the irradiance angle and the incident angle of the optical link for U_k, respectively, while ψ_C is the field of view (FOV). T_s denotes the constant gain of the optical filter, and g_s(ψ_ji,k) represents the gain of an optical concentrator with a refractive index n for U_k. By substituting d_ji,k = sqrt(r_k² + L²) into (4), the LOS channel gain can be expressed as in (6), where L is the height of the light source. The channel gain primarily depends on the distance between the user and the LED, assuming a constant LED height and constant Ω_ji,k values.
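For illustration, the Lambertian LOS gain described above can be evaluated as in the following sketch; the semi-angle, FOV, detector area and refractive index follow the simulation values given later in the paper, while the LED height above the receiving plane used in the example call is an arbitrary assumed value.

```python
import numpy as np

def los_channel_gain(r_k, L, phi_half_deg=50.0, psi_c_deg=72.0,
                     A_pd=1e-4, Ts=1.0, n_ref=1.5):
    """Lambertian LOS gain between an LED at height L above the receiving plane and a
    PD at horizontal distance r_k, assuming the PD faces straight up so that the
    irradiance and incidence angles coincide."""
    m0 = -1.0 / np.log2(np.cos(np.radians(phi_half_deg)))   # Lambertian order
    d = np.sqrt(r_k**2 + L**2)
    cos_ang = L / d                                          # cos(phi) = cos(psi)
    psi_deg = np.degrees(np.arccos(cos_ang))
    if psi_deg > psi_c_deg:
        return 0.0                                           # outside the field of view
    g = n_ref**2 / np.sin(np.radians(psi_c_deg))**2          # optical concentrator gain
    return (m0 + 1) * A_pd / (2 * np.pi * d**2) * cos_ang**m0 * Ts * g * cos_ang

print(los_channel_gain(r_k=1.0, L=2.15))   # L = 2.15 m is an assumed example height
```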
Proposed Power Allocation Scheme
In this section, we propose an enhanced gain difference power allocation (EGDPA) scheme that is based on the differences in channel gains among all users.Then, based on the residual allocation principle, the corresponding analytical expressions in terms of achievable data rate, energy efficiency, and error probability are presented accordingly.
Allocation Principle Formulation
A block diagram of a MIMO-VLC system with the proposed power allocation scheme is depicted in Figure 2. At the transmitter, the data of all users are superimposed in the power domain according to an allocation strategy and then combined with a DC signal to drive the LEDs. After passing through the VLC channel, each user captures the received signals by using J PDs. Through MIMO demultiplexing and the SIC procedure, the signals are finally demodulated into useful data for each user. With the dynamic adjustment of the power allocation factors, the transmission rate of the MIMO-VLC system employing NOMA can be enhanced. It is worth noting that a constant channel gain for fixed transmitter and receiver positions can be obtained according to (6). The GRPA scheme ranks the user channel gains and calculates the power allocation coefficients based on the numerical channel gain relationship between adjacent users in the channel gain ranking; for GRPA, the relationship between the power factors assigned to U_k and U_{k+1} at L_i is given in (7). In the single-cell NOMA-VLC scenario, the primary problem is multi-user interference, where the power assigned to the user being demodulated is supposed to surpass the total power allocated to the previously demodulated users, in particular at a high SNR. The problem can be alleviated by a modified fixed-power allocation (MFPA) scheme, which relies on the remaining power after the allocation to the previous user rather than the power initially assigned to them; the corresponding allocated power is described in (8), where α is the fixed power factor of the scheme. Equation (8) is referred to as the residual power principle. In this paper, by combining (7) and (8), an efficient power allocation scheme with enhanced gain difference power allocation (EGDPA) is proposed to further enhance the achievable data rate. The main idea is that the allocation factors are determined by the users' channel gain differences, which offers higher flexibility than fixed allocation factors and is better suited to practical needs. Then, the residual allocation principle is employed to further reduce the residual interference from previously demodulated users to the intended users. In particular, the power allocation factor for U_k at L_i can be formulated as in (9), with the auxiliary terms b_i,k and α_i,k defined in (10) and (11). Comparing with (7) and (8), we find that the adjacent power allocation factors formulated by (9) exhibit smaller differences, which results in less residual interference from the previous user to the subsequent user in the SIC process.
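To make the residual-allocation principle concrete, the sketch below shows one simple way to realize it with a fixed factor α, as in MFPA; it is an illustrative assumption only and does not reproduce the exact EGDPA factors of (9)-(11), which additionally depend on the users' channel gain differences.

```python
def residual_power_allocation(K, alpha=0.3, P_t=1.0):
    """Hypothetical realization of the residual-allocation principle of (8): user k
    receives a fraction alpha of the power still unallocated after users 1..k-1, and
    the factors are then normalized so that they sum to the total power constraint."""
    remaining, raw = 1.0, []
    for _ in range(K):
        raw.append(alpha * remaining)
        remaining -= alpha * remaining
    total = sum(raw)
    return [P_t * p / total for p in raw]    # decreasing: weakest user gets most power

print(residual_power_allocation(K=3))        # approx [0.457, 0.320, 0.224]
```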
The proposed power allocation algorithm is shown in Algorithm 1. With I LEDs and K users, the computational complexity required for b_i,k can be approximated as O(IK) based on (10) and lines 2-6 of Algorithm 1. According to (11), α_i,k can be calculated from the results of (10) without additional complexity. Based on (9), the computational complexity for μ_i,k can be approximated as O(IK²). Therefore, the overall computational complexity of the proposed algorithm is approximately O(IK²). The computational complexity of GRPA is comparable to that of the proposed algorithm and can be estimated as O(IK²) due to the calculation of μ_i,k, as shown in (7). The calculation process of NGDPA is similar to that of GRPA, with a required computational complexity of O(IK²). As described in [25], the computational complexity of MFPA can be approximately expressed as O(IK) in this paper. In summary, the proposed algorithm exhibits slightly higher computational complexity than MFPA. However, with the advancements in computing power, this additional complexity can be easily handled. Furthermore, the computational complexity of the proposed algorithm is in line with that of the GRPA and NGDPA methods.
In NOMA-VLC systems with multiple LEDs, decoding is ordered based on the aggregate channel gains from all LEDs to mitigate interference among users. Specifically, the user with the poorer channel condition is assigned more power in the SIC decoding schemes [11]. For simplicity, within the same LED, the power allocation factor for U_k can be equivalently represented as μ_k, and the equivalent channel gain can be expressed as h_k = ∑_{j=1}^{J} h_ji,k. The channel gains of the users are arranged in ascending order, as in (12), with the first and last users regarded as the weakest and strongest, respectively. Therefore, to ensure the quality of the edge user, the decoding order follows the increasing order of the channel gains, and the power allocated to the users is ordered accordingly. When SIC is used, the residual interference from users of superior decoding order can be regarded as noise. According to (2), the interference and noise at U_k are expressed in terms of a constant factor κ denoting the degree of residual interference, which falls within the range [0, 1]. A smaller value of κ indicates a better decoding performance for SIC. When κ is equal to 0, the user information is decoded perfectly with no residual; when κ is not equal to 0, the user information is decoded imperfectly with some residual remaining. For P_k = μ_k P_t, the signal-to-interference-plus-noise ratio (SINR) for U_k can be expressed accordingly, where σ² = σ²_noise/P_t. Therefore, the corresponding achievable rate for U_k can be derived as in (16), where B is the modulation bandwidth. It should be noted that (16) is conditioned on the fact that U_k can detect all messages from U_j, for all j ≤ k. The rate at which U_k detects the messages sent to U_j is denoted as R_{k→j}, and the target rate for U_j is R̄_j; this condition is formulated in (17). If (17) is satisfied, it can be inferred that perfect SIC in the decoding chain can be achieved; otherwise, a communication interruption will occur at U_k. It is assumed that each user has not specified any requirement on the target data rate but instead strives to maximize their communication performance using the allocated power (i.e., R̄_j = R_j).
Lemma 1. The rate at which U_k detects the messages sent to U_j is always higher than the achievable rate of U_j.
Proof. For j = k, the rate at which U_k detects the messages sent to U_j equals the achievable rate of U_j, based on (16) and (17). For j < k, according to (16) and (17), R_{k→j} can be expressed as in (18), and R_j is calculated by (19). To simplify the subsequent analysis, let S_{k→j} and S_j replace R_{k→j} and R_j according to (19) and (18), respectively. We arrange all users according to (12); thus, we have h_j ≤ h_k. Based on the aforementioned analysis, the comparison of R_{k→j} and R_j is equivalent to that of S_{k→j} and S_j. Consequently, the difference between S_{k→j} and S_j can be expressed as in (22). Let S′ denote (22); after some further manipulation, the rate difference is shown to be non-negative. The proof is completed.
Consequently, each user can achieve a data rate determined by (16), and the total system rate is the summation of all users' data rates. Furthermore, it can be observed from (16) that R_k increases with the value of h_k for a given power allocation factor. If all users possess equal power allocation factors, the user with superior channel conditions will attain a higher data rate. Moreover, to enhance fairness among users, the power allocation factor for users with inferior channel conditions is increased while the power allocation factor for users with superior channel conditions is reduced.
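A small numerical sketch of the per-user rate computation is given below. Since the exact SINR expression of the paper is not reproduced here, the interference model is an assumption consistent with the description above: users decoded earlier leave a residual fraction κ of their power, while users decoded later interfere fully; all numerical values in the example call are arbitrary.

```python
import numpy as np

def noma_rates(mu, h, P_t, sigma2_noise, kappa, B=10e6):
    """Per-user achievable rates for a downlink NOMA chain with imperfect SIC.
    Users are sorted by increasing channel gain h; mu are the power allocation
    factors; kappa in [0, 1] is the residual-interference factor (assumed form)."""
    mu, h = np.asarray(mu, float), np.asarray(h, float)
    rates = np.zeros(len(mu))
    for k in range(len(mu)):
        signal = mu[k] * P_t * h[k]**2
        interf = h[k]**2 * P_t * (kappa * mu[:k].sum() + mu[k + 1:].sum())
        rates[k] = B * np.log2(1.0 + signal / (interf + sigma2_noise))
    return rates

r = noma_rates(mu=[0.6, 0.4], h=[1e-5, 2e-5], P_t=10.0, sigma2_noise=1e-13, kappa=0.05)
print(r.sum() / 1e6, "Mbit/s sum rate; energy efficiency follows as sum rate / P_max")
```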
Accordingly, the energy efficiency can be calculated as the ratio of the sum of the achievable rates to the maximum transmit power P_max of the LED. Hence, considering that P_max is constant, the problem of maximizing the energy efficiency reduces to the problem of maximizing the achievable rates.
Error Probability
For simplicity, only two users are considered in the NOMA-based MIMO-VLC system. It is assumed that the symbols of both users are mutually independent and equiprobable when evaluating the error probability. Considering OOK modulation, the BER for U_1 (i.e., the distant user) can be formulated as in (26), where Q(x) = (1/√(2π)) ∫_x^∞ exp(−u²/2) du is the Gaussian Q-function. Let Θ_i and Θ̄_i represent the successful and erroneous demodulation of U_i (i = 1, 2), respectively. Thus, the BER for U_2 (i.e., the near user) can be calculated as in (27), where P(Θ̄_2, Θ_1) represents the joint probability that U_1's signal is decoded correctly whereas U_2's is decoded incorrectly. Similarly, P(Θ̄_2, Θ̄_1) denotes the joint probability that both U_1 and U_2 are decoded incorrectly at the same time. P(Θ̄_2|Θ_1) and P(Θ̄_2|Θ̄_1) are the conditional probabilities that U_2 decodes its signal incorrectly given correct and incorrect decoding of U_1, respectively. Replacing γ_1 with γ_2 in (26), the BER for decoding U_1's signal at U_2 is given in (28), where γ_2 = γ_oe h_1/σ_n. For the evaluation of P_e,2, the decoding process is initially applied to the far user's signal and subsequently followed by the implementation of SIC. After the successful cancellation of U_1's signal, the error probability for U_2 can be derived as in (29). When the signal of U_1 is incorrectly decoded at U_2, the joint error probability of U_2 can be derived as in (30). By substituting (28)-(30) into (27), the error probability for U_2 can be obtained.
Simulations
A NOMA-based MIMO-VLC system with I = 2 and J = 2 is considered for the simulations by employing various power allocation strategies. The architecture of the system is depicted in Figure 1, where the LED spacing is 1 m and the PD spacing is 4 cm.
The receiving plane is at a height of 0.85 m above the floor, and the cell radius R is 4 m. The FOV ψ_C and the semi-angle ϕ_1/2 are fixed to 72° and 50°, respectively, and the modulation index ζ is 0.5. The refractive index n is 1.5 and the transmitted optical power P_o is 10 W, whilst the modulation bandwidth B is configured to 10 MHz. Regarding the PDs, the detection area and the optical-electrical responsivity are 1 cm² and 0.53 A/W, respectively. For brevity, the main simulation parameters are listed in Table 1. We assume that the users are uniformly distributed with stationary positions. Additionally, the OMA strategy with equal power allocation is evaluated for performance comparison.
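For convenience, the parameters listed above can be collected in a single configuration object, as in the sketch below (the dictionary keys are our own naming; the values are those reported in the text and Table 1).

```python
# Main simulation parameters from the text and Table 1 (key names are illustrative).
SIM_PARAMS = {
    "num_leds": 2, "pds_per_user": 2,
    "led_spacing_m": 1.0, "pd_spacing_m": 0.04,
    "rx_plane_height_m": 0.85, "cell_radius_m": 4.0,
    "fov_deg": 72.0, "led_semiangle_deg": 50.0,
    "modulation_index": 0.5, "refractive_index": 1.5,
    "tx_optical_power_w": 10.0, "modulation_bandwidth_hz": 10e6,
    "pd_area_cm2": 1.0, "pd_responsivity_a_per_w": 0.53,
}
```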
Comparisons of Achievable Rates with Different User Numbers
For K = 2, Figure 3 demonstrates the achievable data rate provided by each LED at different relative distances. It can be seen that the data rates of the two methods exhibit minimal difference when the relative distance is below 1.6 m. As the relative distance ranges from 1.6 m to 2.4 m, the data rates of each LED diminish rapidly. Nonetheless, we find that the reduction in data rates is relatively moderate when using the proposed EGDPA. For instance, as the distance grows from 2 m to 2.4 m, the rate for L_2 using the proposed EGDPA decreases from 56.9 Mbit/s to 54.9 Mbit/s, while that of NGDPA drops more noticeably, from 55.6 Mbit/s to 53.1 Mbit/s. When the relative distance exceeds 2.8 m, the rate of L_2 rebounds and stabilizes at about 57.7 Mbit/s, while that of L_1 declines greatly due to its poor channel conditions. The proposed scheme achieves better results than the NGDPA method, albeit by a small margin.
Figure 4 compares the achievable sum rate with different power allocation schemes. The results demonstrate that as the relative distance rises, the sum rate for OMA and GRPA decreases dramatically, whereas only a slight performance degradation is introduced in the proposed scheme. When the relative distance is less than 1.6 m, NGDPA, MFPA [25], and the proposed scheme completely overlap. However, after the relative distance exceeds 1.6 m, the rates of NGDPA and MFPA fluctuate below that of the proposed scheme. For instance, for K = 2, all three schemes reach their minimum sum rate at a relative distance of 2.4 m; nevertheless, the proposed scheme still attains 106.9 Mbit/s, surpassing the 102.7 Mbit/s and 104.4 Mbit/s achieved by MFPA and NGDPA, respectively. For K = 3, at relative distances of 2 m and 4 m, the sum rate of NGDPA remains relatively low, achieving approximately 105 Mbit/s and 104.9 Mbit/s, respectively. Nonetheless, the proposed scheme achieves better rates of 111.7 Mbit/s and 109.7 Mbit/s at these distances. Compared with alternative methods, the proposed scheme displays notable resistance to interference, indicating the robustness of the system. The gain in sum rate obtained by the proposed EGDPA over the traditional NGDPA is demonstrated in Figure 5. The results clearly show that this gain increases substantially when the number of users increases from 2 to 3. Compared with NGDPA at r = 2 m, the proposed scheme improves the data rate by 2.12% and 6.25% for K = 2 and 3, respectively. Furthermore, when the relative distance is set to 4 m, the proposed scheme achieves a gain of 4.58% for K = 3, since the furthest user is at the edge of the cell. The aforementioned analysis substantiates the effectiveness of the proposed scheme.
Figure 6 shows the performance comparison of the achievable sum rate under different numbers of served users. Notably, the GRPA, IFS [19], and NGDPA methods employ a format dependent on the channel gain, while the MFPA method uses a modified fixed allocation format. The figure clearly shows that all schemes can achieve an excellent data rate when serving a small number of users. As K rises, the achievable data rates provided by GRPA, IFS, NGDPA, and MFPA decline sharply, while the proposed scheme still achieves rather stable performance. The main reason for this is that the adjacent power allocation factors of the proposed scheme differ more from one another than those of the other schemes when the number of users is increased. Additionally, the proposed EGDPA scheme improves the sum rate by 25.2% with 30 users.
The Impact of Residual Interference and Modulation Bandwidth
As shown in Figure 7, we investigated the impact of the residual interference factor κ on the sum rate when using the proposed scheme. The values of κ were set to 0, 0.0001, 0.001, and 0.01. As κ increases, the rate of the proposed scheme declines. Consequently, it becomes apparent that the residual interference, which is not fully eliminated during the SIC process, significantly hampers the system performance. For instance, when κ is 0, the sum rate of 10 users in the illuminated area is approximately 113.1 Mbit/s. Nevertheless, as κ rises to 0.0001, 0.001, and 0.01, the sum rate decreases to 109.8 Mbit/s, 94.8 Mbit/s, and 66.1 Mbit/s, respectively. Figure 8 illustrates the impact of the transmission power and modulation bandwidth on the sum rate performance of the proposed scheme. The sum rate of the proposed scheme is positively correlated with transmission power when the modulation bandwidth is fixed. As the signal power increases, the additive noise power also increases, but to a lesser extent, resulting in an improved SNR and subsequently a higher rate. Furthermore, increasing the modulation bandwidth also leads to an increase in rate at a fixed transmission power. Additionally, with the increase in modulation bandwidth, the impact of transmission power on the rate of the proposed scheme becomes more prominent. For instance, for a required system sum rate of 100 Mbit/s, the power consumption is 6 W at a modulation bandwidth of 10 MHz, whereas it reduces to 1.3 W when the bandwidth is increased to 20 MHz.
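As an illustration of how a residual interference factor of this kind is usually modelled (the paper's exact rate expression is not repeated in this section, so the placement of κ below is an assumption), imperfect SIC leaves a fraction κ of the already-cancelled users' power as interference:

```python
def sinr_after_sic(user_idx, allocated_powers, channel_gain, noise_power, kappa):
    """SINR of user `user_idx`, assuming users are listed in decoding order:
    users 0..user_idx-1 have already been decoded and (imperfectly) cancelled,
    while users user_idx+1.. are still treated as interference."""
    signal = allocated_powers[user_idx] * channel_gain ** 2
    residual = kappa * sum(allocated_powers[:user_idx]) * channel_gain ** 2   # imperfect SIC
    not_yet_decoded = sum(allocated_powers[user_idx + 1:]) * channel_gain ** 2
    return signal / (residual + not_yet_decoded + noise_power)

# kappa = 0 recovers perfect SIC; larger kappa lowers the SINR and hence the
# achievable rate, matching the trend observed in Figure 7.
```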
Comparisons of Energy Efficiency
The performance of energy efficiency in the two-user and three-user scenarios is shown in Figure 9. The proposed scheme demonstrates superior energy efficiency in both scenarios when compared to GRPA, NGDPA, and MFPA. Furthermore, the energy efficiency of the proposed scheme improves markedly as the number of users grows, while those of the GRPA and NGDPA methods exhibit a decline. As K grows from 2 to 3, the energy efficiency obtained by GRPA decreases significantly, by at least 8.14%. At the same time, that of NGDPA increases initially but subsequently decreases, e.g., by 3.85% when the transmission power is 15 W. The main reason for this is that the channel differences are decreased in the three-user scenario. Nevertheless, the proposed scheme can better utilize user channel information to address power imbalances among users. Hence, the proposed scheme is suitable for energy-constrained conditions.
Error Probability
The error probability achieved by the proposed scheme in the two-user scenario is illustrated in Figure 10. For fixed user locations, the system's error probability decreases markedly as the LED optical power increases. Specifically, U 1 (i.e., the distant user) is allocated more power, resulting in better error performance. Despite the better channel conditions of U 2, its BER performance is slightly inferior because it receives less power. To a certain extent, this indicates the fairness of the proposed scheme. Furthermore, for a given optical power, the error performance improves considerably as the relative distance between the users rises. For instance, at an error probability of 10⁻³, the required optical power decreases from 2.7 W to 1.3 W as the relative distance r grows from 0.8 m to 1 m. This is due to the symmetrical geometric positioning of the two users around L 2 when r is 1 m, resulting in power resource savings. Overall, the proposed scheme achieves superior BER performance, indicating its reliability.
Comprehensive Analysis
Table 2 presents a comprehensive comparison of the different schemes. The proposed scheme outperforms the other schemes in terms of sum rate and energy efficiency. For example, the proposed scheme achieves a maximum sum rate gain of 36.26% over the GRPA scheme, 25.09% over the NGDPA scheme, and 10.87% over the MFPA scheme. Additionally, when the total transmission power of the three users is 15 W, the proposed scheme achieves an energy efficiency gain of 23.44% compared to GRPA, 5.68% compared to NGDPA, and 1.71% compared to MFPA. The proposed scheme's performance gain increases with the number of users, as its residual allocation principle effectively reduces the inter-user interference caused by multiple users.
Conclusions
In this paper, an enhanced gain difference power allocation (EGDPA) scheme has been proposed to improve the sum rate of a multi-user NOMA-based MIMO-VLC system; the scheme adapts to user channel conditions and efficiently utilizes the gain difference. Efficient power allocation is achieved by utilizing the residual allocation principle, which emphasizes the power that remains available after allocation to the preceding users rather than the initially assigned power. Furthermore, an assessment of performance metrics such as the achievable data rate, energy efficiency, and BER was conducted. The numerical results demonstrate that the interference in SIC can be effectively alleviated and that the proposed scheme can achieve a significant performance improvement in terms of both sum rate and energy efficiency over the traditional schemes. On the other hand, the proposed scheme requires more iterative operations than the alternative schemes, so the consumption of hardware resources is slightly higher. In the scenario of a random geometric distribution of users, there may be users with the same channel gain; however, the sorting criterion for this case has not been taken into account. In the future, we plan to extend the proposed EGDPA scheme to multi-cell scenarios with NLOS links. To better accommodate practice, we will consider adaptive SNR requirements and user association mode selection, and the proposed EGDPA scheme will be adjusted and optimized to adapt to various system requirements.
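A minimal sketch of the residual allocation principle summarized above is given below; the per-user allocation factors are placeholders, since their gain-difference-based definition in EGDPA is not reproduced in this section.

```python
def residual_power_allocation(total_power, allocation_factors):
    """Sequentially assign each user a fraction of the power *remaining* after the
    preceding users have been served (residual allocation), rather than a fraction
    of the total power."""
    powers, remaining = [], total_power
    for factor in allocation_factors:
        p_k = factor * remaining
        powers.append(p_k)
        remaining -= p_k
    return powers

# Illustrative three-user example; the final factor of 1.0 assigns the leftover power.
print(residual_power_allocation(10.0, [0.6, 0.5, 1.0]))  # -> [6.0, 2.0, 2.0]
```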
Figure 1 .
Figure 1. Indoor NOMA-based MIMO-VLC system with I transmitters and K users, where each receiver is equipped with J PDs.
Figure 2 .Algorithm 1
Figure 2. Block diagram of a MIMO-VLC system with the proposed power allocation scheme.
Figure 3 .
Figure 3. Achievable rate for each LED with two users (K = 2).
Figure 4 .
Figure 4. Achievable sum rate with two and three users (K = 2 and 3).
Figure 5 .
Figure 5. Sum rate gain of the proposed scheme over NGDPA.
Figure 6 .
Figure 6. Comparison of the achievable sum rate under different numbers of users.
Figure 7 .
Figure 7. Comparison of the achievable data rate for different numbers of users with various values of κ.
Figure 8 .
Figure 8. Impact of the transmission power and modulation bandwidth for the proposed scheme.
Figure 9 .
Figure 9. Energy efficiency comparison for different power allocation schemes.
Figure 10 .
Figure 10. Error probability of the proposed scheme with increasing optical power between two users.
Table 2 .
Table 2. Comprehensive comparison of different schemes. * The values (Mbit/s/W) were measured when the transmission power was 15 W. | 8,821.4 | 2024-02-16T00:00:00.000 | [
"Engineering",
"Computer Science"
] |
Plasmodium vivax Cell-Traversal Protein for Ookinetes and Sporozoites: Naturally Acquired Humoral Immune Response and B-Cell Epitope Mapping in Brazilian Amazon Inhabitants
The cell-traversal protein for ookinetes and sporozoites (CelTOS), a highly conserved antigen involved in sporozoite motility, plays an important role in the traversal of host cells during the preerythrocytic stage of Plasmodium species. Recently, it has been considered an alternative target when designing novel antimalarial vaccines against Plasmodium falciparum. However, the potential of Plasmodium vivax CelTOS as a vaccine target is yet to be explored. This study evaluated the naturally acquired immune response against a recombinant P. vivax CelTOS (PvCelTOS) (IgG and IgG subclass) in 528 individuals from Brazilian Amazon, as well as the screening of B-cell epitopes in silico and peptide assays to associate the breadth of antibody responses of those individuals with exposition and/or protection correlates. We show that PvCelTOS is naturally immunogenic in Amazon inhabitants with 94 individuals (17.8%) showing specific IgG antibodies against the recombinant protein. Among responders, the IgG reactivity indexes (RIs) presented a direct correlation with the number of previous malaria episodes (p = 0.003; r = 0.315) and inverse correlation with the time elapsed from the last malaria episode (p = 0.031; r = −0.258). Interestingly, high responders to PvCelTOS (RI > 2) presented higher number of previous malaria episodes, frequency of recent malaria episodes, and ratio of cytophilic/non-cytophilic antibodies than low responders (RI < 2) and non-responders (RI < 1). Moreover, a high prevalence of the cytophilic antibody IgG1 over all other IgG subclasses (p < 0.0001) was observed. B-cell epitope mapping revealed five immunogenic regions in PvCelTOS, but no associations between the specific IgG response to peptides and exposure/protection parameters were found. However, the epitope (PvCelTOSI136-E143) was validated as a main linear B-cell epitope, as 92% of IgG responders to PvCelTOS were also responders to this peptide sequence. This study describes for the first time the natural immunogenicity of PvCelTOS in Amazon individuals and identifies immunogenic regions in a full-length protein. The IgG magnitude was mainly composed of cytophilic antibodies (IgG1) and associated with recent malaria episodes. The data presented in this paper add further evidence to consider PvCelTOS as a vaccine candidate.
Keywords: PvCelTOS, P. vivax, vaccines, epitope mapping, epitope prediction, malaria vaccines, malaria
Introduction
Malaria remains a major public health problem worldwide. It is caused by protozoan parasites of the genus Plasmodium, being responsible for nearly 438,000 deaths and 150-300 million new infections in 2015 (1) and for an enormous socioeconomic impact in endemic settings (2). Among the Plasmodium species able to infect humans, Plasmodium falciparum and Plasmodium vivax are the most prevalent malaria parasites. P. falciparum is extremely prevalent in Africa and is responsible for the majority of cases and deaths worldwide, while P. vivax is the most prevalent species outside Africa (3). Despite the reduction in the number of malaria cases and deaths over the past decade (1), the emergence of drug resistance and the significant ongoing burden of morbidity and mortality emphasize the need for an effective malaria vaccine. Unfortunately, potential P. vivax vaccine candidates lag far behind those for P. falciparum (4). Currently, besides the RTS,S vaccine, there are 30 candidate vaccine formulations in clinical trials against P. falciparum, while there is only one against P. vivax (5). These data, allied to the impact caused by the high P. vivax prevalence (2), the severity of the disease (6)(7)(8)(9)(10)(11), and the emergence of strains resistant to chloroquine (12)(13)(14) and primaquine (15)(16)(17), reiterate the importance of identifying and exploring the potential of vaccine candidates against P. vivax as an essential step in the development of a safe and affordable vaccine.
Malaria liver-stage vaccines are one of the leading strategies and the only approach that has demonstrated complete, sterile protection in clinical trials. Therefore, vaccines targeting sporozoite and liver-stage parasites, when parasite numbers are low, can lead to the elimination of the parasite before it advances to the symptomatic stage of the disease (18). Corroborating this idea, the sterile protection against P. falciparum by immunization with radiation-attenuated sporozoites was demonstrated in several studies (19)(20)(21) and the protection lasted for at least 10 months and extended to heterologous strain parasites (22). Based on these findings, sporozoite surface antigens are one of the most promising vaccine targets against malaria, to protect and prevent the symptoms and block its transmission. To date, RTS,S, the subunit vaccine consisting of a portion of P. falciparum circumsporozoite protein (CSP), conferred partial protection in Phase III trials and fell short of community-established vaccine efficacy goals (23)(24)(25)(26). Conversely, Gruner and collaborators have demonstrated that the sterile protection against sporozoites can be obtained in the absence of specific immune responses to CSP (27). In addition, a recent study found 77 parasite proteins associated with sterile protection against irradiated sporozoites (28). Collectively, these data reinforce the concept that a multivalent anti-sporozoite vaccine targeting several surface-exposed antigens would induce a higher protection efficacy.
In this scenario, the cell-traversal protein of Plasmodium ookinetes and sporozoites, a highly conserved protein among Plasmodium species, emerged as a novel target in the development of a vaccine against Plasmodium parasites (29). This secretory microneme protein is translocated to the sporozoite and ookinete surface and is necessary for sporozoites and ookinetes to break through cellular barriers and establish infection in the new host, playing a crucial role in the cell-traversal ability of both stages (29,30). The disruption of the genes encoding CelTOS in Plasmodium berghei reduces the infectivity in the mosquito host and also the infectivity of the sporozoite in the liver, almost eliminating their cell-traversal ability (29). Interestingly, P. falciparum CelTOS (PfCelTOS) was naturally recognized by acquired antibodies in exposed populations (31) and was able to induce cross-reactive immunity against P. berghei and inhibit sporozoite motility and invasion of hepatocytes in vitro (32). However, the knowledge about P. vivax CelTOS (PvCelTOS) has remained limited. Only recently, a study reported PvCelTOS as naturally immunogenic in infected individuals from Western Thailand. Our group, investigating the genetic diversity of genes encoding PvCelTOS in field isolates from five different regions of the Amazon forest, revealed a highly conserved profile. Together, both findings support the potential of PvCelTOS as an interesting target on the P. vivax sporozoite surface, but further studies are still necessary to consolidate this protein as an alternative in future multitarget vaccines. Therefore, the present study aimed at evaluating the naturally acquired humoral immune response against PvCelTOS in exposed populations from the Brazilian Amazon, determining the antibody subclass profile, identifying its B-cell epitopes, and verifying the existence of associations between the specific IgG and subclass response against PvCelTOS and epidemiological data that can reflect the degree of exposure and/or protection.
Participants and Methods
Study Area and Volunteers
A cross-sectional cohort study was conducted involving 528 individuals from Rio Preto da Eva (2°50′50″S/59°56′28″W), located north of the Amazon River and 80 km from Manaus, the capital of Amazonas state. This city has an area of 6,000 km² and a population of about 22,000 people, who live in rural areas inside the forests. Transmission of malaria in the Amazon occurs throughout the whole year, with seasonal fluctuations; maximum transmission occurs during the dry season from May to October, and infections by P. vivax predominate, being responsible for more than 85% of reported malaria cases.
Samples and survey data were collected from November 2013 to March 2015. In addition, we also included, as control subjects, 10 naive individuals living in Manaus, and with no reported previous malaria episodes. Written informed consent was obtained from all adult donors or from parents of donors in the case of children. The study was reviewed and approved by the Fundação Oswaldo Cruz Ethical Committee and the National Ethical Committee of Brazil.
Epidemiological Survey
In order to evaluate the possible influence of epidemiological factors on humoral immunity against PvCelTOS, all donors were interviewed upon informed consent prior to blood collection. The survey included questions related to personal exposure to malaria, such as years of residence in the endemic area, recorded individual and family previous malaria episodes, use of malaria prophylaxis, presence/absence of symptoms, and personal knowledge of malaria transmission. All epidemiological data were stored in Epi-Info for subsequent analysis (Centers for Disease Control and Prevention, Atlanta, GA, USA).
Malaria Diagnosis and Blood Sampling
Venous peripheral blood was drawn into heparinized tubes and plasma was collected after centrifugation (350 × g, 10 min). Plasma samples were stored at −20°C and transported to our laboratory. Thin and thick blood smears of all donors were examined for malaria parasites. Parasitological evaluations were done by examination of 200 fields at 1,000× magnification under oil immersion, and a research expert in malaria diagnosis examined all slides. Donors positive for P. vivax and/or P. falciparum at the time of blood collection were subsequently treated using the chemotherapeutic regimen recommended by the Brazilian Ministry of Health.
Recombinant PvCelTOS Expression in HEK-293T Cells
As previously described (33), the P. vivax sequence for CelTOS (Salvador I; UniProt accession number Q53UB7) was cloned in the expression vector pHLsec, which is flanked by the chicken β-actin/rabbit β-globin hybrid promoter with a signal secretion sequence and a Lys-His6 tag. The protein was expressed upon transient transfection of HEK-293T cells with endotoxin-free plasmids in roller bottles (2,125 cm²). The secreted protein was purified from the supernatant by immobilized Ni Sepharose affinity chromatography. The presence of proteins in the elution samples was confirmed using a 6xHis epitope tag antibody [horseradish peroxidase (HRP) conjugate] in a Western blot. The sample was concentrated using an Amicon Ultra centrifugal filter system (Life Technologies) to a final volume of 10 ml. Contaminant proteins and salts were removed from the concentrate by size-exclusion chromatography (SEC) on a Superdex column. Protein concentration after recovery was tested using a Bradford protein assay, and purity was assessed by silver staining and by Western blotting.
Antibody Assays
Anti-PvCelTOS-specific antibodies were evaluated in plasma samples from 528 exposed individuals from the Brazilian Amazon and 10 healthy individuals who had no reported malaria episodes, using an enzyme-linked immunosorbent assay (ELISA), essentially as previously described (33,34). Briefly, MaxiSorp 96-well plates (Nunc, Rochester, NY, USA) were coated with PBS containing 1.5 µg/ml of recombinant protein. After overnight incubation at 4°C, the plates were washed and blocked for 1 h at 37°C. Individual plasma samples diluted 1:100 in PBS-Tween containing 5% non-fat dry milk (PBS-Tween-M) were added in duplicate wells. After 1 h at 37°C and three washes with PBS-Tween, bound antibodies were detected with peroxidase-conjugated goat antihuman IgG (Sigma, St. Louis), followed by the addition of o-phenylenediamine and hydrogen peroxide. Optical density was read at 492 nm using a SpectraMax 250 ELISA reader (Molecular Devices, Sunnyvale, CA, USA). The results for total IgG were expressed as reactivity indexes (RIs), calculated as the mean optical density of an individual's tested sample divided by the mean optical density of the 10 non-exposed control individuals' samples plus 3 standard deviations. Subjects were scored as responders to PvCelTOS if the RI of IgG against the recombinant protein was higher than 1. Additionally, the RIs of IgG subclasses were evaluated in responder individuals by a similar method, using peroxidase-conjugated goat antihuman IgG1, IgG2, IgG3, and IgG4 (Sigma, St. Louis).
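The reactivity index and the responder categories used later in the paper can be summarized in a short sketch; the OD values below are illustrative, and the handling of RI values exactly equal to 1 or 2 follows the definitions given in the text.

```python
import statistics

def reactivity_index(sample_ods, control_ods):
    """RI = mean OD of the tested sample / (mean OD of non-exposed controls + 3 SD)."""
    cutoff = statistics.mean(control_ods) + 3 * statistics.stdev(control_ods)
    return statistics.mean(sample_ods) / cutoff

def classify_responder(ri):
    # Non-responder: RI < 1; low responder: 1 <= RI <= 2; high responder: RI > 2.
    if ri < 1:
        return "non-responder"
    return "high responder" if ri > 2 else "low responder"

controls = [0.10, 0.12, 0.09, 0.11, 0.10, 0.13, 0.08, 0.10, 0.11, 0.12]  # 10 control sera
print(classify_responder(reactivity_index([0.55, 0.57], controls)))
```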
B-Cell Epitope Prediction on PvCelTOS
The prediction of linear B-cell epitopes was carried out using the program BepiPred (35), which is based on hidden Markov model profiles of known antigens and also incorporates hydrophilicity and secondary structure prediction. For each input FASTA sequence, the server outputs a prediction score for each amino acid. The recommended cutoff of 0.35 was used to determine potential linear B-cell epitopes, ensuring a sensitivity of 49% and a specificity of 75% for this approach. Linear B-cell epitopes are predicted to be located at the residues with the highest scores. In this study, BepiPred was used to predict linear B-cell epitopes and to evaluate the prediction value of peptides containing short amino acid sequences of PvCelTOS.
The Emini surface accessibility (ESA) was used to evaluate the probability of predicted linear B-cell epitopes to be exposed on the surface of the protein. This approach calculates the surface accessibility of hexapeptides and values greater than 1.0 indicate an increased probability of being found on the surface (36). Sequences with BepiPred score above 0.35 and ESA score above 1.0 were considered potential linear B-cell epitopes in regions that could be accessed by naturally acquired antibodies.
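A minimal sketch of the combined selection rule is shown below; for simplicity it treats both predictors as per-residue scores aligned to the sequence (ESA is actually computed over hexapeptides) and requires candidate stretches of at least six residues, both of which are simplifying assumptions.

```python
def candidate_epitopes(sequence, bepipred_scores, esa_scores,
                       bepipred_cutoff=0.35, esa_cutoff=1.0, min_length=6):
    """Return (start, end, subsequence) stretches where BepiPred > 0.35 and ESA > 1.0,
    assuming the three inputs are aligned and of equal length."""
    hits, start = [], None
    for i, (b, e) in enumerate(zip(bepipred_scores, esa_scores)):
        if b > bepipred_cutoff and e > esa_cutoff:
            if start is None:
                start = i
        else:
            if start is not None and i - start >= min_length:
                hits.append((start, i - 1, sequence[start:i]))
            start = None
    if start is not None and len(sequence) - start >= min_length:
        hits.append((start, len(sequence) - 1, sequence[start:]))
    return hits
```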
B-Cell Epitope Mapping of PvCelTOS
A peptide library of 32 PvCelTOS synthetic 15-mer peptides overlapping by nine amino acids (GenOne Biotechnologies; purity 95% based on HPLC) was synthesized. To evaluate the specific (39) was used to solvate the system. Charges were neutralized using Na+ and Cl− ions. Steepest descent method was used for energy minimization. Further, 100 ps temperature equilibration was carried out at a temperature of 300 K in the presence of position restraints of 1,000 KJ/mol and the pressure coupling of 1,000 ps at 1 bar of atmospheric pressure. After equilibration, the simulation of 200,000 ps (200 ns) without position restraints was carried out. All simulations were run three times, and consistent results were recorded. RMSF was analyzed from simulation trajectory using GROMACS utilities. The Electrostatic potential surface for the PvCelTOS was calculated using APBS (40) and visualized in PyMOL (Pymol LLC) and the electrostatic potential surfaces for the contours from −3kT/e (red) to +3kT/e (blue) were visualized. The figures were rendered using PyMol.
Statistical Analysis
All statistical analyses were carried out using Prism 5.0 for Windows (GraphPad Software, Inc.).
Results
Epidemiological Profile of Studied Individuals
Most studied individuals were adults who had been naturally exposed to malaria infection throughout the years (Table 1). Age ranged from 10 to 89 years, with an average of 36.9. The proportion of men (53.8%) was significantly higher than that of women (46.2%; χ² = 5.761, p < 0.0164). Regarding the previous personal history of malaria, only seven individuals reported no malaria episode (1.3%). Among those who remembered the Plasmodium species, the majority (29.9%) reported infections by both P. falciparum and P. vivax. The number of past malaria episodes also varied greatly among donors, ranging from 0 to 50 (mean = 7.74 ± 16.5). Finally, the time elapsed since the last malaria episode ranged from 0 to 480 months (mean = 71.7 ± 77.9). Interestingly, a correlation trend was observed between the time of residence in the endemic area and the number of previous malaria infections (p = 0.0003; r = 0.153). Collectively, the epidemiological inquiry indicated that the studied population had different degrees of exposure and/or immunity. Ninety-four individuals (17.8%) presented specific IgG antibodies against the protein. Interestingly, the epidemiological data were similar between responders and non-responders (NRs) to this protein (Table 1). In both groups, responders and NRs, the age, time of residence in the endemic area, number of previous malaria episodes, number of recent malaria episodes, frequency of individuals with recent malaria episodes, and months elapsed from the last malaria episode were similar (p > 0.05).
High IgG RIs against PvCelTOS Are Driven by Cytophilic Antibodies and Associated with Recent Infections
In order to identify possible factors that could be associated with this large spectrum of reactivity against PvCelTOS in IgG-positive individuals, we explored the epidemiological data among responders. Initially, we observed that the RI against PvCelTOS was directly correlated with the number of previous malaria episodes (p = 0.047; r = 0.227; Figure S1C in Supplementary Material) and inversely correlated with the time elapsed from the last malaria episode (p = 0.045; r = −0.24; Figure S1D in Supplementary Material). Based on these findings, responder individuals were divided into two subgroups: high responders (HRs; individuals with an RI of IgG against PvCelTOS higher than 2) and low responders (LRs; individuals with an RI of IgG against PvCelTOS between 1 and 2). Figure 2A illustrates the means of the epidemiological parameters of HRs, LRs, and NRs to PvCelTOS. Interestingly, while NRs and LRs presented very similar epidemiological profiles, HRs presented a statistically higher number of previous malaria episodes in comparison to NRs and LRs (p = 0.0058 and p = 0.0051, respectively). Moreover, although no statistically significant difference was observed in the time elapsed from the last malaria episode (p = 0.15, ANOVA), the frequency of individuals who reported recent episodes of malaria was higher in HRs (41.6%) than in LRs (12%, p = 0.02) and NRs (13.1%, p = 0.016). Moreover, the ratio of the RIs of cytophilic over non-cytophilic antibodies (IgG1 + IgG3/IgG2 + IgG4) presented a direct correlation with the RI of IgG of responder individuals (p = 0.0016; r = 0.32), suggesting that a higher RI could be associated with a cytophilic profile of the humoral response against PvCelTOS. Interestingly, although the proportion of individuals with a cytophilic profile was similar in both groups, HRs and LRs (83% and 78%, respectively), the ratio of cytophilic/non-cytophilic antibodies was significantly higher in HRs than in LRs (p = 0.0076) (Figure 2B).
Five Immunogenic Regions Identified in PvCelTOS and Two Linear B-Cell Epitopes Broadly Recognized by Naturally Acquired IgG Antibodies
Four B-cell linear epitopes were predicted in silico in the entire sequence of PvCelTOS (PvCelTOSK6-N13; PvCelTOSG38-R57; PvCelTOSI136-E143; PvCelTOSK166-S191).
In order to validate the prediction data and identify possible non-predicted immunogenic regions of PvCelTOS, plasma from IgG responders to PvCelTOS was tested against 32 overlapping peptides corresponding to the complete amino acid sequence. First, 10 peptides (N13-L27; S19-V33; E73-I87; L79-K93; S97-A111; P127-V141; I133-G147; P139-V153; L181-L195; E182-D196) were broadly recognized by responders to PvCelTOS (Figure 3). Two of the predicted epitopes (PvCelTOSI136-E143 and PvCelTOSK166-S191) were present (partially or entirely) in peptides confirmed as naturally immunogenic. Interestingly, peptides I133-G147 and E182-D196 were recognized by specific IgG antibodies of responders to PvCelTOS at frequencies higher than 50% (92% and 54%, respectively) and presented a median RI higher than 1 (1.79 and 1.14, respectively). In addition, peptides P127-V141, P139-V153, and L181-L195 were located beside the most immunogenic peptides and presented overlapping sequences, which were also recognized by IgG antibodies at moderate frequencies. Peptide I133-G147 (ASTIKPPRVSEDAYF) presented the highest IgG RI (p < 0.0001, ANOVA) and the highest frequency of recognition (92%) compared to all other peptides. While it contains the entire sequence of the predicted epitope PvCelTOS136-143, peptides P127-V141 and P139-V153, which contain only a partial sequence of the predicted epitope, presented lower frequencies of recognition (38% and 39%, respectively; p < 0.0001, Fisher's exact test). The peptides L181-L195 and 186-196 were both partially inserted in the predicted linear epitope PvCelTOS166-191 and could constitute the immunodominant sequence of this longer predicted epitope. These data supported the prediction of the linear B-cell epitopes PvCelTOSI136-E146 and PvCelTOSK166-S191. Conversely, peptides N13-L27, S19-V33, and S97-A111 also presented frequencies of recognition of about 40% (38%, 40%, and 36%, respectively). After the confirmation of five immunogenic regions and two immunodominant epitopes in PvCelTOS, we also compared the RIs and frequencies between HRs and LRs for PvCelTOS; however, no differences were found.
Main B-Cell Epitopes Are Present on the PvCelTOS Surface
Peptides that presented overlapped amino acids and were recognized by more than 20% of responders to PvCelTOS (Figure 3) were grouped as immunogenic regions. All peptides inserted in identified immunogenic regions are listed in Table 2 with their respective frequencies of recognition, BepiPred and ESA scores. In this context, we identified five immunogenic regions PvCelTOSN13-V33, PvCelTOSE73-K93, PvCelTOSS97-A111, PvCelTOSP127-V153, and PvCelTOSL181-D196, in which B-cell epitopes could be inserted. Interestingly, the peptides with higher frequency of specific responders (I133-G147, L181-L195, and 182-186) presented a good combination of BepiPred and ESA score. The molecular dynamics and electrostatic potential surface of PvCelTOS indicate regions P127-V153, N13-V33, and L181-D186 as more flexible than E73-K93 and S97-A111 (Figure 4A). Regarding solvent exposure, all immunogenic regions were exposed and accessible in solution. Interestingly, the immunogenic regions L181-D196 and E73-K93 are part of a very negatively charged region, while N13-V33 and P127-V153 are in a mostly neutral-positive region ( Figure 4B).
Discussion
Despite significant advances in the understanding of the biology of Plasmodium parasites and the immune response elicited by these pathogens, there is not yet a subunit vaccine capable of providing long-lasting protection. The cell-traversal protein for ookinetes and sporozoites (CelTOS) has been considered a potential novel alternative for a vaccine against malaria (29,32,41), but knowledge of the potential of P. vivax CelTOS remains scarce. Unfortunately, many conventional vaccinology strategies applied to P. falciparum are especially difficult when dealing with non-cultivable microorganisms such as P. vivax. Consequently, seroepidemiological studies have played a significant role in the identification and validation of P. vivax vaccine candidates (42)(43)(44)(45)(46)(47)(48). Therefore, we confirmed the naturally acquired humoral response against PvCelTOS (IgG and IgG subclasses) and identified five B-cell epitope regions along the entire PvCelTOS amino acid sequence, which were recognized by IgG antibodies from malaria-exposed populations from the Brazilian Amazon. Plasma samples were collected in three cross-sectional studies with Brazilian Amazon communities between 2013 and 2015. The profile of the studied individuals shows that our population included rainforest region natives and migrants from non-endemic areas of Brazil who had lived in the area for more than 10 years. The majority of individuals reported prior experience with P. vivax and/or P. falciparum malaria. Concerning malaria history, the highly variable range of the number of previous infections, time of residence in endemic areas, and time since the last infection suggests differences in exposure and immunity, since it is well known that the acquisition of clinical immunity mediated by antibodies depends on continued exposure to the parasite (49)(50)(51). The correlation between time of residence in endemic areas and months since the last infection observed in our study also indicates that this phenomenon could be occurring in low/medium endemicity areas like the Brazilian Amazon. Therefore, the selection of these individuals was ideal to detect the presence of antibodies against the new recombinant antigen and to distinguish whether the alterations found were related to malaria exposure and/or indicative of protection.
First, we found 94 individuals presenting specific antibodies to PvCelTOS and confirmed the natural immunogenicity of PvCelTOS among exposed individuals from the Brazilian Amazon. Recently, Longley and collaborators also reported the first evidence of naturally induced IgG responses to PvCelTOS in human volunteers from Western Thailand (33). Interestingly, the frequency of responders to PvCelTOS observed in our studied population (17.8%) was similar to the frequency observed by Longley in uninfected and clinical malaria individuals (33). Moreover, low humoral reactivity such as that observed against PvCelTOS is commonly found for other Plasmodium preerythrocytic antigens (48,52,53). The short life of specific antibodies, host genetic factors, and/or epidemiological parameters could be possible reasons for the low frequency of responders against PvCelTOS in endemic areas. The hypothesis of a short-lived PvCelTOS-specific humoral response does not seem to hold, since Longley et al. verified that IgG positivity and the magnitude of the response persisted over a 1-year period in the absence of P. vivax infections (33). Our study also describes anti-PvCelTOS antibodies in individuals who reported no malaria in the last 10 years or more. However, in both cases, the contact between the human host and sporozoite antigens in transmission areas was not evaluated. In relation to host genetic factors, there is a significant body of evidence of their influence on malaria outcomes and on the capacity to mount a humoral immune response (54)(55)(56)(57). To date, associations of HLA class II with the humoral immune response to malaria antigens have been reported in individuals living in malaria-endemic areas of the Brazilian Amazon (58,59) and in human vaccine trials (60)(61)(62). For P. vivax preerythrocytic targets, the presence of HLA-DRB1*03 and DR5 was associated with the absence of an antibody response to the CSP amino-terminal region (48), and HLA-DRB1*07 was related to the absence of specific antibodies to the CSP repeats of VK210 (52). Moreover, Chaves and collaborators reported that the PvCelTOS gene sequence is highly conserved among isolates from different Brazilian geographic regions (unpublished data), suggesting low selective pressure by the immune response against PvCelTOS. In our view, the influence of immunogenetic factors on the PvCelTOS-specific humoral response is plausible, but more studies are still necessary to confirm this hypothesis.
Regarding the influence of epidemiological factors, we initially tried to investigate the associations between exposure to malaria and the frequency of IgG responders to PvCelTOS. Surprisingly, although the association of epidemiological data with the specific response against Plasmodium antigens has been well characterized in several studies (63-65), we observed a similar epidemiological profile between responders and NRs to PvCelTOS. Therefore, we focused on the search for distinct epidemiological and IgG subclass profiles among PvCelTOS responder individuals. Knowledge about the antibody subclass profile is critical to suggest functional antimalarial immunity and to evaluate potential vaccine candidates. Cytophilic antibodies (IgG1 and IgG3) are frequently prevalent in immune sera from high-transmission areas (66)(67)(68)(69) and often correlate with protection from disease (70)(71)(72). In our study, IgG1 presented higher frequencies of responders and a higher median RI than all other subclasses. Moreover, IgG3 RIs were directly associated with the number of malaria episodes over the last 12 months and inversely correlated with the time elapsed from the last malaria episode, suggesting that recent P. vivax infections can raise the levels of anti-PvCelTOS-specific IgG3. Sterile protective immunity to malaria was recently associated with a panel of antigens (28), and the relationship between cytophilic antibodies and a reduced risk of symptoms is a common finding in highly endemic areas (70)(71)(72)(73)(74). However, in our study, considering the higher levels of IgG1 against PvCelTOS and the association of IgG3 levels with recent infections, we cannot confirm or discard their role as part of a protective humoral response until more conclusive studies, such as sporozoite inhibition by anti-PvCelTOS-specific antibodies, are conducted. In the same way, among responders, IgG RIs were directly correlated with the number of previous malaria episodes and inversely correlated with the time elapsed from the last malaria episode, suggesting that antibody levels against PvCelTOS could be associated with recent infections.
The influence of epidemiological parameters on immunity to malaria was previously observed in studies of Brazilian Amazon populations. Based on previous studies that associated high levels of antibodies against multiple preerythrocytic antigens with a reduced risk of clinical malaria in children (75) and a decreased risk of infection in adults (68), we also aimed to investigate whether the epidemiological parameters could reveal new findings about the role of exposure on PvCelTOS immunogenicity. Therefore, we subdivided the large spectrum of IgG RIs among PvCelTOS responders into HRs (RI > 2) and LRs (RI < 2). Although LRs and NRs to PvCelTOS presented similar malaria exposure factors, interestingly, HR individuals presented a remarkably higher number of previous malaria episodes, frequency of recent malaria episodes, and ratio of cytophilic/non-cytophilic antibodies than LRs. This observation suggests that a higher level of exposure to malaria induced a more intense and improved humoral response against PvCelTOS. Unfortunately, the cross-sectional design of our study limited the investigation to retrospective malaria histories, and the best approximation of an individual's protection was the estimated amount of time that had passed since their last malaria episode, which presented no significant association with the IgG response against PvCelTOS.
Prospective studies on humoral immune responses and studies addressing the ability of these antibodies to interfere with the motility/invasion of sporozoites (76,77) will provide more evidence of the protective role of anti-PvCelTOS antibodies. Information at the amino acid level about the epitopes of proteins recognized by antibodies is important for their use as biological tools and for understanding general molecular recognition events (78). In this context, epitope prediction programs have been widely used in malaria research (4,(79)(80)(81). Nevertheless, the use of chemically prepared arrays of short peptides is a more powerful tool to identify and characterize epitopes recognized by antibodies (46,82,83). It is also important to mention that, in order to raise antibodies against a peptide, a minimum length of six amino acids is required, and peptides of >10 amino acids are generally required for the induction of antibodies that may bind to the native protein (84). In this context, the synthesis of 15-amino-acid peptides, overlapping by 9, has allowed the identification of PvCelTOS B-cell epitopes encompassed in sequences ranging from 15 to 27 amino acids in length. Therefore, after the confirmation of PvCelTOS as naturally immunogenic in exposed populations, the present study mapped its linear B-cell epitopes; peptide I133-G147 presented the highest IgG RI and frequency compared to all other naturally recognized peptides, suggesting that the majority of naturally acquired antibodies against PvCelTOS are directed to the C-terminal region. Moreover, T-cell responses to PvCelTOS may also help to determine the immunodominant repertoire in individuals living in malaria-endemic regions, which could also supply information for the development of a PvCelTOS-based vaccine. In humans, PfCelTOS-derived peptides elicited proliferative and IFN-γ responses in ex vivo ELISPOT assays using peripheral blood mononuclear cells from naturally exposed individuals living in Ghana (30).
Recently, CelTOS was demonstrated to be a highly conserved protein across several large groups of apicomplexan parasites, including Plasmodium spp., Cytauxzoon, Theileria, and Babesia, and was considered essential to cell infection, traversal, and membrane disruption (85). Despite the genetic differences between PfCelTOS and PvCelTOS, it is important to mention that Bergmann-Leitner and colleagues immunized mice and rabbits with recombinant PfCelTOS and also observed specific antibodies against linear B-cell epitopes at the C-terminal region (82). These observations suggest that CelTOS could present a similar conformation among species, with similar regions targeted by antibodies. We considered that the exposure of linear epitopes is a critical step for their recognition by circulating antibodies; therefore, the combination of ESA, molecular dynamics, and electrostatic potential surface was used as a complementary approach to predict the exposure of epitope sequences on the protein surface. All identified immunogenic regions were exposed and accessible to antibodies. This finding could be important for a future subunit vaccine composition based on these identified regions. However, the potential of specific antibodies directed against the main PvCelTOS epitopes to inhibit sporozoite motility, invasion, and/or traversal remains to be investigated.
Author Contributions
JL-J did study design, performed experiments, data analysis, manuscript preparation, and manuscript review. RR-d-S did study design, performed experiments, data analysis, and manuscript preparation. IS performed experiments. CL-C did recombinant protein expression and manuscript review. JM did molecular dynamics and bioinformatics and manuscript review. DP-d-S performed collection of blood and epidemiological data. AF did fieldwork support. AT performed collection of blood and epidemiological data and diagnosis. FP did fieldwork support. LC performed experiments. LP-R did data analysis and manuscript review. AR-S did recombinant protein expression, data analysis, and manuscript review. DB did study design, fieldwork support, manuscript review, and data analysis.
Acknowledgments
We are grateful to all volunteers who made this study possible. | 7,539 | 2017-02-07T00:00:00.000 | [
"Biology",
"Environmental Science",
"Medicine"
] |
Data Analysis for Predictive Maintenance of Servo Motors
Vibration and temperature data of a servo motor are analyzed with a PLC, which is widely used in industry. With this system, incipient faults in servo motors can be detected while they are energized. In this way, undesirable situations such as disruptions in production and productivity losses can be prevented. Detecting malfunctions that may occur in servo motors is an important problem for businesses. Previously, methods such as ultrasonic sound measurements, thermal cameras, endoscopy equipment, and energy analysis have been used and discussed in the literature. Our study offers a PLC-based vibration and temperature measurement system designed as a solution to this problem. In this system, vibration and temperature measurements were made while the servo motor was kept running. These measurements were taken with and without load, considering the operating ranges of the servo motor, and the consistency of the data was evaluated.
Introduction
One of the biggest problems encountered in the automation sector is the loss suffered in production when the product malfunctions. In order to prevent unexpected malfunctions from occurring during intensive production periods, the maintenance system must be well managed and organized. The implementation of maintenance activities is very important for the smooth operation of machines and work processes that operate continuously [1,2]. In such cases, it is possible to work with an external company to carry out the relevant maintenance work [3]. Briefly, maintenance activity is a planned and programmed set of actions that all departments in the firm carry out in an organized manner to maintain the functions of the systems in the most efficient way [4]. Research carried out on the efficiency of maintenance work shows that 33% of maintenance expenses are unnecessary or wasted due to disruption of periodic maintenance [5,6]. For this reason, when a maintenance strategy is created, the selection of a method that suits the maintenance requirements is of great importance [7]. Failures can occur while production is intensive, and maintenance and repair carried out during this period can cause high costs. Since the application of this method determines malfunctions in advance, these costs are minimized [6].
Even if the maintenance work is carried out comprehensively, it may become routine over a long period, and it may not be possible to achieve benefits proportional to the experience of the maintenance technician. Among the maintenance types, predictive maintenance occupies the last position, with an application rate of 2%. This shows that predictive maintenance has been neglected, although it provides many benefits for companies because it can foresee and prevent failure before the equipment malfunction occurs [8].
The origin and development of malfunctions, learned from analyses of the obtained data, can be used to operate motors at high capacity and to avoid shutdowns caused by untimely failures. The predictive maintenance workflow consists of measurement, analysis, and, finally, repair [9]. Predictive maintenance uses vibration measurement tools, ultrasonic sound measurement tools, thermal cameras, endoscopy equipment, and energy analysers. The benefits of predictive maintenance are an increased equipment life cycle, estimation of maintenance time, prevention of labour loss, quality, and more efficient use [10][11][12]. Although the benefits of predictive maintenance are known, it has been determined that the proportion of firms worldwide that determine their malfunctions by applying predictive maintenance is 0.04% [13]. The systems used to determine the malfunction of products such as electric motors, generators, and transformers commonly used in the industry measure voltage and current signals to determine the malfunction status of the related products [14,15]. By monitoring, at certain intervals and depending on the operating conditions and the characteristics of the points to be measured, various parameters that characterize the behaviour of the systems at runtime, the working performance of the machine under various physical quantities can be observed [16]. These measurements are carried out by employing several methods, which include monitoring actual data, the load set and frequencies, performance curves during offline use, and use of the existing net worth [17,18]. However, the acquisition and processing of these data are disadvantageous due to the lack of an information boundary and the lack of continuity. System design and the management of data traffic are important in the prevention and diagnosis of malfunctions. Software-based system developments are among the attempted methods [19]. However, these system approaches generally concentrate on vibration.
For predictive maintenance purposes, thermodynamic and dynamic analyses of a gamma-type, free-piston engine operating between hot and cold source temperatures can be carried out, detecting the development of failures in the running machine [1], as can analyses of asynchronous motors [2]. Also, real-time monitoring and evaluation of rotating machines can be done [20]. Today, some methods have been used to determine friction problems between fixed and rotating parts. One of these methods is friction spectral analysis, which is carried out by measuring the distribution of the characteristics of the reaction. However, the disadvantage of this method is that the friction-generating equipment produces noise in the current frequency band. A complete analysis of the transient response of the rotor-stator interaction, in which the friction process is represented by a linear product model (Coulomb friction), has been performed, and the distribution of cavity effects due to friction in the spiral vibrations, which increases the stability of the system in the rubbing area, has been detected [21,22].
Our study offers a PLC-based vibration and temperature measurement system designed to address this problem. Vibration and temperature measurements were made on a running servo motor using this system. The measurements were taken with and without load, considering the operating ranges of the servo motor and the compatibility of the graphical data analyses.
Materials and Methods
This system is designed to measure the vibration and temperature of servo motors. As seen in Figure 1, the vibration values on the X and Y axes and the temperature of the surface of the servo motor were monitored by a PLC, which is widely used in industry.
In this study, the automation system (motor, moving elements, etc.) is used to measure the vibration and temperature values of the products, to determine whether the products comply with the standards, and to prevent malfunctions. The acceptance of the vibration values obtained from the measurements of the machines has been determined according to the international ISO 2372 standard. This standard is used to evaluate the vibration intensity of machines operating between 10-200 Hz [23]. In this study, measurements and evaluations were applied to servo motors between 10-50 Hz by connecting them to a machine. All the harmonic movements that occur in simple harmonic vibrations are repeated periodically. The magnitude of the forces required for the vibration to occur is proportional to the intensity of the vibration [24,25]. Displacement, velocity, and acceleration are the units of amplitude. The unit to be used during the measurement is decided based on the operating frequency of the system in Hz. The measurements made have been interpreted and evaluated according to the ISO 2372 standard.
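As a concrete illustration of how such an evaluation could be automated, the sketch below maps an RMS vibration velocity reading onto a qualitative severity label in the spirit of ISO 2372. The band edges used here are indicative values for a small (Class I) machine and are an assumption of this sketch, not a reproduction of the standard's table.

```python
# Illustrative sketch: classify RMS vibration velocity (mm/s) against
# ISO 2372-style severity bands for a small (Class I) machine.
# The band edges below are indicative values only; consult the standard
# for the exact table and the machine class that applies.

SEVERITY_BANDS_CLASS_I = [
    (0.71, "good"),
    (1.80, "satisfactory"),
    (4.50, "unsatisfactory"),
    (float("inf"), "unacceptable"),
]

def classify_vibration(v_rms_mm_s: float) -> str:
    """Return a qualitative severity label for an RMS velocity reading."""
    for upper_limit, label in SEVERITY_BANDS_CLASS_I:
        if v_rms_mm_s <= upper_limit:
            return label
    return "unacceptable"

if __name__ == "__main__":
    for reading in (0.4, 1.2, 3.0, 9.0):
        print(f"{reading:4.1f} mm/s -> {classify_vibration(reading)}")
```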
Temperature is a quantity related to the average kinetic energy of a system's molecules. It is a base quantity and a scalar. As the temperature increases, the kinetic energy of the molecules also increases and they move faster; as the temperature decreases, the kinetic energy of the molecules decreases as well and they move more slowly. If two or more objects are in contact, energy is transferred from the hot objects to the cold objects until thermal equilibrium is reached [26]. Temperature measuring detectors, which are frequently used in industrial environments, are very important because they determine the temperature range and process conditions in those environments. These measuring devices are generally made of low-cost semiconductor (PTC, NTC, and similar) materials. Today, analog temperature detectors such as NTC and PTC are used alongside digital temperature detectors. The operating algorithm of the system is shown in Figure 2. Pressing start initiates the vibration and temperature analysis, and the servo motor model is selected in the recipes. Measurement starts automatically after the recipe is selected and is completed when the preset time has elapsed. If the servo motor is operating in accordance with normal operating conditions, the SCADA system records the data and finishes the operation. If the servo motor is not operating in accordance with normal operating conditions, the SCADA system issues a warning and maintenance of the servo motor is required.
System Design.
The system designed in this work is used to measure the vibration and temperature values of servo motors. When selecting a detector, the following aspects need to be considered: the reading sensitivity of the detector; the minimum and maximum values to be measured; the sensitivity limit at the highest temperature to be measured; the reaction speed and reading accuracy against the change of temperature per unit time; the duration and accuracy of the determination; the restrictions of the environment; and the accuracy level of the application and the change in cost according to the way the detector is mounted. Although there are transducers that measure temperature without contact, temperature detectors usually work in contact with the surface to be measured. The temperature detecting equipment consists of thermoelectric temperature elements and resistive temperature elements.
Vibration and Temperature Measurement.
Vibration is expressed by two physical variables: the vibration frequency and the vibration intensity. The frequency of vibration is the number of vibrations per unit of time, expressed in hertz (Hz). The intensity of vibration is the power per unit area transmitted, perpendicular to the direction of propagation, by the energy coming from the vibration, and its unit is W/cm² [27].
Analysis of the data obtained by measurement is important for maintenance and for performance monitoring. Accurate analysis of these data provides predictive information about machine failure. For example, balancing all the forces on piston and rotating machines and using special mountings decreases the stress. Likewise, the vibration characteristics of the system need to be understood, and a resonance condition analysis needs to be carried out, to obtain an excellent working performance [28]. The goal here is to avoid resonance through measurement of the generated vibrations and through experiments, and thereby to reduce the emitted vibration; many studies have been performed on this topic. As an example, conclusions about a ship's vibration have been drawn from such experiments [29].
Alongside vibration measurements, temperature measurements are also of great importance. Because temperature is a parameter that affects various properties and creates a deformation effect on materials, it is essential that the measurements be made and checked at specific intervals. Different temperature measurement devices can be designed by taking advantage of the various thermometric features of materials. Today, there are temperature measurement devices based on length, pressure, volume, electrical resistance, the electromotive force in an electric circuit formed by two different wires, and changes in a material's external heat intensity. These devices usually measure in contact with the surface to be measured. Besides this method, there are contact-free devices used for measuring high temperatures [16].
In a well-designed algorithm, the number of operations per data instance must be constant; therefore, the total number of operations must be linear in N. In general, the processing time required for an addition is much shorter than the processing time required for a multiplication, and algorithms can be developed to make these complex operations quick and easy [30]. With this analysis, the signal can be viewed in the frequency domain, and the frequency spectrum of each block can be calculated and displayed. Real-time data processing is used to calculate the time-domain signals, and the rate at which the frequency-domain signals are obtained from them must be higher than the data acquisition rate. A system suitable for monitoring vibration and temperature during the operation of servo motors has been created and is run by the PLC. Sensors have been used to measure the temperature and vibration values, and the devices used in the system are shown in Figure 3. The vibration and temperature sensor used in this work is the QM42VT2. The vibration sensors have X- and Z-axis indicators on their surface. When X is parallel to the sensor, the Z axis moves through the sensor on a plane. The X axis is mounted on the same axis as the motor shaft, or axially on it.
For best results, the sensor should be installed as close as possible to the motor mount. If this is not possible, the sensor must be mounted on a surface rigidly connected to the motor and sharing its vibration characteristics. Using a covered surface or any other unstable mounting location to detect specific vibration characteristics can result in reduced accuracy or capability.
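The frequency-domain analysis described above can be illustrated with a minimal sketch: a block of sampled vibration data is transformed with a fast Fourier transform and the dominant spectral component is reported. The sampling rate and the synthetic test signal are assumptions of the example; in the real system the samples would come from the vibration sensor via the PLC.

```python
# Minimal sketch: frequency-domain view of a block of vibration samples.
# The signal here is synthetic (a 50 Hz component plus noise); in practice
# the samples would be acquired from the sensor through the PLC.
import numpy as np

fs = 1000.0                      # assumed sampling rate in Hz
t = np.arange(0, 1.0, 1.0 / fs)  # one-second acquisition block
signal = 0.5 * np.sin(2 * np.pi * 50 * t) + 0.05 * np.random.randn(t.size)

spectrum = np.abs(np.fft.rfft(signal)) / t.size
freqs = np.fft.rfftfreq(t.size, d=1.0 / fs)

dominant = freqs[np.argmax(spectrum[1:]) + 1]   # skip the DC bin
print(f"Dominant vibration component near {dominant:.1f} Hz")
```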
Properties of the Designed Set.
In order to evaluate the measured data in the designed measurement system, a Siemens S7-1200 PLC, a Siemens Comfort 9″ operator panel running a SCADA system, two vibration and temperature monitors, and a Siemens MODBUS card have been used to measure the vibration and temperature. The measured values are transferred to the PLC via the Modbus RTU protocol. Five units have been used on the measurement system to control the on/off switching. In addition, a fuse has been added to protect the indicator LEDs and the system against a high supply voltage and spurious alarms. The design of the system is shown in Figure 4. The PLC program of the measuring system has been created with the Siemens TIA Portal Professional V14 SP1 software, and the SCADA program installed on the operator panel with the Siemens WinCC Comfort V14 SP1 software. Modbus RTU and Ethernet protocols were configured to ensure communication between the products used in the system.
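A minimal sketch of how the Modbus RTU readout described above could look from the software side is given below, using the pymodbus library (3.x-style API). The serial settings, slave address, register map, and scaling factors are placeholders chosen for illustration; the actual values must be taken from the sensor and PLC documentation.

```python
# Sketch of polling a vibration/temperature sensor over Modbus RTU with
# pymodbus (3.x-style API). Serial settings, slave address, register
# addresses and scaling factors below are placeholders, not the values
# used in the real installation.
from pymodbus.client import ModbusSerialClient

client = ModbusSerialClient(port="/dev/ttyUSB0", baudrate=19200,
                            parity="N", stopbits=1, bytesize=8)

if client.connect():
    # Hypothetical map: registers 0-1 = X/Z RMS velocity, register 2 = temperature.
    result = client.read_holding_registers(address=0, count=3, slave=1)
    if not result.isError():
        x_rms, z_rms, temp_raw = result.registers
        print(f"X RMS: {x_rms / 100:.2f} mm/s, "
              f"Z RMS: {z_rms / 100:.2f} mm/s, "
              f"Temperature: {temp_raw / 10:.1f} degC")
    client.close()
```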
Results
Using the system designed in this paper, measurements have been performed on a servo motor used in industry. The measurements have been made between 0-3000 and 3000-0 RPM. In addition, vibration and temperature analyses have been performed by running the motor with and without load.
0 to 3000 RPM Unloaded.
The servo motor, with the label information shown, was accelerated from 0 RPM to 3000 RPM with no load, and the vibration and temperature values were recorded for 60 seconds. The vibration data were taken on the body in both the vertical and the horizontal axes; the temperature data were also taken from the body (see the graphs in Figures 5(a)-5(c), respectively). According to the values in Figures 5(a) and 5(b), the vibration level increased as the speed increased. The vibration value stabilised and remained steady after the speed reached 3000 RPM. Even though there were minimal fluctuations in temperature, no considerable change in the temperature value was observed, as can be seen in Figure 5(c). The maximum and minimum vibration values that can be measured are 1.5 G and -1.5 G; the measured vibration never reached those values. The servo motor worked at a temperature below the maximum measured value of 35 degrees Celsius (Figure 5).
3000 RPM to 0 RPM Unloaded.
The servo motor was slowed down from 3000 RPM to 0 RPM with no load, and the vibration and temperature values were recorded for 60 seconds. The vibration data taken from the vertical axis are shown in Figure 6(a), the vibration data taken from the horizontal axis in Figure 6(b), and the temperature data obtained from the body in Figure 6(c). According to the graphical data in Figures 6(a) and 6(b), it was observed that as the speed value decreases, the vibration value also decreases. The vibration value remained constant after the speed dropped to 0 RPM. Even though there were minimal changes, the temperature value first decreased and then increased as the motor speed decreased (see Figure 6(c)). This is consistent with the generally known behaviour that servo motors warm up when they operate at lower speeds. In the measurements made, the operating values of the servo motor were compared with the reference values according to the measurement direction (X, Y, Z). Relative to the normal operating values of the servo motor, it was observed that it works between the maximum vibration value of 1.5 G and the minimum vibration value of -1.5 G, and that the measured temperature remained below the maximum reference value of 45 degrees Celsius.
0 RPM to 3000 RPM Loaded.
The servo motor, with the label information shown, was accelerated from 0 RPM to 3000 RPM under load, and the vibration and temperature values were recorded for 60 seconds. The vibration data taken on the body in the vertical axis are shown in Figure 7(a), the data taken on the body in the horizontal axis in Figure 7(b), and the temperature data taken on the body in Figure 7(c). According to the graphical data shown in Figures 7(a) and 7(b), when the speed reaches 500 RPM the vibration values exceed the measurable limit. A decrease in the vibration value was observed as the speed increased from 500 RPM to 1500 RPM. While the speed increased from 1500 RPM to 3000 RPM, the vibration value increased again and exceeded the limit values. From the vibration values measured under load, it can be concluded that the servo motor does not work properly when loaded and that maintenance should be applied. Even though there were minimal fluctuations in the motor temperature, it can be seen that the motor operated at an acceptable temperature, as illustrated in Figure 7(c).
3000 RPM to 0 RPM Loaded.
The servo motor, with the label information shown, was slowed down from 3000 RPM to 0 RPM under load, and the vibration and temperature values were recorded for 60 seconds. The vibration data taken on the body in the vertical axis are shown in Figure 8(a), the data taken on the body in the horizontal axis in Figure 8(b), and the temperature data taken from the body in Figure 8(c). In the graphs of Figures 8(a) and 8(b), a decrease in the vibration value was observed as the speed dropped from 3000 RPM to 1200 RPM. While the speed decreased from 1200 RPM to 700 RPM, an increase in the vibration value was observed, and when the speed reached 700 RPM the vibration exceeded the limit values. As the speed dropped from 700 RPM to 0 RPM, the vibration value also decreased and became 0 G. Since the servo motor vibration values exceeded the limit values at 700 RPM, it can be concluded that the servo motor needs maintenance. The temperature graph in Figure 8(c) shows that the servo motor works at typical temperature values.
Discussion
System performance may depend on factors affecting the vibration and temperature data, such as the total operating time of the system, the operating conditions, and the external environment. Therefore, a reference value can be obtained from an idle system. By applying the specified load, vibration and heat data can be monitored during operation and shutdown. As seen in the results, critical levels can be exceeded under load. This can make the system require maintenance ahead of schedule, and if this situation is not resolved in time, a malfunction may occur in the motor. The control used for detecting errors can issue a warning when the limit values are exceeded and help keep the values optimal. Vibration values were measured for the Siemens 1FK7063 servo motor at 1000, 2000, and 3000 RPM, with horizontal and vertical axis displacement. The temperature in degrees Celsius (°C) was measured for one minute and illustrated graphically. The working time for all measurements was set at 60 s.
First, to establish the reference values, data were obtained from the system presented in this article between 0-3000 RPM. The vibration values increase with rotational speed, as expected. When the data were examined, no abnormal values were observed, and regular operation was found according to the motor speed. When the temperature was measured, the reference value of 35°C was not exceeded.
To validate the first results, measurements were made over 3000-0 RPM cycles; it was observed that the vibration values decreased in line with the excitation and stayed within the working ranges. The temperature value at 500 RPM is within the boundary of the first variable, and the temperature reaches a critical point in the 3000-0 RPM measurement range. This was identified as normal thermal behaviour considering the running time of the system. The values of the system were recorded while it was kept idle under monitoring.
A loaded system was operated between 0-3000 RPM. In the 0-500 RPM and 2200-3000 RPM operating ranges, the vibration data exceeded the limit values; outside these ranges the vibration data remained at the desired level. This means that the machine needs to be taken in for maintenance; otherwise, there is a possibility of a friction-related malfunction in the system. On the other hand, the temperature values were constant at 900-1200 RPM.
These variable data remain within the boundary values. The measurements in the system, based on different parameters during the study, could be made in real time, and variable regions were identified. The data obtained from regular operation are deemed to be inside the working range. When the 0-3000 RPM values of the motor were measured, it was observed that the vibration limits were exceeded at 3000-2600 RPM. A rise in the vibration values was also seen at 1200-600 RPM. Between the 0-3000 RPM measurements, the same consequences were not observed at those speeds. This spike was observed in both the 0-3000 RPM and the 3000-0 RPM measurements, with differences in the frequencies at which it appears. Even with these vibration values, the system's temperature never exceeded the working temperature of 35 degrees Celsius. In conclusion, better data for predicting malfunction were obtained with the 3000-0 RPM measurement method.
Conclusions
Disruptions during the mass production processes can cause a competitive disadvantage to companies in the industrial field.
Furthermore, malfunctions can cause major setbacks during the manufacturing process, since continuous production depends on how well the machines perform. Particularly in remotely controlled processes, achieving both timely and accurate monitoring might prove difficult. The system designed and developed in this study provides a way to measure heat and vibration in real time.
This innovative system can be configured in accordance with future research regarding Industry 4.0.
The monitoring system can be used on demand to show heat and vibration measurements remotely and in real time. The control systems are integrated to provide safe and non-hazardous production. The system can display real-time data, and with the data obtained a suitable dataset is created for use in artificial intelligence applications. Cost advantages can be provided, and the system has a structure that can be improved in all respects. It can easily be moved anywhere and has a multifunctional structure that can be used for every device and machine of similar construction. The information requested in reporting can be selected by the user. Also, by adopting the principle of intervening before failure occurs, the highest efficiency can be obtained, and the cost losses from halts or unplanned maintenance caused by breakdowns can be minimized.
With this system, vibration and temperature values can be examined during the operation of the machines without damaging them, and data that may indicate problems can be monitored. In addition, other motors can be examined without damaging machinery and equipment, and how real faults are reflected in the graphs as a result of vibration measurements can be studied.
Data Availability The data are available on request through a data access committee or institutional review board, or from the authors by sending an e-mail. | 5,887 | 2020-09-11T00:00:00.000 | [
"Materials Science"
] |
Mapping inorganic crystal chemical space
The combination of elements from the Periodic Table defines a vast chemical space. Only a small fraction of these combinations yields materials that occur naturally or are accessible synthetically. Here, we enumerate binary, ternary, and quaternary element and species combinations to produce an extensive library of over 10^10 stoichiometric inorganic compositions. The unique combinations are vectorised using compositional embedding vectors drawn from a variety of published machine-learning models. Dimensionality reduction techniques are employed to present a two-dimensional representation of inorganic crystal-chemical space, which is labelled according to whether the compositions pass standard chemical filters and whether they appear in known materials databases.
Introduction
The fundamental building blocks of materials are the chemical elements of the Periodic Table. Depending on the choice of elements and the interactions between them, the resulting material may be stable or unstable; crystalline or amorphous; insulating or conducting. The principles connecting chemical composition, crystal structure, and physical properties of materials remain a subject of longstanding interest 1,2 and ongoing study. Materials informatics has emerged as an important subject at the interface of traditional materials science and data science 3. It uses informatics techniques to understand, design, and discover materials. The underlying materials data may be drawn from experimental investigations (e.g. crystal structure databases populated from X-ray or neutron diffraction measurements) or from computer simulations (e.g. structure-property databases based on density functional theory calculations).
In this study, we consider the cartography of inorganic crystal chemical space. Specifically, we address the combination of 2-4 elements to form stoichiometric inorganic compounds. This builds upon our earlier work 4 by featurising each chemical composition using embedding vectors from machine learning models and labelling the entries to probe the distribution of known and unknown materials. The resulting hyperspace is reduced to two dimensions to produce visual representations that show hints of the innate separation between allowed and forbidden compounds.
Chemical enumeration
As the number of chemical components increases, there is a combinatorial explosion in the number of possible compounds. We previously reported the code Semiconducting Materials from Analogy and Chemical Theory (SMACT) to enable rapid screening over such large configurational spaces 5. This was inspired by early work on the exploration of new semiconducting materials based on electron counting principles 6. The Python library features element and species classes, with integrated iteration tools and adjustable chemical filters.
We can make the combinatorial space of multi-component compounds more tractable by introducing chemical constraints. We choose to work with the first 103 elements of the Periodic Table (from H to Lr). This pool of atomic building blocks is expanded into 421 species when the accessible oxidation states are considered. For instance, Fe(II) and Fe(III) are both formed from the element Fe, but exhibit distinct physicochemical properties, such as the black pigment Fe(II)O and the red antiferromagnet Fe(III)2O3.
We consider the set of binary (A_wB_x), ternary (A_wB_xC_y) and quaternary (A_wB_xC_yD_z) combinations where the stoichiometric factors w, x, y, z < 9. This approach yields a total of 225,879 unique compositions for binary combinations, 77,637,589 for ternary combinations, and 16,902,534,325 for quaternary combinations. We ensure that combinations with equivalent stoichiometry, such as MgO and Mg2O2, are excluded from our analysis.
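A minimal sketch of the duplicate-removal step is shown below: candidate binary stoichiometries are reduced by the greatest common divisor of their coefficients so that, for example, Mg2O2 collapses onto MgO. SMACT performs this kind of reduction internally; the stand-alone version here is only illustrative.

```python
# Minimal sketch of enumerating binary stoichiometries A_wB_x with w, x < 9
# while discarding equivalent formulas such as Mg2O2 (same ratio as MgO).
# SMACT handles this reduction internally; this version only illustrates it.
from itertools import product
from math import gcd

def reduced(coeffs):
    """Reduce stoichiometric coefficients by their greatest common divisor."""
    g = gcd(*coeffs)  # two-element case
    return tuple(c // g for c in coeffs)

unique_ratios = {reduced((w, x)) for w, x in product(range(1, 9), repeat=2)}
print(f"{len(unique_ratios)} unique binary stoichiometric ratios with w, x < 9")
```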
We apply established chemical filters to distinguish between plausible ("allowed") and implausible ("forbidden") inorganic stoichiometries. The first filter is charge neutrality, based on the sum of the formal charges (q) of each species: for a composition with stoichiometric numbers n_i, we require Σ_i n_i q_i = 0. This chemical filter can be framed equivalently in terms of electron counting or valency, which is common in the study of semiconductors 6,7. The filter applies to a broad range of inorganic materials, particularly those classified as being formed of ionic and covalent interatomic interactions, where the charge neutrality principle holds. However, it may not be suitable for describing metallic alloys (e.g. Cu1-xZnx), intermetallic compounds (e.g. Ni3Al), and non-stoichiometric compounds (e.g. YBa2Cu3O7-δ), as these materials often involve different chemical bonding and variable compositions with electron counting rules that go beyond the scope of simple charge neutrality considerations. Special consideration would also be required for mixed-valence compounds where a single element appears in the same compound with multiple oxidation states. For example, the binary compound magnetite Fe3O4 contains both Fe(II) and Fe(III) ions (in a 1:2 ratio), and thus would be described as a ternary compound by SMACT based on its three distinct constituent species.
A second filter is the electronegativity balance, which requires that the most electronegative species carries the most negative charge in the compound. Using the Pauling electronegativity scale 8, χ_anion - χ_cation > 0. For example, the pnictide semiconductor GaSb is allowed by this filter (χ_anion(Sb) - χ_cation(Ga) = 0.24), while the oxide catalyst Sb2O3 is also allowed, where Sb is the cation (χ_anion(O) - χ_cation(Sb) = 1.39). This filter helps distinguish between allowed and forbidden inorganic stoichiometries based on electronegativity considerations, ensuring that the composition contains sensible combinations of species.
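The two filters can be illustrated with a short, self-contained sketch for the Zn-O system discussed later in the text. SMACT provides equivalent (and far more general) screening machinery; here the oxidation states and Pauling electronegativities are simply entered by hand.

```python
# Stand-alone sketch of the charge-neutrality and electronegativity filters
# applied to candidate binary compositions Zn_wO_x. Oxidation states and
# Pauling electronegativities are hard-coded for illustration only.
from itertools import product

PAULING = {"Zn": 1.65, "O": 3.44}
OXIDATION = {"Zn": [+1, +2], "O": [-2, -1]}

def passes_filters(elem_a, elem_b, w, x):
    """True if some oxidation-state assignment is charge neutral and the
    anion is the more electronegative species."""
    for q_a, q_b in product(OXIDATION[elem_a], OXIDATION[elem_b]):
        neutral = (w * q_a + x * q_b == 0)
        # anion = negatively charged species; require chi_anion > chi_cation
        ordered = (q_a > 0 > q_b and PAULING[elem_b] > PAULING[elem_a]) or \
                  (q_b > 0 > q_a and PAULING[elem_a] > PAULING[elem_b])
        if neutral and ordered:
            return True
    return False

for w, x in [(1, 1), (1, 2), (2, 1)]:
    label = "allowed" if passes_filters("Zn", "O", w, x) else "forbidden"
    print(f"Zn{w}O{x}: {label}")
```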
Each chemical composition can be assigned a label {'allowed', 'forbidden'} according to whether it passes these chemical filters for inorganic compounds. It can also be labelled {'known', 'unknown'} according to the presence of that composition in the Materials Project (MP) 9 database. The entries considered here were retrieved via the MP API (v2023.11.1) using an anonymised formula notation (e.g. AB2), ensuring a consistent approach to formula representation. We can then categorise each enumerated composition according to its combination of labels: standard {'allowed', 'known'}; missing {'allowed', 'unknown'}; interesting {'forbidden', 'known'}; and unlikely {'forbidden', 'unknown'}.
Table 1. Chemical compositions are labelled "standard", "missing", "interesting", or "unlikely" according to whether they pass the chemical filters implemented in SMACT and their presence in the Materials Project database. Examples are provided for metal oxides in the Li-Zn-Sn-O chemical space.
Chemical filter: Allowed | Allowed | Forbidden | Forbidden
Materials Project: Known | Unknown | Known | Unknown
Category: standard | missing | interesting | unlikely

This pattern is more extreme in ternary compounds, where only 0.03% are standard, and the number of interesting compounds is negligible. The quaternary compounds continue this trend, with essentially none (<0.01%) being standard or interesting, and 82.8% falling into the unlikely category. Significantly, the data reveal an increase in missing compounds relative to the MP database across the complexity spectrum: 4.4% in binary, 13.9% in ternary, and 17.2% in quaternary systems. This escalation may suggest that as the complexity of the compounds grows, the probability of their synthesis or identification as novel stable crystalline materials decreases. Concurrently, the potential for discovering new crystalline materials increases, as evidenced by the larger missing category in higher-order compounds. The statistics hint at unexplored territories in materials science, particularly for ternary and quaternary compounds. A Periodic Table indicating the elements that commonly appear in binary compounds allowed by the chemical filters is shown in Figure 1. Elements with a greater number of oxidation states (accessible species) are more abundant. Among the non-metallic elements, carbon (C), nitrogen (N), oxygen (O), silicon (Si), phosphorus (P), sulfur (S), chlorine (Cl), and germanium (Ge) are notable for their multiple accessible oxidation states. This enables them to participate in a diverse range of chemical compositions while maintaining charge neutrality. The same is true for transition metals such as chromium (Cr), manganese (Mn) and iron (Fe). Furthermore, elements with high electronegativity values, such as fluorine (F) and oxygen (O), are also favoured by the filters. F, with an electronegativity of 3.98, and O, with 3.44, despite having only one and two negative oxidation states, respectively, are likely to pass the SMACT filters as stable anions. In summary, elements with either many oxidation states or high electronegativity are favourable for forming more inorganic compounds.
Materials embedding vectors
An integer representation of elements, in terms of atomic number, is straightforward and intuitive for human chemists to learn. However, machine learning models benefit from the descriptive power of a higher dimensional representation, often in the form of continuous element vectors V_i. To effectively represent elements, several types of element embedding have been developed. The Magpie 10 representation, for instance, incorporates diverse element properties such as atomic weights, electronegativity, and melting temperature. The Oliynyk 11 embedding comprises chemical descriptor vectors derived from elemental properties. Mat2vec 12, utilising natural language processing (NLP) techniques, learned material representations from an extensive text corpus, capturing the context and relationships of different materials as mentioned in scientific literature. This method effectively leverages unstructured textual data to enhance understanding of material properties. Skipatom 13 learned representations by predicting the surrounding atomic environment of a target atom based on structural information. It emphasises capturing the local chemical environments and their impact on material properties. Megnet16 14 utilises graph neural networks, where the embedding is based on graph attributes that include whole-graph information. This method employs the weights of the neural networks to predict the formation energy of crystalline materials, treating the atomic structure of materials as a graph with detailed node and edge representations. We employ the Python package ElementEmbeddings 15 to compile the various embeddings.
To make compositional embeddings from element embeddings for compounds, a weighted sum of the constituent element embeddings is performed, i.e. for a composition A_wB_xC_yD_z,

V_Composition = (w V_A + x V_B + y V_C + z V_D) / (w + x + y + z).
This step is implemented as the CompositionalEmbedding function in the ElementEmbeddings package.
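A minimal numpy illustration of this weighted sum is given below. Real element vectors (Magpie, Mat2vec, etc.) would be loaded through the ElementEmbeddings package; the random 16-dimensional stand-ins used here are purely for illustration.

```python
# Minimal sketch of building a compositional embedding as a stoichiometry-
# weighted sum of element embeddings. Random stand-in vectors replace the
# published element embeddings for the purpose of this example.
import numpy as np

rng = np.random.default_rng(0)
element_vectors = {el: rng.normal(size=16) for el in ("Li", "Zn", "Sn", "O")}

def compositional_embedding(composition: dict) -> np.ndarray:
    """Weighted sum of element vectors, e.g. {'Li': 2, 'Zn': 1, 'Sn': 1, 'O': 4}."""
    total = sum(composition.values())
    return sum((amount / total) * element_vectors[el]
               for el, amount in composition.items())

v = compositional_embedding({"Li": 2, "Zn": 1, "Sn": 1, "O": 4})
print(v.shape)  # (16,) -- one vector per composition
```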
Dimensionality reduction
To systematically map the inorganic crystalline chemical space in two dimensions, we utilise three primary dimensionality reduction techniques. These are Principal Component Analysis (PCA) 16 and t-distributed Stochastic Neighbour Embedding (t-SNE) 17, both implemented using the sklearn library 18, as well as Uniform Manifold Approximation and Projection (UMAP) 19, which is implemented using the UMAP Python library.
We consider five distinct element embeddings: Magpie, Mat2vec, Megnet16, Skipatom, and Oliynyk. A random embedding of 200 dimensions is used to act as a control with no embedded chemical information, while still providing a unique representation for each element. The dimensionality of the embedding vectors is 22, 200, 16, 44, and 200 for Magpie, Mat2vec, Megnet16, Oliynyk, and Skipatom, respectively. For a comprehensive analysis, 3000 data points were randomly selected for each of the four categories: standard, missing, interesting, and unlikely. These data points are transformed into two-dimensional vectors using the specified dimensionality reduction methods. The resulting embeddings are visually represented for binary, ternary, and quaternary compounds in Figures 2, 3 and 4, respectively.
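The reduction step itself can be sketched in a few lines, as below. The input array is a placeholder for the matrix of compositional embedding vectors (rows = sampled compositions); hyperparameters such as the t-SNE perplexity are illustrative defaults rather than the exact settings used for the figures.

```python
# Sketch of reducing compositional embedding vectors to two dimensions with
# the three methods used in this work. X stands in for the real matrix of
# compositional embeddings (one row per sampled composition).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE
import umap

X = np.random.default_rng(1).normal(size=(500, 200))  # dummy 200-dim vectors

xy_pca = PCA(n_components=2).fit_transform(X)
xy_tsne = TSNE(n_components=2, init="pca", perplexity=30).fit_transform(X)
xy_umap = umap.UMAP(n_components=2).fit_transform(X)

print(xy_pca.shape, xy_tsne.shape, xy_umap.shape)  # (500, 2) each
```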
For binary compounds, the distribution patterns of embedding vectors reveal distinct characteristics across the different element embeddings. Vectors derived from the Mat2vec, Skipatom, and Random element embeddings exhibit a dispersed distribution across the reduced space. In contrast, the embeddings generated using Magpie and Oliynyk show a more concentrated, clustered configuration. Figure 5 captures this phenomenon, presenting the reduced embedding vectors for binary compounds, consistent with those in Figure 2, but classified into distinct categories of chemical compounds according to the anion present, such as pnictides, halides, chalcogenides, and oxides. Notably, the observed clustering patterns with the Mat2vec, Skipatom, and Random embeddings indicate a pronounced tendency for these vectors to group according to specific types. For instance, the oxide binary compounds (marked as green points) form isolated clusters. Such a tendency suggests that atom types play a significant role in the construction of compositional embeddings, which are derived from a weighted sum of individual element embeddings. On the other hand, the Magpie and Oliynyk embeddings, formulated from a variety of atomic properties, indicate the presence of influential atomistic characteristics that extend beyond merely the types of atom species.
The analysis of the PCA plots of Mat2vec, Oliynyk, and Megnet16 in Figure 2 exhibits a separation of interesting from standard and missing compositions. This segregation indicates that interesting compounds, which are known stable materials yet excluded by the chemical filters, possess unique and distinct characteristics that set them apart from the other categories. This is expected, as large families of metallic alloys and intermetallic compounds fall into this category.
For ternary systems, a similar trend is observed, where standard materials demarcate themselves from the interesting and missing ones. This is particularly evident in the Mat2vec, Megnet16, and Oliynyk embeddings in Figure 3. It is worth highlighting that the standard and missing materials have separate distributions in the quaternary space of Figure 4. This hints that navigating missing materials could unveil unexplored regions of the chemical space, potentially leading to the discovery of synthesisable materials with unique properties and applications. Indeed, the high fraction of empty space that exists for multi-component compounds has recently been exploited in a large-scale computational screening study that identified 2.2 million plausible inorganic crystals 20 and offers a fertile playground for generative machine learning models [21][22][23][24].
Overall, the degree of clustering across categories escalates from binary to quaternary systems with the increasing order of complexity and chemical diversity inherent in higher-order compounds. As we transition to quaternary compounds, the distinct characteristics of each class become more salient. It is worth highlighting that the dimension reduction, resulting from unsupervised learning algorithms, demonstrates cohesive clustering that corresponds to our classification, with a striking clustering of the standard and missing compositions.
Conclusions
We have explored the vast expanse of inorganic crystal space, encompassing an array of 10^10 compounds that span binary, ternary, and quaternary compositions. While the uncharted space may be considered effectively infinite, we tame it by introducing chemical constraints in the form of filters and limits on stoichiometric combinations. We label the resulting entries as standard, missing, interesting and unlikely, according to whether they pass these filters and whether they are present in the Materials Project database. This separates out the proportion of discovered compounds that conform to standard chemical rules to form stable inorganic solids. Furthermore, we have visualised the inorganic crystal chemical space through the lens of these two filters, revealing that higher-order compounds exhibit pronounced distinctive characteristics. This hints that navigating complex spaces could unlock materials with novel properties in unexplored regions, offering new avenues for scientific exploration. The study thus serves as a foundational reference for future endeavours in data-driven materials discovery, emphasising the potential of unknown regions within the chemical space.
Data access statement
This study used several open-access tools, including the SMACT (https://github.com/WMD-group/SMACT) and ElementEmbeddings (https://github.com/WMD-group/ElementEmbeddings) packages. The associated scripts (or notebooks) to generate the plots in this paper are available in the SMACT examples directory. Interactive plots for binary combinations can be generated using CrystalSpace (https://github.com/WMD-group/CrystalSpace).
Conflicts of Interest
There are no conflicts to declare.
Figure 1. Periodic Table including s, p and d-block elements commonly found in binary (A_wB_x) compounds that are allowed by the chemical filters implemented in SMACT. The number below each element indicates its frequency of occurrence.
Figure 2. Visualisation of embedding vectors for the space of binary compounds with six element embeddings across PCA, t-SNE, and UMAP dimension reduction methods. The data points are colour-coded to indicate the four categories of composition: standard (blue), missing (red), interesting (green), and unlikely (grey).
Examples of chemical compositions generated from the screening procedure are given in Table 1. The metal oxide examples include binary (Zn-O), ternary (Li-Zn-O), and quaternary (Li-Zn-Sn-O) systems. Seven compounds, among them ZnO, are present in the MP database within our stoichiometry limits. In the binary system, Zn(II)O and Zn(II)O2 are classified as standard, given that Zn can exhibit an oxidation state of +2 with electronegativity 1.65, and oxygen has oxidation states of -2 (oxide) and -1 (peroxide) with an electronegativity of 3.44. Interestingly, Zn2O passes the chemical filter with the less common +1 oxidation state of Zn (often associated with the presence of Zn-Zn bonds) but is not found in the MP database, so it is identified as missing. In the ternary Li-Zn-O system, LiZnO2 and Li6ZnO4 are standard materials, while various missing and unlikely compounds are identified.
Among binary, ternary, and quaternary compounds, 13,464, 10,779,441, and 2,909,434,982 compounds, respectively, passed the chemical filter. Within the MP database, there are 9,981 binary, 36,866 ternary, and 17,417 quaternary compounds identified. For binary compounds, with a total of 225,879 unique combinations, 3,627 (1.6%) are standard, 9,837 (4.4%) are missing, 6,354 (2.8%) are interesting, and the vast majority, 206,061 (91.2%), are deemed unlikely to be formed. Even for the simple case of combining two components, the compositional space is sparsely populated.
Table 2. Number of binary, ternary, and quaternary compounds based on enumeration and chemical filtering of 421 chemical species in SMACT and their presence in the Materials Project database.
| 3,803.8 | 2024-09-18T00:00:00.000 | [
"Chemistry",
"Materials Science"
] |
Charged Lepton Flavor Violation in the Semi-Constrained NMSSM with Right-Handed Neutrinos
We study the \mu \to e \gamma decay in the Z_3-invariant next-to-minimal supersymmetric (SUSY) Standard Model (NMSSM) with superheavy right-handed neutrinos. We assume that the soft SUSY breaking parameters are generated at the GUT scale, not universally as in the minimal supergravity scenario but in such a way that those soft parameters which are specific to the NMSSM can differ from the soft parameters which involve only the MSSM fields while keeping the universality at the GUT scale within the soft parameters for the MSSM and right-handed neutrino fields. We call this type of boundary conditions "semi-constrained". In this model, the lepton-flavor-violating off-diagonal elements of the slepton mass matrix are induced by radiative corrections from the neutrino Yukawa couplings, just as in the MSSM extended with the right-handed neutrinos, and these off-diagonal elements induce sizable rates of \mu \to e \gamma depending on the parameter space. Since this model has more free parameters than the MSSM, the parameter region favored from the Higgs boson mass can slightly differ from that in the MSSM. We show that there is a parameter region in which the \mu \to e \gamma decay can be observable in the near future even if the SUSY mass scale is about 4 TeV.
Introduction
It is now clear that lepton flavor number is not a conserved symmetry, because of the experimental observations of neutrino oscillations [1]. In the minimal extensions of the Standard Model (SM) with Majorana neutrino mass terms, the branching ratios for charged lepton-flavor violating (LFV) processes are extremely small, since they are suppressed by at least a factor of m²_ν/m²_W, which makes it very difficult for near-future experiments to detect LFV signals. On the other hand, in more general extensions of the SM, which are motivated by several reasons, it is known that sizable LFV rates are predicted depending on the parameter region. If LFV processes are discovered, this would be an unambiguous, albeit indirect, signature of physics beyond the SM (BSM). Recently, the MEG experiment reported a new upper limit of Br(µ → eγ) < 5.7 × 10^-13 [2]. This already gives a strong constraint on models beyond the SM, and hence it is very important to keep updating these upper bounds on the LFV processes. Supersymmetry (SUSY) is still a promising candidate for physics beyond the SM [3]. Many efforts have been devoted to the discovery of SUSY at the LHC, but so far in vain. The most studied model of SUSY is the minimal SUSY SM (MSSM). Even in the framework of the MSSM, there are some unsolved problems such as the µ problem. The next-to-minimal SUSY SM (NMSSM) is an extension of the MSSM with an SM-singlet Higgs chiral superfield Ŝ. The NMSSM could give a hint towards solving the µ problem, since in this model the µ term is induced by the vacuum expectation value (VEV) of the scalar component S of Ŝ. In this sense the NMSSM is a natural extension of the MSSM.
One of the difficulties in the MSSM is the Higgs boson mass. In the MSSM, the tree-level lightest Higgs boson mass is bounded from above as m_h ≤ m_Z |cos 2β|, and one has to rely on large radiative corrections to reproduce the observed Higgs boson mass of about 126 GeV [1].
The main contribution to the radiative corrections comes from the top Yukawa coupling [4][5][6], and to maximize this effect one needs a top-squark mass much larger than the top-quark mass. In the NMSSM, the tree-level lightest Higgs boson mass is given approximately by [7]

m²_h ≃ m²_Z cos² 2β + λ² v² sin² 2β ,   (1.2)

where v ∼ 174 GeV. As seen from this equation, the contribution from the new parameter λ, which is the coupling among the new singlet S and the MSSM Higgs doublets H_u and H_d, makes the tree-level Higgs boson mass larger, in particular for small tan β. We have to note that the mixings between S and the MSSM Higgs doublets can make a negative contribution to the lightest Higgs boson mass, and the NMSSM does not always predict a larger Higgs boson mass. We will discuss this issue in detail later in this paper.
There are more than one hundred free parameters in the MSSM. Usually, one assumes an underlying scenario for SUSY breaking, which allows the number of free parameters to be reduced. In this paper we assume minimal supergravity (mSUGRA)-like boundary conditions, in which the SUSY breaking parameters m_0, M_1/2, A_0 are universal at the GUT scale. The parameters at the SUSY scale are obtained by evolving these parameters according to the renormalization group equations (RGE). These mSUGRA-like boundary conditions are very effective for avoiding constraints from SUSY-induced flavor changing neutral current (FCNC) processes. This is also true for the charged LFV processes, and in the mSUGRA scenario, also known as the constrained MSSM (cMSSM), there is essentially no charged LFV. The situation is similar in the constrained NMSSM.
The neutrino masses are exactly zero in the framework of the SM, which clearly needs modifications in view of the observation of neutrino oscillations. One of the most natural mechanisms to explain the tiny neutrino masses is the (Type-I) seesaw mechanism [8][9][10], which we consider in this paper. The extension of the original seesaw mechanism to SUSY models is straightforward. In the MSSM extended with the right-handed neutrinos ν R , which we call the MSSM + ν R model, even if one assumes the mSUGRA-like boundary conditions at the GUT scale, off-diagonal elements in the slepton mass matrices are induced via radiative corrections from the neutrino Yukawa couplings, which can predict sizable rates for the LFV processes like µ → eγ. This mechanism also works in the NMSSM extended with the right-handed neutrinos, which we call the NMSSM + ν R model and which we consider in this paper, but since there are more free parameters than in the case of the MSSM + ν R model, the predicted LFV rates can slightly differ from those in the MSSM + ν R model in the parameter region favored from the Higgs boson mass.
The contents of this paper are as follows. In Section 2, we introduce the model we work with, and in Section 3 we explain the origin of the LFV (off-diagonal) elements of the slepton mass matrices. In Section 4, we discuss constraints on the parameters of the model. We introduce the results of numerical calculations in Section 5, and in Section 6 we summarize this paper.
Z 3 -invariant NMSSM
The NMSSM is an extension of the MSSM with an extra Higgs chiral superfield Ŝ which is a singlet under the SM gauge group. In the Z_3-invariant NMSSM [7], the µ term µ Ĥ_u·Ĥ_d in the superpotential of the MSSM is replaced by the term λ Ŝ Ĥ_u·Ĥ_d, and the µ-parameter is determined from the singlet VEV s as µ_eff = λs. Namely, the superpotential of the Z_3-invariant NMSSM is given as

W_NMSSM = λ Ŝ Ĥ_u·Ĥ_d + (κ/3) Ŝ³ + (MSSM Yukawa terms) ,

where the dot in the term Ĥ_u·Ĥ_d represents the antisymmetric product of two SU(2) doublets, and the hats on the fields stand for the corresponding superfields. We assume that R-parity is conserved, and assign even R-parity to Ŝ. The soft SUSY breaking terms consist of the gaugino masses M_α, the soft scalar masses for the sfermions, the Higgs doublets and the singlet (including m²_S |S|²), and the trilinear couplings, including the NMSSM-specific terms λ A_λ S H_u·H_d + (κ/3) A_κ S³ + h.c. In the case of the constrained NMSSM, the gaugino masses, sfermion soft SUSY breaking masses, and the A-parameters take values which are "universal" at the GUT scale, similarly to the case of the cMSSM:

M_α = M_1/2 ,  (m²_f̃)_ij = m²_0 δ_ij ,  and all A-parameters equal to A_0 (at the GUT scale),

where α (α = 1, . . . , 3) labels the gauge groups of the SM, and i and j are the indices for generations, i, j = 1, . . . , 3. As for the parameters A_λ, A_κ and m²_S which are specific to the NMSSM, we assume that the values of A_λ and A_κ at the GUT scale are not necessarily equal to A_0, and that m²_S at the GUT scale can be different from m²_0. We call the NMSSM with this class of boundary conditions the semi-constrained NMSSM.
Z 3 -invariant NMSSM extended with right-handed neutrinos
In this paper we take the simplest extension of the Z_3-invariant NMSSM with right-handed neutrinos, in which the (type-I) seesaw mechanism [8][9][10] is at work. The superpotential is given by

W = W_NMSSM + (Y_N)_ij N̂ᶜ_i L̂_j·Ĥ_u + (1/2)(M_N)_ij N̂ᶜ_i N̂ᶜ_j ,

where the Z_3-charges are assigned as in Table 1 [11]. This charge assignment excludes the term (λ_ν)_ij Ŝ N̂ᶜ_i N̂ᶜ_j from the superpotential 1.
The light neutrino mass matrix in this model is obtained from the seesaw formula, m_ν ≃ v²_u Y_N^T M_N^{-1} Y_N, and is diagonalised by the MNS matrix U_MNS [12]. In the standard representation of the PDG, the matrix reads:

U_MNS = [  c12 c13                               s12 c13                               s13 e^{-iδ}
          -s12 c23 - c12 s23 s13 e^{iδ}           c12 c23 - s12 s23 s13 e^{iδ}          s23 c13
           s12 s23 - c12 c23 s13 e^{iδ}          -c12 s23 - s12 c23 s13 e^{iδ}          c23 c13 ] × diag(1, e^{iα21/2}, e^{iα31/2}) ,

where c_ij = cos θ_ij and s_ij = sin θ_ij. The mixing angles θ_ij (i, j = 1, . . . , 3, i < j) describe the mixing between the mass eigenstates ν_i and ν_j, and the factors δ, α_21, α_31 are complex phases, representing the Dirac phase and the two Majorana phases, respectively. The values of these parameters are taken according to the latest data [1].

1 It is possible to derive the (left-handed) neutrino masses via the type-I seesaw mechanism from the Majorana masses which emerge from the term (λ_ν)_ij Ŝ N̂ᶜ_i N̂ᶜ_j after replacing S with its VEV. In this case, since the singlet VEV ⟨S⟩ is at most O(1-100 TeV), the Majorana masses must be of about the same order, which forces us to assume very small neutrino Yukawa couplings (Y_N) in order to explain the tiny neutrino masses. This makes the LFV rates extremely small and hence we do not consider this scenario in this paper.
The mass-squared differences are also important parameters; for the larger (atmospheric) splitting, |Δm²| = (2.52 ± 0.07) × 10⁻³ eV² (normal mass hierarchy) and (2.44 ± 0.06) × 10⁻³ eV² (inverted mass hierarchy). (2.14) In this paper, we assume the normal hierarchy scenario for the neutrino masses, and take the central values of the mass-squared differences and mixing angles from the global fits. Concerning the complex phases, we take δ = α_21 = α_31 = 0 for simplicity. The remaining free parameters are the 3 × 3 elements of M_N. Although it is known that the structure of this matrix influences the predicted LFV rates [13][14][15][16][17], in this paper we assume (M_N)_ij = M_ν δ_ij, where M_ν is a real number.
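For orientation, the sketch below constructs the MNS matrix in the PDG parameterisation with the phases set to zero, as assumed above, and the light neutrino masses for the normal hierarchy with m_ν1 = 0. The numerical mixing angles and the solar mass splitting are indicative global-fit values, not the precise inputs of this work.

```python
# Sketch: MNS matrix in the PDG parameterisation (phases set to zero, as
# assumed in the text) and light neutrino masses for the normal hierarchy
# with m1 = 0. The mixing angles and the solar splitting are placeholder
# global-fit values; the atmospheric splitting is taken from Eq. (2.14).
import numpy as np

theta12, theta23, theta13 = np.radians([33.5, 45.0, 8.5])   # placeholder angles
dm2_21, dm2_31 = 7.5e-5, 2.52e-3                             # eV^2

s12, c12 = np.sin(theta12), np.cos(theta12)
s23, c23 = np.sin(theta23), np.cos(theta23)
s13, c13 = np.sin(theta13), np.cos(theta13)

U = np.array([
    [ c12*c13,                 s12*c13,                 s13],
    [-s12*c23 - c12*s23*s13,   c12*c23 - s12*s23*s13,   s23*c13],
    [ s12*s23 - c12*c23*s13,  -c12*s23 - s12*c23*s13,   c23*c13],
])

m_nu = np.array([0.0, np.sqrt(dm2_21), np.sqrt(dm2_31)])  # eV, normal hierarchy
print("Light neutrino masses (eV):", m_nu)
print("Unitarity check, max |U U^T - 1|:", np.abs(U @ U.T - np.eye(3)).max())
```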
Lepton Flavor Violation
In this section we discuss charged lepton flavor violation, taking the NMSSM + ν R model as an example of new physics beyond the SM.
µ → eγ in the Standard Model with ν R
Within the SM, the neutrinos are strictly massless and lepton flavor number is exactly conserved. The experimental observations of neutrino oscillations [1], however, make it clear that we have to extend the SM in such a way that it can accommodate the neutrino masses and mixings. One of the simplest extensions is to introduce right-handed neutrinos (ν R ) which are singlet under the SM gauge group, which allows us to introduce Dirac mass terms for the neutrinos in the Lagrangian.
Once we introduce right-handed neutrinos in the SM, charged lepton flavor number is in general no longer conserved. This is similar to the situation in the quark sector: the mismatch between the gauge eigenstates and the mass eigenstates violates lepton flavor number conservation. The branching ratio of µ → eγ in this model is given by [18][19][20]

Br(µ → eγ) = (3α/32π) | Σ_i U*_µi U_ei (m²_ν,i / M²_W) |² .

The suppression factor (m²_ν,i - m²_ν,1)² / M⁴_W makes the branching ratio extremely small, and it is very difficult for near-future experiments to detect µ → eγ in this model. On the contrary, in non-minimal extensions of the SM such as the (N)MSSM+ν_R, sizable LFV rates can be predicted depending on the parameter region, and this makes LFV searches very important as a probe of new physics beyond the SM. Figure 1: The diagrams which give dominant contributions to the l_i → l_j γ decay in the NMSSM+ν_R model.
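A quick numerical evaluation of the GIM-suppressed branching ratio quoted above makes the suppression explicit; with illustrative oscillation parameters it comes out at the level of 10^-55-10^-54, hopelessly far below any conceivable experiment.

```python
# Numerical sketch of the GIM-suppressed branching ratio quoted above,
# Br = (3*alpha/32*pi) * |sum_i U*_mu_i U_e_i * m2_nu_i / M_W^2|^2,
# with illustrative oscillation parameters. Only the order of magnitude
# matters here; it comes out around 1e-55 to 1e-54.
import numpy as np

alpha = 1 / 137.036
M_W = 80.4e9                      # eV
dm2_21, dm2_31 = 7.5e-5, 2.5e-3   # eV^2 (mass-squared splittings w.r.t. nu_1)
theta12, theta23, theta13 = np.radians([33.5, 45.0, 8.5])

# Relevant MNS entries (phases neglected)
U_e2 = np.sin(theta12) * np.cos(theta13)
U_mu2 = np.cos(theta12) * np.cos(theta23) - np.sin(theta12) * np.sin(theta23) * np.sin(theta13)
U_e3 = np.sin(theta13)
U_mu3 = np.sin(theta23) * np.cos(theta13)

amp = U_mu2 * U_e2 * dm2_21 / M_W**2 + U_mu3 * U_e3 * dm2_31 / M_W**2
br = 3 * alpha / (32 * np.pi) * abs(amp) ** 2
print(f"Br(mu -> e gamma) ~ {br:.1e}")
```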
µ → eγ in NMSSM + ν R Model
In the NMSSM+ν_R model, there are two diagrams which give dominant contributions to the l_i → l_j γ decays (where i and j are generation indices running from 1 through 3 with i > j), shown in Fig. 1. One is the diagram with a neutralino and a charged slepton in the loop, and the other involves a chargino and a sneutrino. In general, the amplitude T for the l_i → l_j γ decay can be written as

T = e ε^{α*} ū_j(p - q) [ m_{l_i} i σ_{αβ} q^β (A^L_2 P_L + A^R_2 P_R) ] u_i(p) ,

where e is the positron charge, ε_α is the polarization vector of the photon, and u_i and u_j are the spinors for the initial- and final-state leptons, respectively. The operators P_{L,R} stand for the chiral projection operators. The model dependence of the amplitude is contained in the coefficients A^L_2 and A^R_2, and by calculating the diagrams in Fig. 1 we can determine A^L_2 and A^R_2. In the case of the MSSM+ν_R model, the explicit forms of A^L_2 and A^R_2 are given, for example, in Refs. [21][22][23]. In the case of the NMSSM+ν_R model, they are essentially the same as in the MSSM+ν_R model, except that there are five neutralinos, instead of four, at low energies, and we can use the expressions in Refs. [21][22][23] with small modifications. Using the formulas mentioned above, the decay branching ratio can be calculated from the amplitude as

Br(l_i → l_j γ) = (e²/16π) m⁵_{l_i} ( |A^L_2|² + |A^R_2|² ) / Γ_{l_i} ,

where Γ_{l_i} is the total decay width of the lepton l_i. In order to have a non-vanishing LFV rate from the diagrams in Fig. 1, we must have off-diagonal elements in the slepton mass matrices. The charged-slepton mass matrix can be written in terms of the blocks M²_LL, M²_RR and M²_LR = (M²_RL)†, which are 3 × 3 matrices whose (i, j) elements contain the soft masses and the trilinear terms, with v_d ≡ v cos β. In this paper, we assume mSUGRA-like boundary conditions, in which none of the SUSY breaking parameters carrying flavor indices has flavor mixing at the GUT scale. This means that there are no off-diagonal elements in the matrices M²_l and M²_ν at that scale. However, off-diagonal elements in these mass matrices are induced by radiative corrections at energy scales higher than M_N, as can be seen in the RGE, where t = ln Q with Q being the renormalization scale. This directly means that both M²_l and M²_ν have off-diagonal elements at low energies. The size of these off-diagonal elements can be roughly estimated as [21][22][23]

(Δm²_l̃)_ij ≈ - (3 m²_0 + A²_0)/(8π²) (Y†_N Y_N)_ij ln(M_GUT/M_N)   (i ≠ j).

The branching ratio can then be estimated in terms of the off-diagonal elements as [23]

Br(l_i → l_j γ) ∼ (α³/G²_F) |(Δm²_l̃)_ij|² / m⁸_SUSY × tan² β .   (3.12)

At this moment, the most stringent experimental constraint on µ → eγ is given by the MEG experiment, with the upper limit 5.7 × 10^-13 [1,2]. This bound will be further improved by the upgraded MEG experiment to ∼ 6 × 10^-14 [24], which makes the experiment a very important probe of new physics beyond the SM.
Other cLFV processes
In this paper we focus on µ → eγ in the later sections, but there are many other charged LFV processes [25]. Here we mention some of them. There are two other l_i → l_j γ processes, τ → µγ and τ → eγ. Their current experimental limits are Br(τ → eγ) < 3.3 × 10^-8 and Br(τ → µγ) < 4.4 × 10^-8 [1]. In the near future, these limits are expected to be improved to the level Br(τ → lγ) < 1.0 × 10^-9 at Belle II [26]. Under the assumptions we set out in Section 2, the µ → eγ decay is more sensitive to SUSY particles, and hence we focus on µ → eγ in this paper.
Other important cLFV processes include l_i^+ → l_j^+ l_j^+ l_j^- and µ-e conversion in nuclei. As for the former process, the branching ratio can be related to that of the l_i → l_j γ decay as [23]

Br(l_i^+ → l_j^+ l_j^+ l_j^-) ≈ (α/3π) [ ln(m²_{l_i}/m²_{l_j}) - 11/4 ] Br(l_i → l_j γ) ,   (3.13)

and hence Br(l_i^+ → l_j^+ l_j^+ l_j^-) can be calculated once Br(l_i → l_j γ) is obtained. The current experimental limit for µ+ → e+ e+ e- is Br(µ+ → e+ e+ e-) < 1.0 × 10^-12 [1], and this is expected to be improved to Br(µ+ → e+ e+ e-) < 1.0 × 10^-16 at the Mu3e experiment at PSI [27]. Concerning µ-e conversion in nuclei, there is a simple relation between the conversion rate B_µe(N) and Br(µ → eγ) when the photon-mediation diagram gives the dominant contribution [28],

B_µe(N) ≃ R(Z) Br(µ → eγ) ,

where B_µe(N) ≡ Γ(µ⁻N → e⁻N)/Γ(µ⁻N → capture) is the conversion rate normalized to the muon capture rate, and R(Z) is a parameter which depends on the atomic number Z of the nucleus which captures the muon. The current limits are B_µe(Ti) < 4.3 × 10^-12 and B_µe(Au) < 7 × 10^-13 [1]. The near-future experiments are the COMET experiment at J-PARC [29], the Mu2e experiment at FNAL [30], and the PRISM/PRIME experiment at J-PARC [31], which are expected to improve the bounds to B_µe(Al) ∼ 7 × 10^-17, B_µe(Al) ∼ 6 × 10^-17, and B_µe(Al) ∼ 10^-18, respectively. Since the R(Z) factors for these experiments are R ∼ 0.0025 for Al and R ∼ 0.0040 for Ti [28], these experiments are expected to go beyond the corresponding limit of the µ → eγ decay by 3-4 orders of magnitude, and this will be very useful to probe a broader parameter region of new physics.
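The photon-dominance relation above can be used to translate the planned µ-e conversion sensitivities into an equivalent Br(µ → eγ) reach, as in the short sketch below; the R(Z) values and sensitivities are those quoted in the text.

```python
# Sketch of the photonic-dominance relation B_mue(N) ~ R(Z) * Br(mu -> e gamma)
# quoted above, used to translate planned mu-e conversion sensitivities into
# an equivalent Br(mu -> e gamma) reach. R values and sensitivities follow
# the numbers given in the text.
R = {"Al": 0.0025, "Ti": 0.0040}

planned_sensitivity = {          # normalised conversion rates from the text
    ("COMET", "Al"): 7e-17,
    ("Mu2e", "Al"): 6e-17,
    ("PRISM/PRIME", "Al"): 1e-18,
}

for (experiment, nucleus), b_mue in planned_sensitivity.items():
    equivalent_br = b_mue / R[nucleus]
    print(f"{experiment:12s} ({nucleus}): B_mue = {b_mue:.0e} "
          f"-> equivalent Br(mu -> e gamma) ~ {equivalent_br:.0e}")
```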
Constraints on the Parameters in the Model
In this section we discuss constraints on the parameters in the NMSSM + ν R model. Some of the issues below are already discussed in literature [7].
Tadpole conditions
In the NMSSM, there are three tadpole conditions. At tree-level they read: where µ eff = λs and B eff = A λ + κs. We can use these relations to determine three parameters from other parameters. For example, we can use these relations to determine µ eff , B eff and m 2 S from the other parameters. Later we will discuss which parameters we use as input.
Maximal Tree-level Higgs Mass condition
One of the advantages of the NMSSM over the MSSM is that there is a parameter region in which the lightest Higgs boson mass can be made larger than in the MSSM. As can be seen from Eq. (1.2), in order for the Higgs boson mass to be larger, it is favorable to have large λ and small tan β. The approximate formula Eq. (1.2) is obtained by neglecting the mixings between the MSSM Higgses and the singlet Higgs in the CP-even Higgs-boson mass matrix, where v_u ≡ v sin β and the lower-left components are related to the upper-right components by the condition (M²_S,Tree)_ij = (M²_S,Tree)_ji. If we take the mixing to the singlet Higgs into account, the lightest Higgs-boson mass is reduced [7]: the mixing to the singlet Higgs makes the tree-level lightest Higgs-boson mass smaller. The λ dependence of the lightest Higgs-boson mass mainly comes from the second and third terms, and too large a value of λ makes the Higgs-boson mass very small. There are two ways to decrease the mixing with the singlet: one way is to assume a small λ (≲ 0.1), and the other is to tune the parameters to satisfy the relation of Eq. (4.6) 2, which suppresses the singlet-doublet mixing.
Conditions from positive CP-even and CP-odd Higgs boson mass-squared
The (3, 3) element of the CP-odd Higgs-boson mass matrix is given in Eq. (4.7). In a broad parameter region, the third term on its right-hand side gives the dominant contribution. Therefore, in order for the CP-odd Higgs mass-squared to be positive, we must have the condition κ s A_κ ≲ 0, (4.8) in the approximation that the first and second terms in Eq. (4.7) are negligible compared to the third term.
Another condition is that the (3, 3) element of the CP-even Higgs-boson mass-squared matrix should be positive, where we work in the approximation s ≫ v_u, v_d. This condition comes from the requirement that the singlet Higgs-boson mass-squared must be positive when the mixing between the singlet and the MSSM Higgs doublets is neglected. Summing up, the condition which A_κ should satisfy is −4(κs)² ≲ κ s A_κ ≲ 0. In the numerical analysis presented in this paper, we give A_κ as an input parameter at the SUSY scale.
Constraint from non-vanishing VEV of S
There is a condition on the model parameters from the requirement that the singlet Higgs S has a non-zero VEV, ⟨S⟩ ≡ s ≠ 0. When s ≫ v_u, v_d, the potential for S reduces to a function of S alone. Requiring that this potential has a minimum at S = s ≠ 0, and that the value of V(S) at S = s is smaller than V(0), yields the condition given in [7].
Constraint from Perturbativity of λ
The tree-level Higgs boson mass becomes larger for larger values of λ, unless the mixing with the singlet is taken into account. However, there is a limit on the size of λ which comes from a theoretical consideration: in order for λ not to blow up below the GUT scale, the value of λ at the SUSY scale must be smaller than ∼ 0.7 [7].
Condition from the SM-like lightest Higgs boson
In this paper, we identify the lightest CP-even Higgs boson with the Higgs boson discovered at the LHC [1]. The properties of the discovered particle, such as its decay branching ratios, are known to be consistent with those of the Higgs boson in the minimal SM. This means that the lightest CP-even Higgs boson in the model we consider should not be singlet-like, but should instead resemble the lightest Higgs boson of the MSSM, which is known to become SM-like in the decoupling limit.
Numerical Results
In this section, we give our numerical results. First, we explain how we choose the independent input parameters. To keep the similarity to the cMSSM as far as possible, we choose tan β at the SUSY scale and m_0, M_1/2 and A_0 at the GUT scale as input parameters. In addition, since the parameter λ directly enters the expression for the lightest Higgs-boson mass, we choose λ at the SUSY scale as input. If we further choose either κ or A_λ as input, we can use the two tadpole conditions Eqs. (4.1) and (4.2) to determine µ_eff (= λs) and B_eff (= κs + A_λ), and then use Eq. (4.3) to fix m²_S by using the value of A_κ as an additional input. Below we consider two cases: in one case we choose κ at the SUSY scale as input, and in the other case we take A_λ at the GUT scale as input. Summing up, we consider two sets of input parameters. In one case, we choose as input tan β, λ, κ, A_κ at the SUSY scale and m_0, M_1/2, A_0 at the GUT scale, (5.1) which we call the case 1; in the other case, we take as input tan β, λ, A_κ at the SUSY scale and m_0, M_1/2, A_0, A_λ at the GUT scale, (5.2) which we call the case 2.
Case 1
In this case, we determine the parameters s = µ_eff/λ and A_λ = B_eff − κs by using the tadpole conditions. If we are to use Eq. (4.6), we have to tune κ to satisfy it; the value of κ required by Eq. (4.6) grows with tan β and λ. This means that for large tan β and for large λ, the κ parameter becomes too large, λ above the weak scale becomes too large to remain perturbative, and it eventually blows up below the GUT scale. We therefore do NOT assume Eq. (4.6) for the case 1, and instead assume a small λ (∼ 0.1) to reduce the mixing of the MSSM Higgses with the singlet Higgs, so as not to decrease the tree-level Higgs-boson mass.
Numerical Results
Our numerical results for Br(µ → eγ) and the Higgs boson mass in the case 1 are given in Figs. 2 (a) and (b). In the figures (a) and (b), κ at the SUSY scale is taken to be 0.09 and 0.05, respectively. The rest of the input parameters are taken to be the same in the two figures, and the input SUSY parameters are λ = 0.1, A κ = −50 (GeV) at the SUSY scale and A 0 = −500 (GeV) at the GUT scale. We take m 0 = M 1/2 , and the right-handed neutrino Majorana mass is taken to be M ν = 5.0 × 10 14 (GeV).
Also shown in Figs. 2 (a) and (b) are the contours of the lightest Higgs boson mass. From the figures, we find that a smaller κ makes the Higgs boson mass smaller. We have numerically confirmed that the difference in the Higgs boson mass comes mainly from the values of κ, and that the differences in the other parameters, such as A_λ, are not very important for the predictions of the Higgs boson mass. This dependence of the Higgs boson mass on κ can be understood from Eq. (4.5): a large κ makes the (3, 3) element of M²_S,Tree larger, and thereby makes the mixing between the MSSM Higgses and the singlet Higgs, which gives a negative contribution to the lightest Higgs boson mass, smaller.
If we assume (4.6), then a large λ induces a large κ via the RGEs, and λ can develop a Landau pole below the GUT scale depending on the parameters. For small tan β, the large top Yukawa coupling makes the right-hand side of the RGE for λ large, and this makes it easier for the Landau pole of λ to occur. From the figures, we find that there is a parameter region favored by the Higgs boson mass measurement in which the predicted value of Br(µ → eγ) is within reach of the near-future experiment, even if m_0 is as large as ∼ 4 TeV.
We here comment on the dependence of the Higgs boson mass on κ. In the figures, we take κ only down to 0.05. For smaller values of κ, for example κ ≲ 0.03 for λ = 0.1, the Higgs boson mass decreases sharply with decreasing κ. This sharp κ dependence comes from the factor (λ/κ)² in the third term of the right-hand side of Eq. (4.5). If we take a smaller value of λ, this sharp decrease sets in at a smaller value of κ, and hence a smaller κ can be taken as well.
Case 2
In this case, if we are to use Eq. (4.6), the value of κ is determined in the same way as in the case 1. By the same reasoning as in the case 1, this implies that if tan β or λ is too large, the κ parameter blows up at higher scales and becomes non-perturbative. Therefore, if we are to use Eq. (4.6), we need small λ and small tan β, but this choice makes the Higgs boson mass very similar to the MSSM case and hence is not very interesting. We therefore do NOT use Eq. (4.6) in the case 2 either.
Numerical Results
Our numerical results for the case 2 are given in Figs. 3 (a) and (b), which correspond to two different input values of A_λ at the GUT scale. From the figures, we find that only in Fig. 3 (b) is there an extra Higgs-mass-favored region of parameter space where tan β and m_0 (= M_1/2) are both large. This difference between the two figures comes mainly from the difference in the value of κ; the differences in the other parameters, such as A_λ, enter the prediction for the Higgs boson mass only indirectly through the value of κ.
We now explain why the changes in the input value of A λ at the GUT scale affect the value of κ at the SUSY scale.
To do so, we first explain the dependence of κ on m_0 (= M_1/2) and tan β for a fixed value of A_λ(M_GUT). Below we will show that κ becomes smaller for larger m_0 and for larger tan β in the region tan β ≫ 1 of the parameter space shown in Figs. 3 (a) and (b). In the upper-right region of Fig. 3 (b), the value of κ becomes κ ≲ 0.03, where the Higgs boson mass decreases relatively quickly with decreasing κ, as discussed at the end of the case 1 discussion in this section. In the parameter region shown in Fig. 3 (a), the value of κ is larger than 0.03, and hence this relatively fast decrease does not happen. It then remains to explain why κ is smaller in Fig. 3 (b). This is because A_λ(M_SUSY) is larger in Fig. 3 (b), since A_λ(M_GUT) is larger there. The relation between A_λ(M_SUSY) and κ is κ = (B_eff − A_λ)/s, and hence a larger A_λ means a smaller κ.
Let us first discuss the change in κ for different m_0 (= M_1/2), for fixed values of tan β and A_λ(M_GUT). The value of A_λ at the SUSY scale is obtained by solving the RGE in Eq. (5.7). For our sample parameters, A_λ(M_SUSY) becomes larger for larger m_0 (= M_1/2) at fixed tan β. Therefore, for a fixed value of tan β, a larger m_0 (= M_1/2) makes κ smaller through the relation κ = (B_eff − A_λ)/s. Next, we discuss the dependence of κ on tan β, fixing the values of m_0 (= M_1/2) and A_λ at the GUT scale. Since here we are mainly interested in the difference at the large-tan β region, in this paragraph we assume tan β ≫ 1. For large tan β, A_λ(M_SUSY) becomes larger for larger tan β, since the fourth term on the right-hand side of Eq. (5.7), which involves the bottom Yukawa coupling, becomes more important. This increase in A_λ(M_SUSY) for larger tan β makes κ smaller at fixed m_0, since κ = (B_eff − A_λ)/s. Another reason why κ becomes smaller for larger tan β comes from the values of µ_eff and B_eff, although this effect is less important at large tan β. The values of µ_eff and B_eff at the SUSY scale are obtained by solving the tadpole conditions; at tree level, both µ_eff and B_eff become smaller for larger tan β for our sample parameters. From the relation µ_eff = λs, a smaller µ_eff means a smaller s for fixed λ. From κ = (B_eff − A_λ)/s, the variation of κ comes from that of s (= µ_eff/λ) and that of B_eff. For our sample parameters, the decrease in B_eff due to the increase in tan β has a larger effect on κ than that of s, so κ becomes smaller for larger tan β.
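For reference, the tree-level relations used repeatedly in the argument above (all already introduced in the text) can be collected as

\[
\mu_{\mathrm{eff}} = \lambda s , \qquad B_{\mathrm{eff}} = A_\lambda + \kappa s
\quad\Longrightarrow\quad
s = \frac{\mu_{\mathrm{eff}}}{\lambda} , \qquad
\kappa = \frac{B_{\mathrm{eff}} - A_\lambda(M_{\mathrm{SUSY}})}{s} .
\]

Thus, for µ_eff and B_eff fixed by the tadpole conditions, any effect that raises A_λ(M_SUSY) or lowers B_eff directly lowers κ, which is the mechanism at work in the two dependences discussed above.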
As for Br(µ → eγ), also in the case 2 we find that there is a parameter region which is favored by the Higgs boson mass and in which the predicted value of Br(µ → eγ) is within reach of the near-future experiments, even if m_0 ∼ 4 TeV, a scale which has not yet been probed at the LHC.
Summary
In this paper, we have studied cLFV in the semi-constrained NMSSM + ν_R model, taking into account the recent results on the Higgs boson mass determination. We have taken the boundary conditions at the GUT scale to be MSSM-like and semi-constrained, in the sense that the SUSY-breaking parameters A_λ, A_κ, m²_S which are specific to the NMSSM are not necessarily equal to A_0, A_0, m²_0, respectively. We have considered two cases: in one case the parameters (s, A_λ, m²_S) are determined from the tadpole conditions, which we call the case 1, while in the other case (s, κ, m²_S) are determined from the other input parameters, which we call the case 2.
One of the advantages of the NMSSM is that the tree-level lightest Higgs boson mass can be made larger than in the MSSM by taking a large value of λ. In addition to this effect, there is another new effect in the Higgs sector of the NMSSM: we also have to take into account the mixing with the singlet Higgs. This mixing can decrease the Higgs boson mass, depending on the parameters. In the semi-constrained scenario we have considered, we find it difficult to realize both a large λ and a small mixing with the singlet at the same time. Hence in this paper we have assumed a small λ (∼ 0.1), which makes the mixing with the singlet small.
In the case 1, we have obtained results similar to those in the MSSM + ν_R model. We have also shown that the Higgs-boson-mass-favored parameter region depends on the value of κ. As the case 2, we have considered the case where the κ parameter is not an input parameter but is determined from the other parameters via the tadpole conditions, and we have obtained a favored region partly different from that of the case 1. In both cases, we have shown that in the NMSSM + ν_R model there is a parameter region in which the predicted value of Br(µ → eγ) is large enough for the µ → eγ decay to be observable at the near-future experiments, even if the SUSY mass scale is about 4 TeV.
"Physics"
] |
Detecting Treatment Interference under the K-Nearest-Neighbors Interference Model
We propose a model of treatment interference where the response of a unit depends only on its treatment status and the statuses of units within its K-neighborhood. Current methods for detecting interference include carefully designed randomized experiments and conditional randomization tests on a set of focal units. We give guidance on how to choose focal units under this model of interference. We then conduct a simulation study to evaluate the efficacy of existing methods for detecting network interference. We show that this choice of focal units leads to powerful tests of treatment interference which outperform current experimental methods.
Introduction
Randomized experiments have long been viewed as the gold standard for causal inference [1]. In epidemiology, researchers may want to study the effect of vaccines on a target population to protect individuals who are at risk of an infectious disease [2]. Technology companies such as Google, Amazon, Facebook, LinkedIn, Netflix, Twitter, and others run online randomized controlled experiments to evaluate the effect of a new feature or product on user engagement [3,4,5]. However, in such settings, units under study may interact with each other; for example, a user assigned a new feature may interact with one not assigned the feature, thereby impacting the response of the latter user. This interaction poses challenges in estimating and inferring treatment effects under traditional causal inference methodologies [6].
In particular, a fundamental assumption in the traditional causal inference framework is that there is only a single version of each treatment status and the response of a unit is unaffected by the treatment status of any other unit (see Imbens and Rubin [1] for a review). This is known as the stable unit treatment value assumption (SUTVA) [7]. SUTVA is violated under settings in which there is treatment interference, that is, when a treatment assigned to a unit affects the response of other units. Effects on response due to treatment interference are also known as spillover, peer influence, social interaction, or network effects.
The dependence of a unit's outcome on other units' exposures or treatments poses statistical challenges because the potential outcome of a unit-the hypothetical outcome of a unit given a realized treatment assignment-is not only affected by its own treatment status but also by the treatment conditions received by other units.In some settings, interference can be considered as a nuisance parameter, and experiments may be designed in such a way to mitigate this interference, thereby reducing the bias in treatment effect estimates [8].Although these designs may minimize the effect of interference, such designs are not always possible.On the other hand, in other settings, estimating the causal effect in the presence of interference is of interest itself.Examples of this include studies on the efficacy of vaccines in which vaccinated and non-vaccinated members of a population interact with each other and researchers are interested in the overall infection rates.Under these latter settings, considerable work has been devoted to the development of reasonable models of interference in order to ensure identification of both the direct effect of treatment and the effect of treatment spillover on the response [9,10,11,12,13].
In this paper, we introduce a model of treatment interference called the K-nearest neighbors interference model (KNNIM).Under KNNIM, the response of a unit is affected only by the treatment given to that unit and the treatment statuses of its K nearest neighbors (KNN).Such models of interference may be reasonable, for example, under social network settings, where only a few of the observable potential interactions (e.g.accounts that a Twitter user follows) may be influential on a unit's response, and the strength of interaction may be measured by the amount of engagement between users.
We then perform a simulation study to determine how existing methods, and one newly developed method, for detecting treatment interference perform on data generated under a KNNIM model. While these methods were originally developed to detect arbitrary interference [14,15,16,4,5], it is reasonable to assume that their efficacy may vary depending on the structure of interference; however, little work has been done to assess how these methods perform under various interference models. We repeatedly simulate data under a KNNIM model and apply these methods to the simulated data. We then assess the power of these methods to detect treatment interference when it is present and their likelihood of concluding insignificant interference when it is absent. Results suggest that methods which incorporate structured selection of focal units [14,15] tend to perform reasonably well on this type of data. We then apply the existing methods to a study on the efficacy of an anti-conflict intervention in schools to determine their ability to detect interference on a real dataset.
The rest of this paper is organized as follows.A motivating example is provided in Subsection 1.1.An overview on causal inference under interference is presented in Section 2. KNNIM is introduced in Section 3. Applying conditional randomization tests for detecting interference is discussed in Section 4.An algorithm on the selection of the focal units under KNNIM is provided in Section 5. Section 6 gives a summary of current methods of detecting interference.Our proposed test statistic for detecting interference under KNNIM is given in Section 7. Section 8 evaluates current methods as well as our test under KNNIM model through a simulation.The application of our method to our motivating example is given in Section 9. Section 10 concludes.
Motivating Example: An Anti-Conflict Program in New Jersey Schools
To motivate our approach, we refer to a recent randomized field experiment assessing the efficacy of an anti-conflict intervention aimed to reduce conflict among middle school students in 56 schools in New Jersey [17].In particular, the experiment was explicitly designed to determine whether benefits of the program can be propagated through social interactions between students.The intervention was administered through "seed" students-those that are selected to actively participate and advocate for the anti-conflict program.These students attended meetings with the program staff every two weeks to address conflict behaviors in their schools and to talk about strategies to mitigate peer conflict.Additionally, seed students were encouraged to publicly reflect their opposition to conflict in their school-for example, identifying a common conflict in their school and creating a hashtag about it-and were also asked to distribute orange wristbands with the intervention logo to students that demonstrate anti-conflict attitudes.
Seed students were randomly assigned as follows.First, within each of the 56 schools, between 40 and 64 students were identified as being eligible to be seed students.Then, from the 56 schools in the study, 28 schools were randomly assigned to receive the anti-conflict program.Finally, within each of these assigned schools, half of the eligible students were selected to be seed students.Analysis was performed only on students that were eligible to be seeds (N = 2,451).
Of particular note, to assess potential pathways for treatment interference, students were asked to identify, in order, the 10 other students that they spent the most time with during the previous few weeks.These students include both seed and non-seed students.Specifically, the survey asks the following question: "In the last few weeks I decided to spend time with these students at my school: (in school, out of school, or online) -Number 1 is for the person you spent most time with, then number 2, then number 3...You don't have to fill in all the lines!To make it easier, you can write down their initials here, then find their number.It can be boys and girls!" [17].Students' responses to this question may include both seed and non-seed students.This yields a unique dataset in which the strength of the interaction between two individuals under study is explicitly recorded.Hence, statistical analyses may benefit from an interference model, such as KNNIM, that allows for direct incorporation of the relative strengths of the interactions.For this dataset, KNNIM models with K up to 10 may be applicable.
An analysis performed by Aronow and Samii [9] estimated the indirect effect of being a seed student on wearing an orange wristband to be about 0.15 with a 95% confidence interval between about 8 and 23 percentage points.That is, students exposed to treated peers were about 15% more likely to report wearing an orange wristband in comparison to students in control schools.
Background and Related Work
The Neyman-Rubin Causal Model (NRCM) is a popular model of response in causal inference [18,1,7,19]. Consider a simple experiment on N units, numbered 1, . . ., N, in which each unit is given either a treatment or a control condition. The NRCM assumes that the response of unit i, denoted Y_i, follows the model Y_i = y_i(W_i) = W_i y_i(1) + (1 − W_i) y_i(0). Here, y_i(W_i) is the potential outcome under treatment status W_i ∈ {0, 1} (the hypothetical response of unit i had that unit received treatment status W_i), and W_i is a treatment indicator: W_i = 1 if unit i receives treatment and W_i = 0 if unit i receives control. Inherent in this model is the no-interference assumption or stable unit treatment value assumption (SUTVA). This assumption states that there is only a single version of each treatment status and that a unit's outcome is only affected by its own treatment status and is not affected by the treatment status of any other unit [20,7].
In many settings, SUTVA is not plausible, and considerable work has been performed on analyzing causal effects when SUTVA is violated.Sobel [6] showed that violating SUTVA can lead to wrong conclusions about the effectiveness of the treatment of interest.Forastiere et al. [10] derive bias formulas for the treatment effect when SUTVA is wrongly assumed and show that the bias that is due to the presence of interference is proportional to the level of interference and the relationship between the individual and the neighborhood treatments.
When interference is present, the effect of a treatment on a unit's response may occur through direct application of the treatment to that unit, indirectly through application of treatment to units that interact with the original unit, or both [2]. We can extend the potential outcomes framework to account for both direct and indirect treatment components. Let y_i(W) = y_i(W_i, W_{−i}) denote the potential outcome of unit i under treatment allocation W ∈ {0, 1}^N, where unit i is given treatment W_i and the remaining treatment statuses are allocated according to W_{−i}. Responses Y_i satisfy Y_i = Σ_{w ∈ {0,1}^N} 1(W = w) y_i(w), where 1(W = w) is an indicator variable that is equal to 1 if and only if the observed treatment status W is equal to the hypothetical treatment status w.
The average direct effect τ_dir is the average difference in a unit's potential outcomes when changing that unit's treatment status and holding all other units' treatment statuses fixed. It may be defined as τ_dir = (1/N) Σ_i [y_i(1, 1_{−i}) − y_i(0, 1_{−i})], (1) where 1 denotes a vector of all 1's. In contrast to the direct effect, the average indirect effect τ_ind is defined as the average difference in a unit's potential outcome when changing all other treatment statuses from control to treated, holding its own treatment fixed. It may be defined as τ_ind = (1/N) Σ_i [y_i(0, 1_{−i}) − y_i(0, 0_{−i})], (2) where 0 denotes a vector of all 0's. The average total effect τ_tot measures the average difference in potential outcomes between all units receiving treatment and all units receiving control: τ_tot = (1/N) Σ_i [y_i(1, 1_{−i}) − y_i(0, 0_{−i})]. Summing (1) and (2) yields the expression τ_dir + τ_ind = τ_tot. (3) Alternatively, the quantities τ_dir and τ_ind may be defined respectively as (1/N) Σ_i [y_i(1, 0_{−i}) − y_i(0, 0_{−i})] and (1/N) Σ_i [y_i(1, 1_{−i}) − y_i(1, 0_{−i})] while still ensuring that (3) holds. These quantities may differ from (1) and (2) if there is interaction between direct effects and indirect effects, that is, if the differences y_i(1, W_{−i}) − y_i(0, W_{−i}) differ depending on the allocation of treatment given to W_{−i}. Moreover, direct effects may be defined for each possible allocation W_{−i}; however, such definitions may prevent a decomposition of the total effect into direct and indirect effects [2]. Finally, when SUTVA holds, τ_tot = τ_dir and τ_ind = 0.
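As an illustration of this decomposition (not part of the original derivation), the sketch below evaluates τ_dir, τ_ind and τ_tot for a toy potential-outcome function; the particular functional form and numbers are invented purely for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 6
baseline = rng.normal(size=N)          # unit-specific baselines (toy values)

def y(i, w):
    """Toy potential outcome: own treatment adds 1, each treated other unit adds 0.1."""
    others_treated = w.sum() - w[i]
    return baseline[i] + 1.0 * w[i] + 0.1 * others_treated

ones, zeros = np.ones(N, dtype=int), np.zeros(N, dtype=int)

def switch(w, i, value):
    w = w.copy(); w[i] = value; return w

# tau_dir: change unit i's treatment, all other units treated
tau_dir = np.mean([y(i, switch(ones, i, 1)) - y(i, switch(ones, i, 0)) for i in range(N)])
# tau_ind: change all other units from control to treated, unit i held at control
tau_ind = np.mean([y(i, switch(ones, i, 0)) - y(i, switch(zeros, i, 0)) for i in range(N)])
# tau_tot: everyone treated vs. everyone control
tau_tot = np.mean([y(i, ones) - y(i, zeros) for i in range(N)])

print(tau_dir, tau_ind, tau_tot)   # tau_dir + tau_ind equals tau_tot, as in (3)
```

Because this toy model has no interaction between direct and indirect effects, the alternative definitions mentioned above would give the same values.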
There are a variety of strategies for designing and analyzing experiments under treatment interference. One approach is to view interference as a nuisance parameter and to reduce the effect of treatment interference on causal estimates through effective experimental design. This line of work aims to use available information on potential interactions among units to design an experiment that mitigates the effect of this interaction. Often, this is done by forming clusters with high within-cluster interaction and randomizing treatment across clusters rather than individual units [8,3,21]. However, knowledge of the interaction network may not be necessary to make progress on this problem; Sävje et al. [22] investigate methods for consistent estimation of treatment effects when the structure of interference is unknown. This approach may not be ideal when indirect effects are of interest to the researcher.
Rather than considering interference as a nuisance, some researchers tend to relax SUTVA and allow for different models of interference, considering interference effect as of primary interest.One significant example of this involves experiments in the efficacy of vaccines where the likelihood of a person contracting an infectious disease depends on others in the same population who are vaccinated [23,2,24].Under this setting, interference is allowed within groups but not across groups-this is referred to as a partial interference assumption [6], i.e., SUTVA is assumed between groups [25,2,26,27,6,28].
A similar approach to partial interference assumes that treatment interference on a unit can only occur within a small closed neighborhood of that unit [12]-the K-nearest-neighbors interference model (KNNIM) introduced in this paper is a variant of this setting.Another common approach is to assume that the treatment condition can only "spill over" and affect the response of a control unit if a certain number or fraction of potential interactors of that unit receive treatment [3,13].Finally, in its least restrictive form, Aronow and Samii [9] consider the use of Horvitz-Thompson estimators for estimating treatment effects under arbitrary forms of interference.
Another research direction focuses on the development of hypothesis tests to detect the presence of treatment interference in an experiment.Aronow [14] introduces a framework for conditional randomization tests for detecting treatment interference.Athey et al. [15] extend this approach to develop tests for more general forms of treatment interference.Basse et al. [16] build on this work and consider the validity of the test by conditioning on observed treatment assignment of the subset of units who received an exposure of interest.Saveski et al. [5] and Pouget-Abadie et al. [4] develop an experimental framework to simultaneously estimate treatment effects and test whether treatment interference is present within an experiment.
K-Nearest Neighbors Interference Model
To obtain meaningful estimates and inferences on treatment effects under interference, interference models often assume some structure restricting how interference can propagate across units. Otherwise, if a model allows for arbitrary interference, each unit has a distinct exposure for every treatment assignment of all N individuals. This results in 2^N distinct potential outcomes for each unit and N · 2^N potential outcomes for the experimental population in total. However, we only observe N of these potential outcomes, and many causal quantities of interest will be unidentifiable under arbitrary interference.
Thus, the assumptions that researchers make about interference often lie strictly between assuming SUTVA and assuming arbitrary interference, and often greatly reduce the number of potential outcomes for each unit [9,12,13,21].Many of these models specify that the units' outcomes are affected by the number/fraction of treated neighbors, but do not specify which neighbors impact unit response and how they affect the response.
We now propose an interference model, the K-nearest-neighbors interference model (KNNIM), in which the treatment status of a unit j can affect the response of a unit i only if j is one of i's K nearest neighbors. This model allows the neighbors of i to contribute differing effects on the response of i depending on the proximity of their relationship: neighbors that are "closer" to unit i may have a larger influence on the response of i. Additionally, this model restricts the number of potential outcomes to 2^(K+1) for each unit.
Interaction Measure
We begin formalizing KNNIM by introducing an interaction measure d(i, j) that quantifies how strongly unit i associates with unit j. This measure does not necessarily need to be computed for every pair of units (i, j); however, we assume that at least K values of d(i, j), j ≠ i, can be computed for each unit i. The measure d(i, j) may be recorded explicitly; for example, Section 1.1 describes an example where respondents rank 10 students from 1 to 10, where 1 denotes the closest connection, 2 denotes the second closest connection, and so on [17]. Alternatively, d(i, j) may combine several interaction measures to form a proxy for overall interaction. For example, an experiment on a social network may define d(i, j) to be an index variable aggregating the number of comments, likes, and other forms of engagement performed by user i and directed towards user j. Smaller values of d(i, j) may correspond to stronger or weaker interactions from i towards j depending on researcher preference; in this paper, we assume smaller values correspond to stronger interactions.
Of particular note, the dissimilarity measure is allowed to be asymmetric; that is, d(i, j) and d(j, i) may differ.Such a property may be necessary if one user strongly influences another user, but not vice versa.A common instance of this involves social media moguls; a mogul i may induce strong engagement from millions of followers j, but may interact sparingly with the vast majority of these followers.This would suggest that followers of the mogul may be strongly impacted by an intervention given to the mogul-indicated by a small value of d(j, i)-but the mogul's behavior may not be altered by their followers-indicated by a large value of d(i, j).
Additionally, it may also be the case that the same absolute value of d(i, j) may be interpreted differently across users.For example, suppose that d(i, j) is an index variable for engagement on a social media platform.If two users i and i ′ interact with the same user j in identical ways, we may have d(i, j) = d(i ′ , j).However, if i engages with the platform often and i ′ does so sparingly, then d(i, j) may be relatively large for user i (that is, i may interact even more with close users j * , leading to smaller values of d(i, j * )), but d(i ′ , j) may be relatively small for user i ′ .
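As a concrete, hypothetical illustration of such an engagement-based interaction measure, the sketch below turns directed counts of comments and likes into an asymmetric d(i, j); the weights and the inverse transform are arbitrary choices made for illustration, not a prescription from the text.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 5
comments = rng.poisson(2.0, size=(N, N))   # comments[i, j]: comments by i on j's posts
likes = rng.poisson(5.0, size=(N, N))      # likes[i, j]: likes by i on j's posts
np.fill_diagonal(comments, 0)
np.fill_diagonal(likes, 0)

# Engagement index: weight comments more heavily than likes (arbitrary weights).
engagement = 3.0 * comments + 1.0 * likes

# Smaller d(i, j) should mean a stronger interaction, so invert the engagement index.
d = 1.0 / (1.0 + engagement)
np.fill_diagonal(d, np.inf)                # d(i, i) is never used

# d is asymmetric in general: d[0, 1] need not equal d[1, 0]
print(d[0, 1], d[1, 0])
```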
Remarks
Note that when we define our interaction measure d(i, j), we assume that these interactions can be measured precisely and without error. This assumption may be reasonable under certain settings, for example the motivating example in Section 1.1, but may be unlikely to hold in others. For example, although a social network may have an error-free record of interactions between users (and thus it may be possible to determine d(i, j) exactly on that network), an external observer of the network may have only a small fraction of these observations with which to determine the strength of interactions between users. Moreover, even in the presence of perfect information, useful estimates and inferences still require careful selection of d(i, j) to ensure it accurately measures the strength of the interaction between users. Settings under which these interactions are measured with error have been considered previously [22,29]; such a consideration is outside the scope of this paper but may be an area of further research.
Additionally, previous work on treatment interference has considered models where the interaction is determined by the absolute value of d(i, j), rather than its value relative to d(i, j * ) for other units j * [29].While such a model may be plausible under certain settings, the aforementioned examples suggest scenarios for which a model that relies on the relative value of d(i, j) rather than its absolute value may be more appropriate.
K-Neighborhood Interference Assumption
Let d(i, (j)) denote the jth smallest value of {d(i, j*) : j* ≠ i}. For ease of exposition, we assume that all values of d(i, j) are unique (in practice, ties may be broken arbitrarily). The K-neighborhood of unit i, denoted N_iK, is the set of the K "closest" units to unit i: N_iK = {j ≠ i : d(i, j) ≤ d(i, (K))}. Define N_{−iK} = {1, . . ., N} \ (i ∪ N_iK) as the set of units that are outside of i's K-neighborhood. Note that the sets {i, N_iK, N_{−iK}} form a partition of the N units.
Recall that W_i is a treatment indicator for unit i, and let W = (W_1, W_2, . . ., W_N) = {W_i, W_{N_iK}, W_{N_{−iK}}} denote the vector of treatment assignments given to all N units. Additionally, recall that y_i(W) denotes the potential outcome for unit i under treatment allocation W ∈ {0, 1}^N. We now give the assumption that defines the K-nearest neighbors interference model. Assumption 1 (K-Neighborhood Interference Assumption (K-NIA)). Units under study satisfy the K-Neighborhood Interference Assumption (K-NIA) if and only if, for each unit i and for all treatment allocations W_{N_{−iK}}, W′_{N_{−iK}}, the potential outcomes satisfy y_i(W_i, W_{N_iK}, W_{N_{−iK}}) = y_i(W_i, W_{N_iK}, W′_{N_{−iK}}). Assumption 1 states that the potential outcome of unit i is only affected by its own treatment and by the treatments assigned to its K nearest neighbors; changing treatments for units outside the K-neighborhood does not affect the potential outcome of unit i. This is a special case of the neighborhood interference assumption (NIA) described in Sussman and Airoldi [12]. In its most general form, the K-nearest neighbors interference model (KNNIM) assumes only that the treatment interference structure satisfies Assumption 1. For convenience, we will suppress the treatment statuses W_{N_{−iK}} when referring to the potential outcomes y_i.
For ease of exposition, it is often convenient to view the units under study as a mathematical graph. For KNNIM, let G_KNN = (V, E_KNN) denote the directed graph whose vertices are the N units and which contains a directed edge ⃗ij ∈ E_KNN whenever j ∈ N_iK; each edge ⃗ij has weight equal to the interaction measure d(i, j). In this paper, we may refer to G_KNN as the weighted adjacency graph. Throughout this article, the terms vertex, unit, and individual are used interchangeably.
Let A denote the N × N adjacency matrix of G_KNN, which indicates the presence or absence of an edge ⃗ij in the graph G_KNN. That is, A_ij = 1 if ⃗ij ∈ E_KNN and A_ij = 0 otherwise. Note that the diagonal elements of the adjacency matrix are zero; that is, A_ii = 0 for all i.
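A minimal sketch of these definitions, assuming a precomputed interaction matrix d in which smaller values indicate stronger interactions: it extracts each unit's K-neighborhood N_iK and builds the corresponding adjacency matrix A.

```python
import numpy as np

def knn_adjacency(d, K):
    """Return (neighborhoods, A): neighborhoods[i] is N_iK and A is the N x N adjacency matrix."""
    N = d.shape[0]
    A = np.zeros((N, N), dtype=int)
    neighborhoods = []
    for i in range(N):
        dist = d[i].copy()
        dist[i] = np.inf                      # a unit is not its own neighbor
        nbrs = np.argsort(dist)[:K]           # indices of the K smallest d(i, j); ties broken arbitrarily
        neighborhoods.append(set(nbrs.tolist()))
        A[i, nbrs] = 1                        # directed edge i -> j for each j in N_iK
    return neighborhoods, A

# Example with a Euclidean-distance interaction measure between covariate vectors
rng = np.random.default_rng(2)
X = rng.normal(size=(8, 3))
d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
nbhd, A = knn_adjacency(d, K=3)
assert np.all(A.sum(axis=1) == 3) and np.all(np.diag(A) == 0)
```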
Choosing the neighborhood size K
The choice of K for a given study may vary depending on the study's field, the purpose of the study, and the availability of data. The experimenter may also use prior knowledge from previous studies to help choose K; for example, if previous studies have indicated that a person's behavior is influenced by their two closest friends, setting K = 2 may be appropriate. When possible, K should be selected in the early phases of the study to help construct the adjacency matrix A when collecting data.
However, another factor that should be addressed when choosing the size of K is the sample size needed to accurately quantify, estimate, and draw inference on the K-nearest neighbors indirect effects. As mentioned above, the number of possible treatment exposures under KNNIM is 2^(K+1). Hence, to ensure sufficient power, many methods that incorporate KNNIM will require a sufficient number of units assigned to each of these exposure levels. In our experience, a good heuristic is to require roughly 30 observations for each treatment exposure. Under this heuristic, most studies may find models with K = 2 or 3 to be most useful.
Issues may arise if responses are used to inform the value of K.For example, a post-hoc selection of K could lead to inaccurate detection of treatment interference due to inherent multiple testing issues (inferences must account for testing both the appropriateness of K and the presence of interference in the model) and/or bias in indirect effect estimates.It may be possible to incorporate additional structure into KNNIM to allow for a rigorous treatment of this problem, but such work is outside of the scope of this paper.See Alzubaidi and Higgins [30] for additional information about the estimation of indirect effects under KNNIM.
Randomization Inference for Detecting Interference
We now describe the framework for randomization inference for testing the presence of treatment interference under KNNIM. Recall that W is the treatment assignment vector and y_i(W) is the potential outcome of unit i under treatment W. Let T = T(W, y(W)) denote a test statistic, a random variable whose randomness follows from the random treatment assignment vector W. Let W_obs and Y_obs = Y(W_obs) denote the observed treatment assignment vector and the observed outcome vector respectively. Then T(W_obs, Y_obs) is the observed value of the test statistic. We aim to test the null hypothesis of no treatment interference for each unit: H_0: y_i(W_i, W_{N_iK}) = y_i(W_i, W′_{N_iK}) for all units i, all own-treatment statuses W_i, and all neighborhood allocations W_{N_iK}, W′_{N_iK}. (4) Typically, randomization tests under the potential outcome framework assume a sharp null hypothesis of no unit-level treatment effects, and potential outcomes can be inferred under this sharp null across randomizations [31]. However, since the hypothesis (4) does not make assumptions about the direct effect of treatment on each unit, the potential outcome y_i(W_i, W_{N_iK}) may not be imputable for randomizations under which W_i differs from its observed value. Progress can be made by conditioning on a set of randomizations Ω and choosing a test statistic T such that T is imputable under randomizations in Ω [16]. A conditional p-value is then obtained by computing, for example, the fraction of randomizations W′ ∈ Ω for which the test statistic is at least as extreme as the observed value T(W_obs, Y_obs). Following Aronow [14] and Athey et al. [15], this conditional randomization inference can be performed by first selecting a subset of the units under study, called focal units, and then only considering randomizations of treatment W that do not affect the treatment status of the focal units. Only variant units, those that are not focal units, can have differing treatment statuses across randomizations. In other words, we simulate draws from the random treatment assignment vector conditional on the fixed treatment of the focal units. Thus, the null hypothesis of no interference is sharp on the focal units, since only treatment statuses of variant units, the only units that can impose indirect effects on the focal units, are randomized. The test statistic T is computed only on the outcomes of the focal units and hence is imputable under alternative treatment assignment vectors.
Randomization tests tend to be the preferred approach for testing for interference under the potential outcome framework.Asymptotic results for statistics for testing interference can be challenging to derive for a number of reasons, including having to account for inherent dependencies between units' treatment allocations induced through the adjacency matrix A. Hence, the use of asymptotic tests tends to be restricted either to settings that rely on strong distributional assumptions or for carefully designed studies.
Finally, while these approaches were originally developed for tests of treatment interference, Basse et al. [16] extend this work to build a framework for randomization tests for more general forms of causal effects.
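The conditional randomization test described above can be sketched as follows; `test_stat` stands in for any of the statistics discussed later, treatment is re-randomized only over the variant units, and a two-sided p-value based on absolute values is used here as one possible convention.

```python
import numpy as np

def conditional_randomization_pvalue(test_stat, Y_obs, W_obs, focal, n_draws=1000, seed=0):
    """Monte Carlo p-value for H0: no interference, conditioning on the focal units' treatments.

    test_stat(W, Y, focal) must depend on Y only through the focal units, so that it is
    imputable under H0 for every re-randomization of the variant units.
    """
    rng = np.random.default_rng(seed)
    variant = np.setdiff1d(np.arange(len(W_obs)), focal)
    t_obs = test_stat(W_obs, Y_obs, focal)
    t_null = np.empty(n_draws)
    for b in range(n_draws):
        W = W_obs.copy()
        W[variant] = rng.permutation(W_obs[variant])   # re-randomize variant units only
        # Under H0 the focal units' outcomes are unchanged, so Y_obs can be reused.
        t_null[b] = test_stat(W, Y_obs, focal)
    return (1 + np.sum(np.abs(t_null) >= np.abs(t_obs))) / (n_draws + 1)
```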
Selection of the Focal Units
Although the choice of the focal units does not affect the validity of randomization tests for interference, it plays a key role in determining the power of these tests [15].More precisely, there is a trade-off between the size of the focal set (the set of focal units) and the size of the variant set (the set of variant units).Adding additional focal units allows for larger sample sizes when testing for treatment interference-thereby increasing the power of these tests-but will decrease the number of potential randomizations on the variant units-which decreases their power.For general interference models, several useful heuristics for choosing focal units have been proposed, varying widely in complexity.We now outline a few of these methods.
The most basic approach, suggested by Athey et al. [15], is to simply select at random half of the units in the sample to be focal units-the other half are variant units.Note, this rule does not take into account, in any way, the interference model being assumed.
For models in which interference only exists between units with d(i, j) ≤ r (see Section 3.1.1), Aronow [14] suggests a rule for choosing the number of focal units N_F that ensures a significant number of treated and control variant units within each focal unit's neighborhood, where N_{T,var,r} and N_{C,var,r} denote the number of treated and control units in the variant set, respectively, within a "distance" r of a randomly selected focal unit.
Finally, when the adjacency graph G = (V, E) is known, Athey et al. [15] propose using an ε-net as the set of focal units: a set of units such that there is a path of ε edges or fewer in G from any variant unit j to some focal unit i [32]. Note that this is equivalent to choosing a maximal independent set of units in the graph G_ε = (V, E_ε), where an edge ⃗ij ∈ E_ε if and only if there is a path of ε edges or fewer from i to j in G.
Under KNNIM, we suggest choosing focal units in such a way that the K-neighborhoods of the focal units do not overlap. This can be done by creating a 2-net on the undirected adjacency graph G*_KNN = (V, E*_KNN) obtained by ignoring edge directions, where E_KNN is the edge set of the directed weighted adjacency graph G_KNN. The 2-net can then be used as the set of focal units. This enables us to remove dependencies between outcomes of focal units induced by indirect effects; in fact, if treatment is Bernoulli-randomized across units, the responses of the focal units will be independent of each other. Additionally, a substantial fraction of focal units may still be selected under this condition, increasing the power of the randomization inference.
We now describe a simple algorithm to obtain a 2-net on the undirected adjacency graph G * KNN .
Algorithm 1. Given a K-nearest neighbors undirected adjacency graph G*_KNN = (V, E*_KNN), the following algorithm will obtain a 2-net on G*_KNN.
Step 1: (Initialize) Let U = V. Initialize the set of focal units F = ∅. Initialize the set of variant units I = ∅.
Step 2: (Select focal unit) While |U| > 0, choose one vertex i ∈ U at random. Set i as a focal unit: i ∈ F.
Step 3: (Find nearest neighbors) Set I equal to all units j such that ij ∈ E*_KNN.
Step 4: (Find neighbors of neighbors) Find all units k ∈ V \ I such that, for some unit j ∈ I, jk ∈ E*_KNN. Set these units k ∈ I.
Step 5: (Remove units) Remove all vertices in F and I from U.
Step 6: (Repeat or terminate) If |U| = 0, stop. The set of focal units F is a 2-net for G*_KNN. Otherwise, set I = ∅ and return to Step 2.
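A straightforward implementation of Algorithm 1, assuming the undirected adjacency structure is supplied as a symmetric 0/1 matrix (for example, the element-wise maximum of A and its transpose from the directed KNN graph):

```python
import numpy as np

def two_net_focal_units(A_undirected, seed=0):
    """Select focal units forming a 2-net on the undirected KNN graph (Algorithm 1)."""
    rng = np.random.default_rng(seed)
    N = A_undirected.shape[0]
    U = set(range(N))                         # Step 1: unprocessed units
    F = set()                                 # focal units
    while U:                                  # Steps 2-6
        i = int(rng.choice(sorted(U)))        # Step 2: pick a remaining unit at random
        F.add(i)
        nbrs = set(np.flatnonzero(A_undirected[i]))           # Step 3: neighbors of i
        nbrs_of_nbrs = set()
        for j in nbrs:                                         # Step 4: neighbors of neighbors
            nbrs_of_nbrs |= set(np.flatnonzero(A_undirected[j]))
        blocked = (nbrs | nbrs_of_nbrs) - {i}
        U -= blocked | {i}                                     # Step 5: remove processed units
    return sorted(F)
```

Units removed in Step 5 but not selected as focal units form the variant set used in the randomization tests.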
Current Methods for Detecting Interference
Current methods for detecting interference include conditional randomization tests [14,15] (as outlined in Section 4) and carefully designed experiments performed with the intention to detect interference [4,5].We now provide a summary of these methods for testing for interference.For randomization tests, we focus on the choice of test statistic used.For experimental design methods, we describe both experimental setup and the test statistic.
Test Statistics for Randomization Tests
Aronow [14] introduced the randomization inference approach for testing for interference between units, where units are affected by their own treatment and by the treatment assigned to their immediate neighbors. In this test, the treatment status of a subset of focal units remains fixed; the rest of the units form the variant subset. The randomization inference is conditional on the observed treatment status of the fixed subset; that is, the test targets indirect effects resulting from the treatment allocation on the variant subset of units. A variety of test statistics may be used under this framework. One choice is the Pearson correlation coefficient ρ between the outcomes of the focal units (Y_F) and the "distance" from each focal unit to the nearest variant unit of a particular treatment status (D_nearest): ρ = Corr(Y_F, D_nearest). (5) A common choice of distance is the Euclidean distance between pretreatment covariates; this distance can be incorporated into the KNNIM framework through the interaction measure d. Aronow [14] advocates computing the Pearson correlation coefficient on the ranks of these quantities; however, preliminary simulations suggest that the statistic ρ tends to be more powerful for the models considered in Section 8. Athey et al. [15] extend this work and develop tests for more general realizations of interference (e.g., no higher-order interference). As part of this work, they suggest additional test statistics for detecting interference. The edge-level contrast statistic T_elc, a modification of a test statistic proposed by Bond et al. [33], is the difference between the average outcome of the focal units with treated neighbors and the average outcome of the focal units with control neighbors. Here, T_elc averages over edges ⃗ij where i is a focal unit and j is not a focal unit, where F_i is an indicator variable satisfying F_i = 1 if and only if i ∈ F.
A second test statistic is the score test statistic T_score [15]. This statistic is motivated by a model of treatment interference in which the indirect effect is proportional to the fraction of treated neighbors [34,11]. The score test begins by computing, for each focal unit i ∈ F, the residual r_i of the outcome from the mean outcome of the focal units sharing i's treatment status, where Ȳ_F,1^obs and Ȳ_F,0^obs are the average outcomes for the treated and control focal units respectively. Then, T_score is the covariance between these r_i terms and the fraction of treated neighbors for unit i. This statistic is computed across only the focal units that have at least one treated neighbor. Finally, Athey et al. [15] consider the has-treated-neighbor test statistic T_htn, a modification of the Pearson correlation coefficient (5). Instead of using the distance to the nearest treated neighbor, this statistic uses an indicator variable E_i for whether any of unit i's neighbors in the variant subset are treated. Then T_htn is the sample correlation between this indicator and the outcomes of the focal units F, where Ȳ_F^obs and S_{Y_F^obs} are the sample mean and standard deviation of the outcomes of the focal units respectively and S_E is the sample standard deviation of the E_i variables.
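As an illustration, a minimal sketch of the has-treated-neighbor statistic T_htn (the other statistics follow the same pattern); `A` is the adjacency matrix, `focal` indexes the focal units, and variant neighbors are those with F_j = 0. With A fixed (for example via functools.partial), the function can be plugged into the randomization-test sketch given earlier.

```python
import numpy as np

def t_htn(W, Y, focal, A):
    """Correlation between focal outcomes and an indicator of having a treated variant neighbor."""
    N = len(W)
    is_focal = np.zeros(N, dtype=bool)
    is_focal[focal] = True
    # E_i = 1 if any neighbor of focal unit i in the variant subset is treated
    E = np.array([np.any(A[i].astype(bool) & ~is_focal & (W == 1)) for i in focal], dtype=float)
    Yf = Y[focal]
    if E.std() == 0 or Yf.std() == 0:
        return 0.0                      # degenerate case: correlation undefined
    return np.corrcoef(Yf, E)[0, 1]
```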
Experimental Design Approach
Saveski et al. [5] and Pouget-Abadie et al. [4] present a two-stage experimental design to test for the presence of interference. In this design, the units under study are divided into two groups and two experiments are performed simultaneously: in one group, treatment is assigned completely at random, while in the other group, units are clustered and treatment is assigned across clusters rather than units. Then, estimates of the average direct effect are computed under the assumption of no interference for both the completely randomized and cluster-randomized designs. Finally, a standardized difference T_exp between these estimates is computed, T_exp = |τ̂_cr − τ̂_cbr| / σ̂_p, (7) where τ̂_cr and τ̂_cbr are the estimates of the direct effect under the completely randomized and cluster-randomized designs respectively and σ̂_p is a pooled standard deviation of responses from both designs [5]. Large values of T_exp imply the presence of indirect effects.
A conservative test of the null hypothesis of no treatment interference can be performed at the α significance level by rejecting the null hypothesis if and only if T_exp ≥ α^(−1/2). Additionally, as the number of units n → ∞, it can be shown that T_exp converges to a standard normal distribution (provided that cluster sizes remain fixed). Thus, an approximate size-α test can be conducted by rejecting the null hypothesis of no interference if T_exp ≥ z_{1−α/2}, where z_{1−α/2} is the 1 − α/2 quantile of the standard normal distribution.
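A sketch of the resulting decision rules, assuming the two direct-effect estimates and the pooled standard deviation have already been computed, and using the standardized-difference form of T_exp written above:

```python
from statistics import NormalDist

def interference_test(tau_cr, tau_cbr, sigma_p, alpha=0.05):
    """Conservative and asymptotic decisions based on the standardized difference T_exp."""
    t_exp = abs(tau_cr - tau_cbr) / sigma_p
    reject_conservative = t_exp >= alpha ** -0.5                       # threshold alpha^(-1/2)
    z = NormalDist().inv_cdf(1 - alpha / 2)                            # z_{1 - alpha/2}
    reject_asymptotic = t_exp >= z
    return t_exp, reject_conservative, reject_asymptotic
```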
K-Nearest Neighbors Indirect Effect Test Statistic
We now propose an additional test statistic designed to detect K-nearest neighbors indirect effects. Let Ȳ^obs(W_i, W_ℓ=1) and Ȳ^obs(W_i, W_ℓ=0) denote the average response of observed units that are assigned treatment status W_i and have their ℓth nearest neighbor assigned to the treatment condition and to the control condition, respectively. The K-nearest neighbors indirect effect test statistic T_knn is obtained by computing differences in these average outcomes between focal units that receive the same treatment status but differ on the status of their ℓth nearest neighbor, and summing these differences across each of the K nearest neighbors. That is, for W_i ∈ {0, 1} and ℓ ∈ {1, . . ., K}, define T_knn,ℓ(W_i) = Ȳ^obs(W_i, W_ℓ=1) − Ȳ^obs(W_i, W_ℓ=0), and define T_knn,ℓ as a weighted average of these terms, T_knn,ℓ = [N_Ft T_knn,ℓ(1) + N_Fc T_knn,ℓ(0)] / (N_Ft + N_Fc), where N_Ft and N_Fc are the number of treated focal units and control focal units respectively. We then define T_knn as the sum of these statistics: T_knn = Σ_{ℓ=1}^{K} T_knn,ℓ.
Note that, under the null hypothesis of no treatment interference, each of the T knn,ℓ (W i ) terms should be close to 0. Thus, since T knn is a linear combination of these terms, values of T knn that are relatively large in magnitude provide evidence against this null hypothesis, and so, |T knn | may be effective as a test statistic.Additionally, note that the statistic T knn,ℓ may be used directly for a test of interference stemming from treatments assigned to the ℓth-nearest neighbor.
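A sketch of T_knn under these definitions; `nbr` is assumed to be an N x K array whose (i, ℓ) entry gives the index of unit i's (ℓ+1)th nearest neighbor, and all averages are taken over the focal units only.

```python
import numpy as np

def t_knn(W, Y, focal, nbr):
    """K-nearest-neighbors indirect-effect statistic summed over neighbor positions."""
    focal = np.asarray(focal)
    K = nbr.shape[1]
    n_t = np.sum(W[focal] == 1)          # number of treated focal units
    n_c = np.sum(W[focal] == 0)          # number of control focal units
    total = 0.0
    for ell in range(K):
        W_ell = W[nbr[focal, ell]]       # treatment of each focal unit's (ell+1)th nearest neighbor
        t_ell = 0.0
        for w_i, n_w in ((1, n_t), (0, n_c)):
            own = (W[focal] == w_i)
            treated_nbr = Y[focal][own & (W_ell == 1)]
            control_nbr = Y[focal][own & (W_ell == 0)]
            if len(treated_nbr) and len(control_nbr):
                t_ell += n_w * (treated_nbr.mean() - control_nbr.mean())
        total += t_ell / (n_t + n_c)
    return total
```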
Simulation
We now compare and evaluate the performance of the methods covered in Sections 6 and 7 for testing the null hypothesis of no interference under the K-nearest neighbors interference model.
Data Generation Procedure
We generate the responses under a model, given in (8), which satisfies KNNIM with K = 3. In this model, the closest three neighbors affect the response Y_i; we use W_iℓ to denote the treatment status of the ℓth nearest neighbor of unit i. The covariates X_j, j = 1, 2, 3, are independent and identically distributed Normal(0, 1) random variables. We use the Euclidean distance between the covariates X_i and X_j as the interaction measure d(i, j): units with more similar covariate values are more likely to interact with each other. Note that the model (8) defines the set of potential outcomes for each unit i; simulated data are then generated by randomizing treatment across units. Different models are obtained by varying the coefficients β = (β_1, β_2, β_3, β_d) and the sample size N. We consider sample sizes of N = 256 and N = 1024.
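The exact functional form of model (8) is not reproduced here; the sketch below generates data of the same general type (own-treatment effect, effects of the three nearest neighbors' treatments, and Normal(0,1) covariates that also define the interaction measure), with an additive response and noise term assumed purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
N, K = 256, 3
beta = np.array([0.5, 0.3, 0.1])     # illustrative (beta_1, beta_2, beta_3), not values from Table 4
beta_d = 1.0                         # illustrative direct effect

X = rng.normal(size=(N, 3))                          # three iid Normal(0,1) covariates per unit
d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
np.fill_diagonal(d, np.inf)
nbr = np.argsort(d, axis=1)[:, :K]                   # K nearest neighbors by Euclidean distance

W = np.zeros(N, dtype=int)
W[rng.choice(N, N // 2, replace=False)] = 1          # half treated, half control

W_nbr = W[nbr]                                       # treatments of each unit's 3 nearest neighbors
Y = beta_d * W + W_nbr @ beta + X.sum(axis=1) + rng.normal(size=N)   # assumed additive response
```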
For each choice of sample size, we consider sixteen different models of interference. We describe these models in Table 4 in terms of the coefficient vector β. The first three elements of β represent the indirect effects contributed by the first, second, and third nearest neighbor respectively; the last element β_d is the unit's direct effect. In all models considered, the closer the relationship to unit i, the greater the indirect effect (β_1 ≥ β_2 ≥ β_3). The indirect effects in each set of models represent the degree of interference, starting from no interference in the first three models, followed by very weak interference in the next three models, weak interference in the next three models, moderate interference in the next three models, and finally strong interference in the last four models.
For datasets with N = 256 observations, 1,000 realizations of potential outcomes following each model are generated.Tests of indirect effects are then applied to each of the 1,000 realizations.Results for N = 256 are given in Section 8.4.Due to computational limitations, only 100 realizations are generated for models containing N = 1024 units.Results for N = 1024 are given in the Supplementary Material.
Simulation for Randomization Tests
We compare the performance of both conditional randomization tests and experimental design approaches for detecting interference.For the conditional randomization tests, for each set of generated potential outcomes, treatment is initially assigned completely at random to units, with half of the units receiving treatment and the other half receiving control.Then, focal units are selected according to Algorithm 1.We then proceed with randomization tests as described in Sections 4 and 6.1.We evaluate the performance of the following test statistics: the Pearson correlation coefficient (Pearson) [14], the edge level contrast statistic (ELC), the score statistic (Score), the has-treated-neighbor statistic (HTN) [15], and the K-nearest neighbors indirect effect test statistic (KNN).
Test statistics are computed across 1,000 randomizations for each realization of the potential outcomes; for each randomization, treatment statuses are fixed for focal units and are completely randomized across variant units.For each set of potential outcomes and for each choice of test statistic, we obtain a p-value for the null hypothesis of no treatment interference.Thus, for N = 256, we obtain a distribution of 1,000 p-values for each test statistic under each model.The power of the tests can also be estimated by computing the fraction of p-values that fall beneath a pre-specified significance level α.
Simulation for Experimental Design Approach
In addition, we follow the experimental design in Saveski et al. [5] (described in Section 6.2) to determine its efficacy for testing whether SUTVA holds under KNNIM.For each set of generated potential outcomes, we divide the units into clusters of four units using a heuristic algorithm for the clique partitioning problem with minimum clique size requirement from Ji [35] (Algorithm 4).This clustering is performed once per set of potential outcomes.
We then randomly select half of the clusters to be cluster randomized; for this group, treatment is assigned at the cluster level, with half of the clusters receiving treatment and the other half receiving control.For units belonging to the remaining clusters, each unit's cluster assignment is ignored, and treatment is completely randomized across all of these remaining units.Again, half of these units receive treatment and the other half receive control.For each set of potential outcomes, the random selection of clusters and the treatment randomization is performed 1,000 times.
For each randomization, the statistic T exp in ( 7) is computed.We then perform a test of the null hypothesis of no treatment interaction at the α = 0.05 significance level.A conservative test rejects this null hypothesis if T exp ≥ α −1/2 and an asymptotic test rejects the null if T exp ≥ z 1−α/2 .Thus, for N = 256, we perform a total of 1,000,000 tests: that is, 1,000 tests for each of the 1,000 generated potential outcomes.By computing the fraction of rejected null hypotheses, we are able to assess the Type I Error (Models 1-3) and the power (Models 4-16) of the experimental design approach.
Discussion
Figure 1 provides a visual comparison of the distributions of p-values for the randomization tests to detect interference under KNNIM. Table 5 provides the estimated Type I Error and power of these tests (conducted at significance level α = 0.05) across the 16 considered models. As is expected by design [36], the p-values of all randomization tests under models without treatment interference (Models 1-3) are approximately uniformly distributed between 0 and 1. All tests lack power under very weak interference (Models 4-6), where the highest power is 0.110 for the KNN test followed by 0.108 for the Score test. Under weak interference (Models 7-9), the ELC, Score, and KNN tests seem to outperform the Pearson and HTN tests; the p-values are smaller overall for these three tests. Similar trends hold under moderate interference (Models 10-12) and strong interference (Models 13-16). In particular, under strong interference, the Score, KNN, and ELC tests have near 100% power to detect treatment interference.
However, the ELC and HTN tests seem to have some difficulty with detecting indirect effects when direct effects become large. For example, the p-values for these two tests under Models 9 and 12, models that have comparatively larger direct effects, are substantially larger than under Models 7 and 8 and Models 10 and 11 respectively. The Score and KNN tests do not suffer from this loss of power as direct effects increase. For example, for Model 9, the Score and KNN tests have an estimated power of 0.844 and 0.839 respectively, whereas the ELC and HTN tests have an estimated power of 0.553 and 0.249 respectively. Thus, among the considered tests, the Score and KNN tests seem to have the best combination of power in detecting treatment effects and isolating indirect effects in the presence of direct effects. Similar comparisons between the methods hold for datasets with N = 1024 and/or when focal units are selected from only one treatment condition (see the Supplementary Material for details).
Figure 3 gives box plots of the estimated rejection rate across all 1,000 generated potential outcomes for both the conservative and asymptotic tests using the experimental design method [4,5] with N = 256 and significance level α = 0.05.This plot also shows the estimated power of the considered randomization tests under these 16 models.Table 5 includes the median values of the rejection rates across the 1,000 generated potential outcomes for these tests.The conservative experimental approach appears to lead to a very conservative test; the true Type I Error is much smaller than α = 0.05, and the test appears to have weak power under very weak, weak and moderate interference.Even under Models 13-16, which exhibit strong interference, the conservative test only has a median power of approximately 0.6965.
The asymptotic test yields much more desirable results for our simulated data. Overall, the Type I Error seems quite close to the nominal α = 0.05. The asymptotic test outperforms the Pearson and HTN randomization tests for almost all models of interference, and has a power close to 1 for detecting interference under Models 13-16. However, the power of the asymptotic test still lags behind that of the Score, KNN, and ELC tests across all models.
When we increase the sample size to N = 1024, the conservative approach seems to be powerful for moderate and strong interference while the asymptotic approach is powerful for all interference models except the very weak interference models.However, both approaches remain comparatively less powerful than the Score, KNN, and ELC randomization tests (see the Supplementary Materials for details).
Analysis of Anti-Conflict Program Experiment
In this section we reanalyze data from the motivating study described in Section 1.1 designed to reduce conflict among middle school students in New Jersey.Following Paluck et al. [17], we only perform our analysis on seed-eligible students-hence, the adjacency matrix A only contains information about connections between seed-eligible students.We then select a set of focal units following the procedure in Algorithm 1.
For this study, randomization inference is then performed assuming complete randomization of treatment to the non-focal units.Note, this is a simplification of how treatment was originally assigned to seed-eligible students-specifically, treatment was block-randomized with the schools serving as blocks.However, as our focus is more on discussing the implementation of these randomization tests on data rather than confirming the results of Paluck et al. [17], we allow this simplifying assumption.
Selecting K
Recall that the K = 10 closest connections were identified for each student. However, implementing a KNNIM model with K = 10 is impractical for this example. For a study of this size (N = 2,451), such a model would result in too many potential exposures for each unit (2,048 in total) to allow for meaningful inference to be performed on the indirect effect. Moreover, seed-eligible students often identify connections with ineligible students which are not included in A; in fact, most seed-eligible students have fewer than 3 connections with other seed-eligible students. This complicates the implementation of KNNIM with K = 10, which (from Section 3) is only well-identified when each observation has at least K = 10 connections.
To determine whether a choice of K is appropriate for this application, we first subset all seed-eligible students that have at least K connections with other seed-eligible students. We then calculate how many of these students are exposed to each of the 2^(K+1) treatment exposures. Finally, we choose the largest K that yields sufficient sample sizes (at least 30 students) for each exposure for our KNNIM model.
To make this explicit, suppose we consider a KNNIM model with K = 2. This sample contains N = 348 units; that is, there are 348 seed-eligible students that interact with at least two other seed-eligible students. Moreover, there are eight treatment exposures possible for each student in this sample; in Table 1, we see that each possible exposure has at least 34 students assigned to it. Hence, K = 2 seems to be an acceptable choice. Now, suppose we restrict our analysis further to only eligible students in treated schools who have at least K = 3 seed-eligible nearest neighbors. In this case, the sample size is reduced to only 100 students. Additionally, from Table 2, we see that an insufficient number of units is assigned to each exposure; in fact, there is only one student in the sample for which both the student and all three of its seed-eligible nearest neighbors are treated. We conclude that K = 3 yields an inappropriate model, and continue our analysis using a KNNIM model with K = 2.
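The K-selection heuristic just described can be summarized with a short Python sketch. The function name, the assumption that each adjacency row lists neighbors in order of closeness, and the exposure encoding are all hypothetical; only the rule "largest K with at least 30 units per exposure cell" is taken from the text.

import numpy as np
from itertools import product

def choose_k(A, Z, k_max=10, min_cell=30):
    """Pick the largest K whose exposure cells all contain at least `min_cell` units.

    A : (N, N) 0/1 adjacency among seed-eligible students; the nonzero entries of
        row i are taken as i's connections (closeness ordering is an assumption).
    Z : (N,) 0/1 treatment indicator.
    """
    for K in range(k_max, 0, -1):
        eligible = np.where(A.sum(axis=1) >= K)[0]       # units with >= K eligible neighbors
        if len(eligible) == 0:
            continue
        counts = {}
        for i in eligible:
            nbrs = np.flatnonzero(A[i])[:K]              # first K eligible neighbors
            exposure = (Z[i],) + tuple(Z[nbrs])          # own treatment + neighbor treatments
            counts[exposure] = counts.get(exposure, 0) + 1
        cells = [counts.get(e, 0) for e in product((0, 1), repeat=K + 1)]
        if min(cells) >= min_cell:                       # all 2^(K+1) exposures populated
            return K, len(eligible)
    return None, 0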
Assessing indirect effects using randomization tests
We evaluate the performance of the randomization tests for the following statistics: the Pearson statistic (Pearson), the edge level contrast statistic (ELC), the score statistic (Score), the has-treated-neighbor statistic (HTN), and the K-nearest neighbors indirect effect test statistic (KNN). We choose focal units according to Algorithm 1, and treatment is re-randomized across non-focal units 1,000 times. The p-value is the proportion of the replications where the absolute value of the simulated test statistic is greater than the absolute value of the observed test statistic. Results are given in Table 3. For context, an analysis of this experiment by Aronow and Samii [9] estimated the indirect effect to be 0.154; that is, the probability that a non-seed student wears a wristband increases by about 15% if they have a connection with a seed student. Failure of these permutation tests to detect an indirect effect does not negate the findings of the original study. For example, from Section 8, we find that permutation tests struggle to detect indirect effects of similar sizes consistently. Additionally, this modified demonstration dramatically reduces the sample size of the original study, further decreasing the power of these tests.
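As a reference for readers implementing these tests, the following is a minimal Python sketch of the re-randomization p-value described above; `stat_fn` is a stand-in for any of the five statistics, and the complete-randomization scheme over non-focal units is an assumption of the sketch rather than a description of the original block-randomized design.

import numpy as np

def randomization_p_value(stat_fn, Y, Z, focal, n_rerand=1000, rng=None):
    """Share of re-randomizations whose statistic exceeds the observed one in absolute value.

    stat_fn(Y, Z, focal) -> float is any of the test statistics (Pearson, ELC,
    Score, HTN, KNN); treatment is re-randomized over non-focal units only,
    holding the focal units' assignments fixed.
    """
    rng = np.random.default_rng(rng)
    observed = abs(stat_fn(Y, Z, focal))
    non_focal = np.setdiff1d(np.arange(len(Z)), focal)
    hits = 0
    for _ in range(n_rerand):
        Z_sim = Z.copy()
        Z_sim[non_focal] = rng.permutation(Z[non_focal])   # keep the number treated fixed
        hits += abs(stat_fn(Y, Z_sim, focal)) > observed
    return hits / n_rerand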
Conclusion
Traditional causal inference methodologies may fail to make reliable causal statements on treatment effects in the presence of interference.A substantial amount of recent work has been devoted to causal inference under interference, including methods for detecting treatment interference [14,9,15,16,10,11,4,5,12,13].
We consider a new model of treatment interference-the K-nearest-neighbors interference model (KNNIM)-in which the treatment status of a unit i affects the response of a unit j only if i is one of j's K closest neighbors.We give advice for selecting focal units for conditional randomization tests for detecting interference under KNNIM, and suggest a new test-statistic-the K-nearest neighbors indirect effect test statistic (KNN)-for these randomization tests.We then perform a simulation study to compare the efficacy of both the randomization tests and experimental design approach for detecting interference under KNNIM.
Figure 1 :
Figure 1: Boxplots of p-values for the Pearson test (Pearson), has treated neighbor test (HTN), edge level contrast test (ELC), score test (Score) and K-nearest neighbors indirect effect test (KNN) under various KNNIM models.We use N = 256 units and K = 3 nearest neighbors.The p-values are estimated using 1,000 randomizations for each of the 1,000 generated potential outcome realizations.
Figure 3 :
Figure 3: Boxplots of the estimated rejection rates under the experimental design approach for both the conservative and asymptotic tests of the null hypothesis of no treatment interference under various KNNIM models.Plots also contain the estimated Type I Error (Models 1-3) and power (Models 4-13) for the Pearson test (Pearson), edge level contrast test (ELC), score test (Score), has treated neighbor test (HTN) and K-nearest neighbors indirect effect tests (KNN).We use N = 256 units and K = 3 nearest neighbors.The rejection rates are estimated using 1,000 treatment assignments for each of the 1,000 generated potential outcomes.Tests are performed at significance level α = 0.05.
Figure 4 :
Figure 4: Boxplots of p-values for the Pearson test (Pearson), has treated neighbor test (HTN), edge level contrast test (ELC) and K-nearest neighbors indirect effect test (KNN) under various KNNIM models using only control focal units.We use N = 256 units and K = 3 nearest neighbors.The p-values are estimated using 1,000 randomizations for each of the 1,000 generated potential outcome realizations.
Figure 5 :
Figure 5: Boxplots of p-values for the Pearson test (Pearson), has treated neighbor test (HTN), edge level contrast test (ELC) and K-nearest neighbors indirect effect test (KNN) under various KNNIM models using only control focal units. We use N = 1024 units and K = 3 nearest neighbors. The p-values are estimated using 1,000 randomizations for each of the 100 generated potential outcome realizations.
Figure 6 :
Figure 6: Boxplots of p-values for the Pearson test (Pearson), has treated neighbor test (HTN), edge level contrast test (ELC), score test (Score) and K-nearest neighbors indirect effect test (KNN) under various KNNIM models.We use N = 1024 units and K = 3 nearest neighbors.The p-values are estimated using 1,000 randomizations for each of the 100 generated potential outcome realizations.
Figure 7 :
Figure 7: Boxplots of the estimated rejection rates under the experimental design approach for both the conservative and asymptotic tests of the null hypothesis of no treatment interference under various KNNIM models.Plots also contain the estimated Type I Error (Models 1-3) and power (Models 4-13) for the Pearson test (Pearson), edge level contrast test (ELC), score test (Score), has treated neighbor test (HTN) and K-nearest neighbors indirect effect tests (KNN).We use N = 1024 units and K = 3 nearest neighbors.The rejection rates are estimated using 1,000 treatment assignments for each of the 100 generated potential outcomes.Tests are performed at significance level α = 0.05.
Table 1 :
Number of units in each exposure of Anti-Conflict Program Experiment with K = 2
Table 2 :
Number of units in each exposure of Anti-Conflict Program Experiment with K = 3
Table 3 :
Data Analysis of Anti-Conflict Program Experiment.For this modified experiment, all randomization tests fail to detect an indirect effect.The p-value is smallest for the ELC test (p = 0.14), followed by the Score test (p = 0.22) and the KNN test (p = 0.34).
Table 5 :
Estimated Type I Errors (Models 1-3) and estimated power (Models 4-16) for tests of treatment interference for sample size N = 256, for simulated data under KNNIM. Results are provided for the score test (Score), K-nearest neighbors indirect effect test (KNN), edge level contrast test (ELC), has treated neighbor test (HTN) and the Pearson test (Pearson).
"Economics",
"Mathematics"
] |
Task offloading exploiting grey wolf optimization in collaborative edge computing
The emergence of mobile edge computing (MEC) has brought cloud services to nearby edge servers, facilitating penetration of real-time and resource-consuming applications from smart mobile devices at a high rate. The problem of task offloading from mobile devices to the edge servers has been addressed in the state-of-the-art works by introducing collaboration among the MEC servers. However, their contributions are either limited by minimization of service latency or cost reduction. In this paper, we address the problem by developing a multi-objective optimization framework that jointly optimizes the latency, energy consumption, and resource usage cost. The formulated problem is proven to be an NP-hard one. Thus, we develop an evolutionary meta-heuristic solution for the offloading problem, namely WOLVERINE, based on a Binary Multi-objective Grey Wolf Optimization algorithm that achieves a feasible solution within polynomial time, having a computational complexity of O(M^3), where M is an integer that determines the number of segments in each dimension of the objective space. Our experimental results depict that the developed WOLVERINE system achieves as high as 33.33%, 35%, and 40% performance improvements in terms of execution latency, energy, and resource cost, respectively, compared to the state-of-the-art.
Introduction
The proliferation of seamless internet connectivity technologies, such as WiFi, 4G, 5G, or LTE, as well as the availability of high processing capabilities at the mobile edge, has pushed the horizon of a new computing paradigm called mobile edge computing (MEC) [1][2][3].In recent years, the penetration of computation-intensive real-time applications has increased with the rapid rise of massively connected heterogeneous mobile devices (MDs) [4].According to [5], Cisco predicts that by 2030, almost 500 billion gadgets will be associated with the Internet of Things (IoT).Frequent access to cloud services results in an increase in mobile data traffic as well as backhaul latency, which in turn diminishes the Quality of Experience (QoE) of the application users [1].The MEC alleviates these problems by bringing the resources closer to the end users [6].The benefits of MEC can further be extended by introducing collaboration among edge servers located in different geographical regions, called collaborative mobile edge computing (CoMEC) [7].Not only do the edge servers participate in resource sharing, but vertical collaboration [8] also takes place among the three layers of CoMEC.Vertical collaboration in the MEC environment signifies collaboration among multiple layers of IoT computing infrastructure, including the IoT devices at the bottom, the edge cloud servers at the middle, and the master cloud at the top, as shown in Fig. 1.
While CoMEC increases the sustainability of edge computing, service caching at the MEC layer favors the QoE of the real-time application users [9]. Service caching refers to caching the information that must be known by the edge server to complete the task execution. This information includes system settings, the heavy program code of the application, and their related databases/libraries [10]. Figure 1 illustrates some real-life use cases where caching is exploited in MEC for better QoE. One such case is where the MEC can be exploited for intelligent transportation systems (ITS), such as extending the connected vehicle cloud into the mobile network [11]. As a result, roadside applications operating directly at the MEC may receive local messages from vehicles and roadside sensors, process them, and broadcast alerts (e.g., an accident) to nearby vehicles within the shortest possible time [12]. The second case is that of virtual reality and face-recognition data processing in various applications that require frequent database access. Both of these applications are data-intensive and need to deliver output in real time to ensure higher QoE to users. In all of the aforementioned cases, service caching can go a long way to ensure fast services to users. Caching prevents the same data from being offloaded multiple times; thus, both transmission latency and energy consumption can be reduced.
Computation offloading to a CoMEC network considering service caching may improve the overall QoE by reducing the associated system costs in terms of the queuing delay of tasks, energy consumption of devices, monetary costs, and so on [13,14]. Additionally, it is not realistic to offload all tasks of an MD to the MEC all the time, as the limited storage and computing resources of the MEC significantly affect the time delay of the offloaded tasks. Therefore, an optimal task offloading decision needs to be formulated to achieve an efficient network model while keeping the aforementioned system costs minimal. A large body of research has been done on caching strategies [15,16] and CoMEC. Content caching, computation offloading, and resource allocation problems have been jointly considered in [4] to reduce users' overall task execution time, but it lacks collaboration among the edge servers. An AI-based task allocation algorithm, namely iRAF, has been proposed in [17] for the CoMEC network, where the average latency and energy have been optimized. Here, either one of the objectives is optimized by associating binary weights, which creates unfairness in the result. In [18], monetary cost and execution delay have been optimized using the particle swarm optimization (PSO) algorithm for a vehicular network. However, addressing mobile energy consumption still remains an issue. Three prime objectives, that is, execution time, energy consumed, and monetary cost, have been optimized in a multi-user multi-server environment using a multi-objective evolutionary algorithm (MOEA/D) combining simple additive weighting (SAW) and multi-attribute decision making (MDM) in [19]. This work too lacks collaboration among servers and cache resource allocation, which can be crucial to addressing QoE.
This research endeavors to bridge notable gaps that have persisted in the existing body of knowledge in the MEC environment.In a dynamic environment, where heterogeneous mobile devices and edge servers are involved in optimizing multiple objectives simultaneously, no existing solutions can effectively address the problem.Several challenges are encountered while optimizing conflicting objectives together in a complex environment where multiple real-time applications operate on different user devices.Firstly, real-time applications require faster processing than others.If they are computationally expensive, offloading associated data and codes frequently creates a significant overhead.Secondly, handling offloading decisions while executing tasks can slow down the services of edge servers, especially if the resources of the edge servers become saturated, thus degrading QoE.Thirdly, since multiple objective parameters are targeted for optimization, they can be conflicting in nature.Thus, an exhaustive exploration of potential solution combinations becomes imperative.Most of the studies done so far have opted for single-objective optimization associating scalar weights to multiple objective parameters.Some of these depend on multiple decision criteria for selecting solutions [19].The parameters for such decision-making variables require meticulous fine-tuning and the environment saturated with realtime applications cannot afford to create extra overhead as such.Finally without service caching, every request for a particular service or content would need to travel from the user's device to the edge server or even further to the cloud, resulting in higher latency.This delay can be especially problematic for real-time delay-sensitive applications.
In this paper, we investigate a problem of joint optimization of task execution time, energy, and resource usage cost while offloading tasks in a CoMEC network.A task offloading framework based on grey WOLf optimization that exploits VERtical collaboration IN Edge computing, namely WOLVERINE system is devised to solve the problem.The WOLVERINE stands out from other taskoffloading frameworks due to its innovative features and advantages.Traditional task offloading frameworks suffer from several drawbacks, which can be categorized into three main areas: 1) lack of reproducibility of offloaded application codes, 2) lack of collaboration among the edge servers, and 3) inability to optimize multiple crucial parameters simultaneously.These limitations have negative implications for network systems, resulting in decreased QoE, underutilized resources, and suboptimal network performance.In response to these challenges, WOLVERINE introduces a novel task offloading scheme for real-life computationally intensive applications, utilizing an evolutionary algorithm.This scheme addresses the collaboration among servers and leverages cached application code to minimize time, energy, and resource costs in edge computing environments.The main contributions of the WOLVERINE framework are listed below: • We design a collaborative task offloading framework that effectively utilizes cached and computational resources to enhance user QoE in a CoMEC system where real-time applications are executed.The rest of this paper is organized as follows."Related works" section illustrates the major existing works."System model" section describes the system model of WOLVERINE."Design details of WOLVERINE" section elaborates the computational model, multi-objective problem formulation, and meta-heuristic task offloading scheme."Performance evaluation" section describes the environmental setup and results of experimental analysis.Finally, "Conclusion" section summarizes the key outcomes of our work and some future research directions.
Related works
Several works in the field of collaborative edge computing have been done, including optimal task caching and task allocation while optimizing a single objective function, trade-offs between two or more objectives, and multi-objective optimization.
The first category of works in the literature focused on single-objective optimization in collaborative edge computing, for example, energy, time, or resource cost allocation. In [2], a genetic algorithm based on a data-aware task allocation strategy has been proposed that considers network congestion control for allocating sub-tasks. In [20], the authors have focused on the reduction of energy consumption for task assignments by considering the heterogeneity of users using a heuristic-based greedy approach. An architecture has been proposed in [21] that considers unloading resource-intensive tasks from client devices in the cooperative edge space or to the remote cloud depending on users' desire and resource availability. An AI-driven intelligent Resource Allocation Framework (iRAF) [17] has been designed to solve complex resource allocation problems considering the current network states and task characteristics. Another group of authors in [22] have utilized a deep reinforcement learning method to solve computation offloading and resource allocation problems in a blockchain-based multi-UAV-assisted dynamic environment.
Computation offloading that focuses on the minimization of system cost comprising the trade-off between energy and task execution delay in the form of a weighted sum has been proposed in [15].Collaboration among MEC servers for (data) cache and computational resource allocation are noteworthy in [15].However, caching the content or code of applications is not enough due to the limited computational capacity of user devices as well as the delay associated with transmitting cached data or code.Hence the idea of jointly task offloading and caching needs to be considered.In [16], a joint service caching, task offloading, and system resource allocation scheme to minimize system cost comprising of time and energy have been formulated using a MILP problem.In [23], a priority-based task offloading and caching scheme is proposed for the MEC environment, where computing a task while reducing energy cost and delay time efficiently is the main priority.A new low-complexity hyper-heuristic algorithm has been proposed in [24], where content caching is performed along with computation offloading in an MEC network to optimize the service latency for all ground IoT devices.Mobility and user preferenceaware content-caching in MEC are orchestrated in [25].The authors in [26] introduce an enhanced binary PSO algorithm, which is designed for optimizing task offloading and content caching in MEC networks.It focuses on jointly optimizing task completion delay and energy consumption.Additionally, an enhanced binary particle swarm optimization (BPSO) algorithm is proposed for content caching in parallel task offloading scenarios.An alternating-iterative algorithm has been developed in [27] for jointly optimizing task caching and offloading in a resource-constraint environment to minimize energy consumption.Here task caching indicates caching of a completed application and relevant data.Subsequently, in [4], content caching, computation offloading, and resource allocation problems have been jointly considered to reduce users' overall task execution time.However, caching a complete application, i.e., content caching is often incompatible with user requirements.Hence, the idea of caching data codes for joint task offloading and data caching using the Lyapunov algorithm for minimizing task computation delay has been introduced in [28].The authors have formalized joint service caching and task offloading decisions to minimize computation latency while keeping the total computation energy consumption low.
Multi-objective optimization problems are adopted for computation offloading in the edge cloud by the authors of [29], which focused on the offloading probability of tasks to the edge cloud from an MD. To optimize execution time, energy, and resource cost to maximize utility for resource providers in IoT networks, energy harvesting properties of unmanned aerial vehicles (UAVs) are used in [30]. A deep reinforcement learning (DRL) based solution is used for this system network, which is managed by blockchain. Multi-objective optimization problems have multiple Pareto-optimal solutions which are obtained by trade-offs. Hence, evolutionary algorithms can play a significant role in reaching a single preferred solution [31]. In [32], time, energy, and cost were minimized for an edge cloud environment using the genetic algorithm NSGA-II. Minimization of average latency and energy consumption simultaneously for offloading tasks using the Cuckoo search algorithm has been proposed in [33]. In [34], Grey-Wolf Optimization is used to perform a trade-off between the minimization of energy consumption and response time in an MEC environment. An Improved Multi-Objective Grey Wolf Optimization (IMOGWO) is used for sub-task scheduling in an edge computing environment introduced in [35] to optimize makespan, load balance, and energy simultaneously. Computation time and cost minimization have been performed in [18] using the Particle Swarm Optimization (PSO) algorithm for a Vehicular Edge Computing (VEC) environment. In [19], a tri-objective problem has been considered in a multi-user and multi-server task offloading environment where an application is divided into multiple independent sub-tasks. A Multi-objective Evolutionary Algorithm based on decomposition (MOEA/D) has been developed for optimizing the time, cost, and energy expended in the execution of a particular sub-task. MOEA/D is also used to minimize latency and energy in [36] for the MEC environment, where the ordering of sub-tasks exists as a constraint. It is also used for minimization of latency and maximization of rewards for servers and tasks in [37]. However, the direct assignment of sub-tasks from mobile devices to a server is costly in terms of energy and offloading decision-making. The works mentioned above that addressed multi-objective optimization do not have a system environment similar to that of CoMEC handling real-life applications.
The summary of the state-of-the-art works has been listed in Tables 1 and 2. Most of the existing literature works have either performed single-objective optimization or weighted optimization in multi-user multi-server networks with and without cache or have performed multi-objective optimization without caching and collaboration among servers.The problem of jointly optimizing three basic objectives: execution latency, device energy, and resource cost has not yet been resolved in the CoMEC system incorporating service caching.The generation of Pareto-optimal solutions for optimizing multiple objectives simultaneously in a resource-constrained environment where servers collaborate and cache service is yet to be done.These observations have driven us to design a task offloading framework in the CoMEC environment for generating Pareto-optimal solutions for multi-objective optimization by exploiting service caching of computational resources.
System model
In this section, we describe the different entities of a CoMEC network and the interactions among them.
Entities of CoMEC network
We consider a CoMEC network consisting of a set of collaborative edge servers (CESs), E, and a set of mobile devices (MDs), U, as shown in Fig. 2. Each mobile device k ∈ U is connected with one edge server j ∈ E, which is termed its primary edge server (PES). Let τ be the set of M tasks that arrive at a PES from mobile devices. Each task i ∈ τ is denoted by a four-parameter tuple ⟨b_i, B_i, T_i^max, δ_i⟩, where b_i is the input data size, B_i is the size of the related data code, T_i^max is the task deadline and δ_i is the task budget. In this work, the data code is considered to consist of application-related program code, system settings, and related databases/libraries.
Each mobile device k has computational resources and each edge server j is considered to consist of both computational and cached resources.Table 3 contains major notations.A task generated from an MD can be executed either on the MD itself or at any edge server where edge servers are borrowing resources from the cloud while needed.
Collaboration among entities
Upon receiving a set of task requests, τ from the mobile devices, the PES communicates with the other CESs for task-related information and checks the availability of the resources, i.e., cached and computational resources required for the execution of the tasks.After getting the resource availability information, the PES runs the WOLVERINE task allocation decision algorithm and determines the appropriate resource providers to execute the tasks considering their requirements.If none of the servers has enough resources to complete a task, it is forwarded to the master cloud for execution, implementing a vertical collaborative computation environment.
Design details of WOLVERINE
In this section, we unfold different design components of WOLVERINE.First, we present a computational model of the proposed WOLVERINE system, then we formulate the task offloading problem as a multi-objective optimization problem; and finally, we devise a binary multi-objective grey wolf optimization-based solution.
Computational model of WOLVERINE
Figure 3 depicts the functional modules of the proposed WOLVERINE system, where an individual module is responsible for performing a specific function. The main functional modules of the PES can be grouped into two categories: the PES service module and the CES service module. The PES service module handles the task requests from the MDs and determines the optimal task offloading policy with the help of the CES service module. The responsibility of the CES service module is to manage collaboration between the PES and the CESs. Note that any collaborative edge server can work as a primary server by installing the PES service module to achieve the corresponding functionalities. The functionalities of each module are described below: • Task Profiler receives the task-offloading requests from the MDs first and then checks for the required resources among multiple edge servers, and acts as a communication medium between the server and the MDs to share task data and computational results.
Multi-objective problem formulation
In this section, we calculate total latency T ij , energy consumption E ij and monetary cost C ij for offloading task i ∈ τ to edge server j ∈ E or for local computation.Finally, we formulate the task offloading problem of WOLVERINE as a multi-objective optimization problem.
Calculation of T ij
Two different cases arise for calculating T_ij. In the first case, the mobile device executes the task locally, thus experiencing no communication delay. So, the task computation delay t^k_ij for executing task i ∈ τ on the mobile device k ∈ U locally is calculated as follows (see also the reconstructed sketch at the end of this subsection). Here, c_i is the number of computation cycles required to compute the task, µ^k_i is the ratio of CPU cycles allocated by the k-th mobile device to complete the i-th task, and f_k is the CPU-cycle frequency of the k-th mobile device.
For the second case, the input data and/or data code are offloaded to the MEC servers. If the data code is cached at the offloading server, then only the input data needs to be transmitted; otherwise, the device sends the input data along with the code to the server. For wireless transmission between the mobile device and the collaborative edge server that follows Orthogonal Multiple Access (OMA), we consider the Rayleigh channel, and the transmission rate is calculated accordingly, where B_ij is the allocated radio bandwidth, p_k is the transmission power, h_k is the channel gain (k ∈ U), and N_0 is the variance of the complex white Gaussian channel noise. Next, the communication latency t^c_ij for offloading task i to edge server j is computed, where σ_ij ∈ {0, 1}; its value is 1 when the cached resource, i.e., the data code, is available at the offloading server, and 0 otherwise. Here, b_i and B_i denote the size of the input parameters and the data code, respectively. Next, we calculate the execution time of task i at the edge server j in terms of the resource of server j allocated to task i and f_j, the total resource of the j-th MEC. Finally, we calculate the total latency for completing task i by combining these terms. When calculating execution latency for real-time computation-intensive applications in edge computing, addressing delivery or downloading latency is crucial. However, in this particular scenario, the emphasis is placed more on upload speeds and network latency rather than on download times. Besides, the execution result typically has a limited data size and thus has negligible impact on the resource parameters.
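The displayed equations for this subsection did not survive into the text above; the following LaTeX block is a hedged reconstruction from the definitions given and from standard MEC latency models, with λ_{ij} used as assumed notation for the compute resource of server j allocated to task i. It should be read as a sketch, not as the authors' exact formulas.

% Local computation delay on device k (reconstructed):
t^{k}_{ij} = \frac{c_i}{\mu^{k}_{i}\, f_k}

% Uplink rate over a Rayleigh channel with OMA (reconstructed Shannon form):
r_{ij} = B_{ij}\log_2\!\left(1 + \frac{p_k\,|h_k|^2}{N_0}\right)

% Communication delay; the code B_i is transmitted only if it is not cached (\sigma_{ij}=0):
t^{c}_{ij} = \frac{b_i + (1-\sigma_{ij})\,B_i}{r_{ij}}

% Execution delay at server j (\lambda_{ij}: allocated share of f_j) and total latency:
t^{e}_{ij} = \frac{c_i}{\lambda_{ij}\, f_j}, \qquad
T_{ij} =
\begin{cases}
t^{k}_{ij}, & \text{local execution,}\\
t^{c}_{ij} + t^{e}_{ij}, & \text{offloaded to server } j.
\end{cases}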
Calculation of E ij
For calculating the total energy consumption E_ij, two possible cases have been brought under consideration. In the first case, the mobile device executes the task locally. Hence, we consider only the task computation energy, where κ is a coefficient that depends on the device's chip architecture [17] and f_k is the CPU-cycle frequency of the k-th mobile device. For the second case, the task is executed at the server; hence, the task computation energy is ignored. Thus, the energy the device expends for transmitting the input data and/or code to the MEC server is computed, where p_k is the power of the k-th mobile device and t^c_ij is the time required to transmit the i-th task to the j-th server. The total energy consumption for offloading task i to server j combines these terms. The overall energy consumption can include the energy consumed for transmitting the tasks to the servers. We have prioritized device energy consumption owing to the limited battery resources and computational capabilities of user devices. As a result, the energy consumed by servers for executing tasks has been less emphasized.
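As with the latency terms, the displayed energy equations are reconstructed below as a hedged LaTeX sketch; in particular, the quadratic dependence of the local computation energy on f_k is a common modeling assumption rather than a quoted formula.

% Local computation energy on device k (reconstructed; quadratic in f_k is an assumption):
E^{loc}_{ik} = \kappa\, c_i\, f_k^{2}

% Transmission energy when offloading to server j:
E^{tx}_{ij} = p_k\, t^{c}_{ij}

% Device-side energy for task i:
E_{ij} =
\begin{cases}
E^{loc}_{ik}, & \text{local execution,}\\
E^{tx}_{ij}, & \text{offloaded to server } j.
\end{cases}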
Calculation of C ij
Similar to latency and energy, the calculation of the monetary cost for task computation can also have two possible cases. If the device performs the task locally instead of offloading, then it incurs no monetary cost. In the case of offloading, the cost of computational resources, i.e., CPU cycles, and/or storage resources, i.e., memory, sums up the total monetary cost. For executing the i-th task at the j-th server, the storage cost depends on σ_ij ∈ {0, 1}, which determines the availability of the cached resource. If the value of σ_ij is 1, a storage cost is incurred by the device; otherwise, no storage cost is required. η_j is the per-bit storage cost. Next, we calculate the cost of computing the i-th task at the j-th server, where γ_j is the unit CPU-cycle cost of server j. Finally, the total monetary cost for executing task i at server j is the sum of these components. We have not considered cloud servers in our problem formulation. Although cloud servers add significant benefits related to scalability, server-health management, backup, and service provisioning capabilities, they can create hindrances in real-time application environments due to long-distance communication, where exceptional QoE needs to be achieved. Uploading and executing tasks in the cloud requires extra latency and energy, which impedes performance. Hence, executing tasks on user mobile devices and edge servers is beneficial to network performance. Cloud servers are typically utilized within an edge server network only when all other edge resources are overwhelmed or during network malfunctions.
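A hedged reconstruction of the storage, computation, and total cost terms follows; charging the per-bit storage cost η_j against the code size B_i is an assumption consistent with the definitions above, not a quotation of the original equations.

% Storage (caching) cost, incurred only when the data code is cached at j (\sigma_{ij}=1):
C^{st}_{ij} = \sigma_{ij}\, \eta_j\, B_i

% Computation cost at server j:
C^{cp}_{ij} = \gamma_j\, c_i

% Total monetary cost (zero when task i is executed locally):
C_{ij} = C^{st}_{ij} + C^{cp}_{ij}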
Objective function formulation
Our aim is to execute each task i ∈ τ at a local or remote resource j ∈ E so as to minimize the total execution latency, energy expenditure, and incurred monetary cost. Thus, WOLVERINE formulates the task execution problem as a multi-objective minimization problem. Here, X_ij is a binary decision variable whose value is 1 if task i is allocated to edge server j, and 0 otherwise, and X_ij ∈ χ_w, where χ_w is a D-dimensional vector, χ_w = (x_1, x_2, ..., x_D). Each entry x_d ∈ χ_w corresponds to the aforementioned decision variable X_ij, ∀i ∈ τ, ∀j ∈ E. T(χ_w), E(χ_w), and C(χ_w) denote the objective functions related to task execution latency, execution energy, and monetary cost, respectively. Equation (12), which is a multi-objective linear optimization problem, is subject to the following constraints (a reconstructed sketch of the full formulation is given after the list): • Assignment Constraint: A task will be executed on either an edge server or the user device. No partial assignment of tasks to multiple servers will be done.
• Budget Constraint: Constraint (17) denotes that the monetary cost of executing task i at server j cannot exceed the task budget, δ_i.
• Energy Constraint: Constraint (18) states that the energy expenditure of a device in executing a task is limited by a threshold, E_i^max.
• Latency Constraint: Constraint (19) denotes that a task i needs to be completed within its deadline, T_i^max.
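The following LaTeX block sketches the overall formulation implied by the objective functions and constraints above; the summation structure of T(χ_w), E(χ_w), and C(χ_w) over the decision variables X_ij is an assumption, not a quotation of Eq. (12).

% Objectives (weighted by the binary assignment variables; summation structure is assumed):
\min_{X}\ \big(\, T(\chi_w),\ E(\chi_w),\ C(\chi_w) \,\big), \qquad
T(\chi_w)=\sum_{i\in\tau}\sum_{j} X_{ij}\, T_{ij}, \quad
E(\chi_w)=\sum_{i\in\tau}\sum_{j} X_{ij}\, E_{ij}, \quad
C(\chi_w)=\sum_{i\in\tau}\sum_{j} X_{ij}\, C_{ij}

% Constraints (j ranges over the edge servers and the local device):
\sum_{j} X_{ij} = 1 \ \ \forall i\in\tau \quad\text{(assignment)}, \qquad
X_{ij}\, C_{ij} \le \delta_i \quad\text{(budget)}

X_{ij}\, E_{ij} \le E^{\max}_i \quad\text{(energy)}, \qquad
X_{ij}\, T_{ij} \le T^{\max}_i \quad\text{(latency)}, \qquad
X_{ij}\in\{0,1\}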
Theorem 1
The WOLVERINE task offloading problem formulated in Eq. (12) is NP-hard.
Proof
The WOLVERINE task offloading problem aims at minimizing three objectives, yielding a set of Pareto optimal solutions. The optimization problem in Eq. (12) can be regarded as an assignment problem. To prove the NP-hardness of the WOLVERINE task offloading problem, we first convert the Generalized Assignment Problem (GAP), a well-known NP-hard problem [39], into a multi-objective problem. The GAP assigns M tasks to N agents to minimize the overall assignment cost. Here, C indicates the assignment cost of task m ∈ M to an agent n ∈ N, A is the resource capacity function that indicates the resource used by task m ∈ M, and B indicates the available capacity of an agent. To convert GAP to a multi-objective assignment problem, we first consider a bi-objective assignment problem where the resource and cost constraints of GAP are to be satisfied by converting the three objectives of WOLVERINE to a single one as follows,
Subject to:
Here, the value of ε_i is chosen in such a way that minimizing Z_ij yields the same result as the multi-objective functions. The function u(x) is defined as u(x) = 1 if x ≥ 0 and 0 otherwise.
Note that we do not consider the resource limitation constraints of GAP as constraints of the multi-objective optimization problem; rather, we consider them as an objective to be optimized. If the resource limitation constraints are satisfied, then z_1 is equal to zero and the cost of assignment z_2 will be considered. If there exists a better solution in GAP, a better solution also exists in the corresponding multi-objective problem. We consider two assignments whose costs satisfy z_1 < z_2. These two costs produce solutions (0, z_1) and (0, z_2) in the multi-objective assignment problem. If we consider the lexicographical minimum, then z_1 < z_2; hence (0, z_1) is a better solution. Thus, GAP is convertible to a multi-objective assignment problem. Since GAP is a well-known NP-hard problem, the WOLVERINE task offloading problem is also an NP-hard one.
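For reference, a standard formulation of GAP, using the notation C, A, and B from the proof, is sketched below in LaTeX; the paper's own equation numbering for this formulation is not reproduced here.

% Generalized Assignment Problem (standard minimization form, assumed):
\min_{x}\ \sum_{m=1}^{M}\sum_{n=1}^{N} C_{mn}\, x_{mn}
\quad\text{s.t.}\quad
\sum_{n=1}^{N} x_{mn} = 1 \ \ \forall m, \qquad
\sum_{m=1}^{M} A_{mn}\, x_{mn} \le B_n \ \ \forall n, \qquad
x_{mn}\in\{0,1\}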
Meta-heuristic task offloading
As the number of MDs or servers increases, the WOLVERINE system experiences exponential growth in execution time. Many 5G applications cannot tolerate even a single second of delay. The proposed WOLVERINE framework attempts to optimize multiple objectives, such as minimizing latency, reducing energy consumption, and minimizing monetary costs. These objectives can be conflicting, meaning that improving one objective may degrade another. Pareto optimal solutions help find a set of solutions where no single objective can be improved without worsening at least one other objective. Evolutionary algorithms help in solving problems that involve Pareto-optimality, as the solution choice is based on a population approach [31]. Therefore, in this section, we develop a smart task offloading policy using Binary Multi-Objective Grey Wolf Optimization that determines the suitable set of resources to allocate the computational tasks in polynomial time.
Preliminaries
The Grey Wolf Optimization (GWO) [40] is a bioinspired meta-heuristic algorithm that is designed based on the social leadership and hunting techniques found in grey wolves.To mathematically model the social hierarchy of the wolves, the fittest solution is considered the alpha ( α ) wolf.The second and third best solutions are named beta ( β ) and delta ( δ ) wolves, respectively.The leader selection and position updating of the rest of the search agents are done in each iteration, eventually converging to a set of Pareto-optimal solutions.Binary Multi-objective Grey Wolf Optimization (BMOGWO) is a special variant of MOGWO that allows search agents to move in a binary space instead of a continuous spectrum [41].In our specific case, where we aim to optimize execution time, energy consumption by devices, and monetary cost simultaneously, the BMOGWO algorithm demonstrates superior performance compared to other evolutionary algorithms such as MOPSO, BAT, and WHALE optimization algorithms [42][43][44] for tackling multi-objective problems, efficiently addressing the optimization of objectives concurrently.It also outperforms Ant-Colony Optimization (ACO) and Whale Optimization (WO) in scenarios where task offloading is required to edge servers [34].The BMOGWO also surpasses other evolutionary algorithms in scenarios where Pareto-optimal solutions are generated due to better performance in the exploration of solution space and prevention of convergence to local optima [45].
Defining the position vector
We consider a population of wolves denoted by P, where each wolf w ∈ P represents a candidate solution [46]. The position of a wolf w in the search space is denoted by a D-dimensional binary position vector χ_w, where D = |τ| × |E|. The vector is written as χ_w = (x_1, x_2, ..., x_D), where each entry x_d ∈ χ_w corresponds to a decision variable X_ij, ∀i ∈ τ, ∀j ∈ E, such that d = (i − 1) × |E| + j.
Updating positions of the wolves
In GWO, the position of each ω wolf is updated by considering the positions of the α, β, and δ wolves. Let χ_α, χ_β, χ_δ, and χ_ω denote the positions of the α, β, δ, and ω wolves, respectively. Now we calculate the distance of the ω wolf from the other three leader wolves as follows.
Here, C is a coefficient vector with values in the range [0, 2]. This vector associates a weight to each prey item, in our case, the three best solutions. The value of C is chosen randomly to favor exploration by introducing randomness in the algorithm's behavior. It controls the effect of the prey, in this case, the effect of the three best solutions on the updating search agents. |C| > 1 emphasizes the effect of the best solutions more on the ω wolves, whereas |C| < 1 de-emphasizes the effect. This prevents convergence to local optima and ensures that the entire search space is covered. Besides, the random selection of values in C emphasizes exploration not only in the initial stages but also during the final iterations [40]. The value of C is drawn uniformly at random from [0, 2]. Now the updated position of the ω wolf with respect to the alpha, beta, and delta wolves is calculated as follows.
Here, A is the coefficient vector that governs convergence towards, or divergence from, the prey, i.e., the best solutions. Note that each entry x_d ∈ χ_w corresponds to a binary decision variable of the MOLP problem and is only allowed to take a value of either 0 or 1, as follows.
Here, sigmoid(x_d) denotes the sigmoid transformation, and the rand() function provides a uniformly distributed random number in the range [0, 1] that improves search-space exploration with the goal of avoiding local optima. The convergence and diversity of the Pareto-front generated by MOGWO for Pareto-optimal solutions in tri-objective problems are higher than those of Multi-Objective Particle Swarm Optimization (MOPSO) [45]. Here, convergence indicates how close the obtained solutions are to the true Pareto-front. Diversity demonstrates how thoroughly the search space has been explored. It shows how widely an algorithm compares the trade-offs and covers the range of options. Higher diversity indicates that a greater number of options have been explored through different balances between the objective parameters. Grey-Wolf Optimization strikes a balance between the two. It converges toward the true Pareto-front by iteratively computing the solutions. As the algorithm progresses, the positions of the α, β, and δ wolves are updated based on their fitness values. These three best solutions found so far guide the search process toward finding better solutions and help it converge towards the Pareto-front through optimal trade-offs.
The exploration and randomness of Grey Wolf Optimization prevent convergence to local optima and provide a better exploration of a wide range of trade-offs.
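The position update just described can be sketched in a few lines of Python; the leader-averaging step and the sigmoid slope are assumptions chosen to match the textual description, and the function and variable names are hypothetical.

import numpy as np

def update_position(chi_w, leaders, a, rng):
    """One BMOGWO position update for an omega wolf with sigmoid binarization.

    chi_w   : (D,) current binary position of the wolf.
    leaders : list of the alpha, beta and delta binary positions, each (D,).
    a       : scalar that decreases from 2 to 0 over the iterations.
    """
    D = len(chi_w)
    candidates = []
    for chi_l in leaders:
        A = 2 * a * rng.random(D) - a            # coefficient vector A in [-a, a]
        C = 2 * rng.random(D)                    # coefficient vector C in [0, 2]
        dist = np.abs(C * chi_l - chi_w)         # distance to this leader
        candidates.append(chi_l - A * dist)      # continuous step toward the leader
    x_cont = np.mean(candidates, axis=0)         # average of alpha/beta/delta guidance
    prob = 1.0 / (1.0 + np.exp(-10 * (x_cont - 0.5)))   # sigmoid squashing (slope is an assumption)
    return (rng.random(D) < prob).astype(int)    # stochastic binarization via rand()

# Example: update a random 6-bit wolf toward three random leaders
rng = np.random.default_rng(1)
w = rng.integers(0, 2, 6)
new_w = update_position(w, [rng.integers(0, 2, 6) for _ in range(3)], a=1.0, rng=rng)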
Algorithm 1 Archive controller
For incorporating multi-objective optimization in GWO, an archive of fixed size is used.It is a simple storage for storing or retrieving Pareto-dominant solutions obtained so far, which is shown in Algorithm 1.
In line 1, for each w ∈ P, a set is initialized that stores the archive solutions dominated by χ_w. A flag is also initialized to check whether any solution from the archive dominates χ_w. Line 5 checks for the archive members dominated by χ_w, and the dominated members are added to this set. Line 7 checks the opposite condition and sets the flag to 1. In case there is no archive member that dominates χ_w, i.e., flag = 0, the archive is updated using the UpdateArchive procedure in line 12. Lines 2-13 iterate over every member of the population, and the updated archive is returned.
In Algorithm 2, the UpdateArchive procedure is summarized. In lines 2-4, the dominated solutions are removed from the archive. The capacity of the archive is checked in line 5. If it is not full, then the current non-dominated solution is added to the archive in line 6; otherwise, a solution from the most crowded segment is removed and the current non-dominated solution is added to the archive in line 9. In line 12, if a particular solution is an outlier, the grid is updated adaptively to cover the new solution.
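A compact Python sketch of the archive logic in Algorithms 1-2 is given below; the `crowd_fn` hook stands in for the adaptive-grid bookkeeping described next, and all names are hypothetical.

import numpy as np

def dominates(f_a, f_b):
    """Pareto dominance for minimization: f_a dominates f_b if it is no worse
    in every objective and strictly better in at least one."""
    f_a, f_b = np.asarray(f_a), np.asarray(f_b)
    return bool(np.all(f_a <= f_b) and np.any(f_a < f_b))

def update_archive(archive, candidate, max_size, crowd_fn):
    """Insert a (position, fitness) candidate into the archive of non-dominated solutions.

    crowd_fn(archive) returns the index of a member in the most crowded grid
    segment, which is evicted when the archive is full.
    """
    _, f_new = candidate
    if any(dominates(f_old, f_new) for _, f_old in archive):
        return archive                                         # candidate is dominated: discard
    archive = [(x, f) for x, f in archive if not dominates(f_new, f)]  # drop members it dominates
    if len(archive) >= max_size:
        archive.pop(crowd_fn(archive))                         # make room from the most crowded segment
    archive.append(candidate)
    return archive

# Example with a trivial crowding rule (always evict the first member):
arc = []
for f in [(3, 2, 5), (2, 3, 4), (1, 1, 1)]:
    arc = update_archive(arc, (None, f), max_size=2, crowd_fn=lambda a: 0)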
Adaptive grid mechanism
An adaptive grid made of hypercubes [47] is generated using the archive, where the dimension of each hypercube is equal to the number of optimization objectives. The grid mechanism divides the objective space of the problem into a grid. Each hypercube is interpreted as a geographical region that contains solutions [47]. For our WOLVERINE task offloading problem, which has three objectives, the adaptive grid therefore consists of three-dimensional hypercubes. The boundary of the objective space at the t-th iteration is determined by (minT_t, minE_t, minC_t) and (maxT_t, maxE_t, maxC_t). Now, we calculate the modulus of the grid using the same approach as [47]. Here, M is an integer that determines the number of segments in each dimension of the objective space. Therefore, the total number of hypercubes is M^3.
We employ a strategy in which non-dominated solutions are removed from the most crowded segments of the archive and leader selection is performed from the less crowded segments [45]. Both of these operations are based on probabilities to avoid local optima in the search space. The solution density in each segment plays an important role in calculating these probabilities [47]. The more non-dominated solutions there are in a segment, the higher the probability of removing one solution and the lower the probability of choosing a leader. The probability of choosing the i-th segment to remove a solution is given in Eq. (40), where N_i is the number of obtained Pareto-optimal solutions in the i-th segment. Note that Eq. (40) assigns a higher probability to a crowded segment. On the other hand, the probability of selecting a leader from the archive is calculated in the opposite manner. The roulette-wheel approach is used for the selection based on the likelihood for each hypercube [45], as expressed by Eq. (41). From Eq. (41), it is clear that a segment with fewer solutions has a higher probability of being chosen as the leader.
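Since the displayed forms of Eqs. (40)-(41) are not reproduced above, the following is a hedged LaTeX sketch with the qualitative behavior the text describes (higher removal probability and lower leader-selection probability for crowded segments); the normalizations are assumptions, not the paper's exact expressions.

% Probability of removing a solution from segment i (crowded segments more likely):
P^{rm}_{i} = \frac{N_i}{\sum_{k} N_k}

% Probability of drawing the leader from segment i via roulette-wheel selection
% (less crowded segments more likely):
P^{lead}_{i} = \frac{1/N_i}{\sum_{k} 1/N_k}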
BMOGWO-based task execution
The steps of the BMOGWO-based task execution scheme of WOLVERINE are presented in Algorithm 3. First, we initialize the archive in line 2. Next, we initialize a population of random position vectors and calculate their fitness values in lines 4 and 5. The archive is populated with a set of non-dominated solutions generated using Algorithm 1 in line 7. Line 8 selects three different leaders using the grid mechanism. For each dimension of every wolf, the positions are updated in line 13. Parameters a, A, and C are updated in line 16.
Next, we calculate the fitness values of the updated position vectors in line 18 and update the archive with updated positions using the Algorithm 1 in line 20.
Hence, from the updated archive, three new leaders are selected using Eq. (41) in line 21. Lines 11-21 repeat until the maximum number of iterations I_max is reached. Finally, the value of each entry x_d of the best solution χ_α is assigned to the corresponding decision variable in lines 25-26, and the decision vector X is returned.
Complexity analysis
In this section, we analyze the complexity of the three algorithms used in WOLVERINE.In Algorithm
Convergence analysis
In this section, we analyze the convergence of the developed WOLVERINE system, which is measured using the Inverted Generational Distance (IGD). IGD is a metric used for assessing the quality of a set of solutions produced by an optimization algorithm, particularly in the context of multi-objective optimization. It measures the convergence and diversity of the obtained solutions with respect to the true Pareto front, which represents the optimal trade-offs between conflicting objectives. The IGD metric calculates the average distance from each point in the true Pareto front to the nearest point in the obtained solution set. A lower IGD value indicates better convergence and diversity of the obtained solutions.
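In symbols, assuming the standard definition of IGD with a distance d(·,·) in objective space, the metric can be written as the following LaTeX sketch.

\mathrm{IGD}(\rho, \rho^{*}) \;=\; \frac{1}{|\rho^{*}|}\sum_{v \in \rho^{*}} \;\min_{u \in \rho}\; d(u, v)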
If the IGD value between the obtained Pareto front ρ and the true Pareto front ρ* is IGD(ρ, ρ*), then the convergence ratio (CR) C can be defined accordingly, where ρ_t and ρ_{t+1} denote the Pareto front after t and (t + 1) iterations, respectively.
Theorem 2
The convergence ratio C of the developed BMOGWO-based WOLVERINE system is bounded by
Proof
This proof can be done by induction. We need to prove that CR C ≤ g(P, C, A, τ, U, E, t). Here, g(P, C, A, τ, U, E, t) indicates the upper bound of the solution, i.e., the case where the solution of the algorithm is the farthest from the true Pareto front ρ*, where d(ϱ, ϱ*) denotes the distance between the two solutions ϱ and ϱ* in the solution space. The IGD value of the solution ρ_t after iteration t can be calculated similarly to [48].
Basis
Step: Let us assume that ρ_0 denotes the initial Pareto front approximation and IGD(ρ_0, ρ*) is the initial IGD value. Then, Eq. (43) can be modified accordingly, where P_0, C_0, and A_0 denote the initial population size, position vector, and coefficient vector, respectively. Equation (46) confirms that the induction hypothesis holds true for the base step.
Inductive
Step: Assume that the theorem holds up to the t-th iteration, i.e., IGD(ρ_t, ρ*) ≤ g(P, C, A, τ, U, E, t). Now, we need to express the improvement in performance from iteration t to t + 1, where h(·) denotes the improvement function, as in Eq. (47). Thus, it confirms that Eq. (43) holds true for all t, and the convergence ratio C of the developed WOLVERINE system is bounded by g(P, C, A, τ, U, E, t).
Performance evaluation
In this section, the performance of our proposed multi-objective task offloading with the caching approach is compared with some of the existing strategies in the literature: MGBD [4], iRAF [17] and MOEA/D [19]. The work presented in [4] focuses on jointly addressing the content caching, computation offloading, and resource allocation problem to reduce users' overall task execution time. An AI-driven resource allocation framework (iRAF) has been developed in [17] to tackle intricate resource allocation problems by considering current network conditions and parameters to optimize either execution time or energy consumption. In a multi-user and multi-server task offloading environment, a tri-objective problem is addressed in [19], where time, device energy, and cost are optimized using a Multi-Objective Evolutionary Algorithm (MOEA/D). However, caching the data codes has not been considered in this work. The environmental setup, performance metrics, and results are discussed below.
Environmental setup
We have implemented our proposed algorithm and performed an empirical numerical evaluation using Python 3.6.0 [49]. For evaluation purposes, we consider a scenario where a stationary edge server is centered in a 1000 × 1000 m² urban area. A number of collaborative edge servers are randomly located around the primary edge server, and several mobile devices are connected to the edge servers. The path loss model between the mobile devices and servers is assumed to follow a log-normal distribution. In addition, we model packet loss on each path using the Gilbert loss model [50], and the channels handle the re-transmission of lost packets using the TCP protocol. 20 channels are employed, each with a bandwidth of 2 MHz. Our study is focused on real-time, delay-sensitive, and computation-intensive applications, including interactive video gaming, AR/VR applications, medical image processing, and face recognition. The task arrival pattern follows a Poisson distribution. The whole experiment has been run 50 times, and the average of all these results is taken to plot each graph. Major environment setup parameters used in this paper are shown in Table 4. In our simulation environment, we have ensured that resources are allocated proportionately across different systems. All the methods from the literature were implemented, and performance metrics data were collected in a system environment consistent with ours.
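For readers unfamiliar with the Gilbert loss model used in the setup, the following Python sketch simulates per-packet losses with a two-state (Good/Bad) Markov channel; the transition probabilities shown are illustrative defaults, not the values used in the paper's experiments.

import numpy as np

def gilbert_losses(n_packets, p_gb=0.05, p_bg=0.4, rng=None):
    """Simulate per-packet losses with the two-state Gilbert model.

    The channel alternates between a Good state (no loss) and a Bad state
    (packet lost); p_gb and p_bg are the Good->Bad and Bad->Good transition
    probabilities.
    """
    rng = np.random.default_rng(rng)
    lost = np.zeros(n_packets, dtype=bool)
    bad = False
    for t in range(n_packets):
        bad = rng.random() < (1 - p_bg if bad else p_gb)  # Markov state transition
        lost[t] = bad                                     # loss whenever the channel is Bad
    return lost

# Example: average loss rate over 10,000 packets (lost packets would be re-sent via TCP)
print(gilbert_losses(10_000, rng=0).mean())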
Performance metrics
We have measured the performance of our algorithm based on the following metrics: • Average latency is defined as the ratio of the total delay experienced by the tasks to the number of tasks.• Average Energy Consumption is the average amount of energy consumed by each edge device.
• Average Cost Savings is calculated as the difference between a device's budget and the monetary cost paid by it divided by the number of tasks.The higher value indicates a higher system performance.• Task Completion Reliability (TCR) is the ratio of the number of tasks completed to the submitted ones.
Result analysis
In this section, we have discussed the performance of our proposed system by varying the number of tasks, the number of servers in the system, and the average computation power per task.
Impact of a varying number of tasks
In this experiment, we vary the number of tasks in the overall network system from 10 to 250 and keep the number of servers fixed at 12. The result and comparison are shown in Fig. 4. Figure 4(a) shows that as the number of tasks increases, the average latency also increases. Initially, latency increases slowly for a smaller number of tasks. However, as the number of tasks exceeds 160, latency increases exponentially. Latency is lower in the MGBD and WOLVERINE cases than in iRAF because the former two have implemented caching. In the case of MOEA/D, the performance is close to that of WOLVERINE. A single mobile device user decomposes an application into multiple independent sub-tasks and offloads them to various servers, depending on resource availability. However, as the sub-tasks are executed in parallel, the total latency considered for completing a task is the maximum latency among the sub-tasks, and a risk of high delay remains in case the system reaches its saturation point. Besides, the absence of server-to-server collaboration makes it difficult to share sub-tasks. Our proposed WOLVERINE exploits both collaborative edge computing and caching. Therefore, if the required data for a specific task is not cached at a server or computational resources are not present, the server can pass the task to another collaborative server where the task data is already cached, which decreases the service delay significantly. Therefore, our proposed WOLVERINE outperforms the state-of-the-art approaches.
The impact of a varying number of tasks on average energy consumption is depicted in Fig. 4(b). With an increasing number of tasks, energy consumption also increases because a large number of tasks must share the same bandwidth and incur higher latency to reach the edge. Both WOLVERINE and MGBD perform better than iRAF because they exploit caching, which helps users reduce backhaul latency and energy. However, the energy consumption gap between WOLVERINE and MGBD widens significantly when the number of tasks rises from 110 to 160, as MGBD must request the cloud for task processing owing to the unavailability of resources. For MOEA/D, a higher number of tasks means sub-tasks are executed on mobile devices more frequently, which increases overall energy consumption in the system. Besides, when sub-tasks are offloaded to multiple servers, the data code must be offloaded as well; the energy expended for offloading data code to edge servers is therefore an additional overhead, and frequent offloading also incurs communication costs as the number of sub-tasks grows.
Table 4 (evaluation parameters, excerpt): required CPU cycles per task: 6 × 10⁹ – 9 × 10¹⁰; CPU-cycle frequency of an MD: 300 MHz; computation capability of edge servers: …; bandwidth of one channel: 2 MHz; size of input data: 3 MB – 50 MB; number of iterations (I_max): 50; population size: 50.
For WOLVERINE, every requested task is either already cached at one of the servers or only its data code needs to be transmitted to a server; that is, no access to the cloud is necessary, which reduces energy consumption. Besides, collaboration among the servers further lowers energy consumption. In Fig. 4(c), we can observe the impact on average cost savings when the number of tasks is varied. As the number of tasks escalates, the average cost savings shrink because more tasks are offloaded, which in turn increases the monetary cost of memory and computation. For iRAF and MGBD, the cost rises faster than for WOLVERINE as the number of tasks grows. The increasing cost of iRAF stems from its use of a DNN and a Monte Carlo tree, which incur memory and computation costs. For MGBD, device budget savings decrease with a growing number of tasks due to the lack of collaboration among servers and offloading to the cloud when server resources are unavailable. For MOEA/D, the cost is lower when the number of tasks is high because many of them are executed locally; however, offloading to multiple servers from a single user device can incur higher memory and computation costs depending on the availability of server resources. The proposed WOLVERINE offers higher savings (50%-95%) than MGBD and iRAF by exploiting service caching, binary offloading, and collaboration among servers.
In Fig. 4(d), we see that increasing the number of tasks reduces Task Completion Reliability (TCR) for all methods. This happens due to the scarcity of resources and the delay sensitivity of tasks. For iRAF, the TCR falls steadily as the number of tasks increases from 10 to 110 but drops sharply between 110 and 260 tasks. Since iRAF allows partial offloading, a larger number of tasks increases the tendency to offload a greater portion of each task, which in turn raises the task drop rate. The higher task drop rate of iRAF also stems from the longer training time of its DNN and Monte Carlo tree, creating a latency overhead that may cause many applications to exceed their deadlines. For MGBD and WOLVERINE, the TCR falls only gradually with an increasing number of tasks owing to caching. However, MGBD constructs a depth-first-search tree, which incurs some overhead and causes some tasks to miss their deadlines; hence its TCR is lower than that of WOLVERINE. For MOEA/D, as the number of tasks rises, the drop rate of sub-tasks can increase due to the lack of resources and the higher queuing delay at mobile devices. Since collaboration among edge servers and caching better sustain the task completion rate, WOLVERINE outperforms MOEA/D in system environments with rapidly offloaded tasks.
Fig. 4 Impacts of varying the number of tasks
Impact of a varying number of servers
The impact of a varying number of servers on the objective parameters is shown in the graphs of Fig. 5. For this scenario, the number of tasks is fixed at 50.
For a fixed number of tasks, the average latency decreases for all schemes as the number of resources increases, as shown in Fig. 5(a). iRAF has higher latency than both MGBD and WOLVERINE because of the higher computational time of the DNN and Monte Carlo tree. For MGBD, the construction of the search tree and the exhaustive search procedure affect the overall latency. For MOEA/D, a task is decomposed into multiple sub-tasks, which incurs a higher latency overhead for server-to-device communication, and it sometimes struggles to find the most suitable server for some sub-tasks. In contrast, WOLVERINE performs better with an increasing number of servers because it jointly exploits edge server collaboration and caching.
WOLVERINE also performs better in terms of energy consumption, as depicted in Fig. 5(b). With an increasing number of servers, the energy consumption of MDs decreases significantly for all schemes. In WOLVERINE, more tasks are offloaded to the edge servers as the number of collaborative edge servers, and with it the availability of cached data, increases. Therefore, energy consumption decreases up to a certain number of servers; beyond that point, the energy level plateaus and does not decrease significantly, even though the amount of cached data and computational resources keeps growing with the number of servers.
With an increasing number of servers, the cost of allocating tasks grows and the average cost savings decrease for all schemes, as depicted in Fig. 5(c). In WOLVERINE, the cost of allocating tasks increases owing to the memory cost and the monetary cost of computation at the various CoMEC servers. Nevertheless, the average savings remain greater than those of MGBD: local computation of tasks also occurs here, which may incur no cost at all, whereas in MGBD the cached resource size is high and the exhaustive search over DFS trees is costly. iRAF, on the other hand, embeds a DNN and a Monte Carlo tree in its optimization algorithm, which occupies extra memory; therefore its overall computation and memory cost is higher than that of our proposed method.
Fig. 5 Impacts of varying the number of servers
The impact on TCR (Task Completion Reliability) for varying numbers of servers is shown in Fig. 5(d). For WOLVERINE and MGBD, the TCR rises with an increasing number of servers owing to the exploited caching. However, the content caching in MGBD faces resource-constraint issues for highly resource-intensive applications. For iRAF, the issues mentioned above can cause task drops due to missed deadlines. For MOEA/D, the TCR is relatively stable compared with MGBD and iRAF because sub-tasks are offloaded more frequently as the number of tasks in the system grows; however, it still does not match WOLVERINE owing to the absence of server-to-server collaboration. For WOLVERINE, the TCR improves because it incorporates both caching and server collaboration.
Impact of caching
Caching the data code for computation-intensive tasks, instead of the entire code itself, has a measurable impact on the objective parameters to be optimized, as depicted in Fig. 6. In this experiment, we varied the average computation per task while fixing the number of tasks and servers at 50 and 12, respectively.
Figure 6(a) indicates that, without caching, the average latency increases exponentially as the average computation cycles per task increase: a considerable amount of time is required both to compute the tasks and to offload them to collaborative servers. With caching, the required time is lower because some of the data code is already available on the caching server and only the input data needs to be transmitted. Similar behaviour is observed for average energy consumption in Fig. 6(b): if the tasks are cached, less energy is wasted on communication overhead, which in turn reduces the overall average energy consumption. The experiments therefore clearly show that service caching notably improves task completion.
Impact of geographical proximity of users
In this experiment, the geographical area is varied in the edge computing environment to measure the impact on user service latency and energy consumption. A larger area increases the physical distance between edge servers and users, leading to longer transmission times and, subsequently, higher average latency. Expanded areas also tend to experience heightened network congestion as a consequence of increased user traffic, which exacerbates latency. This congestion raises the communication overhead, necessitating higher transmission power and, consequently, increased energy consumption at the devices. Furthermore, the scarcity of resources in an extended area often forces the devices themselves to execute more tasks, amplifying energy usage at the user end.
A gradual rise in average latency and device energy is observed for all schemes, as shown in Figs. 7(a) and 7(b), respectively. However, for WOLVERINE, the increase in average latency and energy is significantly lower than for the other schemes. Since both collaborative edge computing and caching are exploited in this scheme, service delay and energy consumption remain low, as the increased area multiplies the chances of finding appropriate edge servers and cached resources. For the other schemes, the higher energy and latency can be attributed to transmission to the cloud, task dependency, and higher computation and memory overheads.
Fig. 7 Impacts of geographical proximity of users
Ablation experiment
As a strategy to retain superior non-dominated solutions and to explore a broader search space, the WOLVERINE system incorporates an adaptive grid mechanism. This technique facilitates leader selection and enhances the quality of solutions through probability-based elimination. To conduct an ablation experiment, we adjusted the average number of computations per task while keeping the number of tasks and servers fixed at 80 and 16, respectively. We then analyzed the effect on latency and energy consumption with and without the adaptive grid mechanism.
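As a rough illustration of how such an adaptive grid can work, the sketch below bins the archive of non-dominated solutions into hypercubes, draws leaders preferentially from sparsely populated cells, and eliminates members of crowded cells first when the archive overflows. The grid resolution, the selection-pressure exponent, and the function names are assumptions; the paper does not give these details.

```python
import numpy as np

def grid_index(archive_objs, n_divs=10):
    """Assign each archived solution to a hypercube of an adaptive grid.

    archive_objs: (N, M) array of objective vectors (latency, energy, cost).
    n_divs: grid divisions per objective (assumed, not specified in the paper).
    """
    lo, hi = archive_objs.min(axis=0), archive_objs.max(axis=0)
    width = np.where(hi > lo, hi - lo, 1.0)
    cells = np.floor((archive_objs - lo) / width * (n_divs - 1)).astype(int)
    return [tuple(c) for c in cells]

def select_leader(archive_objs, rng, beta=2.0):
    """Pick a leader with probability inversely related to cell crowding (roulette wheel)."""
    cells = grid_index(archive_objs)
    counts = np.array([cells.count(c) for c in cells], dtype=float)
    probs = (1.0 / counts) ** beta
    probs /= probs.sum()
    return int(rng.choice(len(archive_objs), p=probs))

def prune_archive(archive_objs, max_size, rng, beta=2.0):
    """Probability-based elimination: drop members from the most crowded cells first."""
    idx = list(range(len(archive_objs)))
    while len(idx) > max_size:
        cells = grid_index(archive_objs[idx])
        counts = np.array([cells.count(c) for c in cells], dtype=float)
        probs = counts ** beta
        probs /= probs.sum()
        idx.pop(int(rng.choice(len(idx), p=probs)))
    return idx
```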
The graphs in Fig. 8 show that, as the computation cycles per task increase, both average latency and average energy consumption rise exponentially when the adaptive grid mechanism is not used. Conversely, its inclusion leads to reduced latency and energy consumption. These gains come from accelerated convergence and better exploitation of the most effective solutions; the mechanism also notably decreases the number of trial-and-error attempts.
Fig. 8 Impacts of the adaptive grid mechanism
Hypervolume and inverted generational distance
In this section, we evaluate the quality of the Pareto-optimal solutions obtained by the developed WOLVERINE system.
In multi-objective evolutionary algorithms (MOEAs), hypervolume is a commonly used performance metric: it measures the volume of the objective space dominated by the solutions in the Pareto front approximation, relative to a reference point. It provides a single scalar value reflecting the spread and diversity of the Pareto front approximation; higher hypervolume values indicate better coverage and a more comprehensive representation of the Pareto front.
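One simple way to estimate this indicator, assuming all three objectives are minimized and using Monte Carlo sampling (exact hypervolume algorithms exist but are more involved; the sample count below is arbitrary), is sketched here.

```python
import numpy as np

def hypervolume_mc(front, ref_point, n_samples=200_000, rng=None):
    """Monte Carlo estimate of the hypervolume dominated by `front` (minimization assumed).

    front: (N, M) array of non-dominated objective vectors.
    ref_point: (M,) reference point dominated by every front member.
    """
    rng = rng or np.random.default_rng(0)
    front = np.asarray(front, dtype=float)
    ref = np.asarray(ref_point, dtype=float)
    lo = front.min(axis=0)
    samples = rng.uniform(lo, ref, size=(n_samples, front.shape[1]))
    # A sample is dominated if some front member is <= it in every objective.
    dominated = np.zeros(n_samples, dtype=bool)
    for p in front:
        dominated |= np.all(samples >= p, axis=1)
    box_volume = np.prod(ref - lo)
    return dominated.mean() * box_volume
```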
In Fig. 9(a), the hypervolume region of the Pareto front is shown in a 3D graph; it is computed from the space covered by the non-dominated solutions relative to a predefined reference point. This reference point represents an ideal state without any necessary trade-offs between objectives and is marked in green in the graph for clarity. The shaded region, inclusive of the reference point, visually represents the hypervolume region, i.e., the extent of the objective space covered by the set of non-dominated solutions. For this case, we considered 10 servers with 60 tasks; the scalar hypervolume value is 24.59 after 50 iterations. This is the highest value obtained, and it remained steady after 50 iterations.
For the convergence analysis of the developed WOLVERINE system, we calculated the IGD as a function of the number of iterations. Figure 9(b) illustrates how the IGD values change during execution, indicating the performance and convergence of the algorithm. The IGD starts at a high value and gradually decreases over the first 50 iterations; it then stabilizes at a particular value, indicating that convergence has been achieved. Higher IGD values indicate that the solution set obtained after a given number of iterations is still far from convergence; as the optimization algorithm explores more of the search space, lower IGD values are obtained, signifying improved convergence and proximity to the true Pareto front. We compared the IGD values of MOEA/D with those of WOLVERINE and observed that the IGD values of MOEA/D are higher for similar iteration numbers. The poorer distribution of the Pareto front for MOEA/D explains its higher IGD values compared with WOLVERINE [45]. The IGD becomes stable after 50 iterations for WOLVERINE, whereas for MOEA/D it stabilizes only after 65 iterations and at a higher value. The graphical results thus point towards faster and better convergence of WOLVERINE compared with MOEA/D.
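Under the common definition of IGD (the average distance from each point of a reference Pareto front to the closest obtained solution), the metric can be computed as below; the reference set used by the paper is not specified, so it is simply treated as an input here.

```python
import numpy as np

def inverted_generational_distance(reference_front, obtained_front):
    """IGD: mean distance from each reference-front point to its nearest obtained solution.

    Lower is better; both inputs are (N, M) arrays of objective vectors.
    """
    ref = np.asarray(reference_front, dtype=float)
    obt = np.asarray(obtained_front, dtype=float)
    dists = np.linalg.norm(ref[:, None, :] - obt[None, :, :], axis=2)  # pairwise distances
    return dists.min(axis=1).mean()
```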
Conclusion
This paper introduced an efficient task offloading framework, WOLVERINE, that establishes collaboration among edge servers to share computational resources while serving real-time applications on edge devices with optimal energy consumption and resource cost. The multi-objective optimization problem was proven to be NP-hard; we therefore formulated a Binary Multi-objective Grey Wolf Optimization-based meta-heuristic that derives Pareto-optimal solutions for the time, energy, and cost objectives, i.e., the tri-objective optimization problem, in polynomial time. The performance analysis, carried out in Python, demonstrated improvements as high as 33.33%, 35%, and 40% in execution latency, energy, and resource cost, respectively, compared to the state of the art.
An improved version of GWO could be applied to the developed system through dynamic weight association for the multiple objectives and modification of the convergence factor. New scope can be added by considering data loss, the security of executed tasks, and so on. Deploying a deep-learning model to accurately predict the task arrival rate, allocate tasks, and adjust the cache resources according to that prediction is an interesting direction for future work. Furthermore, the current framework can be enhanced by hybridizing different evolutionary algorithms to combine their strengths in a dynamic environment. Considering robustness and fault tolerance in the presence of points of failure would also extend the current work.
Fig. 2
Fig. 2 The structure of the CoMEC network over a fiber-wireless connection
γ_jd Per-unit CPU-cycle cost of server j ∈ E
η_j Per-unit storage cost of server j ∈ E
χ_w Position vector of wolf w ∈ P
x_w^d Position of wolf w ∈ P in the d-th dimension
… cached resources for each task using the Resource Availability Database (Path 2) and propagates the task and resource data to the Optimal Task Allocator module (Path 3) for optimal resource allocation.
• Optimal Task Allocator is the core computational block of the PES service module. It collects the task descriptions from the Task Profiler, queries the resource availability of the Collaborative Edge Servers (CES) through the Resource Availability Checker (Path 4), whose result comes back through the Resource Availability Database (Paths 5-6-7-8-9), formulates the WOLVERINE task offloading problem, and communicates the associated task offloading decision vectors to the MDs.
• Resource Availability Database records the availability of the computational and cached resources of the CES, which arrives through the Communication Module and the Resource Availability Checker (Paths 6-7-8).
• Resource Availability Checker queries resources from other neighboring CESs and updates the cached and computational resources periodically or when triggered by the Optimal Task Allocator (Path 16).
• Task Execution Module executes the computational tasks offloaded to it, utilizing the available computational resources (Path 14) and the cached data administered by the Caching Management Module (Paths 11-12-13).
• Caching Management Module supplies cached data to the Task Execution Module from the Cached Data module (Paths 12-13) and maintains the cached data repository by performing maintenance functions.
• Cached Data Repository stores the cached data code from the Computational Resources module for further use (Path 15).
• Computational Resources module stores the server's available resources, such as CPU cycles and memory, for usage by the Task Execution Module.
• Communication Module establishes collaboration
Algorithm 2
Algorithm for updating the archive
Algorithm 3 BMOGWO-based task offloading
Fig. 6
Fig. 6 Impacts of caching on the performance
Table 2
Summary of targeted performance parameters
Table 3
Description of notations (excerpt): U: set of mobile devices in the system; τ, E: set of tasks and set of servers, respectively.
In Algorithm 2, Line 3 is enclosed within a loop that iterates |A| times in the worst case. Line 8 requires M³ time. The remaining statements have constant time complexity. Thus, the overall complexity of Algorithm 2 is O(|A| + M³). Next, we consider the complexity of Algorithm 1. Lines 5-9 are enclosed within a loop that iterates |A| times. Line 12 updates the archive using Algorithm 2, which takes O(|A| + M³). Lines 2-13 are also enclosed within a loop that runs |P| times. Hence, the computational complexity of Algorithm 1 is O(|P| × (|A| + M³)). Finally, we analyze the complexity of Algorithm 3. Lines 4 and 5 are enclosed within a loop that iterates |P| times. Line 7 updates the archive, which requires O(|P| × (|A| + M³)) time. Line 13 is enclosed within a nested loop that iterates |P| × |χ⃗| times. Line 18 is enclosed in another loop that iterates |P| times, and Line 20 again calls Algorithm 2. Lines 11-22 are enclosed within a loop that iterates I_max times; the rest of the algorithm takes constant time. Thus, the total computational complexity of Algorithm 3 is O(I_max × |P| × (|A| + M³ + |χ⃗|)).
Table 4
Evaluation parameters
Mode Interference Effect in Optical Emission of Quantum Dots in Photonic Crystal Cavities
Radiation properties of a pointlike source of light, such as a molecule or a semiconductor quantum dot, can be tailored by modifying its photonic environment. This phenomenon lies at the core of cavity quantum electrodynamics (CQED). Quantum dots in photonic crystal microcavities have served as a model system for exploring the CQED effects and for the realization of efficient single-photon quantum emitters. Recently, it has been suggested that quantum interference of the exciton recombination paths through the cavity and free-space modes can significantly modify the radiation. In this work, we report an unambiguous experimental observation of this fundamental effect in the emission spectra of site-controlled quantum dots positioned at prescribed locations within a photonic crystal cavity. The observed asymmetry in the polarization-resolved emission spectra strongly depends on the quantum dot position, which is confirmed by both analytical and numerical calculations. We perform quantum interferometry in the near-field zone of the radiation, retrieving the overlap and the position-dependent relative phase between the interfering free-space and cavity-mode-mediated radiative decays. The observed phenomenon is of importance for realization of photonic-crystal light emitters with near unity quantum efficiency. Our results suggest that the full description of light-matter interaction in the framework of CQED requires a modification of the conventional quantum master equation by also considering the
I. INTRODUCTION
Any pointlike source of light changes its properties when placed in a nanoscale optical cavity due to modification of the photonic states into which the source can radiate.A well-known consequence of this is the Purcell effect [1] that reflects the dependence of the photon emission rate on the local density of states (LDOS).While the Purcell effect is a well-studied phenomenon, the influence of the interference of different photonic states on the source emission is much less studied, especially experimentally because of the relatively high technological requirements.
Recently, semiconductor quantum dots (QDs) integrated with photonic crystal (PC) cavities and waveguides have enabled the realization of deterministic single-photon sources with high purity and record brightness [2][3][4][5][6][7], near unity QD-waveguide coupling efficiency [8,9], and narrowband emission filters [10], comprising the key elements for on-chip optical information processing and quantum computing.In these schemes, deterministic generation of single photons, e.g., required for optical quantum computing applications [11], is achieved via the adjustment of LDOS.Ideally, increased LDOS enhances the coupling of QD emission to a prescribed mode [1,12] via Purcell's effect and strongly reduces frivolous QD emission into free space [13][14][15][16].The latter comprises radiation losses of QD-PC devices that should be reduced to the level of other losses, e.g., QD nonradiative recombination [17,18] and coupling to the semiconductor matrix [19], via engineering of LDOS of nonconfined modes.
The subwavelength spatial features of the LDOS related to confined states of cavity modes were widely investigated, using various techniques based on single dipole probes, inelastic electron scattering, or scanning near-field optical microscopy (SNOM).In particular, site-controlled Ge QDs [20] and DNA nanoparticles [21] mapped the inplane LDOS, whereas cathodoluminescence [22] and electron energy-loss microscopy [23] probed the out-of-plane LDOS of modes confined in PC cavities.SNOM-based Fano imaging relying on quantum interference between different photon scattering paths displayed the in-plane electric field in PC cavities [24][25][26] and molecules [27].Magnetic and electric field components of the cavity modes were probed simultaneously using near-field plasmonic perturbation imaging [28].The measured near-field profiles qualitatively agree with numerical simulations [20][21][22][23][24][25][26][27][28].However, the LDOS of free-space modes (FMs) has remained elusive, as direct QD-free-space emission is typically obscured by the nonradiative decay in time-resolved photoluminescence (PL) traces [29], and the cavity mode electric field mixes with the free-space contribution to LDOS maps obtained in PC cavities [24][25][26].
In this work, we reveal an important role of the quantum interference between different decay paths of a QD exciton in a PC cavity.These decay paths correspond to the direct and cavity-mode-mediated emission of the QD into free space.The interference exhibits specific spectral features resembling Fano resonances [30].The strong dependence of the characteristic features of the observed Fano resonances on the location of the QD in the cavity implies the corresponding strong spatial dependence of the QD radiative loss.The latter significantly affects the quantum efficiency of the intracavity QD.Hence, in general, tuning the radiative loss via the QD positioning can be used to improve PC-based quantum sources of light.The observed phenomenon is reproduced using 3D finite element method (FEM) simulations that allowed spanning the large set of design parameters to find essential counterparts affecting the interference.Expanding the concept of Fano imaging [24][25][26][27], we perform quantum interferometry of the freespace radiation modes of QD excitons in linear PC cavities using the exciton-cavity mode coupling as a reference.This technique extracts the direct QD free-space radiative decay from the nonradiative recombination, provides access to the near-field profile of the free-space modes, and yields the coupling phases and strengths between the QD exciton and the cavity and free-space modes.
This paper is organized as follows.In Sec.II, we describe our experimental setup and observations that reveal the mode interference phenomenon.In Sec.III, the effects observed in the experiments are demonstrated theoretically.In Sec.IV, we use the experimentally obtained data to retrieve several key characteristics of the QD emission into the relevant spatial modes.Section V characterizes the quantum efficiency of a QD in a PC cavity as a function of the QD position, and Sec.VI summarizes and discusses our observations and the related analysis.
II. EXPERIMENTAL RESULTS
The studied system consists of a single, site-controlled InGaAs/GaAs pyramidal QD positioned at a prescribed location within a modified L3 photonic crystal membrane cavity [31-33]. A scanning electron microscope (SEM) image of the cavity is shown in Fig. 1(a) (see also Appendix A, Figs. 7 and 11). Figures 1(b) and 1(c) show the x- and y-polarized electric field patterns of the fundamental cavity mode (CM) calculated at half-membrane height. The CM electric field is nearly perfectly y-polarized on the symmetry axis y = 0 of L3 PC cavities [Figs. 1(b) and 1(c)]. In different devices, the ~20-nm-diameter QDs are placed at nominal distances Δ = 0, 60, 90, 120, or 180 nm from the cavity center [Fig. 1(a) and crosses in Fig. 1(c)], corresponding to different overlaps with the CM electric field profile. The experimental accuracy of QD positioning is estimated to be better than 10 nm (see Appendix A, Figs. 8-10). The normalized CM near-field amplitude distribution Θ_CM(Δ) can be approximated as Θ_CM(Δ) = −exp(−β_CM|Δ|) cos(2πΔ/λ_CM). Here, the parameter β_CM = 7.9 × 10⁻⁴ nm⁻¹ and the CM effective wavelength λ_CM ≈ 340 nm in the PC cavity defect are obtained by fitting this curve to the 2D finite-difference time-domain (FDTD) CM profile at y = 0 [Fig. 1(c)]. The wavelength λ_CM corresponds to an effective refractive index n_CM ≈ 2.59.
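For reference, the short sketch below evaluates this fitted profile at the nominal QD positions used in the experiment. The functional form and parameter values are those quoted above; the code itself (names, sampling) is only illustrative.

```python
import numpy as np

# Fitted cavity-mode (CM) near-field amplitude profile along the cavity axis (y = 0).
BETA_CM = 7.9e-4    # nm^-1
LAMBDA_CM = 340.0   # nm, effective CM wavelength in the PC cavity defect

def theta_cm(delta_nm):
    """Normalized CM near-field amplitude at a QD displacement delta_nm from the cavity centre."""
    delta_nm = np.asarray(delta_nm, dtype=float)
    return -np.exp(-BETA_CM * np.abs(delta_nm)) * np.cos(2 * np.pi * delta_nm / LAMBDA_CM)

# Example: amplitudes at the nominal QD positions used in the experiment.
positions = np.array([0.0, 60.0, 90.0, 120.0, 180.0])
print(dict(zip(positions, np.round(theta_cm(positions), 3))))
```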
Figure 1(d) shows the used microphotoluminescence (μPL) setup.The QDs were optically excited using a Ti: sapphire laser emitting at 730 nm wavelength.The laser beam was focused to a ∼1.5 μm wide spot using a microscope objective with 50 times magnification, 0.55 numerical aperture, and 3.6 mm working distance.Photoluminescence spectra were measured with the samples placed in a He-flow optical cryostat using the laser in a continuous wave mode and a "Jobin Yvon Triax 550" spectrometer equipped with a charge coupled device (CCD) detector providing a spectral resolution of 80 μeV.The residual excitation light was filtered with a low-pass (LP) optical filter [see Fig. 1(d)].An infrared camera was used to observe the position of the excitation spot on the sample surface.The x and y coordinates of the spot were controlled with a motorized high-precision (50 nm) xy-position stage.The fine-tuning of the sample position with respect to the excitation spot was achieved by maximizing the QD emission intensity.Time-resolved measurements were carried out using the laser in the mode-locked mode that provided 3 ps laser pulses at 80 MHz repetition rate.A part of the excitation beam was sent to a fast photodiode, serving as a reference for the timing measurements.We used a PicoQuant τ-SPAD-FAST avalanche photodiode (APD) positioned at the monochromator's output.For each spectrally filtered photon arriving at the avalanche photodiode, the time delay from the pump laser pulse was counted by a fast pulse time counting unit (Time Harp 260 TCSPC board, with a 25 ps time bin width).
Polarization-resolved spectra were obtained using a λ/2 wave plate and a linear polarizer [see Fig. 1(d)]. They were used to calculate the degree of linear polarization (DOLP), given by D = (I_y − I_x)/(I_y + I_x), where I_x and I_y are the intensities of the x- and y-polarized components of the emission. The CM was tuned across the QD optical transitions using temperature variations and water vapor condensation [34]. The fundamental mode CM and the next-order mode CM1 were found to be separated by ~20 meV. The CM quality factor varied from 1500 to 3000 (CM damping rate κ between 0.4 and 1 meV) depending on the device. The QDs emit photons of ~1.42 eV energy at 10 K, with an ensemble inhomogeneous broadening of ~10 meV and ~100 μeV wide excitonic transitions.
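A one-line helper expressing this DOLP definition (purely illustrative; the intensities would come from the measured x- and y-polarized spectra):

```python
import numpy as np

def dolp(i_y, i_x):
    """Degree of linear polarization D = (I_y - I_x) / (I_y + I_x);
    array inputs give a DOLP spectrum point by point."""
    i_y, i_x = np.asarray(i_y, dtype=float), np.asarray(i_x, dtype=float)
    return (i_y - i_x) / (i_y + i_x)
```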
The polarization-resolved and the corresponding DOLP spectra for a typical structure with QD position Δ ≈ 0 are shown in Fig. 2(a). The PL spectra exhibit neutral exciton (X0), negatively charged exciton (X−), and biexciton (XX) lines, with the energy detuning from the CM depending on the PC hole size and sample temperature. The QD spectrum typically included contributions from either a negatively or positively charged exciton, a neutral exciton, and a biexcitonic transition observed at higher excitation energies. The exciton-CM detuning δ_X of an excitonic complex (X) was defined as its recombination energy E_X relative to the CM energy E_CM, δ_X = E_X − E_CM. Exciton and CM energies were obtained from Lorentzian fits of the exciton and CM photoluminescence peaks. For sufficient QD exciton-CM detuning [35,36] (larger than ~10 meV), the cavity mode is not visible in the emission spectra and the QD emission is coupled to the unpolarized optical modes in the photonic band gap [Fig. 2(a)]. For Δ ≈ 0, the strong X−-CM overlap near the central CM antinode [see Fig. 1(c)] results in efficient X−-CM coupling. Hence, for a small exciton-CM detuning [such as the charged exciton-CM detuning δ_X− = 0.7 meV in Fig. 2(b)], strong X−-CM linear copolarization [29] is evident.
Figure 2(c) shows X− biexponential decay traces obtained at the X−-CM resonance [29] (δ_X− = 0) for Δ = 8, 75, and 180 nm. The fast X− decay component in Fig. 2(c), induced by the Purcell effect, comprises both optical and nonradiative recombination processes [29]. The slow component in the X− decay traces, corresponding to a ~3 ns decay time at T = 10 K, is due to QD refilling by carriers captured from charge centers and the excitonic bath in the GaAs environment [37]. The equal absolute values of the x and y components of the X− transition dipole [38] directly map the X− decay rate γ to the CM electric field profile along the symmetry axis at y = 0 [Figs. 1(b) and 1(c)].
Figure 2(d) shows the X−-CM coupling strength g as a function of Δ, extracted from the X− decay traces using the measured decay rate γ, where γ_FM is the rate of direct X− decay into free space and γ_nonrad, κ, and γ_d are, respectively, the nonradiative decay rate, the CM damping rate, and the X− dephasing rate [12]. We set γ_FM + γ_nonrad ≈ 0.43 μeV, corresponding to the ~1.5 ns X− decay time obtained from the temperature-dependent X− dynamics in L3 PC cavities [29]. The uncertainty on the γ_FM and γ_nonrad rates is insignificant for QDs placed near the CM antinodes. The spatially resolved X−-CM coupling strength g(Δ) shown in Fig. 2(d) is in good agreement with a model accounting for the 2D FDTD simulated CM electric field profile, g(Δ) = −g_0 Θ_CM(Δ), assuming g_0 = 30 μeV at Δ = 0 [39-41].
Figure 3(a) shows the polarization-resolved emission spectra at positive, zero, and negative detuning of a charged exciton relative to the CM (δ_X−), measured in structures with Δ ≈ 0. The spectra are displayed versus the energy E of the emitted photons relative to the CM energy, i.e., E − E_CM. The exciton-CM detuning δ_X− was set by adjusting both temperature and water vapor condensation. We repeatedly observed strongly x-polarized (y-polarized) excitonic transitions at positive (negative) exciton-CM detuning for Δ = 0, leading to an asymmetry in the DOLP spectra. The DOLP asymmetry with respect to zero exciton-CM detuning is clearly visible in the DOLP spectrum shown in Fig. 3(a) for δ_X− = −2.1 meV. This asymmetry is unexpected in the framework of nonoverlapping emission via the CM and directly into free space. In the latter case, the DOLP of the QD transition, which has unpolarized emission in the L3 PC cavity band gap [see Fig. 2(a)], does not drop below 0, as the y-polarized emission via the CM only reduces the total probability of direct emission into free space and does not affect the near-unity ratio of the direct emission probabilities via the x- and y-polarized radiation modes.
The negative DOLP at sufficiently large, positive exciton-CM detuning corresponds to a suppressed excitonic emission at the CM polarization.We proved the reproducibility of this phenomenon by probing the DOLP at different exciton-CM detunings in a statistical manner.Multiple spectra were obtained by measuring the exciton emission DOLP in PC cavity arrays incorporating single QDs at Δ ¼ 0, 90, 120, and 180 nm (Appendix A, Fig. 12), while polarizationresolved PL spectra, measured for devices with different PC hole radii, spanned exciton-CM detuning range in the limits of AE20 meV (see Appendix A, Fig. 13).Figure 3(b) shows the DOLP asymmetry in the statistically yielded emission spectra for structures with Δ ≈ 0 and 180 nm superimposed with the DOLP spectra at Δ ¼ 0 and 170 nm numerically simulated using a 3D finite element method (see Appendix B).These DOLP spectra were obtained by modeling the polarization-resolved emission of x-and y-oriented point dipoles in an L 3 PC cavity as a function of the photon energy and the dipole position denoted by Δ (see Appendix B, Figs.[14][15][16].The observed S-shaped asymmetry in both experimental and numerical DOLP spectra displays predominantly the x-polarized excitonic emission at a positive exciton-CM detuning greater than 3 meV.The best agreement between experimental and 3D FEM DOLP was obtained by accounting for the QD recess (see Appendix B, Fig. 17).
Figure 3(c) shows the S-shaped DOLP curves obtained by tuning the QD optical transitions [as for Fig. 3(a)] across the CM energy in five PC structures, with Δ ¼ 8, 75, 85, 115, and 180 nm, using temperature variation and gas deposition tuning [34].We were able to tune the excitonic complexes across the energy range wide enough for extracting the extremum exciton-CM detuning δ ext at which the DOLP reaches its lowest negative value corresponding to the maximum exciton-CM cross-polarization [see gray arrows in Figs.3(b) and 3(c)].Figure 3(d) shows the experimentally and numerically obtained extremal exciton-CM detuning δ ext taking place at minimum DOLP values.The indicated QD positions Δ were verified using SEM micrographs (see Appendix A, Fig. 10).For all QD positions, except Δ between 70 and 90 nm, experimental δ ext is positive.The dependence δ ext ðΔÞ, obtained from the 3D FEM simulated DOLP spectra, is similar to the experimental one.While for almost all values of Δ we find positive δ ext , negative δ ext is observed for Δ ranging from 80 to 85 nm and from 270 to 275 nm (see Appendix B, Fig. 18).Note that the CM has its first and second node at around Δ ¼ 85 and 255 nm [see Fig. 1(b)] resulting in lower DOLP at the exciton-CM resonance due to smaller CM-QD overlap as visible in Fig. 3(c) for Δ ¼ 85 nm.The observed S-shaped DOLP curves are explained by the Fano-like interference between different QD emission channels in PC cavities [42], as discussed below.
III. THEORETICAL DEMONSTRATION OF THE MODE INTERFERENCE EFFECT IN POLARIZATION-RESOLVED QD EMISSION
Yamaguchi et al. theoretically predicted a significant asymmetry in the QD exciton decay rate [43] and emission spectra [42,44] with respect to the zero exciton-CM detuning induced by quantum interference between decay paths through confined cavity modes and nonconfined, free-space modes.The decay rates directly bear on the DOLP asymmetry, as elaborated here by the model schematically illustrated in Fig. 4(a).The exciton is represented by a twolevel system (TLS), coupled to a y-polarized CM and x-and y-polarized FMs.Thus, the QD-confined exciton and the fundamental CM considered in this work comprise the classical scheme of two interacting oscillators coupled to the same Markovian bath consisting of FMs [45,46].Below, the model exciton-CM detuning is denoted by δ X .
Quantum interference between the direct and cavity-mediated exciton decay paths [Fig. 4(a)] yields a Fano-like resonance that introduces a characteristic asymmetry of the excitonic emission rate W_y(δ_X) [30,42] through the y-polarized FMs with respect to zero exciton-CM detuning δ_X. Allowing for an arbitrary relative phase difference φ = φ_g + φ_ξ − φ_η, the total exciton emission rate into the y-polarized FMs is given by an expression in which χ is the 3D overlap of the direct and CM-mediated radiation patterns [30,42] (see Appendixes C 1 and C 2). Figure 4(b) shows the computed ratio W_y(δ_X)/γ_x, where γ_x is the decay rate into the x-polarized FMs, for different exciton-CM coupling strengths. Destructive mode interference is manifested by a reduction in the emission rate. The total exciton emission rate W_y(δ_X) through the y-polarized FMs is significantly higher than the emission rate through the x-polarized FMs (γ_x) at negative detuning δ_X due to constructive interference of the FM- and CM-mediated decays through the y-polarized modes.
The observed effect is sensitive to the relative phase φ: the emission rates are reduced for negative (positive) detuning when φ ¼ 0 (φ ¼ π).
We took the QD exciton dephasing into account by numerically solving the quantum master equation (QME), including the interference between the overlapping FM- and CM-mediated decays into y-polarized modes. The interactions among the exciton, CM, and FMs are formulated within the Jaynes-Cummings Hamiltonian [47] (see Appendix C 3) using complex coupling strengths g = |g|e^{iφ_g} (X-CM coupling), ξ_{k,y} = |ξ_{k,y}|e^{iφ_ξ} (CM-FM coupling), η_{k,y} = |η_{k,y}|e^{iφ_η} (X-FM coupling), and η_{k,x} (X-FM coupling), where k stands for the FM wave vector. We assumed |η_{k,x}| = |η_{k,y}|, ensuring isotropic (equal) radiative decay rates through the x- and y-polarized FMs. Coupling between CMs and x-polarized FMs is neglected, and the X-FM and CM-FM coupling phases φ_η and φ_ξ are assumed to be independent of the wave vector k. The CM and X damping rates were assumed to be independent of the photon energy. We set the phases φ_g = 0 and φ_η = 0 at Δ = 0 for the negative y component of the CM and FM electric fields and the positive y component of the transition dipole moment at the center of the cavity [48]. We assume the same form of the normalized spatial profile Θ_FM(Δ) for all nonconfined modes, i.e., η_{k,p}(Δ) = −η_{k,p}(0)Θ_FM(Δ), where p = {x, y}. Also, we restrict our analysis to real values of Θ_FM(Δ).
The response of the experimental setup to the emitted light was taken into account using the 3D FDTD simulated collection efficiencies p FM and p CM of the FM-and CM-mediated excitonic emission [49].Figure 4(c) shows the numerically computed polarization-resolved spectra for negative and positive X-CM detuning δ X , exciton dephasing rate γ d ¼ 60 μeV, phase φ ¼ π, and emission overlap within the objective lens aperture χ A ¼ 1 [49].Destructive interference between different exciton decay channels into the y-polarized FMs results in nearly complete suppression of y-polarized exciton emission at δ X ¼ 2.27 meV [Fig.4(c)].The latter leads to the cross-polarized X-CM emission at positive exciton-CM detuning, closely capturing the behavior observed in the experimental polarizationresolved spectra [Fig.3(a)].Figure 4(d) displays DOLP spectra extracted from the numerically simulated polarization-resolved spectra as in Fig. 4(c) for several values of the X-CM coupling strength g at φ ¼ 0 or π.Whereas a clear X-CM copolarization (DOLP ≈ 1) occurs for sufficiently small detuning δ X due to Purcell enhancement, the negative DOLP values represent the X-CM cross-polarization, for certain positive (negative) detuning if φ ¼ π (φ ¼ 0).The extremal detuning δ ext and the DOLP minimum value D ext can be obtained analytically in the Weisskopf-Wigner approximation in the limit of jδ ext j ≫ γ d (see Appendix C 2) as where κ is the CM damping rate.This analytical approximation agrees well with the numerically modeled DOLP spectra [see green circles in Fig. 4 The values of δ ext and D ext , extracted from experimental DOLP traces, permit retrieving the overlap χ A , the relative phase φ, and the direct excitonic emission rate γ FM ¼ γ x þ γ y through both x-and y-polarized nonconfined modes given by as derived from Eqs. ( 2) and (3) for γ y ≪ κ. Figure 5(a) shows the direct X − -FM emission rate γ FM ðΔÞ into free space obtained from measured δ ext ðΔÞ [Fig.3(d)] and the X-CM coupling strength gðΔÞ [Fig.2(d)].The relative FM LDOS in Fig. 5(a) is given by ½ρ FM ðΔÞ=ðρ bulk Þ ¼ ½γðΔÞ=γ bulk , where ρ bulk is the FM LDOS in bulk GaAs and the X − emission rate γ bulk ≈ 0.43 μeV corresponds to the observed ∼1.5 ns X − decay time in bulk GaAs [50] at T ¼ 10 K.The X − emission rate and the corresponding FM LDOS [Fig.5(a)] are strongly modulated revealing an order of magnitude difference between Δ ≈ 0 and 80 nm.Such spatial modulation in the FM LDOS is particularly significant in membrane PCs, as previously observed in both experiments [22] and simulations [51].The span of ρ FM ðΔÞ=ρ bulk in Fig. 5(a) is comparable with the depth of the band gap in membrane PCs at different positions of the dipole emitter [18,51] and consistent with previously observed inhibition of the excitonic emission rate in PC nanocavities [13,14].Taking into account that the excitonic emission rate through nonconfined modes γ FM ðΔÞ ¼ ½ð2πÞ=ℏ FM ðΔÞ, where γ 0 ¼ 0.236 μeV, is in good agreement with the experimental results [see Fig. 5(a)].Discontinuity in the experimental δ ext ðΔÞ, observed for Δ ≈ 70 nm [see Fig. 
3(d)], additionally points toward the y-polarized FM node positions, since δ ext ðΔÞ ∼ 1=Θ FM ðΔÞ.In contrast, δ ext ðΔÞ ∼ Θ CM ðΔÞ and approaches zero at the CM node Δ ≈ 90 nm.Here, we modeled the FM normalized spatial profile using the same form as for Θ CM ðΔÞ, i.e., Θ FM ðΔÞ ¼ −e −β FM jΔj cos½ð2πΔÞ=ðλ FM Þ, where β FM ¼ 3.6 × 10 −4 nm −1 and λ FM ¼ 266 nm.The effective wavelength λ FM approximately corresponds to the effective refractive index n FM ¼3.27, close to the mode index of a 250-nm-wide GaAs slab [52].
Thus, the X-CM and X-FM coupling phases are given by φ g ðΔÞ ¼ π − arccos½Θ CM ðΔÞ=jΘ CM ðΔÞj and φ η ðΔÞ ¼ π − arccos½Θ FM ðΔÞ=jΘ FM ðΔÞj.We explain the observed CM-FM coupling phase φ ξ by the mutual orientation of CM and FM electric fields at the side holes of the PC cavity [jyj ≈ 375 nm in Figs.1(c) and 5(c)] playing the major role in the radiative CM coupling to free space [53].Therefore, φ ξ can be modeled as φ ξ ¼ arccos½Θ CM ðLÞΘ FM ðLÞjΘ CM ðLÞΘ FM ðLÞj −1 , where L ¼ 375 nm.The resulting relative phase φðΔÞ ¼ φ g ðΔÞ þ φ ξ − φ η ðΔÞ agrees well with the experiments [see the blue line in Fig. 5(b)].
Figure 5(d) shows the relative phase φ, CM, and xpolarized FM LDOS simulated using 3D FEM for discrete values of Δ spanning from 0 to 300 nm.The positions of the CM LDOS maxima, spaced by ∼170 nm, correspond to the maxima in the CM near-field profile [CM characteristic wavelength is λ CM ∼ 340 nm; see Figs. 1(c) and 5(c)].The x-polarized FM LDOS also shows an oscillatory behavior with a period of about 190 nm corresponding to λ FM ∼ 380 nm (see also Appendix B, Fig. 14).The observed phase shifts in Fig. 5(d) are of the same nature as the phase shifts observed in the experiments [Fig.5(b)], i.e., caused by the difference in the node positions of CM and FM profiles.Quantitatively, the second phase jump takes place at a larger value of Δ than in the experiments.We explain this difference by a much smaller number of PC hole rows used in 3D FEM simulations that alters nodes' locations of the FM LDOS.It should be noted that the QD recess strongly affects the relative phase φ as observed in 3D FEM simulations (see Appendix B, Fig. 17).Therefore, the experimentally observed FM profiles include the impact of the residual QD recess.
Precise knowledge of the FM near-field profile paves the way for engineering QD-PC devices with improved functionality via control of direct and CM-mediated decay channels.In particular, the second CM antinode is close to the second FM node [Fig.5(c)] due to significant difference between CM and FM effective refractive indices (n CM ≈ 2.59 and n FM ≈ 3.27).QDs placed near FM nodes at Δ ¼ 65 and 200 nm have a vanishing radiative coupling to FMs, but significant QD-CM overlaps (40% and 70%).Thus, QD-PHC devices with near unity quantum efficiency can be engineered via optimization of the γ FM ðΔÞ=γ CM ðΔÞ ratio using the QD position Δ as a tuning parameter.
V. QUANTUM EFFICIENCY OF A SINGLE QUANTUM DOT IN A PHOTONIC-CRYSTAL CAVITY
Figure 6 shows the quantum efficiency [12,54] of a charged exciton X− in a PC cavity, QE = γ_CM(Δ)/[γ_CM(Δ) + γ_FM(Δ) + γ_nonrad], as a function of the QD position Δ, modeled in the bad-cavity regime g ≪ κ/4.
Here, γ_nonrad is the nonradiative decay rate, γ_CM(Δ) is the X− emission rate through the CM at the X−-CM resonance, the CM damping rate is κ = 400 μeV, and the X− dephasing rate is γ_d = 100 μeV. The functions γ_FM(Δ) and g(Δ) were assumed to be the same as in Figs. 2(d) and 5(a). The modeled quantum efficiency varies strongly with Δ because both the direct and the CM-mediated QD emission rates are spatially dependent. QDs placed at Δ = 65 and 200 nm have vanishing radiative coupling to the FMs but significant QD-CM overlaps (40% and 70%) [see Figs. 1(c), 2(d), and 5(a)]. As a result, the Purcell enhancement at |Δ| = 200 nm is only 2 times weaker than at Δ = 0, owing to the at least 70% QD-CM overlap. The resulting maximum QE at |Δ| = 200 nm exceeds the QE at Δ = 0 for realistic nonradiative losses γ_nonrad < 0.1 μeV (~6 ns nonradiative decay time) [17,18] and can reach a value close to 1. Therefore, the spatial dependence of the direct radiative decay of the QD exciton in L3 PC cavities opens a novel pathway for optimizing the quantum efficiency of PC-cavity-based single-photon emitters.
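Expressed as code, the definition above is a trivial helper; the variable names are illustrative, and the position-dependent rates γ_CM(Δ) and γ_FM(Δ) must be supplied from the measured or modeled profiles.

```python
def quantum_efficiency(gamma_cm, gamma_fm, gamma_nonrad=0.1):
    """QE = gamma_CM / (gamma_CM + gamma_FM + gamma_nonrad); all rates in the same
    units (e.g., micro-eV). The default nonradiative rate mirrors the
    'realistic losses < 0.1 micro-eV' figure quoted in the text, but is a placeholder."""
    return gamma_cm / (gamma_cm + gamma_fm + gamma_nonrad)
```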
VI. DISCUSSION
Our results constitute the first experimental demonstration of quantum interference between overlapping direct and CM-mediated dissipation channels of QD excitons in photonic cavities.The latter leads to a strong exciton-CM cross-polarization effect that well agrees with the predicted interference effect in emission spectra of solid-state emitters in photonic cavities [42].Commonly used approaches ignore the interference of overlapping dissipation channels and fail to describe the related phenomena, among which are the cross-polarization effect, asymmetry in vacuum Rabi spectra [42,49], as well as Fano effect in decay rates [30,42] and emission [42] spectra.Our results show that the full description of light-matter interaction in the framework of cavity quantum electrodynamics requires modification of the QME approaches by introducing the cross-term Liouville superoperator [42] or calculating the cross-term contribution in the power spectrum [49] (see Appendix C 3).
Our observations verify the key assumptions in the derivation of the Markovian QME approach accounting for the quantum interference: (1) the coupling constants ξ k;y and η k;y can be written as ξ k;y ≅ ξðωÞ and η k;y ≅ ηðωÞ, and (2) ξðωÞ, ηðωÞ, and the density of states of the continuum DðωÞ ≡ P k δðω − ω k Þ are smooth function of ω [42].The assumption (1) corresponds to the phases φ ξ and φ η independent of the wave vector k as was suggested above.As a result, the resonant feature in the spectrum is not averaged over the wave vectors and a clear relative phase φ can be extracted from the experimental data.The assumption (2) is confirmed by the great conformity of the numerically calculated and experimental DOLP spectra [see Figs.3(b), 3(c), and 4(d)].Moreover, our results show nearly unity overlap χ A within the objective aperture [Fig.5(b)].We expect that the total overlap χ between direct and CMmediated decay channels is very close to the calculated χ A .
Quantum interferometry, based on mapping of interference features with site-controlled QDs, allows probing both phase and amplitude of the near-field pattern of free-space modes in the PC cavity.FM LDOS had pronounced minima, which can be fruitful for designing QD-PC single-photon sources with improved quantum efficiency.Analysis of the spatially resolved relative phase φ between the direct FM and CM-mediated emission channels revealed the CM-FM coupling phase φ ξ ¼ π, characteristic to the PC cavity design.Remarkably, experimental observations well agree with the 3D FEM simulations highlighting the strength of the method for studying light-matter interaction in photonic devices.
ACKNOWLEDGMENTS
This work was supported by Swiss National Science Foundation and Academy of Finland (Grant No. 308394).
APPENDIX A: EXPERIMENTS 1. Alignment between QD and PC cavity arrays
The InGaAs=GaAs QDs are self-formed during metalorganic vapor phase epitaxy in pyramidal recesses etched on (111)B GaAs=AlGaAs=GaAs substrates [31][32][33] and their nucleation site is fixed with an accuracy of ∼10 nm using electron beam lithography [see Figs.7-10).The modified L 3 cavities consisted of three missing holes in a triangular array of holes (pitch a ¼ 200 nm) etched in a ∼250 nm GaAs suspended membrane layer.The QD and PC hole patterns were fabricated using electron beam lithography with relative alignment accuracy of ∼10 nm.For each value of the QD position Δ, the radius of the PC show SEM micrographs obtained for the device "R2C3s7p6} using small and large magnifications.
We repeatedly achieved QD-PC cavity alignment accuracy of better than 10 nm [55] as shown in Fig. 8.
Within a square array, x-and y-misalignment errors varied in the limits of about 10 nm as shown in Fig. 9.Each array was aligned independently with a unique set of alignment marks.It should be noted that these variations are mainly caused by the measurement error of the QD recesses position.
Figure 10 shows SEM micrographs of several structures with the measured QD position Δ.It should be noted that DOLP data for Δ ¼ 42 nm (SEM micrograph of structure M) was previously reported in Refs.[35,56].DOLP for the structure with Δ ¼ 175 nm (not shown here) was reported in Refs.[36,56].Δ ¼ 180, 120, and 90 nm.PC hole radii were designed to provide large positive and negative detunings of the CM resonance energy with respect to the QD emission energy measured prior to fabrication of PC cavities [55].statistically yielded DOLP traces shown in Fig. 13.DOLP statistics were obtained from the polarization-resolved emission spectra measured at the same excitation power P ¼ 170 W=cm 2 and temperature T ¼ 10 K in PC cavity arrays incorporating single QDs at nominal Δ ¼ 0, 90, 120, and 180 nm.The polarization-resolved spectra of QD emission in PC cavities with different PC radii were measured for up to ∼40 meV of the exciton-CM detuning (see Fig. 13).
APPENDIX B: NUMERICAL AND ANALYTICAL MODELING OF FANO EFFECT IN POLARIZATION-RESOLVED EMISSION OF QDs INTEGRATED WITH PC CAVITIES 1. 3D FEM modeling of the polarized point-dipole emission in a PC cavity
The FEM-based COMSOL MULTIPHYSICS software was used for modeling the emission rates used in Figs.2(b), 2(d), and 4(d).The calculations were performed in the frequency domain using a point dipole radiation source to represent the quantum dot [57,58].The refractive index of GaAs was taken to be 3.5, and the slight defect in the GaAs membrane's surface introduced by the fabrication process was modeled as a 30-nm-deep rectangular recess of a 100 × 100 nm 2 surface area.The size of the holes array of the photonic crystal was limited to 5 periods in every direction around the cavity.This allowed some radiation to escape into the GaAs membrane, yielding the cavity Q factor similar to that obtained in experiments.To make modeling feasible, boundary conditions were used on the xy and xz planes at z ¼ 0 and y ¼ 0 to achieve symmetry about these planes.Out-coupled power in different polarizations was determined from the electric field intensity distributions ca.900 nm above the membrane.
Figure 14 shows the x-and y-polarized emission intensities I x and I y of the x-and y-oriented dipoles d x and d y numerically simulated using 3D FEM.The d x contribution to the y-polarized emission intensity I y was insignificant [see Fig. Figure 15 shows the DOLP calculated from the emission intensities shown in Fig. 14, using the expression The I y ðd y Þ minima visible in Fig. 14(d) correspond to the DOLP minima in Fig. 15.
Figure 16 shows the DOLP modeled near the shift of the DOLP minimum at Δ ¼ 275 nm.At Δ ¼ 275 nm, the curve is flattened, and the minimum is shifted to a lower energy.DOLP at Δ ¼ 280 nm also exhibits a flattened DOLP with a minimum shifted to a higher emission energy.
Figure 17 shows the DOLP spectra modeled with and without the QD recess.The DOLP traces modeled without accounting for the QD recess show the relative phase φ ¼ 0 for Δ ¼ 0, 45, and 90 nm, which does not agree with the experiment (see Figs. 12 and 13).The agreement is achieved for the DOLP spectra modeled using the 3D FEM approach in the presence of the QD recess [see Fig. 15 and 17(a)].Points D ext1 and D ext2 are calculated using the factors Figure 19 shows the comparison between numerically modeled DOLP curves using the open Jaynes-Cummings model and the analytical approximation (see Appendixes C 2 and C 3).
APPENDIX C: THEORY 1. Excitonic emission rate W with Yamaguchi approach
Here, we consider independent emission through the y and x polarized modes.In the weak coupling regime and for detuning δ ≫ g in the strong coupling regime, emission through the x polarized modes happens at the excitonic energy with emission rate γ x .Assuming the same coupling strengths to x and y polarized free-space modes η k;x and η k;y , we obtain γ x ¼ γ y ¼ γ=2, where the emission rate γ is the total excitonic emission rate through nonbound modes.Following the Yamaguchi approach [30,42], we consider the Hamiltonian Ĥ ¼ Ĥ0 þ Ĥint þ ĤR that drives the interaction in the exciton-CM system coupled to the common reservoir of the y-polarized radiation modes.The terms Ĥ0 , Ĥint , and ĤR are defined as where σ, âCM , and bk are the annihilation operators of the QD exciton, CM, and free-space radiation modes.The coupling strengths g, ξ k;y , and η k;y can be written as g ¼ jgje iφ g , η k;y ¼ jη k;y je iφ k;η .and ξ k;y ¼ jξ k;y je iφ k;ξ .Considering the low pumping regime, we restrict the photon wave function basis to a set of Fock states corresponding to 0 or 1 photons in the CM and free-space radiation modes.The two-level QD excitonic transition is between ground and excited states jgi and jei.Then, the superposition wave function is written as where je; 0 CM ; 0 k i, jg; 1 CM ; 0 k i, and jg; 0 CM ; 1 k i are the Fock wave functions corresponding to a single excitation in the system, a single photon in the CM, and a single photon in the free-space mode with wave vector k.Looking for the evolution of the amplitude probabilities aðtÞ, cðtÞ, and b k ðtÞ, we solve Schrödinger's equation in the framework of the Weisskopf-Wigner approximation and obtain the following equations of motion: where γ and κ are dissipation rates of the exciton and CM through radiation modes, while the complex overlap term can be written as The complex overlap could be rewritten as χ ¼ χe −iφ ξη , where 0 ≤ χ ≤ 1 and φ ξη ∈ ½0; π.The complex phase φ ξη has the meaning of the phase difference between the CM and exciton coupling channels to the common radiation reservoir.In the following, we assume that φ k;η and φ k;ξ are independent on the wave vector k; that is, φ k;η ¼ φ η and φ k;ξ ¼ φ ξ .Thus, we obtain φ ξη ¼ φ ξ − φ η .Solving the eigenvalue problem of this set of differential equations, one can retrieve the emission rate of the exciton.The eigenenergies read where we introduced the relative phase φ ¼ φ g þ φ ξ − φ η that has the meaning of the phase difference between excitonic emission through bound and nonbound modes.The exciton decay rate can be found as For typical QD-PC cavity systems, we have κ ≫ γ, and the excitonic emission rate through y polarized modes reads The degree of linear polarization is given by D ¼ ðI y − I x Þ=ðI y þ I x Þ, where I x and I y are the x and y polarized emission intensities.For excitonic transitions with fractional or zero excited state pseudospins, the dipole projections on two perpendicular axes, e.g., x and y axis, have the same absolute amplitudes.Therefore, D ¼ ðW y − αγ x Þ=ðW y þ αγ x Þ, where α is the ratio between the x-and y-polarized emission collection efficiencies of the optical system.For α ¼ 1, we obtain The emission rate W y was simulated with γ y ¼ 0. 
In order to investigate our system at the interference maximum, we describe the X-CM system in a polariton basis and write |g; 0⟩ = |g⟩|0_CM⟩, |−; 1⟩ = cos β |e⟩|0_CM⟩ − sin β |g⟩|1_CM⟩, and |+; 1⟩ = sin β |e⟩|0_CM⟩ + cos β |g⟩|1_CM⟩, where |g⟩ and |e⟩ are the ground and excited excitonic states, |0_CM⟩ and |1_CM⟩ are the empty and occupied single-photon CM states, the mixing parameter β is set by the coupling strength g and the detuning, and δ = ω_QD − ω_CM is the X-CM detuning. The expected interference in the two-level-system decay through the y-polarized modes appears from the analysis of the equation of motion of the total system wave function, in which |0_k,ρ⟩ and |1_k,ρ⟩ are the free-space photon wave functions (FMs), and k and ρ = {x, y} are the FM wave vector and polarization, respectively. In the system Hamiltonian, ĉ_− and ĉ_+ are the annihilation operators of the polariton states |−; 1⟩ and |+; 1⟩, and p_k,ρ and f_k,ρ are the polariton coupling strengths with the radiation modes. These coupling strengths can be rewritten in terms of the coupling strengths η_k,ρ and ξ_k,ρ [see Ref. [30] and Fig. 1(a)]. Since the CM in our system is strongly y polarized, we consider the CM coupling only to the y-polarized FMs; i.e., ξ_k,x = 0. The coupling coefficients are g = |g|e^{iφ_g}, η_k,y = |η_k,y|e^{iφ_η}, and ξ_k,y = |ξ_k,y|e^{iφ_ξ}. The relative phase between the direct and CM-mediated decay paths of the TLS is φ = φ_g + φ_ξ − φ_η. The polariton probability amplitudes A(t) and B(t) in the Weisskopf-Wigner approximation are driven by differential equations in which ϑ is a complex number (|ϑ| ≤ 1) reflecting the overlap and the phase shift of the radiation modes into which the polaritons decay. For typical X-CM damping parameters we have γ_y ≪ κ, and the emission rates Γ_+ and Γ_− have their minimum values at the extremal exciton-CM detuning if δ_ext(κ − γ_y) > 0 and δ_ext(κ − γ_y) < 0, respectively. The relative phase φ = φ_g + φ_ξ − φ_η defines the detuning sign at which the Fano interference appears. For a typical X-CM system we have κ ≫ γ_y, providing the detuning |δ_ext| ≫ g. Therefore, the term (γ_x/2) sin 2β can be neglected at δ_ext, and the polariton decay rates through the y-polarized modes are determined solely by Γ_+ and Γ_−. The emission at the exciton energy originates mainly from the excitonlike polariton state |−; 1⟩ (|+; 1⟩) if δ < 0 (δ > 0). Since for a typical QD-PC cavity system we have γ_y ≪ κ, the detuning δ_ext corresponds to the minimum of the excitonic emission rate W_y through the y-polarized modes.
For charged exciton and biexciton transitions, or any other excitonic transition with zero or fractional pseudospin and equal dipole matrix elements along the x and y axes, the degree of linear polarization D = (I_y − I_x)/(I_y + I_x) can be rewritten as D = (W_y − α γ_x)/(W_y + α γ_x), where W_y and γ_x are the emission rates through the y- and x-polarized modes and α ≈ 1 is the ratio between the y- and x-polarized emission powers collected by the objective [see Fig. 1(a)]. The coupling efficiency α ≈ 1 provides D = (W_y − γ_x)/(W_y + γ_x). Since transitions detuned by several tens of meV from the PC cavity modes are essentially unpolarized, we have D ≈ 0.
For detuning |δ| ≫ |g|, the emission through the x-polarized modes appears mainly at the energy of the excitonic transition. For |δ| ≫ |g|, the emission rate W_y can be approximated as Γ±, assuming that γ_x = γ_y. The best approximation for the modeled DOLP traces is obtained for D ext,2.
Numerical modeling of the DOLP curves using open Jaynes-Cummings model
The radiating quantum dot is modeled as a two-level system with Bohr frequency ω_0/(2π) coupled to a cavity mode at frequency ω_CM/(2π). The detuning between the TLS and cavity-mode energies is ℏδ = ℏω_0 − ℏω_CM. The Hamiltonian describing the system in the rotating-wave approximation is written in terms of σ_− (σ_+), the lowering (raising) operator of the TLS, and a (a†), the lowering (raising) operator of the CM. Losses and solid-state-specific phenomena are added using the master-equation formalism. The evolution of the density matrix ρ of the system formed by the QD and the cavity is given by a master equation that is compatible with the framework used to perform the numerical simulations [59]. We introduce the corresponding quantum collapse operators. The rate Γ_ph is a function of the TLS-CM detuning and temperature; it was calculated using the microscopic description of the exciton-phonon interaction of Refs. [60,61]. The transfer of excitations from the QD to the off-resonant CM via the absorption or emission of a phonon is described by the quantum collapse operator L_ph = √Γ_ph σ_− a†, which accounts for the decay from the state |e⟩ ⊗ |n = 0⟩ to the state |g⟩ ⊗ |n = 1⟩ at rate Γ_ph, with |e⟩ (|g⟩) being the excited (ground) state of the exciton and {|n⟩}_{n ∈ ℕ} being the Fock space of the quantized CM. Here we neglect the backscattering term √(Γ_ph(−δ)) σ_+ a describing the phonon-mediated feeding of the QD by the CM, which is reasonable in the bad-cavity regime (κ ≫ γ). The phonon scattering rate Γ_ph accounting for the phonon-assisted decay of the QD exciton into the CM is given in Ref. [47].
The master equation (C20) is solved numerically using the quantum optics toolbox QuTiP to obtain the steady-state power spectra of the TLS and the CM. The exciton, CM, and X-CM interference steady-state power spectra S_X(ω), S_CM(ω), and S_int(ω) are given by integrals in which the correlation functions ⟨A(t + τ)B(t)⟩ are calculated numerically using the exponential-series-based solver essolve [59], which computes the nonunitary time evolution of the system operators A and B by solving the master equation. The Fourier transform of the steady-state correlation function is then performed semianalytically [59], giving the power spectrum, which is then used to compute the degree of linear polarization of the excitonic spectrum. We obtain the x- and y-polarized excitonic emission intensities by fitting the x- and y-polarized emission spectra I_x(ω) and I_y(ω). Here, χ cos φ accounts for the interference between excitonic emission into free-space modes and the emission mediated by the CM. The degree of linear polarization is obtained as D(ω) = [I_y(ω) − I_x(ω)]/[I_y(ω) + I_x(ω)].
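The steady-state spectra described above can be sketched with a few lines of QuTiP. The snippet below is a minimal, illustrative version of that calculation: it builds a detuned Jaynes-Cummings Hamiltonian with cavity loss, radiative decay, weak incoherent pumping, and the phonon-assisted transfer operator L_ph, and then assembles a DOLP spectrum from the exciton, CM, and cross spectra. All rates, the Fock-space truncation, and the weights χ and φ are placeholder assumptions, and qutip.spectrum is used here instead of the older essolve interface mentioned in the text.

```python
import numpy as np
import qutip as qt

# Truncated Fock space for the cavity mode (CM) and a two-level system (TLS).
N = 3
a = qt.tensor(qt.qeye(2), qt.destroy(N))    # CM lowering operator
sm = qt.tensor(qt.destroy(2), qt.qeye(N))   # TLS lowering operator

# Illustrative (assumed) parameters in angular-frequency units.
g, kappa, gamma, delta = 0.10, 0.50, 0.001, 1.0
pump, gamma_ph = 0.002, 0.02

# Detuned Jaynes-Cummings Hamiltonian in the rotating frame of the cavity.
H = delta * sm.dag() * sm + g * (sm.dag() * a + a.dag() * sm)

# Collapse operators: cavity loss, radiative TLS decay, weak incoherent pump,
# and the phonon-assisted TLS-to-CM transfer (L_ph in the text).
c_ops = [np.sqrt(kappa) * a,
         np.sqrt(gamma) * sm,
         np.sqrt(pump) * sm.dag(),
         np.sqrt(gamma_ph) * a.dag() * sm]

# Steady-state emission spectra of the exciton and CM channels plus the cross term.
wlist = np.linspace(-1.0, 2.0, 1200)
S_x = np.real(qt.spectrum(H, wlist, c_ops, sm.dag(), sm))   # exciton channel
S_cm = np.real(qt.spectrum(H, wlist, c_ops, a.dag(), a))    # cavity-mode channel
S_cross = np.real(qt.spectrum(H, wlist, c_ops, sm.dag(), a)
                  + qt.spectrum(H, wlist, c_ops, a.dag(), sm))

# y-polarized spectrum with an interference term weighted by chi*cos(phi),
# following the structure described in the text; the x channel sees only the exciton.
chi, phi = 0.8, np.pi
S_y = S_x + S_cm + chi * np.cos(phi) * S_cross
dolp = (S_y - S_x) / (S_y + S_x)
print("DOLP at the exciton energy:", dolp[np.argmin(np.abs(wlist - delta))])
```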
FIG. 1 .
FIG. 1. (a) SEM image of a fabricated structure showing the displacement Δ of the QD from the cavity center. (b),(c) Simulated E_x and E_y electric fields of the fundamental cavity mode at half-membrane height and 1.42 meV energy. Crosses in (c) show the implemented Δ providing different exciton-CM coupling strengths g(Δ). The CM near-field patterns in (b) and (c) were calculated using a 2D FDTD method. (d) Schematics of the μPL optical setup. The laser beam used for excitation is highlighted in blue, whereas collected photoluminescence is highlighted in red. BS in (d) stands for a nonpolarizing beam splitter.
FIG. 2 .
FIG. 2. (a),(b) Polarization-resolved PL (bottom) and corresponding DOLP (top) spectra of excitons emitting inside the photonic band gap and near the resonance with the CM. (c) X− decay traces measured at the X−-CM resonance for different Δ. (d) X−-CM coupling strength g versus Δ extracted from the X− radiative decay rates at the X−-CM resonance.
FIG. 3 .
FIG. 3. (a) QD polarization-resolved emission spectra at different X−-CM detunings δ_X− versus the photon energy relative to the CM energy. The bottom and top panels in (a) show the polarized-PL spectra for a structure with Δ ≈ 0 and various values of δ_X−, and the corresponding DOLP for δ_X− = −2.1 meV. The exciton-CM detuning was varied using water-vapor deposition and temperature tuning. (b) Numerically simulated DOLP spectra for Δ = 0 and 170 nm and experimental DOLP value statistics versus the exciton-CM detuning. The experimental points were extracted from polarization-resolved PL spectra measured at P = 170 W/cm² and T = 10 K for 61 and 96 different cavities with Δ = 0 and 180 nm, respectively. (c) Exciton DOLP values versus the exciton-CM detuning for QD positions Δ varying from 8 to 180 nm. DOLP traces were obtained by tuning the excitonic transitions across the CM energy as in (a) (see also Appendix A, Fig. 12). (d) Numerically simulated and experimental extremal exciton-CM detuning δ_ext extracted from numerical and experimental DOLP traces for different values of Δ. In (b) and (c), δ_X corresponds to the detuning of the observed negatively, neutrally, and positively charged excitons and biexcitons relative to the CM energy. Gray arrows mark the experimental δ_ext.
FIG. 4 .
FIG. 4. (a) Schematic representation of radiative FM and CM channels to y- and x-polarized modes. (b) Calculated exciton emission rates W(δ_X) as a function of the exciton-CM detuning δ_X for different exciton-CM coupling strengths g and phase differences φ. (c) Numerically simulated polarization-resolved spectra for negative and positive detuning δ_X. (d) DOLP traces as a function of δ_X for p_CM = 0.4, p_X = 0.29, different exciton-CM coupling strengths g, and relative phases φ. The green filled circles represent analytically calculated coordinates (δ_ext, D_ext) of minima in the DOLP traces.
Figure 5(b) shows the relative phase φ and CM-FM overlap χ extracted from the measured δ_ext and D_ext. The negative and positive extremal detuning δ_ext in Fig. 3(d) gives φ ≈ 0 for 70 < Δ < 90 nm and φ ≈ π for Δ < 60 nm or Δ > 90 nm, resulting in two phase jumps by π at ∼70 and ∼90 nm [Fig. 3(d)]. The observed phase shift at Δ ≈ 85 nm is explained by opposite signs of the y component of the CM electric field at different sides of the CM node at y = 0, that is, for Δ < 90 nm and Δ > 90 nm [Fig. 1(c)]. The change of the relative phase by π near Δ ≈ 70 nm [Fig. 5(b)] defines the node position of the near-field profile of nonconfined modes, as also manifested by the reduced FM LDOS [Fig. 5(a)]. The CM-FM coupling phase φ_ξ is equal to the relative phase φ at Δ = 0 [Fig. 5(b)]; that is, φ_ξ ≈ π for φ_g = 0 and φ_η = 0.
Figure 5(c) shows the FM spatial profile thus extracted from the FM LDOS ρ_FM(Δ) and the relative phase φ(Δ) shown in Figs. 5(a) and 5(b). The modeled excitonic emission rate through nonconfined modes is γ_FM(Δ) = γ_0 Θ².
FIG. 5 .
FIG. 5. (a) Direct X−-FM emission rate γ_FM(Δ) and the corresponding LDOS ρ_FM(Δ) in units of the homogeneous LDOS in bulk GaAs (right-hand ordinate). Spatial dependences of the experimental ρ_FM(Δ) and relative phase φ are matched by the model based on the mapped FM spatial profile. (b) The extracted relative phase φ. The CM-FM overlap χ_A is shown in the inset as a function of QD position Δ. (c) Normalized electric-field profile of free-space modes at half-membrane height reconstructed from the extracted ρ_FM(Δ) in (a) and the relative phase φ(Δ) in (b). The CM profile, extracted from the 2D FDTD simulations in Fig. 1(c), is shown for reference. Green and white areas in (c) correspond to φ = 0 and π, respectively. The gray region denotes a side view of the PC hole [see Fig. 1(a)]. (d) Relative phase φ, CM, and x-polarized FM LDOS ρ(Δ) relative to the bulk LDOS as a function of Δ, calculated using 3D FEM.
FIG. 6 .
FIG. 6. Quantum efficiency of a QD in a PC cavity as a function of Δ, simulated for a set of nonradiative decay rates γ_nonrad.
FIG. 8 .
FIG. 8. QD-L3 PC cavity alignment accuracy. The x- and y-misalignment errors measured for the corner devices of several square arrays.
FIG. 9 .
FIG. 9. QD-PC cavity alignment fluctuations within several square arrays. (a),(b) Histograms of x and y alignment deviations from the mean Δ values, calculated from alignment data measured in several PC cavities in each array (data from 14 different arrays, several devices per array).
FIG. 7 .
FIG. 7. Sample design. (a) An arrangement of 45 square arrays of PC cavities in 9 rows with 5 columns. (b) An arrangement of 200 PC cavities in a single array. (c),(d) A sketch and an SEM micrograph of an L3 PC cavity integrated with a single QD. (e) SEM of a PC cavity with a visible QD position. Sacrificial QDs arranged in a triangular lattice with a period twice that of the PC hole lattice are removed by the PC cavity holes, as visible in (d).
FIG. 10 . FIG. 11 .
Figure 11 shows the measured fundamental CM and the first excited CM 1 resonance energies as a function of PC hole radii measured for several square arrays with nominal
FIG. 12 .
FIG. 12. DOLP traces for different Δ values. The CM-QD detuning was modified using a combination of temperature variations and CM energy tuning by water-vapor deposition during the sample cooldown process.
FIG. 14 .
FIG. 14. 3D FEM modeled x- and y-polarized emission intensity components I_x and I_y of the x- and y-oriented dipoles d_x and d_y as a function of the dipole emission energy and position. (a) The x-polarized emission intensity I_x of the x-polarized dipole d_x. (b) The x-polarized emission intensity I_x of the y-polarized dipole d_y. (c) The y-polarized emission intensity I_y of the x-polarized dipole d_x. (d) The y-polarized emission intensity I_y of the y-polarized dipole d_y. Intensities were normalized by the dipole emission intensity in bulk GaAs.
FIG. 15 .
FIG. 15. 3D FEM modeled DOLP as a function of the dipole emission energy and position Δ. The DOLP was calculated using the x- and y-polarized intensities I_x and I_y of the x- and y-oriented dipoles d_x and d_y shown in Fig. 14.
FIG. 19 .
FIG. 19. DOLP spectra for different values of the X-CM coupling strength g (a) and different overlap factors χ (b). The grayed lines were calculated analytically (see Appendix C 5). | 12,460 | 2022-05-20T00:00:00.000 | [
"Physics"
] |
Fe3O4 Hollow Nanosphere-Coated Spherical-Graphite Composites: A High-Rate Capacity and Ultra-Long Cycle Life Anode Material for Lithium Ion Batteries
The spherical-graphite/Fe3O4 composite has been successfully fabricated by a simple two-step synthesis strategy. The oxygenous functional groups between spherical-graphite and Fe3O4 benefit the loading of hollow Fe3O4 nanospheres. All of the composites as anodes for half cells show higher lithium storage capacities and better rate performances in comparison with spherical-graphite. The composite containing 39 wt% of hollow Fe3O4 nanospheres exhibits a high reversible capacity of 806 mAh g−1 up to 200 cycles at 0.5 A g−1. When cycled at a higher current density of 2 A g−1, a high charge capacity of 510 mAh g−1 can be sustained, even after 1000 long cycles. Meanwhile, its electrochemical performance for full cells was investigated. When matching with LiCoO2 cathode, its specific capacity can remain at 137 mAh g−1 after 100 cycles. The outstanding lithium storage performance of the spherical-graphite/Fe3O4 composite may depend on the surface modification of high capacity hollow Fe3O4 nanospheres. This work indicates that the spherical-graphite/Fe3O4 composite is one kind of prospective anode material in future energy storage fields.
Introduction
Lithium-ion batteries (LIBs), as the most advanced energy storage systems, have been widely used in electric vehicles and portable electronic appliances owing to their good cycle life and high energy density [1,2]. Graphite is the most commonly used anode material for LIBs, and natural graphite has gained great attention due to its high conductivity, large abundance, excellent cycling stability, and low cost. However, its low theoretical specific capacity (372 mAh g −1) as an anode material is unable to meet the rising requirements for high energy and power densities [3][4][5]. Meanwhile, the voltage plateau of graphite is low and close to that of lithium, which results in the growth of lithium dendrites on the graphite surface. The formed lithium dendrites can not only reduce the battery capacity but also lead to serious safety accidents [6]. There are several strategies to improve the electrochemical performance of graphite: mechanical grinding [7,8], oxidation treatment [9,10], surface coating [11][12][13][14], and so on. As reported by Wu et al., oxidizing natural graphite using air as an oxidant could improve the lithium storage performance of graphite [10]. Hybridization with a high-capacity anode material can also improve the lithium storage performance of graphite. For example, S-doped graphite modified by CoO nanoparticles delivered a specific capacity of 440 mAh g −1 at a low current density of 150 mA g −1 after 100 cycles [12]. The Fe2O3/graphite composite reported by Wang et al., prepared via a ball-milling method, exhibited an initial charge capacity of 535 mAh g −1 at 0.1 A g −1, which remained at 490 mAh g −1 after 55 cycles. However, Fe2O3 was distributed inhomogeneously on the surface of graphite in the composites prepared by ball milling, which led to poor high-rate cycling stability [14].
It has been shown that the introduction of oxygenous functional groups (hydroxyl, carboxyl, and epoxy groups) to the surface of graphite facilitates the combination of metal oxides and graphite [15,16]. Fe3O4 is considered a promising anode material due to its high theoretical specific capacity (924 mAh g −1), high electronic conductivity, low cost, and eco-friendly characteristics [17,18]. Relying on surface modification with high-capacity Fe3O4, coating Fe3O4 on the surface of oxidized spherical graphite (abbreviated as SGO) is expected to yield high-performance electrode materials. Hollow nanostructures have been confirmed to effectively suppress the volume changes of active materials upon repeated cycles, avoiding pulverization of the materials [19,20].
In this work, hollow Fe3O4 nanospheres were chosen to modify SGO. The spherical graphite (SG) was first oxidized by sulfuric acid to obtain SG with oxygen-containing functional groups (SGO), and then numerous hollow Fe3O4 nanospheres were anchored onto the SGO surface by a solvothermal reaction.
Oxidation Treatment of SG
The SG was from the Qingdao Qingbei Carbon Products Co. (Qingdao, China). The particle size of SG was about 16 µm to 20 µm. In a typical preparation process, 10 g SG and 100 mL concentrated sulfuric acid (H2SO4, 98 wt%) were added to the flask, and then the mixture was kept for 10 h at 180 °C with stirring in an oil bath. After the above dispersion was cooled to room temperature, it was harvested by vacuum filtration and washed with alcohol (C2H5OH, 99.7 wt%) and deionized water several times to a pH of about 7. Then, the product was dried at 60 °C to obtain SGO.
Synthesis of SGO/Fe 3 O 4 Composites
In a typical synthesis, FeCl3 was first dissolved in 30 mL ethylene glycol, and then 0.55 g polyethylene glycol (PEG, Mw = 2000), 2 g sodium acetate (NaAc), and 0.2 g SGO were added into the above solution, respectively. After that, the mixture was vigorously stirred at 60 °C for 2 h. Finally, the above suspension was sealed and transferred into a 50 mL Teflon-lined autoclave and held at 200 °C for 15 h. After the reaction was cooled to ambient temperature, the black precipitate was washed with deionized water and ethyl alcohol several times, and then dried under vacuum to obtain SGO/Fe3O4 composites. During the preparation process, five different amounts of FeCl3 (0.05, 0.1, 0.
Electrochemical Measurement
The electrode was composed of active material (70 wt%), acetylene black (20 wt%), and carboxylmethyl cellulose (CMC, 10 wt%). The average loading weights of electrodes were approximately 1.0 mg cm −2 . The areal loading weights and thickness of the electrodes for each sample from SGO/Fe 3 O 4 -1 to SGO/Fe 3 O 4 -5 as well as SGO and SG are shown in Figure S1. The CR2032-type coin half cells were finally assembled in an Ar-filled glove box (H 2 O, and O 2 < 0.1 ppm), by using metallic lithium plate as the counter electrode, and a Celgard 2400 microporous polypropylene membrane as the separator. The electrolyte was 1 M LiPF 6 in a mixture of dimethyl carbonate and ethylene carbonate (1:1, volume %). The coin cells were measured on a battery testing system (CT2001A, Wuhan, China) in a voltage range of 0.01 to 3 V. Cyclic voltammetry (CV) curves were acquired on a electrochemical workstation (CHI660E, Shanghai Chenhua Instruments, Shanghai, China) between 0.01 to 3 V with a scanning rate of 0.1 mV s −1 . Electrochemical impedance spectroscopy (EIS) was performed with an electrochemical workstation (PGSTAT302N, Metrohm, Herisau, Switzerland) by using an alternating current (AC) voltage of 10 mV in a frequency range of 100 kHz and 0.01 Hz. For full cells, the anode was made with the above experimental strategy and then it was electrochemically activated for three cycles. The cathode consisted of 80 wt% of LiCoO 2 (areal loading weights and thicknesses of electrodes are shown in Figure S1), 10 wt% of acetylene black, and 10 wt% of polyvinylidene fluoride (PVDF), with 1-Methyl-2-pyrrolidone (NMP) as dispersant, and then the slurry was spread on metallic aluminum foil. The capacity ratio between anode and cathode was controlled at 1.2: 1.
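As a small worked example of the anode/cathode capacity balancing quoted at the end of this paragraph, the sketch below estimates the LiCoO2 areal loading needed to reach the stated 1.2:1 capacity ratio for the ~1.0 mg cm−2 anode loading used here. The specific capacities assigned to the two electrodes are assumptions for illustration (the anode value is taken from the half-cell result reported later), not the actual loadings listed in Figure S1.

```python
# Hypothetical capacity-balancing check for the SGO/Fe3O4-4 / LiCoO2 full cell.
ANODE_LOADING_MG_CM2 = 1.0       # average anode loading from the text
ANODE_CAPACITY_MAH_G = 806.0     # assumed anode specific capacity (half-cell value)
CATHODE_CAPACITY_MAH_G = 145.0   # assumed practical LiCoO2 specific capacity
NP_RATIO = 1.2                   # anode:cathode capacity ratio from the text

anode_areal = ANODE_LOADING_MG_CM2 * ANODE_CAPACITY_MAH_G / 1000.0   # mAh cm-2
cathode_areal = anode_areal / NP_RATIO                               # mAh cm-2
cathode_loading = 1000.0 * cathode_areal / CATHODE_CAPACITY_MAH_G    # mg cm-2
print(f"anode {anode_areal:.3f} mAh/cm2 -> LiCoO2 loading {cathode_loading:.2f} mg/cm2")
```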
Results and Discussion
X-ray diffraction (XRD) patterns of SG, SGO, and the SGO/Fe3O4-4 composite are shown in Figure 2. As shown in Figure 2a,b, a strong diffraction peak at 26.4° is indexed to the (002) crystal plane of graphite, and no obvious changes of the SG diffraction peaks were observed after oxidation treatment, suggesting that no phase transition occurs in the preparation of SGO. Figure 2c shows that all the other diffraction peaks are ascribed to Fe3O4, and similar patterns are also obtained for SGO/Fe3O4-1, SGO/Fe3O4-2, SGO/Fe3O4-3, and SGO/Fe3O4-5 (Figure S2). Raman spectra of the three samples are presented in Figure S3. The intensity ratio of the D and G bands (ID/IG) of SGO (0.22) is lower than that of SG (0.26), which indicates that many defects in SG could be eliminated after oxidation by sulfuric acid [19][20][21], while the ID/IG value of SGO/Fe3O4-4 increases in comparison with that of SG and SGO, which may be attributed to the formation of defects and the reduction of oxygen-containing groups during the preparation of the SGO/Fe3O4 composites [13,22,23].
The morphologies of the SGO/Fe3O4 composite are displayed in Figure 3. The SEM images show that Fe3O4 nanospheres are uniformly coated on the surface of SG (Figure 3a,b), and no extra particle agglomeration of Fe3O4 is observed. Figure 3c presents the scanning electron microscope (SEM) image of the SGO/Fe3O4 composite. It can be found that several Fe3O4 nanospheres are attached on the surface of SGO, and the average diameter of the particles is about 140 nm. The magnified transmission electron microscope (TEM) image shows that Fe3O4 has a hollow nanostructure (Figure 3d). In the first cathodic scan, two peaks at about 0.085 V and 0.17 V are attributed to Li+ insertion into the graphite layers. A sharp cathodic peak at approximately 0.68 V corresponds to the electrochemical reduction of Fe3+ and Fe2+ to Fe0 [24]. In the second cycle, the peak at 0.68 V shifts to 0.82 V owing to irreversible structural changes [25]. The peak at around 0.85 V is likely attributed to the generation of the solid electrolyte interface (SEI) film and electrolyte decomposition [26]. The cathodic peak at about 1.4 V is assigned to Li+ insertion into Fe3O4. The obvious anodic peak at about 0.21 V is attributed to Li+ extraction from the SGO/Fe3O4-4 electrode [27,28]. The anodic peaks at around 1.55 V and 1.83 V are ascribed to the oxidation of Fe0 to Fe2+ and Fe3+, respectively [24]. Galvanostatic lithiation/delithiation curves of SGO/Fe3O4-4 for the first five cycles at 0.1 A g−1 are exhibited in Figure 4b. The reversible capacities of SG, SGO, and SGO/Fe3O4-4 for the first cycle are 411, 426, and 808 mAh g−1, corresponding to Coulombic efficiencies (CE) of 81%, 82%, and 77% (Figure S4), respectively. The large irreversible capacity was caused by SEI layer formation due to electrolyte decomposition, lithium being trapped in the active material, and so on [29][30][31][32]. The formed SEI film is the main factor in the irreversible capacity loss during the first discharge process. Fortunately, for the second cycle, the CE of the three samples increased to 97.1%, 95.2%, and 94.2%. Figure 4c shows the cycling performance of the three samples at 0.5 A g−1. It can be found that the SGO/Fe3O4-4 composite exhibits a high reversible capacity of 806 mAh g−1 after 200 cycles.
The capacity increase may be attributed to the following reasons. The first is the self-activation process of the Fe3O4 active material upon repeated cycles [33][34][35]: the particle size of Fe3O4 decreases, and the increased specific surface area forms more active sites and improves electrolyte accessibility, which can provide more surface-related capacitance [33][34][35]. Second, as the cycle number increases, a more stable SEI film forms on the surface of the electrode material, which is beneficial for lithium-ion storage [36]. Third, the reversible growth of a polymeric gel-like film resulting from kinetic electrolyte degradation can also lead to the capacity increase [37][38][39]. In addition, SGO (324 mAh g−1) presents a slightly higher specific capacity than SG (295 mAh g−1). Some structural defects, such as edge carbon atoms, carbon chains, and sp3-hybridized carbon atoms, were removed [9], and functional groups (-CO, -COOH) were formed on the unsmooth surface of the graphite after oxidation [40], which induced the formation of a stable SEI layer and contributed to the good cycling performance. The rate capabilities of the three samples are displayed in Figure 4d. The SGO/Fe3O4-4 delivers excellent rate capability in comparison with SGO and SG. The average charge capacities cycled at 0.1, 0.2, 0.5, 1, 2, and 5 A g−1 are 595, 589, 516, 444, 376, and 299 mAh g−1, respectively. The specific capacity recovers to 610 mAh g−1 as the current density returns to 0.1 A g−1, indicating the good reversibility of SGO/Fe3O4-4. The long-term cycling performance of the three samples at a high current density of 2 A g−1 is further discussed (Figure 4e). We found that SGO/Fe3O4-4 exhibited a high specific capacity of 510 mAh g−1 up to 1000 cycles, which is much higher than SG (146 mAh g−1) and SGO. Figure 5a displays the cycling performance of the five different composites. As observed, SGO/Fe3O4-4 delivers the highest specific capacity among these composites, which is consistent with Figure 5c. The rate performances of the different composites are shown in Figure 5b. SGO/Fe3O4-5 exhibits a slightly higher reversible capacity than SGO/Fe3O4-4 at low current densities due to its higher Fe3O4 content. Because the theoretical capacity of Fe3O4 is higher than that of SGO, a higher capacity can be obtained when the loading amount of Fe3O4 nanospheres is increased. However, when the loading amount of Fe3O4 nanospheres is too high, the nanospheres agglomerate, resulting in a decrease of specific capacity. The SGO/Fe3O4-4 shown in Figure 5c manifests a high-rate, ultra-long cycle life, and its charge capacity is sustained at 510 mAh g−1 up to 1000 cycles. Based on the above results, the SGO/Fe3O4-4 composite is the optimal product, exhibiting the best lithium storage performance. The loading amount of Fe3O4 hollow nanospheres is a key factor that affects the lithium storage performance of the SGO/Fe3O4 composites. SEM images of composites with different loading amounts of Fe3O4 are shown in Figure S6. More Fe3O4 nanospheres are adsorbed on the surface of the composite with increasing FeCl3 content. In addition, it can be found that particle agglomeration occurs due to an excess of particles (Figure S6d).
According to the TG results, the mass ratios of Fe3O4 in SGO/Fe3O4-1, SGO/Fe3O4-2, SGO/Fe3O4-3, SGO/Fe3O4-4, and SGO/Fe3O4-5 are about 11.6%, 24.7%, 32.5%, 39.2%, and 48%, respectively (Figure S7). Electrochemical impedance spectra (EIS) were also studied to investigate the excellent performance of SGO/Fe3O4-4 from another viewpoint. Figure 6 shows the Nyquist plots of the three samples for fresh and cycled electrodes. All of the plots for fresh electrodes consist of a depressed semicircle in the high-frequency region connected to a sloping line in the low-frequency region (Figure 6a). It can be clearly observed that SGO/Fe3O4-4 shows smaller interface resistances (Rsf) and charge-transfer resistances (Rct) (98.12 Ω) in comparison with the SG (404.46 Ω) and SGO (158.99 Ω) electrodes (Table S1). The small resistances of the SGO/Fe3O4-4 composite are mainly attributed to the hollow Fe3O4 nanostructures, which provide convenient access for the electrolyte to wet the electrode surface and also offer an additional transport channel for Li+ diffusion [27]. For the cycled electrodes (Figure 6b), two depressed semicircles are ascribed to the interface resistances (Rsf) and charge-transfer resistances (Rct), which decrease significantly in contrast with those of the fresh electrodes (54.82 Ω for the SG electrode, 68.05 Ω for the SGO electrode, and 39.02 Ω for the SGO/Fe3O4-4 electrode), benefiting the diffusion kinetics upon cycling. The small resistance for the cycled SGO/Fe3O4-4 electrode reduces the energy barrier of Li+ intercalation into graphite and benefits fast Li+ diffusion and charge transfer [13].
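To make the reading of the Nyquist plots above concrete, the following sketch computes the impedance of a generic equivalent circuit of the type commonly used for such spectra: a series resistance, two parallel RC elements producing the surface-film and charge-transfer semicircles, and a Warburg element producing the low-frequency sloping line. The circuit topology and every parameter value are illustrative assumptions, not fit results from this work.

```python
import numpy as np

def equivalent_circuit_impedance(freqs_hz, r_s, r_sf, c_sf, r_ct, c_dl, sigma_w):
    """Impedance of R_s + (R_sf || C_sf) + (R_ct || C_dl) + Warburg."""
    w = 2.0 * np.pi * np.asarray(freqs_hz)
    z_sf = r_sf / (1.0 + 1j * w * r_sf * c_sf)    # surface-film semicircle
    z_ct = r_ct / (1.0 + 1j * w * r_ct * c_dl)    # charge-transfer semicircle
    z_w = sigma_w * (1.0 - 1j) / np.sqrt(w)       # Warburg diffusion tail
    return r_s + z_sf + z_ct + z_w

freqs = np.logspace(5, -2, 200)   # 100 kHz down to 0.01 Hz, as in the EIS setup
z = equivalent_circuit_impedance(freqs, r_s=5.0, r_sf=15.0, c_sf=2e-6,
                                 r_ct=39.0, c_dl=5e-5, sigma_w=20.0)
# Nyquist convention: plot -Im(Z) versus Re(Z).
print("low-frequency point: Re(Z) = %.1f ohm, -Im(Z) = %.1f ohm" % (z[-1].real, -z[-1].imag))
```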
To further confirm the potential application of the SGO/Fe3O4 composites in commercial batteries, we used the SGO/Fe3O4-4 composite as the anode and LiCoO2 as the cathode to assemble a full cell (labeled as SGO/Fe3O4-4/LiCoO2), and the corresponding electrochemical performance is discussed. The SGO anode was also assembled into a full cell with a LiCoO2 cathode (labeled as SGO/LiCoO2) for comparison. The related galvanostatic lithiation/delithiation curves of the SGO/Fe3O4-4/LiCoO2 full cell are exhibited in Figure 7a. The initial discharge and charge specific capacities of the SGO/Fe3O4-4/LiCoO2 full cell are 607 and 371 mAh g−1, respectively, with a low CE of 61.1%. Fortunately, the CE increases to 89% for the second cycle. As shown in Figure 7b, the SGO/Fe3O4-4/LiCoO2 full cell exhibits a specific capacity of 137 mAh g−1 after 100 cycles, which is higher than that of the SGO/LiCoO2 full cell (only 39 mAh g−1). When the full cell was cycled at 0.5 A g−1, the SGO/Fe3O4-4/LiCoO2 still maintained a reversible capacity of 79 mAh g−1 after 200 cycles (Figure 7c), which is better than SGO/LiCoO2. The rate capability of the SGO/Fe3O4-4/LiCoO2 full cell shows that the average specific capacities from 0.1 A g−1 to 5 A g−1 are 270, 206, 164, 128, 94, and 54 mAh g−1, respectively (Figure 7d). An average reversible capacity of 170 mAh g−1 can be maintained when the current density returns to 0.1 A g−1.
Based on the above results, the SGO/Fe3O4-4/LiCoO2 full cell delivers a higher specific capacity and better rate performance than SGO/LiCoO2. Surprisingly, the button full cell can easily provide sufficient power to light a light-emitting diode (LED) (Figure 7e), which lasts about 40 min. In our case, capacity decay for the SGO/Fe3O4-4/LiCoO2 full cell can be found upon cycling. The reasons can be summarized as follows. First, the SEI layer formed on the surface of SGO/Fe3O4 during the three cycles of electrochemical activation is unstable. Second, the large volume changes of Fe3O4 during repeated cycles destroy the unstable SEI film; the regeneration and overgrowth of the SEI film will thus consume more Li+ [41], resulting in rapid capacity decay. Third, electrode pulverization and insufficient electrolyte may be another reason [42]. This result implies that the Fe3O4-modified spherical-graphite composite is able to replace commercial graphite in LIBs.
Conclusions
In summary, the Fe 3 O 4 hollow nanosphere-modified SGO composites (SGO/Fe 3 O 4 ) have been successfully prepared by the initial oxidation treatment of graphite and subsequent solvothermal synthesis. Among all of the composites, the SGO/Fe 3 O 4 -4 for half-cell exhibits the best lithium storage performance. Its charge capacity can reach as high as 806 mAh g −1 after 200 cycles at 0.5 A g −1 , which is far higher than that of SG and SGO. When the electrode was cycled at 2 A g −1 , the composite achieves a charge capacity of 510 mAh g −1 over 1000 cycles. The superior high-rate lithium storage performance is mainly attributed to its specially designed micro-nanostructure. Besides the half-cell, the SGO/Fe 3 O 4 -4/LiCoO 2 full cell has been investigated, which exhibits higher capacity, better rate capacity, and better cycling performance than the SGO/LiCoO 2 full cell. The low-cost synthesis method and eminent electrochemical performance of SGO/Fe 3 O 4 composites demonstrate its promising application as a replacement for current graphite LIBs.
Author Contributions: F.J. conceived and designed the experiments. X.Y. performed the experiments. All authors analyzed the data. Y.Z. wrote the paper. All authors discussed the results and reviewed the paper.
Funding: This research received no external funding. | 6,479.2 | 2019-07-01T00:00:00.000 | [
"Materials Science"
] |
Quality of service adaptive modulation and coding scheme for IEEE 802.11ac
ABSTRACT
INTRODUCTION
Global internet usage reached 63.1% by July 2022 [1], which results in a deterioration of the quality of service (QoS). Thus, there is a growing need for methods to enhance QoS performance. Researchers have developed a number of methods to address these issues, including link adaptation. Most of the recent studies of link adaptation techniques focus on cellular networks, with much less attention to wireless local-area networks (WLANs). The success of IEEE 802.11 hinges heavily on its ability to offer QoS, as important applications are moving onto data networks [2] and multimedia applications continue to evolve [3]. The increase in multimedia applications running on IEEE 802.11 networks has motivated service providers to enhance QoS [4]-[7]. Therefore, considering the aforementioned challenges in QoS, link adaptation techniques have been developed to solve these issues [8], [9].
The goal of numerous studies is to provide IEEE 802.11 WLANs with QoS support capabilities [10]-[16]. The work in [17] suggested a dynamic learning approach called intelligent multi-user multiple-input multiple-output (MU-MIMO) user selection with link adaptation (IMMULA), which used software-defined networking (SDN). The IMMULA approach achieved throughput gains of 539.06%, 277.94%, and 506.00% compared to the existing MU-MIMO user selection (MUSE) [18], selectivity-aware MU-MIMO design (SAMU) [19], and SIEVE [20] systems, while the packet latency for IMMULA was reduced by 17.65%, 86.69%, and 74.12%, respectively. Chandran et al. [21] conducted a study on the link adaptation technique and the Kalman filter; the authors proposed to use the modulation and coding scheme (MCS) and the repetition rate (MRR) as controllable factors. Karmakar et al. [22] proposed a closed-loop mechanism, high throughput mobility rate (HT-MobiRate), using link adaptation in a wireless channel under a mobile environment. In a mobile environment the channel condition fluctuates, so the Thompson sampling technique is recommended to evaluate the changes in the received signal strength indicator (RSSI). In another study, Karmakar et al. [23] developed a smart link adaptation which transforms a wireless station into an intelligent device that can handle a variety of network situations; the proposed approach outperforms other techniques by a significant margin. Edalat et al. [24] suggested a smart adaptive collision avoidance (SACA) method that combined air time and network contention to adaptively decide whether to enable or disable the request-to-send/clear-to-send (RTS/CTS) handshake; the SACA technique thus consistently beats state-of-the-art methods. Another paper suggested a synchronized-mode full-duplex (SM-FD) media access control (MAC) protocol that utilized reserved fields in WLAN frames to exploit the advances of FD communication, with the aim of increasing network performance [25]; the proposed protocol outperformed the slotted Aloha MAC protocol in terms of throughput by a factor of two. Nosheen and Khan [26] introduce a novel adaptive transmit opportunity (TXOP) packet transmission technique that adjusts the TXOP time in response to the degrees of congestion and speed detected by video terminals. In comparison to the conventional technique, the suggested flow rate adaptive TXOP (FRA-TXOP) approach improves the QoS of video traffic while enhancing network throughput.
The approach proposed in this work is unique in that it focuses on improving QoS in IEEE 802.11ac WLANs, a standard that already provides high throughput. This is in contrast to most existing studies, which typically aim to improve QoS performance in other IEEE 802.11 standards. By focusing on enhancing QoS in IEEE 802.11ac, this work addresses an important and previously overlooked aspect of WLAN performance.
METHOD
Figure 1 shows the conceptual framework of the suggested link adaptation algorithm. Based on the framework, the adaptation algorithm executes at the physical layer (PHY layer), which exploits the MCS. The proposed link adaptation algorithm aims to optimize the system performance by dynamically adjusting the transmission data rate to suit the changing channel conditions.
For the adaptation case, at the beginning of the transmission process the transmitter chooses the maximum transmission data rate to transmit the packet, and the receiver measures the end-to-end delay in order to adjust the transmission data rate for the following packet transmission. Then, the receiver sends this information back to the transmitter via a feedback packet. In the suggested algorithm, the delay is the variable that represents the current traffic conditions, because the end-to-end delay consists of several components, as in (1).
The transmission data rate adaptation is a link-layer method that represents the performance of a communication link. The MCS has an impact on the transmission data rate's effectiveness. Parameters for the simulation study are tabulated in Table 1.
Figure 2 shows the flowchart of the suggested transmission data rate adaptation algorithm. It starts with transmitting packet data i. After transmitting a packet, the transmitter waits for the acknowledgement (ACK) packet. If no ACK packet is received, the transmitter initiates re-transmission of the packet. When a packet is successfully received, the delay is measured and its average value is recorded. In the proposed work, samples of packet data are taken for each channel condition. There are three categories of traffic conditions observed in this work: low, medium, and high traffic load. The numbers of samples that are evaluated are 10, 20, and 30 packets. The average value must be used because it indicates the level of a property better than a single measurement. For the proposed algorithm, queue length and delay are chosen as the adaptation parameters, because queue length and delay are the ideal indicators of traffic conditions. The threshold delay, which was set at 2 ms, determines whether to increase or reduce the transmission data rate in the suggested technique. The transmission data rate is reduced to a lower range in the event that the average delay is less than the threshold value. In the other case, where the delay is more than the threshold value, the delay, as the QoS metric, needs to be kept under control; by adapting to a higher data rate, the delay can be controlled based on (2), whose quantities are the delay in seconds, the packet length in bytes, and the transmission data rate in bits per second (bps). The distributed coordination function (DCF) is the basic MAC protocol used to maximize throughput while preventing packet loss due to collisions in IEEE 802.11 WLAN. The total time will increase if the channel is busy, since the back-off period lengthens. The accumulated time to transmit a packet increases if the channel is congested, due to an increase in the back-off and queuing delays. The back-off delay is analysed based on the average contention window time.
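A minimal sketch of the decision step in Figure 2 is given below. The 2 ms threshold and the 10/20/30-packet sample sizes come from the text; the MCS ladder follows the data rates quoted later in the results (57.8 Mbps BPSK up to 693.3 Mbps 256-QAM), but the exact ladder, the byte-to-bit conversion in the delay term, and the one-step up/down policy are simplifying assumptions rather than the simulated implementation.

```python
# Hypothetical MCS ladder (Mbps), following the rates quoted in the results section.
MCS_RATES_MBPS = [57.8, 115.6, 173.3, 231.1, 346.7, 462.2, 520.0, 577.8, 693.3]
DELAY_THRESHOLD_S = 2e-3      # 2 ms threshold from the text
SAMPLES_PER_DECISION = 10     # 10, 20, or 30 packets are evaluated in the paper

def transmission_delay(packet_len_bytes, rate_mbps):
    """Transmission-delay component of Eq. (2): packet length over data rate."""
    return (8 * packet_len_bytes) / (rate_mbps * 1e6)

def adapt_mcs(mcs_index, measured_delays):
    """Step the MCS up when the average end-to-end delay exceeds the threshold,
    otherwise step it down (lighter load tolerates a lower rate)."""
    avg_delay = sum(measured_delays) / len(measured_delays)
    if avg_delay > DELAY_THRESHOLD_S:
        return min(mcs_index + 1, len(MCS_RATES_MBPS) - 1)
    return max(mcs_index - 1, 0)

# Toy run: queuing delay grows with traffic load, so the selected rate ratchets up.
mcs = 0
for load_factor in (1.0, 2.0, 4.0, 8.0):
    delays = [load_factor * 1e-3 + transmission_delay(1500, MCS_RATES_MBPS[mcs])
              for _ in range(SAMPLES_PER_DECISION)]
    mcs = adapt_mcs(mcs, delays)
    print(f"load x{load_factor:.0f}: selected {MCS_RATES_MBPS[mcs]} Mbps")
```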
RESULTS AND DISCUSSION
This section analyses the performance of the proposed adaptive transmission data rate in IEEE 802.11ac WLAN in terms of the QoS metrics, namely throughput and delay. The simulations were performed using the simulation parameters listed in Table 1 and executed using OMNeT++ computer-aided design (CAD). Because the proposed algorithm needs to achieve both energy efficiency and QoS performance, the algorithm adaptively adjusts the transmission data rate according to the current contention level.
End-to-end delay
The performance analyses were carried out to examine the QoS performance as a function of the traffic load. This approach differs from other methods in the literature that may use fixed data rates or predetermined scheduling schemes. By adaptively adjusting the data rate according to traffic conditions, the proposed algorithm can optimize network performance, efficiently utilizing the available bandwidth while avoiding congestion and delays. The three different traffic conditions mentioned in the proposed algorithm, namely high, medium, and low traffic load, are used to categorize the network traffic based on the amount of data being transmitted at a given time. This allows the algorithm to dynamically adjust the data rate to meet the varying demands of the network. Simulation results illustrate how the transmission data rate adapts to the different traffic conditions, with the main goal of improving the throughput. Table 1 shows that the inter-arrival time has been set to 0.1 s, which indicates that 10 packets are sent from the transmitter every 0.1 s. In order to indicate the network's traffic load for each condition, the number of packets was divided into three parts: 1 to 320 packets are low traffic load, 330 to 650 packets are medium traffic load, and 660 to 1,000 packets are high traffic load. The first part of this section analyses the end-to-end delay for the default condition and the link adaptation approach under an ideal channel condition. An ideal channel is defined as a communication channel without any obstacles or fading. In an ideal channel, it is assumed that packet loss happens only due to collisions. In the case where the channel is congested, the total time, which is the end-to-end delay, will increase because the back-off period increases. Theoretically, the relationship between the throughput and the end-to-end delay is inversely proportional, which holds for the default condition and is known as the trade-off required to serve a good QoS in WLAN: the higher the end-to-end delay, the lower the throughput, and vice versa. However, the link adaptation approach may exhibit a different relationship between throughput and end-to-end delay compared to the default condition. Specifically, as the data rate is increased in response to higher traffic loads, the throughput may increase while the end-to-end delay also increases. This is because the increased data rate allows more data to be transmitted, but the time it takes for packets to travel from the source to the destination may also increase due to the increased congestion. Figure 3 shows the recorded average end-to-end delay for both the default condition and the link adaptation approach over the number of packets, illustrating the trade-off between throughput and end-to-end delay in both scenarios. From Figure 3, it is apparent that the average end-to-end delay increases as the number of packets increases for both conditions. This indicates that the traffic channel conditions have a significant impact on the end-to-end packet delay. In order to transmit packets, the transmitter must wait until all packets are collected before the MAC frame can be transmitted. Additionally, queued packets must also wait until previous packets have been successfully transmitted, which can contribute to a high queuing delay. As the traffic load increases, more packets are being transmitted, which can lead to congestion and longer waiting times for transmission. This can result in higher back-off and queuing delays, leading to an
overall increase in end-to-end delay. The average end-to-end delay for the default condition is higher than that for the link adaptation approach because no adaptation is implemented. It can be concluded that the total end-to-end delay is inversely proportional to the transmission data rate: as the transmission data rate increases, the end-to-end delay becomes much lower. For the link adaptation approach, as the network becomes congested, the transmission data rate is adapted to a higher level with the main goal of improving the throughput while controlling the delay. Meanwhile, for the default condition, the transmission data rate remained at the same level for all traffic conditions. In high traffic conditions, the average end-to-end delay increases due to the excessive number of packets in the network that need to be transmitted simultaneously, which leads to packet collisions and retransmissions. Therefore, the probability of packet collision is higher compared to the other traffic conditions. The average end-to-end delay for the default condition is 2.56% higher than for the link adaptation approach.
Default condition
Figures 4 and 5 show the transmission data rate and the corresponding throughput for the default condition over the number of packets. In the default condition, where no adaptation was applied and the transmission data rate remained fixed, the produced throughput decreased as the number of packets increased. As more packets are transmitted, the network becomes congested, leading to increased packet loss, delays, and retransmissions; these factors lower the overall throughput. Since no adaptation was executed, the transmission data rate remained at 360 Mbps. However, the throughput produced at this transmission data rate decreased as the number of packets increased, as shown in Figure 5. In low traffic conditions, the throughput was consistently 18 Mbps. At 281 packets, however, the throughput slowly decreased as the traffic in the network channel began to enter the medium channel state. In the medium channel condition, the throughput remained at 12 Mbps before dropping at 620 packets while moving to the congested state. In the congested, high-traffic load, the throughput remained at 6 Mbps. At 900 packets, however, the network was very congested, which resulted in a throughput drop, and at 1,000 packets the throughput was 0 Mbps, since collisions in the network channel caused no packets to be received at the receiver side. In sum, the throughput decreased because the traffic in the network channel became busy, which increased packet retransmissions as well as packet losses and thus indirectly reduced the throughput.
Link adaptation approach
In Figure 6, the link adaptation approach is depicted, illustrating the adaptive changes in the transmission data rate based on traffic conditions. As a consequence of this adaptive behavior, the throughput, which refers to the amount of data transferred per unit of time, also varies with the corresponding transmission data rate; the corresponding throughput for the link adaptation approach against the number of packets is shown in Figure 7. Initially, in the default condition, the maximum data rate is selected by the transmitter for packet transmission. In contrast, for the link adaptation approach, the initial transmission data rate is set lower, at 57.8 Mbps with binary phase shift keying (BPSK), and is then adjusted based on the traffic load to optimize the network's performance. As the number of packets increases, the average delay exceeds the threshold value of 2 ms, which may negatively impact the network's QoS performance. In response, the transmission data rate is increased to the next step of the MCS to provide a better QoS performance to the user. This adaptive approach allows the network to maintain a good balance between delay and throughput while also ensuring that the QoS requirements are met. Figure 6 shows how the transmission data rate is increased step by step along the IEEE 802.11ac WLAN MCS data rates as the traffic load increases; as a result, the throughput for the link adaptation approach also increases, as shown in the same figure. This indicates that the link adaptation approach can effectively optimize the network's performance by adjusting the data rate to match the changing traffic conditions. At the beginning of the simulation, the transmission data rate was set to 57.8 Mbps while the recorded throughput was 4 Mbps. Even when the lowest transmission data rate is used for the link adaptation, packet data are successfully received at the receiver. In low traffic load, the transmission data rate increased from 57.8 Mbps (BPSK) to 173.3 Mbps (quadrature PSK, QPSK), with recorded throughput between 4 and 14 Mbps. In medium traffic load, the transmission data rate increased from 231.1 Mbps (16-QAM) to 462.2 Mbps (64-QAM), and the throughput is between 17 and 24 Mbps. In high traffic load, the transmission data rate increased from 520.0 Mbps (64-QAM) to 577.8 Mbps (64-QAM). However, at 900 packets, the maximum transmission data rate increased to 693.3 Mbps (256-QAM), with the maximum throughput recorded at 38 Mbps, in order to ensure the user experiences excellent QoS performance. It can be seen from these figures that the transmission data rate increases to the next step of the MCS based on the need to adapt to the traffic loads as well as the throughput.
Simulation results presented in the figures indicate that as the transmission data rate increases, the throughput also increases. Instead of staying at the same transmission data rate as in the default condition, the link adaptation approach adapts the transmission data rate to the channel condition, represented here by the traffic load. The advantage of this approach is that it improves the throughput. For an ideal channel condition, no fading occurs, so lowering the transmission data rate to combat a varying channel is not an issue; transmitting at a higher data rate is therefore advantageous in terms of throughput. The link adaptation approach achieved a higher peak throughput of 38 Mbps, compared with 18 Mbps for the default condition. It can be concluded that implementing the link adaptation technique in IEEE 802.11ac WLAN improves the throughput.
CONCLUSION
This paper presented a link adaptation technique to improve QoS in IEEE 802.11ac WLAN in terms of throughput and delay. Rapid growth of internet traffic causes congestion, which contributes to low throughput and high delays. Throughput and delay are important performance metrics for ensuring usable WLAN access. In this paper, a link adaptation technique is proposed to help improve QoS in WLAN. The proposed technique adapts the transmission data rate in the WLAN according to the traffic condition. For instance, when the traffic condition is congested, the proposed technique increases the transmission data rate to the next stage to boost QoS performance, specifically the throughput. Simulation results show that the proposed link adaptation technique achieved a 36.48% throughput improvement compared with the default condition. The effectiveness of the link adaptation approach in improving QoS in WLAN was compared with the default condition, and the results show that link adaptation is a technique able to improve QoS performance in WLAN, especially in terms of throughput and delay.
Figure 1. Conceptual framework of the data rate adaptation
Figure 3. Average end-to-end delay against number of traffic
Figure 4. Transmission data rate for default condition against number of packets
Figure 5. Corresponding throughput for default condition against number of packets
| 4,118.8 | 2023-12-01T00:00:00.000 | [
"Computer Science",
"Engineering"
] |
Hadronic interaction model dependence in cosmic Gamma-ray flux estimation using an extensive air shower array with a muon detector
Observation techniques of high-energy gamma rays using air showers have remarkably progressed via the Tibet ASγ, HAWC, and LHAASO experiments. These observations have significantly contributed to gamma-ray astronomy in the northern sky’s sub-PeV region. Moreover, in the southern sky, the ALPACA experiment is underway at 4,740 m altitude on the Chacaltaya plateau in Bolivia. This experiment estimates the gamma-ray flux from the difference between the number of on-source and off-source events by real data, utilizing the gamma-ray detection efficiency calculated through Monte Carlo simulations, which in turn depends on the hadronic interaction models. Even though the number of cosmic-ray background events can be experimentally estimated, this model dependence affects the estimation of gamma-ray detection efficiency. However, previous reports have assumed that the model dependence is negligible and have not included it in the error of gamma-ray flux estimation. Using ALPAQUITA, the prototype experiment of ALPACA, we quantitatively evaluated the model dependence on hadronic interaction models for the first time. We evaluate the model dependence on hadronic interactions as less than 3.6 % in the typical gamma-ray flux estimation performed by ALPAQUITA; this is negligible compared with other uncertainties such as energy scale uncertainty in the energy range from 6 to 300 TeV, which is dominated by the Monte Carlo statistics. This upper limit of 3.6 % model dependence is expected to apply to ALPACA.
gamma rays. Hadronic particles, such as protons, nuclei, and pions, are largely produced by collisions with the atmospheric nuclei when the incident particle is a hadronic cosmic ray. Collisions are repeated in the atmosphere, causing a hadronic cascade shower that amplifies the number of particles. Knowledge of hadronic particle interactions in the atmosphere is imperative in the Monte Carlo simulation process. The Monte Carlo simulation is based on various phenomenological models. Currently, the differences between these models are not negligible, and the number of muons contained in a hadronic air shower differs depending on the phenomenological model. Thus, differences in hadronic interaction models can cause systematic uncertainties in the detection efficiency of the gamma-ray induced air showers using the number of muons as a selection criterion. In order to avoid this dependence, Tibet ASγ experimentally estimates the background hadronic cosmic rays using air showers from directions away from the gamma-ray point source to measure flux [4].
Conversely, when the primary particle is a gamma ray or an electron, the air shower component is a mixture of gamma rays, electrons, and positrons due to the repetition of electron-positron pair creation and their bremsstrahlung, causing an electromagnetic cascade shower. However, even in this case, a small number of muons are produced through photonuclear interactions and muon-pair creation, among others. Pions are produced in gamma-ray-induced air showers via photonuclear reactions, which then interact in the atmosphere, undergoing hadronic interactions. The charged pions decay into muons. Additionally, muons are also produced in muon-pair production during electromagnetic interactions. The cross-section for photonuclear reactions is roughly two orders of magnitude larger than that of muon pair production in the energy range of GeV to TeV. This means that the muon component produced by photonuclear reactions is significantly more dominant than that from the muon pair production process in gamma-ray-induced air showers. Therefore, the muon component in gamma-ray-induced air showers could depend on hadronic interaction models. However, the hadronic interaction model dependence of gamma-ray-induced air showers is believed to be small and is usually ignored; consequently, the model dependence has yet to be quantified.
Herein, we quantitatively evaluate the model dependence of gamma-ray flux estimation for the first time. Section 2 clarifies the characteristics of air shower muons generated by a Monte Carlo simulation with some interaction models and their model differences using a vertical gamma-ray-induced air shower at the ALPACA altitude. Section 3 evaluates the systematic differences in the detection efficiency of gamma-ray-induced air showers caused by the γ-CR separation in the energy range from a few TeV to several hundred TeV with a small air shower array (ALPAQUITA [18]), which is the prototype of ALPACA [16,17].
Characteristics of air shower muons at high altitudes
ALPACA and Tibet ASγ measure the muon component above ∼1 GeV using an underground muon detector for γ -CR separation [4,18]. Therefore, we investigated the characteristics of the muons above 1 GeV in gamma-ray-induced air showers depending on hadronic interaction models.
Air shower simulation and simulation setting
We simulated air showers induced by vertical incident gamma rays with energies between 10 TeV and 100 TeV, using the CORSIKA 7.6000 code [26]. It employs the EGS4 code [27] to simulate electromagnetic interactions. The calculation of the photonuclear reactions was added to the EGS4 code [26]. The muon-pair production is also incorporated in the EGS4 code using the analogy of electron-positron pair production [26,28]. There is no definitive model for calculating the behavior of hadronic interactions. Thus, even in gamma-ray-induced air showers, the muon features are model-dependent.
A summary of the simulation setting is presented in Table 1. The secondary gamma rays and electrons are tracked down to 1 MeV while the secondary hadrons and muons are tracked to 1 GeV, the minimum kinetic energy above which they can reach the water surface of MD [18]. In addition, the particle identification, position, energy, timing, and directional vector of each secondary particle were recorded at an altitude of 4,740 m.
Model dependence of the energy spectrum
The calculated differential energy spectrum of muons >1 GeV in gamma-ray-induced air showers for each model combination is shown in Fig. 1a for 10 TeV and Fig. 1b for 100 TeV, respectively. For E γ = 10 TeV, the energy spectrum of muons peaks at about a few GeV. Approximately 85 % of the total number of muons are below 10 GeV. The energy spectrum shows a decrease above 100 GeV with a power-law index of −2.62, containing 99 % of the total number of muons below 100 GeV. The trend is similar for E γ = 100 TeV.
To evaluate the differences in the spectra caused by hadronic interaction models, we used the relative change R(F_i) = (F_i − F_1)/F_1 in the differential spectrum with respect to the QGSJET-II + FLUKA model, where F_i is an arbitrary physical quantity and i = 1, 2, 3, and 4 represent QGSJET-II + FLUKA, QGSJET-II + UrQMD, EPOS LHC + FLUKA and SIBYLL + FLUKA, respectively. For 10 TeV, the value of R for muons below a few GeV is 3.6 × 10 −2 or less. The model differences tend to be larger as the muon energy increases, and the EPOS LHC + FLUKA model has the most significant difference of 1.3 × 10 −1 at 100 GeV. We can test the two low-energy models by comparing the results of QGSJET-II + FLUKA and QGSJET-II + UrQMD for E γ = 10 TeV. The difference between the two sets of results is less than ∼ 2.6 × 10 −2 below 100 GeV.
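A minimal sketch of this comparison, assuming the relative change is computed bin by bin against the QGSJET-II + FLUKA spectrum as reconstructed above; the function and array names are illustrative only.

```python
import numpy as np


def relative_change(spectrum_model_i, spectrum_qgsjet_fluka):
    """Bin-by-bin relative change R(F_i) of a muon spectrum with respect to
    the QGSJET-II + FLUKA reference, assuming R = (F_i - F_1) / F_1."""
    f_i = np.asarray(spectrum_model_i, dtype=float)
    f_1 = np.asarray(spectrum_qgsjet_fluka, dtype=float)
    return (f_i - f_1) / f_1


# Example with two arbitrary bins: a +5 % and a -2 % model difference.
print(relative_change([1.05, 0.98], [1.0, 1.0]))   # [ 0.05 -0.02]
```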
In addition, the trend of R for E γ = 100 TeV is almost identical to that of E γ = 10 TeV for QGSJET-II + FLUKA, QGSJET-II + UrQMD, EPOS LHC + FLUKA and SIBYLL + FLUKA, and the differences below 10 GeV are typically a few percent. The integral energy spectra show the effects of these low-energy muon differences more clearly. The results of the energy spectrum of muons above 1 GeV in gamma-ray-induced air showers of 10 and 100 TeV are shown in Fig. 2. The majority of the total number of muons (over 85 %) are below 10 GeV, as shown by the bottom panels in Fig. 2. The difference between the models in the several-GeV region is small (Fig. 2). The difference in the integrated spectrum above 1 GeV is less than 2.2 % for the 10 TeV gamma-ray shower and less than 3.1 % for 100 TeV.
Model dependence of the lateral spread of muons
The ALPAQUITA underground muon detector has an area of 900 m 2 [18], which measures a portion of the muons in a widely spread air shower. Tibet ASγ and ALPACA have significantly larger areas than ALPAQUITA. For γ -CR separation, however, all three experiments use the muon density within a few hundred meters of the air shower core. Figure 3 shows the total number of muons above 1 GeV within a certain radius from the shower core, Fig. 3a for E γ = 10 TeV and Fig. 3b for E γ = 100 TeV. The number of muons in a 100 m radius circle centered on the air shower core is ∼ 9.6 × 10 −1 for 10 TeV gamma rays and 15 muons for 100 TeV gamma rays, depending on gamma-ray energy. Independent of energy, the distribution shows the same trend in all models.
The Rs relative to the QGSJET-II + FLUKA model, calculated using the same form as (1), are shown in Fig. 3. The number of muons produced by QGSJET-II + UrQMD is smaller than that of QGSJET-II + FLUKA. However, the difference is small, about 2.0 × 10 −2 over the whole distance range. As the distance from the core approaches zero, the numbers of muons produced by SIBYLL + FLUKA and EPOS LHC + FLUKA become smaller than that by QGSJET-II + FLUKA. Furthermore, the maximum Rs is 8.9 × 10 −2 for both energies. However, when comparing the total numbers of muons within 100 m (∼Molière unit), the Rs at 100 m are small, ±7.7 × 10 −3 for 10 TeV gamma rays and ±1.8 × 10 −2 for 100 TeV gamma rays. The differences in the number of muons due to the hadronic interaction model options are only a few percent if we locate a large muon detector array within a circular area of 300 m in radius.
Model dependence of the total number of muons for an air shower
ALPAQUITA uses only the total number of measured muons, but not the lateral distribution for the γ -CR separation. We investigated the total number of muons over 1 GeV per air shower to clarify the model dependence.
The upper figures in Fig. 4a and b show the total number of muons over 1 GeV per air shower for 10 and 100 TeV gamma rays, respectively. The leftmost bin in Fig. 4a for E γ = 10 TeV corresponds to less than 10 muons, and the number of events in the bin constitutes the large majority (∼90 %). For E γ = 100 TeV, the leftmost bin corresponds to less than 100 muons and contains ∼90 %.
As the typical gamma-ray efficiency in ALPAQUITA is assumed to be 50 % to 90 %, the hadronic model dependence of the leftmost bin in Fig. 4 is significant in this work. The lower figures in Fig. 4 show the Rs relative to the QGSJET-II + FLUKA model. For E γ = 10 TeV, the Rs in the leftmost bin (<10 muons) are less than 4.6 × 10 −3 , and for E γ = 100 TeV, the Rs in the leftmost bin (< 100 muons) are less than 3.1 × 10 −3 . The differences are also small. To estimate the gamma-ray flux, ALPAQUITA counts the number of excess (on-source minus off-source) events by real data. To convert the excess counts to gamma-ray flux, we need the gamma-ray detection efficiency calculated by a Monte Carlo simulation, where we expect that the gamma-ray detection efficiency will have some model dependence on hadronic interaction models. The next section will discuss the model dependence with a Monte Carlo simulation, assuming the ALPAQUITA configuration.
Cosmic gamma-ray flux estimation with ALPAQUITA
A conversion factor from the measured lateral distribution of air shower particles into incident particle energy, as well as the detection efficiency obtained by a Monte Carlo simulation of air showers with proper detector response, are required to estimate the gamma-ray flux from experimental data. For example, the differential flux in equation (2) is built from the following quantities: E_γ is the energy of a gamma ray, N_on is the number of events in an on-source window, N_off is the number of events in an off-source window, T_obs is the observation period, and S_Eff is the effective area of the air shower array. R^surv_i(E_γ) is the survival ratio of gamma rays after the γ-CR separation using the number of measured muons, calculated as R^surv_i(E_γ) = N^{rec,γ-like}_sim(E_γ) / N^{rec}_sim(E_γ), where N^{rec}_sim(E_γ) is the number of reconstructed gamma-ray events and N^{rec,γ-like}_sim(E_γ) denotes the number of events deemed gamma-like among N^{rec}_sim(E_γ). The index i = 1, 2, 3, and 4 represents QGSJET-II + FLUKA, QGSJET-II + UrQMD, EPOS LHC + FLUKA and SIBYLL + FLUKA, respectively. From (2) and (3), three factors, S_Eff, R^surv_i(E_γ), and E_γ, have potential hadronic interaction model dependence. As a gamma-ray-induced air shower is almost entirely composed of electromagnetic particles, the hadronic interaction model dependence of S_Eff and E_γ is regarded as negligible compared to that in R^surv_i(E_γ). Therefore, we discuss only the hadronic interaction model dependence in R^surv_i(E_γ).
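The sketch below shows how the quantities just listed combine into a flux estimate. Since Eq. (2) is not reproduced in the text, the exact normalisation (in particular the division by the energy-bin width) is our assumption; the survival ratio follows Eq. (3) directly. Names and units are illustrative.

```python
def survival_ratio(n_rec_gamma_like, n_rec):
    """Eq. (3): fraction of reconstructed simulated gamma rays classified as
    gamma-like after the muon-based gamma/CR separation."""
    return n_rec_gamma_like / n_rec


def differential_flux(n_on, n_off, t_obs_s, s_eff_m2, r_surv, delta_e_tev):
    """Sketch of the gamma-ray flux estimate (assumed normalisation):
    excess counts divided by observation time, effective area, the gamma-ray
    survival ratio after the muon cut, and the energy-bin width."""
    excess = n_on - n_off              # background taken from the off-source window
    return excess / (t_obs_s * s_eff_m2 * r_surv * delta_e_tev)
```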
Air shower simulation and simulation setting
Assuming a gamma-ray source with a power-law energy spectrum with a spectral index of -2 in the direction of RX J1713.7-394, we generated 10^8 gamma-ray air showers in the energy range from 300 GeV to 10 PeV with CORSIKA 7.6400. Furthermore, we generated cosmic rays assuming the same source orbit and considered their isotropic characteristics by correcting the number of events using the weighting method described in a separate study [18]. As a chemical composition model of cosmic rays, we adopted the Shibata model [19], which was obtained by assuming a rigidity-dependent acceleration limit at the CR source. This model reproduces the results of direct observation experiments up to a few TeV and gradually becomes dominated by heavier nuclei from the region of tens of TeV. We produced 10^9 cosmic-ray events ranging from 300 GeV to 10 PeV. Table 2 summarizes the setup conditions. Regarding the hadronic interaction models in air shower generation, the four models explained in Section 2 are employed in this section. Each incident particle is randomly injected within a 300 m radius from the center of the air shower array. The particle information is traced to 1 MeV for secondary electrons and gamma rays, 50 MeV for muons, and 1 GeV for hadrons and is recorded at an altitude of 4,740 m.
Detector simulation of ALPAQUITA
Using a Monte Carlo simulation of ALPAQUITA [18] implemented in GEANT 4.10.02, we investigated the effect of the four hadronic interaction models described in Section 2 on gamma-ray efficiency. ALPAQUITA is a prototype detector for the new experiment ALPACA [16,17] and is currently under construction on the Chacaltaya plateau (4,740 m a.s.l., 16.23° S, 68.08° W) in Bolivia. At 15 m intervals, 97 plastic scintillation detectors with an area of 1 m² each are deployed, constituting an effective array area of 18,450 m². A water Cherenkov-type muon detector (MD) is installed 2 m below the air shower array. The MD comprises 16 water tanks (total area of 900 m²), and each tank has an air layer of 0.9 m, a water depth of 1.5 m, and an area of about 56 m². The concrete ceiling is 20 cm thick and the concrete walls are 30 cm thick. The Cherenkov light photons are diffusely reflected by the walls and floor, which have an 80 % reflectance, and are detected by a downward-facing 20-inch-diameter PMT installed on the water tank's ceiling. The details of ALPAQUITA are described in a separate study [18].
General performance of the air shower array
The secondary particles generated with CORSIKA are input to the GEANT4 simulation described in Section 3.2, and the behavior of each detector is simulated. The air shower event analysis method used in this study is the same as that described in [18]. We briefly summarize the general performance of ALPAQUITA below. The air shower array measures electrons, gamma rays, and charged particles in an air shower; their total energy loss is converted to the particle number density ρ (m⁻²). The primary energy is calculated as the sum of particle number densities from all detectors, ρ. Based on the relative hit timing of each detector, we reconstructed the arrival direction of the air shower using an air shower front surface approximated with a cone [4]. The selection conditions described in [18] are required to analyze the air shower array data.
Separation method
The MD detects Cherenkov light photons reflected by the tank walls and floor with one PMT mounted downward on the ceiling [18]. The number of photoelectrons peaks at ∼24 when a single muon passes through the water tank. We define this value as one particle, calculate the number of muons N_μ, and use it to select gamma-ray-induced air showers. Figure 5a shows a ρ vs. N_μ scatter plot using the QGSJET-II + FLUKA model, with cosmic rays distributed in the upper region and gamma rays in the lower region. Figures 5b, c, and d show the same trend as the QGSJET-II + FLUKA model for the QGSJET-II + UrQMD, EPOS LHC + FLUKA, and SIBYLL + FLUKA models, respectively. Figure 6 shows the number-of-muons distributions for 56.2 < ρ < 100, corresponding to a representative energy of ∼28.8 TeV. As shown in Fig. 6, the N_μ of a gamma-ray-induced air shower is significantly smaller than that of a cosmic-ray-induced air shower with the same ρ. Events with N_μ < 0.1 are artificially piled up at N_μ = 0.01, where the majority of gamma rays are contained. The three survival lines in each panel are determined by the method used to discriminate between gamma-ray and cosmic-ray events, described in [18], and are the same as those determined for the QGSJET-II + FLUKA model. For the QGSJET-II + FLUKA model, the number of gamma-ray events in this bin was 21.8, corresponding to 62.4 % of the total number, whereas the number for the other three models ranged from 21.3 to 21.9. Using a threshold value of N_μ (N_μ,cut), we define events below the threshold as gamma-like and events above it as cosmic-ray-like. The portion of total gamma-ray events that remains below the threshold (the survival ratio) is 90 % when N_μ,cut is set to 2.57 in Fig. 6a. For each ρ bin, we determine the N_μ,cut values satisfying survival ratios of 50 % and 90 %.
Optimal separation
When N_μ,cut is high, we can maintain a high detection efficiency for gamma rays, but rejecting cosmic-ray events is challenging. Conversely, when N_μ,cut is low, the rejection power for cosmic-ray events is high, but the survival ratio of gamma-ray events is small. Therefore, there is an optimal N_μ,cut value. We used the quality factor Q = N_γ / √(N_B + N_γ) [18] to determine the optimal threshold value, where N_γ is the number of gamma-ray events and N_B is the number of cosmic-ray events. The optimal value of N_μ,cut for the QGSJET-II + FLUKA model in Fig. 6a is 0.63 at 28.8 TeV. For each ρ bin, we determine the optimal N_μ,cut, obtain the relation between ρ and N_μ,cut, and fit this relationship. In the case of QGSJET-II + FLUKA, the fit gives the optimal survival line shown in Fig. 5, where b becomes 1.6 with a fixed ρ_0 = 31.2 [18]. Using the optimal survival line shown in Fig. 6, we obtained a survival ratio of ∼0.7 for all models at the representative energy of ∼28.8 TeV.
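A small sketch of this threshold scan, assuming the quality factor takes its usual significance-like form Q = N_γ/√(N_B + N_γ) as reconstructed above; the array names and the simple linear scan are illustrative, not the paper's actual analysis code.

```python
import numpy as np


def optimal_n_mu_cut(n_mu_gamma, n_mu_cr, candidate_cuts):
    """Scan candidate N_mu,cut values and return the one maximising
    Q = N_gamma / sqrt(N_B + N_gamma).

    n_mu_gamma, n_mu_cr: per-event muon counts for simulated gamma-ray and
    cosmic-ray showers in one particle-density bin.
    """
    best_cut, best_q = None, -np.inf
    for cut in candidate_cuts:
        n_g = np.sum(np.asarray(n_mu_gamma) < cut)   # gamma-like events kept below the cut
        n_b = np.sum(np.asarray(n_mu_cr) < cut)      # cosmic-ray events surviving the cut
        if n_g + n_b == 0:
            continue
        q = n_g / np.sqrt(n_b + n_g)
        if q > best_q:
            best_cut, best_q = cut, q
    return best_cut, best_q
```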
We employ a survival line similar to the optimal survival line in the realistic case. Therefore, we estimate the final hadronic interaction model dependence assuming the optimal survival line. The final hadronic interaction model dependence conservatively includes the differences among the models in the high-energy region [the average fitting result R(R^surv_{i=4}) = 5.5 × 10^-3 ± 1.2 × 10^-2 (MC stat.) and the bin-by-bin maximum difference (R(R^surv_{i=4}))_max = 2.2 × 10^-2 ± 1.1 × 10^-2 (MC stat.)] and the differences among the models in the low-energy region [the average fitting result R(R^surv_{i=2}) = 2.0 × 10^-3 ± 9.5 × 10^-3 (MC stat.) and the bin-by-bin maximum difference (R(R^surv_{i=2}))_max = 1.8 × 10^-2 ± 1.0 × 10^-2 (MC stat.)]. We then estimate the final hadronic interaction model dependence by adding the differences above in quadrature, obtaining a final hadronic interaction model dependence of less than 3.6 × 10^-2, which is dominated by Monte Carlo statistics.
Conclusion
Using an extensive air shower array and an underground MD to measure high-energy gamma rays, we studied the performance of a hybrid experiment. The experiment separates gamma-ray-induced muon-poor air showers from cosmic-ray-induced muon-rich air showers using an underground muon detector. To investigate the performance of this type of experiment, such as ALPAQUITA, a Monte Carlo simulation of air showers based on hadronic interaction models is required, which might have some model dependence. Air shower experiments such as ALPAQUITA estimate the gamma-ray flux from the difference between on-source and off-source events by real data, using the gamma-ray detection efficiency calculated by a Monte Carlo simulation, which depends on the hadronic interaction models, whereas the off-source, background cosmic-ray events can be estimated experimentally. In particular, the models affect the number of muons in the gamma-ray-induced air showers.
To evaluate the differences in the characteristics of air shower muons at 4,740 m above sea level, we simulate the gamma-ray-induced air showers with the four models-QGSJET-II + FLUKA, QGSJET-II + UrQMD, EPOS LHC + FLUKA, and SIBYLL + FLUKA. Thus, we evaluate the hadronic interaction model dependence in the gamma-ray flux measurements between 6 and 300 TeV.
First, before including detector simulation, we studied the hadronic interaction model dependence of gamma-ray-induced air showers at the CORSIKA level. In the 10 and 100 TeV gamma-ray-induced air showers, over 85 % of muons have energies below 10 GeV, and model differences in the integral muon spectrum in the several-GeV region are small, less than a few percent. For the lateral spread of muons, the differences were also small. They are ± 0.77 % for 10 TeV gamma-ray air showers and ± 1.8 % for 100 TeV gamma-ray air showers when we compare the total number of muons above 1 GeV within 100 m (∼Molière unit) in radius from the air shower core in the lateral distribution. Approximately 90 % of 10 TeV gamma-ray-induced air showers contain less than 10 muons, whereas about 90 % of 100 TeV gamma-ray-induced air showers contain less than 100 muons. For both 10 TeV air showers with less than 10 muons and 100 TeV air showers with less than 100 muons, model differences in the total number of gamma-ray-induced air showers are less than 1 %.
Using the ALPAQUITA simulated detector, we evaluated the impact of these small differences in the characteristics of air shower muons on the gamma-ray flux estimation. For the optimal survival line, the survival ratio of gamma-ray events varied from 0.7 to 0.9 for ρ in the energy range from 6 to 300 TeV. Thus, the survival ratio's model dependence is less than 3.6 % in the energy range from 6 to 300 TeV. The survival ratio is included in the flux calculation in (2) in the form of (R^surv_i)^-1; thus, the model dependence in the flux estimation is of the same magnitude, which is dominated by Monte Carlo statistics. The contribution of the hadronic interaction model dependence to the gamma-ray flux estimation is negligible compared to other systematic errors, such as the energy scale uncertainty (typically ∼10 %) corresponding to the 20 %-30 % gamma-ray flux uncertainty. Furthermore, as Monte Carlo statistics account for most of the 3.6 % uncertainty due to hadronic interaction model dependence, the true model dependence is projected to be significantly less than 3.6 %. | 5,699.4 | 2023-01-17T00:00:00.000 | [
"Physics"
] |
Technical Efficiency and Productivity of Tobacco Control Policies in 16 Selected OECD Countries: A Comparative Study Using Data Envelopment Analysis, 2008-2014
Background: To date, there is no synthesized evidence about the technical efficiency (TE) of cross-country tobacco control policies. This study aims to measure the efficiency and productivity of tobacco control policies across 16 selected countries of the Organization for Economic Co-operation and Development (OECD) from 2008 to 2014. Method: We used data envelopment analysis (DEA). MPOWER is an acronym for a WHO-proposed package consisting of six tobacco reduction interventions that can be adapted to present a commitment of the parties to a treaty labeled FCTC (Framework Convention on Tobacco Control). Taxation on tobacco products and pictorial warning labels were chosen as the inputs. The percentage of daily smokers in the population above 15 years old and the number of cigarettes used per smoker per day were the output variables. Additionally, the Malmquist total factor productivity (TFP) index was used to analyze the panel data and measure productivity change and technical efficiency changes over time. Results: The highest TE score (1.05) was attributed to Norway and the lowest (0.9175) belonged to the United Kingdom (UK). Technological change with a total average of 1.069 would imply that technology and creativity have increased, while countries have been able to promote their creativity over the time period. Norway with a TFP score of 1.15 was the most productive country, while the UK and Turkey with TFP scores of 0.95 and 0.98, respectively, were the least productive countries in the implementation of the MPOWER policies. Conclusion: Most OECD countries have productively implemented MPOWER policies. Such productive performances are the results of the strong pivotal pictorial warnings. Consequently, the policy of plain packaging seems to hamper the MPOWER policies. Taxation on tobacco products was relatively weak and inefficient, indicating the need to strengthen the existing policies in this regard. MPOWER interventions were not solely behind the dissatisfying productivity results revealed in this study. To achieve the optimum outcome of the FCTC MPOWER policies and overcome the challenges of smoking use, countries need to tackle the difficult underlying factors, i.e. tobacco industry opposition and lobbyists, smuggling, and low socioeconomic status, which may hinder the meaningful implementation of such policies and undermine sustainable development goals eventually.
Background
With a broad spectrum of adverse health effects, i.e. obstructive pulmonary disease and ischemic heart disease, tobacco use is a serious threat to global public health (1) and a heavy economic and health burden on societies. As an estimate, five million adults' deaths globally were directly attributed to smoking in 2012, which is predicted to increase to eight million by 2030 (2,3). The total smoking-attributable health expenditures were 467 billion USD (United States Dollar) purchasing power parity (PPP) in 2012 (2). Smoking causes death not solely among smokers. Secondhand smoke exposure also places a heavy burden on society. For instance, in the US, cigarette smoking and secondhand smoke exposure caused 443,000 premature deaths during 2000-2004 (4). As a main risk factor for non-communicable diseases (NCDs) (5), smoking may ultimately have a significant impact on human development (6). As such, production and use of tobacco products are a serious danger to sustainable development through waste of resources and compromising intergenerational equity (7).
Despite noticeable smoking rate reductions among OECD countries, smoking still remains the widest preventable risk factor for health (8). In 1996, the World Health Organization (WHO) voted to execute the WHO Framework Convention on Tobacco Control (WHO FCTC), and it was adopted in 2003 and finally came into force in 2005 (9-11). In 2008, WHO introduced a package of preventive and control policies, the so-called MPOWER, aimed at curbing tobacco use. MPOWER has six components: monitoring tobacco use and prevention policies; protecting people from tobacco smoke; offering help to quit tobacco use; warning about the dangers of tobacco; enforcing bans on tobacco advertising, promotion, and sponsorship; and finally raising tobacco taxes (12). Substantial tobacco taxes and pictorial warnings are considered the most cost-effective interventions to reduce tobacco consumption (12,13).
High-income countries (HICs), i.e. OECD countries, usually impose higher taxes on tobacco products such as cigarettes, compared with low- and middle-income countries (LMICs) (14). Partially due to tax increases, cigarette consumption has become less affordable in HICs over time (15). Pictorial warnings on tobacco products are also an effective way to promote consumer knowledge about tobacco risks (16). Many smokers from several countries reported gaining more awareness about the lethal and adverse risks of smoking from pictorial warning labels than from other sources, except for television (13,17). Furthermore, secondhand smokers, especially children, reported a high awareness of warning labels (17). A survey illustrates that smokers who noticed the warnings on cigarette packages were significantly more likely to confirm the health risks of tobacco, including lung cancer and heart disease (18).
This study aims to measure the technical efficiency and productivity of tobacco control policies across 16 selected OECD countries over the period 2008 to 2014. The cross-country comparisons can provide, we envisage, a useful and practical source of evidence for policy-makers to improve their performance in making palatable policies (19). We used the Data Envelopment Analysis (DEA) method to assess the countries' performance (20). To the best of our knowledge, this is the first application of DEA to measure the efficiency and productivity of preventive medicine policies (here, the comparative efficiency of tobacco control interventions) within a cross-country context.
DEA Analysis and Malmquist Approach
DEA is a non-parametric method to measure relative efficiency (21), which has been frequently used for measuring health system performance (22). As a data-oriented approach, DEA can examine the performance of a set of Decision-Making Units (DMUs) that transform multiple inputs into multiple outputs (23). DEA employs linear programming (LP) methods to calculate efficiency measures relative to non-parametric frontiers (20).
There are two versions of DEA: input-oriented and output-oriented. If the aim is to minimize the available inputs to provide given levels of outputs, the model is called input-oriented. On the other hand, if it is assumed that outputs are manageable and the target is to maximize outputs from given levels of inputs, then the model is called output-oriented (20). Pictorial health warnings and taxes on cigarettes have been mentioned in the past as the most effective policies to control tobacco use (24,25). Hence, in order to promote efficiency, lowering the inputs was judged to be an irrational decision. Instead, countries can concentrate on the outputs and improve them by engaging the other tobacco preventive policies (quoted in the introduction) from given levels of inputs. With regard to the definitions of the DEA orientations, an output-oriented version therefore seems appropriate for this study (20), as formulated below (21), where y_rj is the amount of output r from DMU j, x_ij the amount of input i to DMU j, u_r the weight given to output r, v_i the weight given to input i, n the number of DMUs, s the number of outputs, and m the number of inputs. The sign of u_0 reveals the returns to scale. In fact, DEA is based on two different models: variable returns to scale (VRS or BCC) or constant returns to scale (CRS or CCR). Under BCC models, returns to scale can change. If the proportions of increase in both inputs and outputs are the same, the returns to scale are constant (u_0 = 0). If outputs increase by a larger proportion than the inputs, the returns to scale are increasing (u_0 > 0). Finally, decreasing returns to scale occur when outputs increase by a smaller proportion than inputs (u_0 < 0). Under the CCR model, returns to scale are always constant and do not change.
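As an illustration of the output-oriented BCC model described above, the sketch below solves its dual (envelopment) form, which yields the same efficiency scores as the multiplier form quoted in the text; the function name and the use of scipy are our choices, not the study's actual software (the study used DEA-SOLVER).

```python
import numpy as np
from scipy.optimize import linprog


def bcc_output_score(X, Y, o):
    """Output-oriented BCC (VRS) envelopment model for DMU o.

    X: (n, m) input matrix, Y: (n, s) output matrix, one row per DMU.
    Returns phi >= 1, the maximal proportional expansion of DMU o's outputs;
    1/phi is its technical efficiency score.
    """
    n, m = X.shape
    s = Y.shape[1]
    c = np.zeros(n + 1)
    c[0] = -1.0                                      # maximise phi (linprog minimises)
    A_ub, b_ub = [], []
    for i in range(m):                               # sum_j lambda_j x_ij <= x_io
        A_ub.append(np.concatenate(([0.0], X[:, i])))
        b_ub.append(X[o, i])
    for r in range(s):                               # phi * y_ro <= sum_j lambda_j y_rj
        A_ub.append(np.concatenate(([Y[o, r]], -Y[:, r])))
        b_ub.append(0.0)
    A_eq = [np.concatenate(([0.0], np.ones(n)))]     # VRS convexity: sum_j lambda_j = 1
    b_eq = [1.0]
    bounds = [(None, None)] + [(0.0, None)] * n      # phi free, lambda_j >= 0
    res = linprog(c, A_ub=np.array(A_ub), b_ub=b_ub,
                  A_eq=np.array(A_eq), b_eq=b_eq, bounds=bounds)
    return res.x[0]
```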
Because efficiency can change over time, the DEA model is appropriate for a specific time period, not across time. This study covers 2008 until 2014, a period in which creativity and technology in applying tobacco preventive policies might have changed. Therefore, we used a DEA analysis of panel data across the selected countries during the mentioned time period. Consequently, we measured productivity by using the DEA-based Malmquist index framework (26) and considered a two-input, two-output model. The Malmquist productivity index (MPI) is constructed as the geometric mean of two different parts. The first expresses the distance between the two production points, G and B (representing a country in the two periods), measured relative to the production frontier of period 1. The second factor measures the distance between the same production points (G and B) relative to the production frontier of period 2. The score of the MPI is interpreted as follows: 1. If the score was greater than unity (MPI > 1), it would indicate that the DMU has raised its productivity.
2. If the score was equal to unity (MPI = 1), then it would suggest the productivity is constant.
3. If the score was less than unity (MPI < 1), it would imply that the DMU in period 2 is less efficient than it was in period 1.
The MPI can be decomposed into two factors: technical change and change in technical efficiency ("catching up"). According to this decomposition, the first factor, which is outside the brackets, shows technical efficiency in both periods and measures the efficiency change when moving from period 1 to period 2 (see Fig. 1). It shows that the DMU will be more efficient (with a score greater than unity) provided it moves closer to its production frontier; conversely, if the DMU recedes from its production frontier, it will be less efficient and have a lower efficiency score (with a score less than unity). If the DMU stays in the same position relative to its frontier and does not move, the efficiency will be constant (with a score equal to unity). The second factor in the MPI (inside the brackets) captures shifts of the actual frontier between the two periods. A shift in the frontier means a change in the technology and creativity of each DMU, which depends in turn on how this DMU functions. The result can be an increase in technology (frontier) with a score greater than unity, a decrease with a score less than unity, or staying in the same position with a score equal to unity.
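A minimal numerical sketch of the decomposition just described, assuming the standard geometric-mean form of the MPI and distance-function (efficiency) scores such as those produced by the DEA model sketched earlier; the variable names are ours.

```python
import math


def malmquist(d1_t1, d1_t2, d2_t1, d2_t2):
    """MPI between periods 1 and 2 and its decomposition for one DMU.

    d{a}_t{b}: distance-function (efficiency) score of the period-b
    observation measured against the period-a frontier.
    """
    catch_up = d2_t2 / d1_t1                                       # technical-efficiency change
    frontier_shift = math.sqrt((d1_t2 / d2_t2) * (d1_t1 / d2_t1))  # technological change
    return catch_up * frontier_shift, catch_up, frontier_shift     # MPI, catch-up, TCH


# Example: efficiency improves (catch-up > 1) and the frontier shifts outward (TCH > 1).
mpi, ec, tch = malmquist(0.85, 1.02, 0.80, 0.95)
```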
Variables and assumptions
Out of the six MPOWER policies, only two, taxation of tobacco products and pictorial warning labels on tobacco products, had numerical datasets and were included as the inputs in the model (12). The others had mostly been expressed as "Yes" or "No", meaning whether they had been implemented or not, and their statistical analysis was not conducted. Taxes on the most sold brand of cigarettes (taxes as a percent of price) were considered as measures of tobacco taxation. Pictorial warnings are percentages of the principal display area mandated to be covered by health warnings (front and back of cigarette packaging).
The output variables were smokers' prevalence, measured as the percentage of the population aged over 15 years who are daily smokers, and the number of cigarettes used per smoker per day. To preserve the positive concept of outputs in the DEA models, and because efficiency measurement techniques basically suppose that "more outputs are better" (27), the smokers' prevalence and the number of cigarettes used per smoker were entered into the model as their reciprocals (1/outputs).
The Malmquist indexes were calculated under both CRS and VRS, hence it makes no difference which one is selected (28). Nonetheless, when the study design is cross-national and variables are expressed as ratios, the BCC model is preferable (29). Therefore, we selected the BCC model in this study. We selected the countries and the time period of the panel data based on the maximum data availability. Eventually, we chose 16 OECD countries and four time points (2008, 2010, 2012, and 2014). Previous studies recommend that efficiency depends on the number of degrees of freedom, meaning that if the number of DMUs (n) is less than the sum of inputs and outputs (m + s), then most of the DMUs are likely to be determined as efficient. They introduce a rough rule of thumb in the envelopment model which suggests the number of DMUs (n) should be equal to or greater than max{m * s, 3 * (m + s)} (20). To observe this assumption in our study, the rule of thumb is equal to 12 {12 = 3 * (2 + 2)}, which is less than 16 (the number of countries). We used the DEA-SOLVER-LV8 (2014-12-05) application for panel data analysis.
Data
We gathered the WHO data of the four selected variables for both inputs and outputs, and panel data of pictorial warnings and taxes on cigarettes for all 16 countries and the four selected time points (30). We found no data for pictorial warnings for the year 2008, and used the 2007 data instead. Data for smokers' prevalence and cigarettes used per smoker were taken from the OECD Health Database (2017). There were a few missing data points that were properly fixed using a single imputation method.
Results
A summary of the descriptive statistics of all four variables, including inputs and outputs, is illustrated in Table 1. Results for the technical efficiency (TE) change (catch-up), technological change (TCH) and MPI for each country are presented in Tables 2, 3 and 4, respectively. Catch-up shows how far each country has moved relative to the efficient frontier during the time period. The total average catch-up of 0.98 indicates that the movements towards the frontier have not been considerable and that these selected OECD countries have not been successful in improving technical efficiency. The technical efficiency has slightly decreased overall. The highest TE score, with a value of 1.05, is attributed to Norway and the lowest, with a score of 0.9175, belongs to the United Kingdom. Table 2 shows that all countries except Denmark and the Czech Republic had technical efficiency scores greater than unity in the first period (2008-2010), but most of their scores declined during the two following periods. The United States and Denmark showed constant scores equal to unity over the three periods. The total mean of the standard deviation was just 0.03, which is quite narrow.
Conversely, technological change with a total average of 1.069 would imply that technology and creativity rose over the time period, indicating that countries have been able to promote their creativity. Norway and Japan showed the maximum (1.105) and minimum (1.03) mean technological change scores, respectively. During the first period, all countries apart from Japan, South Korea, and the United States showed technological change scores less than unity. During the second period, the United States and South Korea stayed above unity, while the other countries improved their technological change scores to above unity, except for Japan, whose technological score fell below unity. Eventually, during the third period, all countries showed technological change scores greater than one. The total mean of the standard deviation for technological changes was 0.02, which was not considerably high.
Finally, the MPI value of 1.05 can be seen as the combined result of technical efficiency changes and technological changes, since TFP is obtained by multiplying TE by TCH. Since this value was greater than unity, productivity grew overall. Individually, only the United Kingdom and Turkey experienced a decrease in their productivity. The maximum TFP, with a value of 1.157, was related to Norway, which presented the best performance in engaging inputs and producing outputs. The standard deviation of the MPI was calculated as 0.04, which was not significant. Our findings indicate that the average TE for the study period slightly decreased, indicating inefficient taxes on tobacco products. The maximum and minimum TE scores belonged to Norway and the UK, respectively. Similar to many OECD countries, Norway experienced a decreasing trend in daily smoking prevalence, while there was an invariant trend in daily smoking prevalence in the UK, which shows the effect of inefficient taxes. Most countries that acquired an increase in the catch-up effect during 2008-10 experienced a decrease in that effect over the following years.
Taxation might have a three-fold effect on tobacco use: a barrier to initiation, lowering consumption among current smokers, and precluding former smokers from relapsing (31). Due to the lack of comprehensive socio-economic data, it is difficult to map a clear trend in the effectiveness of the full MPOWER packages among the countries (24). Taxation data on tobacco products do not provide a nuts-and-bolts account of their effectiveness or of the mechanisms through which they exert their effects. Detailed data on different dimensions of tax policy, including tax administration and tax structure, can inform researchers and strategists to advance related tax policies around the world (32). In addition, the effectiveness of implementing a strategy is mixed and may vary depending on fluctuating circumstances. For example, in countries where access to low, untaxed and inexpensive tobacco products is high, low-income tobacco users show less sensitivity to price changes. Likewise, populations with a higher proportion of younger smokers, especially new starters, might be more sensitive to tax and price policies than adult smokers (24). The effect of increased tobacco prices on smoking prevalence varies depending on the characteristics of the population of interest within various settings. Heterogeneity in price responsiveness might be explained by factors such as smokers' level of addiction, cigarette affordability, tobacco industry activity to encourage consumers, and product substitution due to the availability of a great variety of tobacco products and wide price ranges (33). Factors including tobacco industry price discounting strategies, proactive lobbying and price-reducing marketing in the OECD countries may explain the variance in the effectiveness of MPOWER interventions (34,35). Further, the existence of state-owned tobacco companies implies a complex and ambiguous attitude towards smoking. As long as governments continue to generate significant revenue from monopoly tobacco production, they will face serious inconsistencies in how they deal with the adverse health consequences of tobacco use, e.g. the prevalence of tobacco-related illnesses and mortality (36). This might in turn indicate the need to take strong actions to adapt a range of tactics for appropriate implementation of the WHO's FCTC (37).
According to the WHO estimates, higher taxes, depending on their types, can contribute to almost half of the reduction in smoking. For instance, ad valorem taxes are built upon prices, so tobacco companies can potentially undermine the effects of higher taxes by reducing supply and putting lower prices on tobacco products. Hence, industry pricing strategies could manipulate consumption levels and change tax revenue. Alternatively, specific excise taxes, imposed on the quantity of products to generate a fixed tax amount, must match or outpace inflation over time to meet their tobacco control objectives (38). Thus, many aspects of each instrument included in the MPOWER package are essential to consider when assessing the merits of designated tools (32).
Our study revealed that all included countries have been following an upward trend in technological change, which led to positive performance during 2012-14. Such progress reflects innovations and the use of new technologies, i.e. the implementation of pictorial warning policies. The greatest technological change belonged to Norway, while Japan showed the lowest change. The main elements, including the graphic design on cigarette packs, the size of the space covered by health warnings, and the time periods for label rotation, may account for the impact of pictorial health warnings on smoking prevalence (32). Nevertheless, problems such as the sale of single sticks of cigarettes could reduce the effectiveness of health warnings on the packs.
Another key finding from our study is that the Malmquist index for most countries progressed in total factor productivity over the study period. Most countries with a Malmquist index over one were those that enjoyed an increasing trend in innovation and technology use. Nevertheless, the observed differences in progress of tobacco control activities among countries might be related to the comprehensiveness of the MPOWER package, which might have in turn led to the varying extents to which particular countries have pursued the FCTC goals.
Norway implemented the point-of-sale tobacco display ban in 2010. This may explain the country's increase in the use of technological changes during 2010-2012 compared with the previous period. Consumers declared that the ban prevented young people from beginning to smoke and also helped cessation endeavors (39). Norway, the vanguard in this study, has also applied the strongest levels of monitoring, mass media or anti-tobacco campaigns and smoke-free policies. An unexpected finding was that Turkey and the UK, which had the lowest Malmquist productivity scores, have been implementing much stronger levels of the MPOWER policies (40).
The path from policy to reduced tobacco consumption hinges on the possibility that a country will implement tobacco control measures, and on the measures' effectiveness (41). Despite the progress observed in recent years, no government is fully implementing the MPOWER strategy. Many challenges remain and much more needs to be done to stop one of the worst scourges of modern times. Applying restrictions to all forms of tobacco advertisement, promotion and sponsorship is among the most effective solutions that few countries have adopted with success (42).
Limitations and strengths
Despite its strengths, i.e. being the first of its kind to measure the efficiency and productivity of MPOWER policies in the OECD countries using robust methods, our assessment was limited. This study provides some insight into the issues associated with tobacco control measures for decision makers and implementers to translate good policy models into tangible action and results. Although comparative productivity is an effective methodology as well as an indicator to partially paint the existing circumstances in any country, the interpretation of such comparisons for a more comprehensive status requires vivid attention to other dimensions. Due to the lack of comprehensive data on tobacco-control programs, we confined our analysis to only two outputs. In addition, the data provided by the FCTC parties reflect different methods of data collection, without any adapted standardized survey instruments. This makes direct comparison of prevalence among countries difficult (43).
Finally, our findings do not explain the concurrence of other obstacles that may have affected the comprehensive implementation of MPOWER in some countries. Tobacco industry opposition and lobbyists, smuggling, and financial barriers such as the economic benefit of tobacco production and the high cost of cessation programs might have dwarfed successful tobacco control plans (42). Socio-economic situations, poverty and lower education are also a major hindrance to accessing cessation interventions and acquiring knowledge about the harmful effects of smoking (44). Further studies, which take these variables into account, will need to be undertaken. Nonetheless, this study could generally depict the performance of MPOWER implementation across 16 OECD countries.
Conclusions
Most OECD countries have productively implemented MPOWER policies to reduce tobacco use. Such productive performances are the results of the strong pivotal pictorial warnings. Consequently, the policy of plain packaging seems to hamper the MPOWER policies. The results of taxes on tobacco products were relatively weak, indicating the need to strengthen the existing policies in this regard. MPOWER interventions were not solely behind the dissatisfying productivity results revealed in this study. To achieve the optimum outcome of the FCTC MPOWER policies and overcome the challenges of smoking use, countries need to tackle the difficult underlying factors, i.e. tobacco industry opposition and lobbyists, smuggling, and low socioeconomic status, which may hinder the meaningful implementation of such policies and eventually undermine sustainable development goals.
• Ethics approval and consent to participate: Ethics approval is not required for this paper because our data were not collected from human subjects and/or animals, and all variables used for our study were collected from publicly available databases, such as the World Health Organization (https://www.who.int/gho/tobacco/policies/en/) and the OECD Health Database (https://stats.oecd.org/index.aspx?queryid=30127).
Abbreviations
• Consent for publication: Not applicable
• Availability of data and materials: The datasets generated and/or analysed during the current study are available in the WHO and OECD Health Database repositories, https://www.who.int/gho/tobacco/policies/en/, https://stats.oecd.org/index.aspx?queryid=30127
• Competing interests
Technology changes in DMUs | 5,686.8 | 2020-06-03T00:00:00.000 | [
"Economics",
"Medicine"
] |
The eCDR, a Radiation-Hard 40/80/160/320 Mbit/s CDR with internal VCO frequency calibration and 195 ps programmable phase resolution in 130 nm CMOS
A clock and data recovery IP, the eCDR, is presented which is intended to be implemented on the detector front-end ASICs that need to communicate with the GBTX by means of e-links. The programmable CDR accepts data at 40, 80, 160 or 320 Mbit/s and generates retimed data as well as 40, 80, 160 and 320 MHz clocks that are aligned to the retimed data. Moreover, all the outputs have a programmable phase with a resolution of 195 ps. An internal calibration mechanism enables the eCDR to lock on incoming data even without the availability of any form of reference clock. The radiation-hard design, integrated in a 130 nm CMOS technology, operates at a supply voltage between 1.2 V and 1.5 V. The power consumption is between 28.5 mW and 34.5 mW, depending on the settings. The eCDR can achieve a very low RMS jitter below 10 ps.
Introduction
The eCDR (CDR for Clock and Data Recovery) has been developed in the framework of the GBT project, which is currently under development as part of the Large Hadron Collider (LHC) upgrade program. The GBT project aims at the realization of a radiation-hard chipset to be used as an on-detector transceiver for the LHC experiments. A bi-directional optical link operating at 4.8 Gbit/s connects this transceiver, the GBTX, with the counting room. On the other side, up to 56 front-end modules can be connected to the GBTX by means of electrical links (e-links) [1].
A full-blown e-link is composed of 3 differential lines, 1 for the uplink, 1 for the downlink and 1 for the clock signal that is sent to the front-end module by the GBTX. In that case, the data recovery in the front-end module is easy to do since the received clock signal can be used directly to retime the data. However, if cabling is difficult or critical, one might prefer to remove the clock differential line. In that case, to recover the data in the front-end module, a proper clock signal needs to be generated first, after which the received data can be retimed with that clock. This is exactly the purpose of the eCDR.
Since the e-links have a programmable data rate of 80, 160 or 320 Mbit/s, the eCDR has been conceived as a highly flexible clock and data recovery system. It accepts input data at 40, 80, 160 or 320 Mbit/s and outputs the retimed data along with a 40, 80, 160 and 320 MHz output clock, regardless of the data rate. The phase of the output data and the 4 output clocks, which are always in-phase, can be programmed with a resolution of 195 ps, so they can easily be aligned with any other clock domain in the front-end module if required.
The architecture of the eCDR is presented in section 2, some of the building blocks are discussed in section 3 and the measurement results are shown in section 5. A conclusion is drawn in section 6.
System overview
The block diagram of the eCDR is shown in figure 1. The input data is applied to a set of detectors: a Phase Detector (PD) and a Frequency Detector (FD). These detectors each control their own charge pump that charges or discharges the first-order loop filter. The Voltage Controlled Oscillator (VCO), which always oscillates at 320 MHz regardless of the data rate, is an 8-stage differential ring oscillator and generates 16 phases of the 320 MHz clock. The required programmability of the data rate is enabled by means of a programmable feedback divider with a ratio of 1, 2, 4 or 8. It generates both an in-phase and a quadrature version of the divided clock. The data is retimed within the PD as can be seen in figure 1. It is in-phase with the feedback in-phase clock which is fed to the PD. In a typical CDR system, this retimed data and feedback clock would be the output signals. In the presented eCDR however, extra functionality is provided by means of the phase shifter. The phase shifter is intended to shift the phase of the extracted clocks and the retimed data so as to be able to align them with any clock in the system. In order to do that, it utilizes the 16 phases of the VCO to enable phase shifting with a resolution of 195 ps, namely 1/16th of the 320 MHz clock period.
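The 195 ps figure follows directly from the fixed 320 MHz VCO frequency and its 16 output phases; a one-line check (Python used only as a calculator):

```python
vco_freq_hz = 320e6          # VCO always runs at 320 MHz
n_phases = 16                # 8-stage differential ring oscillator -> 16 clock phases

phase_step_ps = 1e12 / (vco_freq_hz * n_phases)
print(phase_step_ps)         # ~195.3 ps, i.e. 1/16 of the 3.125 ns clock period
```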
The Alexander PD, and basically any PD that can work with data, suffers from the fact that it cannot correct any frequency difference between the input data and the VCO. Consequently, the acquisition range of the eCDR with only the PD would be extremely limited. Therefore, a rotational FD has been added in the eCDR which is based on ref. [3]. As such, the acquisition range is increased significantly up to about ±25% of the data rate.
The eCDR incorporates 2 possibilities to bring the VCO frequency within the acquisition range of the CDR loop. Both are highlighted in gray in figure 1. The first possibility is to start the loop in the PLL-mode before applying any input data. In that case, the PD and FD are disabled and the PFD, which reuses the PD charge pump, is enabled. As the PFD has an acquisition range that is basically unlimited, the VCO frequency will be brought to the applied clock frequency in any situation. The second way of calibrating the VCO is to make use of the built-in Wien bridge based calibration circuit which will be explained in more detail in section 3.2. The latter means of calibration is especially interesting for systems where there is no reference clock available. All 3 detectors and their charge pumps are disabled when using this type of calibration. Both calibration possibilities require some time (100-150 µs) after a reset before the CDR loop can be enabled.
PD
A block diagram of the Alexander PD inside the eCDR can be seen in figure 2. This bang-bang PD makes a decision (up/down) for every transition in the input data. If there is no data transition, both the up and the down signal remain low. Assuming that there is only a phase offset between the clock and the input data, the up and down signals can never be high at the same time.
As shown in figure 2, 3 samples of the input data are used: 2 rising-edge samples, S1 and S3, and 1 falling-edge sample, S2. If S1 and S2 differ, a data edge has appeared in between them and the falling clock edge came too late. An up signal is generated to increase the VCO frequency. On the contrary, if S2 and S3 are different, the falling edge came too early and a down signal is generated. In either case, this PD tries to push the falling edge of the clock towards the data edges so that the data can reliably be sampled by the rising clock edge. As a result, S1 and S3 are reliable and retimed versions of the input data and either of them can be used as output data.
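The decision rule just described can be summarised in a few lines. The sketch below is a behavioural illustration under the stated sampling scheme (S1 and S3 taken on rising clock edges, S2 on the falling edge in between); it is not the actual gate-level implementation of the eCDR.

```python
# Sketch: Alexander (bang-bang) phase-detector decision rule.
# s1, s3: samples on consecutive rising clock edges; s2: sample on the falling
# edge in between. Returns +1 (up), -1 (down) or 0 (no data transition).

def alexander_pd(s1: int, s2: int, s3: int) -> int:
    if s1 == s3:        # no data transition between the two rising edges
        return 0
    if s1 != s2:        # edge occurred before the falling clock edge: clock is late
        return +1       # 'up' -> increase the VCO frequency
    if s2 != s3:        # edge occurred after the falling clock edge: clock is early
        return -1       # 'down' -> decrease the VCO frequency
    return 0
```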
FD
The FD, a block diagram of which is shown in figure 3, has a rotational architecture [3]. The 4th (Q4) and 1st (Q1) quadrants of the VCO clock are sampled at every rising and falling data edge. An up pulse is generated if and only if a data edge samples the clock in Q1 while the next data edge samples the clock in Q4. Logically, this can only happen if there is a frequency offset between the VCO clock and the input data. In this case, the VCO clock is too slow because the VCO phasor did not complete the full 360° between 2 consecutive data edges. Consequently, an up pulse is necessary to make the VCO oscillate at a higher frequency. On the contrary, a down pulse is generated if and only if a data edge samples the clock in Q4 while the next data edge samples the clock in Q1. The VCO frequency is thus too high and needs to be decreased.
In order for the FD in figure 3 to generate pulses, the frequency error may not be too large. For example, if 2 consecutive data edges sample Q1 and Q3 respectively, instead of Q1 and Q4, no up pulse is generated, although the VCO frequency is clearly too low. It means that this frequency offset is too large for this type of FD to detect. The same is true for a too high VCO frequency that does not generate any down pulses. It can be reasoned that the VCO frequency range for which this rotational FD generates a useful output is approximately ±25% of the data rate.
The number of pulses that is generated by the FD depends on the frequency offset. The larger the offset, the more frequently the data phasor crosses the Q4-Q1 border leading to a lot of up or down pulses. On the contrary, for a zero frequency offset, no pulses are generated at all. The PD, which is only sensitive to phase errors, then takes over to bring the data phasor towards the Q2-Q3 border so that the input data can be sampled by the rising edge of the clock. The loop that is formed by the FD thus opens automatically once the PD loop has found lock.
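The quadrant-based rule of the rotational FD can likewise be illustrated behaviourally. The sketch below assumes each data edge has been tagged with the VCO quadrant (1 to 4) it samples; only the Q1-to-Q4 and Q4-to-Q1 transitions between consecutive edges produce pulses, as described above.

```python
# Sketch: rotational frequency-detector rule. prev_quadrant and quadrant are the
# VCO quadrants sampled by two consecutive data edges.

def rotational_fd(prev_quadrant: int, quadrant: int) -> int:
    if prev_quadrant == 1 and quadrant == 4:
        return +1   # phasor did not complete 360 degrees: VCO too slow -> up pulse
    if prev_quadrant == 4 and quadrant == 1:
        return -1   # phasor overshot: VCO too fast -> down pulse
    return 0        # any other quadrant pair produces no pulse
```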
Wien bridge calibration
The Wien bridge calibration circuit is shown schematically in figure 4. The working principle is to equalize the voltages of the Wien bridge, one half of which is composed of resistors R1 and R2, the other half being composed of resistor Rtrim and switched capacitor Csw. For a switching frequency fsw, the equivalent resistance of the switched capacitor is Req = 1/(fsw · Csw). In steady state, the closed loop, consisting of the Wien bridge, integrator, VCO and frequency divider, establishes a control voltage to the VCO such that it oscillates at the frequency that keeps the Wien bridge in equilibrium, i.e. the 2 voltages being equal. In the eCDR, R1 equals R2, resulting in an equilibrium voltage that is half the supply voltage; equilibrium then requires Rtrim = Req, so the equilibrium switching frequency is fsw,eq = 1/(Rtrim · Csw). Since the VCO has to be calibrated to 320 MHz and due to the fixed divider ratio of 8 in the calibration loop, the wanted fsw,eq is 40 MHz. Since Rtrim has a mid-scale resistance of 18 kΩ, Csw needs to be 1.4 pF for fsw,eq to equal 40 MHz. In order to correct for process variations and integrator offset, Rtrim can be programmed between 12.5 kΩ and 25 kΩ with a resolution of 190 Ω. After a reset, Clpf, the eCDR loop filter capacitor, is discharged and, as a result, the VCO does not oscillate. The Wien bridge is not in equilibrium as its right node is pulled to ground by Rtrim. As a result, the integrator charges Clpf and the VCO control voltage rises. When it reaches a certain value, the VCO starts oscillating and the right Wien bridge node voltage rises thanks to the decreasing equivalent resistance of the switched capacitor. This goes on until the Wien bridge is in equilibrium and the integrator remains stable. In the case where the right node is higher than the left node in the Wien bridge, Clpf is discharged, the VCO frequency decreases and both bridge nodes are steered to equilibrium again.
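As a quick numerical check of the relations above, the mid-scale component values quoted in the text put the calibrated VCO close to the nominal 320 MHz; the programmable Rtrim absorbs the remaining error. The snippet below only illustrates that arithmetic.

```python
# Sketch: Wien-bridge equilibrium check with the mid-scale values from the text.
R_TRIM = 18e3      # mid-scale trim resistance [ohm]
C_SW = 1.4e-12     # switched capacitor [F]
DIV = 8            # fixed divider ratio in the calibration loop

f_sw_eq = 1.0 / (R_TRIM * C_SW)   # equilibrium switching frequency (~40 MHz)
f_vco = DIV * f_sw_eq             # calibrated VCO frequency (~320 MHz nominal)

print(f"f_sw,eq = {f_sw_eq / 1e6:.1f} MHz, calibrated VCO ~ {f_vco / 1e6:.0f} MHz")
```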
As can be seen in figure 4, a capacitor Cf has been added to create a pole at the right Wien bridge node. Such a pole is necessary in order for that node to partly retain its voltage when switching Csw. Otherwise, Rtrim would always bring that node back to ground and the calibration circuit would never settle. The frequency of the introduced pole should therefore be considerably lower than the intended fsw,eq. However, stability issues might show up by adding this pole in the feedback loop. Note that the integrator already has a pole at DC, so the extra pole should be at a frequency that is high enough. This can be achieved by decreasing the integrator gain or by decreasing Cf.
As mentioned before, process variations and integrator offset can be tuned out by means of Rtrim, as they are known before actually using the eCDR. More important are the variations that can appear during operation, namely temperature and supply voltage fluctuations. Due to the architecture of the Wien bridge, namely that only the voltage difference between the left and right nodes is considered, the supply voltage does not have an effect on fsw,eq as long as the integrator has sufficient gain. The sizing of the switches in figure 4 has been shown to be critical for the temperature stability of fsw,eq [4]. The switches should not be too small, as the ON resistance could become too high to fully charge/discharge Csw in half of the period of fsw,eq. However, thanks to the 1/8 divider, the full 12.5 ns are available for this. As a result, the ON resistance can be quite high without affecting the performance of the circuit. The switches should not be too large either, since their leakage currents, which depend exponentially on temperature, alter fsw,eq. Therefore, the OFF resistance should be several orders of magnitude higher than Req.
Parameter selection
The eCDR is intended to be operated from a wide range of supply voltages between 1.2 V and 1.5 V. On top of that, it should work over a temperature range from −30°C to 100°C and obviously over all possible process corners. One of the major consequences of these widely varying conditions is that the gain of the VCO will shift dramatically, according to simulations between 182 MHz/V and 1200 MHz/V. As can be expected, such a large spread on the VCO gain has its effects on the loop dynamics. On top of that, the loop dynamics depend strongly on the data rate (feedback divider ratio) and its transition density. The more transitions there are, the more decisions the PD makes and the more frequently the VCO is steered towards the correct phase. On the contrary, if there are no data transitions, the PD makes no decisions at all and the loop behaves as being open. The VCO phase will drift significantly in that case. When designing the eCDR, a transition density of 50% has been assumed, which is a typical value for random data.
In order to make sure that the eCDR can lock and behave properly in all these situations, the loop filter resistance can be programmed between 0.5 kΩ and 8 kΩ. On top of that, the 2 charge pumps have independently programmable output currents, between 0.8 µA and 12 µA for the PD charge pump and between 1.6 µA and 24 µA for the FD charge pump. The loop filter capacitor is fixed and has a capacitance of 500 pF. In a typical situation, the FD charge pump current is programmed significantly higher than the PD charge pump current. That larger current makes the loop acquire lock more easily, equivalent to increasing the loop bandwidth in a linear PLL. On the contrary, a small PD current is typically required so as not to inject too much charge into the loop filter every time a decision is made and thus introduce significant jitter.
Measurement results
The layout view of the presented eCDR is shown in figure 5. The area of the full circuit is 930 µm by 425 µm. Since the eCDR is intended to be used as a building block in a larger design, no bond pads or protection structures are included in this area. The design has been realized in a standard 130 nm CMOS process and is able to operate between −30°C and 100°C with a supply voltage between 1.2 V and 1.5 V. The power consumption of the eCDR at a supply voltage of 1.5 V is between 28.5 mW and 34.5 mW, depending on the number of VCO phases that is used in the phase shifter. The internal calibration system should bring the VCO to the required 320 MHz oscillation frequency, where it needs to remain stable over temperature and supply voltage variations. The trim-resistor in the Wien bridge can only be used to tune the VCO frequency at a particular temperature and supply voltage during the setup of the system. Once the circuit is in operation, the trim-resistor cannot be changed any more, while the temperature and supply voltage obviously can and will change. Both dependencies of the calibrated frequency have been characterized and are shown in figure 6 (variation of the calibrated VCO frequency as a function of (a) temperature, relative to the calibrated frequency at 20°C, and (b) supply voltage, relative to the calibrated frequency at 1.5 V). As can be seen in figure 6(a), the temperature stability is excellent, since the calibrated frequency changes by less than 0.5% over a temperature range of 120°C. On top of this, the calibrated VCO frequency is basically independent of the supply voltage, as can be seen in figure 6(b). The FD in the eCDR, with its acquisition range of ±25% of the data rate, therefore has ample margin to ensure that the eCDR can lock.
Once the VCO is calibrated, with an external clock or with the internal calibration system, the CDR loop can be closed and data can be applied. The extracted clock and data have been measured and analyzed at all possible data rates and with different settings for the charge pump current and the loop filter resistance. The measured RMS and peak-to-peak (PTP) jitter of the extracted clocks is summarized in table 1. It can be noticed that the jitter is higher for lower data rates. This is expected due to the bang-bang nature of the PD, which sinks or sources current during the full bit interval. Consequently, a longer bit interval results in a larger variation of the control voltage and thus more jitter. The jitter values in table 1 are valid for the data rate clocks, so 320 MHz for a bit rate of 320 Mbit/s, for example. As mentioned previously, the other clocks are also generated, regardless of the data rate. These clocks have basically the same jitter performance as the data rate clock.
Conclusion
The eCDR has been presented which is intended to be implemented on the detector front-end ASICs that need to communicate with the GBTX by means of e-links. The programmable CDR accepts data at 40, 80, 160 or 320 Mbit/s and generates retimed data as well as 40, 80, 160 and 320 MHz clocks that are aligned to the retimed data. Moreover, all the outputs have a programmable phase with a resolution of 195 ps. The radiation-hard design, integrated in a 130 nm CMOS technology, operates at a supply voltage between 1.2 V and 1.5 V and consumes between 28.5 mW and 34.5 mW. A very low RMS jitter below 10 ps has been achieved and the temperature and supply voltage stability has been proven to be excellent. | 4,614.4 | 2013-01-01T00:00:00.000 | [
"Computer Science"
] |
Intense high-altitude auroral electric fields – temporal and spatial characteristics
Cluster electric field, magnetic field, and energetic electron data are analyzed for two events of intense auroral electric field variations, both encountered in the Plasma Sheet Boundary Layer (PSBL), in the evening local time sector, and at approximately 5 RE geocentric distance. The most intense electric fields (peaking at 450 and 1600 mV/m, respectively) were found to be quasi-static, unipolar, relatively stable on the time scale of at least half a minute, and associated with moving downward FAC sheets (peaking at ∼10 µA/m2), downward Poynting flux (peaking at ∼35 mW/m2), and upward electron beams with characteristic energies consistent with the perpendicular potentials (all values being mapped to 1 RE geocentric distance). For these two events in the return current region, quasi-static electric field structures and associated FACs were found to dominate the upward acceleration of electrons, as well as the energy transport between the ionosphere and the magnetosphere, although Alfvén waves clearly also contributed to these processes.
Introduction
Intense perpendicular (to the background magnetic field) electric fields at high altitudes above the auroral region have been the subject of much interest and intense research, because they serve as an indication of electric fields parallel to the background magnetic field in the region between the s/c and the ionosphere. One implication of upward parallel electric fields is that they will accelerate electrons downward, leading to auroral emissions when these particles precipitate into the upper atmosphere. The energy consumed in accelerating the auroral particles, at around 1 RE altitude, must be available in terms of an energy flux, originating from higher altitudes. Intense perpendicular electric fields at geocentric distances of approximately 5 RE are often found in the Plasma Sheet Boundary Layer, PSBL, which is an important source region for the energy flux powering the aurora. A fundamental question is the nature of this energy flux. Presumably, the dominant flux is an electromagnetic energy flux (Poynting flux), which can either be associated with (quasi-)static field-aligned currents (FACs) or dynamic processes, such as travelling Alfvén waves or field-line resonances (FLRs). The transport mechanism might change, so that the way by which the energy flux is carried is different at different altitudes. There is also the possibility that energy is transported as kinetic energy carried by particles, see, for example, Ostgaard et al. (2002).
Up-going (down-going) FACs are associated with negative (positive) potential structures, corresponding to converging (diverging) quasi-static electric fields. This is supported by Marklund et al. (2001), who have utilized the possibilities of the Cluster mission, consisting of four s/c, to observe the growth and decay of a diverging electric field structure accelerating electrons upward, i.e. in the return current region. (The return and primary current regions are also known as the downward and upward current regions, respectively, but the former denotation will be used here.) An increase in a positive electric field peak and the increasing energy of an upward electron beam were found to match, i.e. consistency between the parallel acceleration potential and the perpendicular potential was found, supporting the quasi-static model. However, recent studies on Polar observations (in the primary current region) have focused on the importance of Alfvén wave Poynting flux. High-altitude (above the auroral acceleration region) intense electric field structures (E⊥ ≥ 100 mV/m) were found to be associated with large downward directed Poynting fluxes, many of which were consistent with Alfvén waves (Keiling et al., 2000, 2001, 2002, 2003; Wygant et al., 2000, 2002). The consistency of the observed structures with Alfvén waves was concluded by comparing the local Alfvén speed with the E/B-ratio of correlated electric to magnetic fields. These authors argue that the Alfvénic Poynting flux is a major contributor to the powering of the aurora, since at least one-third of the total energy required to produce the global ionospheric auroral luminosity can be accounted for by Alfvén waves (Keiling et al., 2003). Also, a relation between the large Alfvénic Poynting fluxes and the expansion phase of both strong and weak substorms was found by Keiling et al. (2000, 2001). However, the observations reported by Keiling et al. (2001) are consistent with both the quasi-static model and dynamic Alfvén wave processes. In a FAST/Polar conjunction study, Schriver et al. (2003) have found intense electric fields associated with Alfvén waves in the primary current region and shown in several events that both FACs and Alfvén waves transport energy into the auroral region, with the presence of the Alfvén waves depending on the geomagnetic activity (no Alfvén waves during periods of low activity). The Polar studies referred to above (primary current region), this study, and Marklund et al. (2001) (return current region) all concern parts of the same current system, connected via perpendicular currents in the ionosphere and driven by a generator, presumably in the magnetosphere.
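A minimal sketch of the Alfvén-wave test mentioned above (comparing the E/B ratio of correlated field perturbations with the local Alfvén speed) is given below. All numerical values in the example are illustrative assumptions, not measurements from the paper or from the Polar studies.

```python
# Sketch: Alfven-wave test. For a travelling Alfven wave the ratio of the
# perpendicular electric field to the perpendicular magnetic perturbation
# approaches the local Alfven speed v_A = B / sqrt(mu0 * rho).

import math

MU0 = 4e-7 * math.pi            # vacuum permeability [H/m]
M_P = 1.67262192e-27            # proton mass [kg]

def alfven_speed(B_T: float, n_m3: float, mean_ion_mass_amu: float) -> float:
    """Local Alfven speed for magnetic field B [T] and ion number density n [m^-3]."""
    rho = n_m3 * mean_ion_mass_amu * M_P        # mass density [kg/m^3]
    return B_T / math.sqrt(MU0 * rho)

def looks_alfvenic(E_Vpm: float, dB_T: float, v_a: float, tol: float = 2.0) -> bool:
    """Crude test: the E/dB ratio lies within a factor 'tol' of the Alfven speed."""
    ratio = E_Vpm / dB_T
    return v_a / tol <= ratio <= v_a * tol

# Example with assumed values (equal H+ and O+ by number -> mean ion mass 8.5 amu):
v_a = alfven_speed(B_T=3e-7, n_m3=1e6, mean_ion_mass_amu=8.5)
print(f"v_A = {v_a / 1e3:.0f} km/s, Alfvenic? {looks_alfvenic(0.1, 4e-8, v_a)}")
```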
A special case of Alfvén wave phenomena is FLRs. These have electric and magnetic field topologies similar to those of discrete auroral arcs, hence FLRs have been proposed as a producer of aurora. There is observational support for this proposition, see, for example, Samson et al. (1996, 2003). Further, Lotko et al. (1998) found that their FLR model can reproduce the large-scale field structure of an auroral acceleration region.
Two events of intense electric fields observed in the auroral return current region in the evening sector, from April and May 2002, have been studied using Cluster data. They are connected to the PSBL, as deduced from PEACE (Plasma Electron And Current Experiment) electron and CIS (Cluster Ion Spectrometry) ion data. For descriptions of the Cluster instruments, see Escoubet et al. (1997). The s/c are for both events located at a geocentric distance of approximately 5 RE. The geomagnetic activity is low or moderate, inferred from the Kp-index, and both events are in the Southern Hemisphere.
The intent of this paper is to describe the two events and to present a view of the stability or variations of some relevant parameters, such as electric and magnetic fields, Poynting fluxes, FACs, the potentials along the spacecraft trajectory, and electron and ion distributions. The capability of simultaneous measurements provided by the four closely spaced Cluster s/c allows us to reveal whether variations are temporal or (and) spatial. The method used for separating spatial and temporal variations in the electric and magnetic fields is described by Karlsson et al. (2004) in this issue (hereafter called the "companion paper"). The variations of some parameters and the stability of others can, together with a knowledge of whether the variations are spatial or temporal, improve the understanding of the different means of energy transport between the magnetosphere and the ionosphere. More specifically, the relative roles of the FACs and the Alfvén waves in this process are investigated.
Observations of intense electric fields
The criterion used when selecting the events was the presence of intense electric fields mapping to at least 100 mV/m at ionospheric altitude. Further, a similar electric field pattern should be recognized by at least two s/c. Twenty-four events of intense electric fields that fulfilled the selection criteria were found in the period January to June 2002. Two of these were chosen to be studied in more detail. The selection was based on the characteristics of the events and on legible data. In this section, data for the two events are presented and discussed. The results are summarized in Table 1.
19 May 2002
On 19 May 2002, 05:26-05:36 UT, the Cluster s/c were at a geocentric distance of 5.0 RE, around −70 CGLat and close to 20 MLT, i.e. in the evening local time sector and in the Southern Hemisphere. This event took place at the peak of a substorm expansion phase (the auroral oval at its maximum expanded state, revealed by a maximal peak in the AL index, after which a decrease begins, corresponding to the recovery phase), with moderate geomagnetic activity (a Kp-index of 2). Figure 1 gives the configuration of the four s/c, the direction of motion of the s/c and the orientation of the current sheets (see below) in a plane perpendicular to the background magnetic field.
Using the method described in the companion paper to solve the temporal and spatial ambiguity problem, three regions of different electric and magnetic characteristics could be identified: 200-300 s, 300-380 s and 380-440 s after 05:26 UT (see Figs. 2-5, described below). In regions I and III, similar features in the electric and magnetic field data were observed by the consecutive s/c with a time lag of approximately 10 s (1-15 s). No clear correlations between the electric and magnetic fields are observed when looking at the data. ("Correlation", used here and later, does not refer to a calculated correlation but to a clear similarity between a parameter observed by consecutive s/c or between different parameters observed by the same s/c.) The Alfvén velocities (calculated assuming, for simplicity, equal amounts of hydrogen and oxygen ions and the plasma density estimated from the s/c potential) are 2300 and 2800 km/s, respectively, and the En/Bt-ratios (see discussion of normal and tangential direction below) are 1800 and 1500 km/s, respectively. Note that there are relatively large uncertainties in these estimates, such that the Alfvén velocity and the En/Bt-ratio may differ by as much as a factor of two, even for the case of clear Alfvén waves. In regions I and III, the variations are concluded to be predominantly spatial and to correspond to moving field-aligned current sheets having peaks coinciding with the electric field peaks. From the separations between the s/c and the observations of the current sheets, it is concluded that the current sheets are moving with a fairly constant angle of deviation from east-west alignment within each of the two regions. The current sheet orientations obtained are not the same in regions I and III. The deviations from east-west alignment, α, were approximately 34° and −43°, respectively (determined by a minimum variance analysis on the residual magnetic field), and the current sheet velocity perpendicular to this orientation was 8 and 14 km/s, respectively. Positive (negative) α corresponds to an anti-clockwise (clockwise) rotated current sheet with respect to the east-west orientation. The order in which the s/c encountered the current/electric field structures was 1, 4, 2 and 3, and the satellite data are presented in this order. The time differences between the structure crossings in region I by s/c 1 and by s/c 4, 2 and 3 were approximately 1, 7 and 12 s. In region III, the time differences between the crossings of the spatial structures were 6, 9 and 13 s, respectively. There is a large-scale upward current of Region 1 type between 100 and 200 s, and a large-scale downward current of Region 2 type between 200 and 290 s, which coincides with region I of this event. After 300 s, for regions II and III of this event, the large-scale current consists of both upward and downward currents. In region II, the electric and magnetic field data are closely correlated with almost no time lag, indicating that temporal variations are dominating there.
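A hedged sketch of the multi-spacecraft timing analysis used above (crossing-time differences between the four s/c combined with their separations to give the current sheet speed along its normal) is shown below. The positions, times and normal direction are made-up illustrative values, not the Cluster data, and the simple least-squares fit is only a stand-in for the companion-paper method.

```python
# Sketch: estimating the motion of a planar current sheet from the times at
# which four s/c cross it. For a sheet with unit normal n moving at speed v_n
# in the plane perpendicular to B, the crossings satisfy n.(r_i - r_0) = v_n*(t_i - t_0).

import numpy as np

def sheet_speed(positions_km: np.ndarray, times_s: np.ndarray, normal: np.ndarray) -> float:
    """Least-squares estimate of the sheet speed along its normal [km/s]."""
    n = normal / np.linalg.norm(normal)
    d = (positions_km - positions_km[0]) @ n      # distance of each s/c along n [km]
    dt = times_s - times_s[0]                     # crossing-time delays [s]
    return float(np.sum(d * dt) / np.sum(dt ** 2))  # fit d = v_n * dt through the origin

positions = np.array([[0, 0], [40, 60], [90, 30], [120, 80]], dtype=float)  # km, illustrative
times = np.array([0.0, 6.0, 9.0, 13.0])                                     # s, illustrative
normal = np.array([1.0, 0.5])                                               # assumed sheet normal
print(f"sheet speed along normal ~ {sheet_speed(positions, times, normal):.1f} km/s")
```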
In Figs. 2-5, the fourth panel shows the electric field component normal to the current sheet (black line) and the magnetic field component tangential to the local current sheet orientation (green line). The electric field was measured by the EFW (Electric Field and Wave) instrument and the magnetic field by the FGM (FluxGate Magnetometer) instrument. Due to a probe failure, only spin resolution electric field data are available from s/c 1. From the measured magnetic field data, a running-window average (1-min wide and steps of 10 s) was subtracted to remove the background field. The potential along the spacecraft trajectory was obtained by integrating the electric field, and is presented in panel 5. Panel 6 shows the FAC distribution estimated from the residual magnetic field (assuming spatial magnetic variations). The calculated FAC is unreliable in regions where the variations were found to be predominantly temporal due to Alfvén waves, such as in region II. A negative (positive) value corresponds to downward (upward) FACs and is shown with red (blue) color. From the electric field and the residual magnetic field data, the Poynting flux along the background magnetic field, S∥, was calculated and is displayed in the bottom panel (black line), together with the time integrated Poynting flux (green line). The data presented are all local values and in order to refer to the ionospheric level, the mapping factors are given in the text below Fig. 2.
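The panel quantities described above can be illustrated with the following sketch: the along-track potential from integrating the electric field, a one-dimensional current-sheet estimate of the field-aligned current from the residual magnetic field, and the field-aligned Poynting flux from the perpendicular E and residual B components. This is a simplified stand-in with assumed array inputs and sign conventions, not the analysis code used by the authors.

```python
# Sketch: quantities derived from the field data for a 1-D current sheet swept
# past the s/c at speed vx. E_n is the electric field normal to the sheet [V/m],
# dB_t the residual magnetic field tangential to it [T]; contents are illustrative.

import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability [H/m]

def potential_along_track(E_n_Vpm: np.ndarray, v_kmps: float, dt_s: float) -> np.ndarray:
    """Potential from integrating E along the trajectory [V], with dx = v*dt."""
    return -np.cumsum(E_n_Vpm) * v_kmps * 1e3 * dt_s

def fac_from_residual_B(dB_t_T: np.ndarray, v_kmps: float, dt_s: float) -> np.ndarray:
    """Field-aligned current density [A/m^2] for a planar sheet:
    j_par ~ (1/mu0) * d(dB_t)/dx; only meaningful for spatial (quasi-static) structures."""
    return np.gradient(dB_t_T, v_kmps * 1e3 * dt_s) / MU0

def parallel_poynting_flux(E_n_Vpm: np.ndarray, dB_t_T: np.ndarray) -> np.ndarray:
    """Poynting flux along the background field, S_par = E_n * dB_t / mu0 [W/m^2]."""
    return E_n_Vpm * dB_t_T / MU0
```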
Two peaks in the electric field data, one in each of the two regions dominated by spatial variations, are seen at 260 and 400 s. Their magnitudes are, if mapped to the ionosphere, 280 and 200 mV/m, respectively. Two downward FAC peaks and two downward Poynting flux peaks are well correlated with the electric field peaks. At 260 s, the magnitude of the mapped downward Poynting flux peak is 33 mW/m2 and the magnitude of the mapped downward FAC (this estimate depends on the filtering of the magnetic field and should be considered with care; this study is mostly concerned with the variations between consecutive s/c) is 15 µA/m2. The Poynting flux is downward dominated but the integrated Poynting flux is close to zero. The remaining parameter, the perpendicular potential, is seen to display a large-scale negative valley structure. There are also superposed small-scale valley shapes between 400 and 470 s, the total minimum being −3.4 kV. By comparison with CIS data (not shown here) it was found that the small-scale negative potentials agreed roughly with the characteristic energy of the up-going ions.
From inspection of the PEACE electron data (a sharp increase in the electron flux, supported by CIS data showing also an ion flux increase) for this and the other s/c passages, it is concluded that the s/c are traversing the PSBL.

S/c 4: Three distinct, more or less unipolar electric field structures, one in each region, are seen in the data from the passage by s/c 4 (Fig. 3), with the most intense peak at 250 s mapping to 420 mV/m at ionospheric altitude. The peak in the middle region was not present in the electric field data from the passage by s/c 1. Three distinct regions of intense up-going electrons with characteristic energies of 2-3 keV, not seen in the data observed by s/c 1, are seen well separated, with the first and the last of these being well correlated with the downward FAC peaks at 260 and 410 s (the FAC pattern is fairly similar to that observed by s/c 1), and also with the intense electric field peaks and the downward Poynting flux peaks. The Poynting flux is in this passage more downward dominated, due to more downward contributions from the two peaks at 260 and 410 s. The magnitude of the later peak has increased to 25 mW/m2, all compared to the observations by s/c 1. In the region dominated by temporal variations, a downward Poynting flux peak is observed, in contrast to what was observed in the passage by s/c 1. Finally, the depth of the large-scale potential valley in the later part of the interval has decreased to −2.4 kV. Note that the negative valley in the perpendicular potential pattern between 250 and 350 s coincides well with the inverted V-structure in the precipitating electrons, implying that these electrons will be subject to further acceleration by the order of 1 kV before they reach the auroral ionosphere.
S/c 2: The large-scale electric field pattern, with the peak at 260 s (mapping to 450 mV/m), and the FAC pattern are both roughly the same as for the passage by s/c 4 (Fig. 4). The Poynting flux has decreased significantly (the main downward peak has decreased from 33 mW/m2 to 12 mW/m2) and the integrated Poynting flux is small but negative. However, the correlation between the intense electric field peaks, the downward FAC peaks and the downward Poynting flux peaks, at approximately 260 and 400 s, persists. No significant variations are observed in the perpendicular potential. Neither PEACE nor CIS data were available from s/c 2.

S/c 3: The electric and magnetic field patterns are very similar to those of the previous s/c crossings, although the third electric field peak at 410 s in region III has decreased in magnitude (Fig. 5). Thus, the FAC pattern is also fairly unchanged, but the downward peak around 260 s is now seen to be somewhat broader. The enhancements in the up-going electron flux (more intense than for s/c 4), with characteristic energies of 2-3 keV, are consistent with the downward FAC peaks and well correlated with the electric field peaks in regions I and III.
In region II, the downward peak of the Poynting flux dominated by temporal variations is seen to have increased to 12 mW/m2, locally, while the Poynting flux in regions I and III is similar to the corresponding Poynting flux in the passage by s/c 2, but not to the fluxes observed by s/c 1 and 4. It can also be seen that the potential profile differs from the two previous crossings in that the depth of the large-scale negative perpendicular potential structure has increased from −2.4 kV to −3.7 kV and resembles the profile observed by s/c 1.
27 April 2002
For the event of 27 April 2002 (19:37-19:47) the Cluster s/c were at a geocentric distance of 5.1 RE, close to 20 MLT and around −71 CGLat, in the Southern Hemisphere (Fig. 6). The Kp-index for this event was 2, indicating moderate geomagnetic activity, and from the AL-index it is concluded that the measurements were taken during the growth phase of a substorm. Based on the method described in the companion paper, applied to this event, and on the characteristics of the electric and magnetic field data and the electron flux data (see Figs. 7-10), it is found that the region of predominantly spatial variations encompasses the entire upward electron beam structure, i.e. between 340 and 400 s after 19:37 UT. Inspection of the various data affirms that spatial variations are dominating in this whole region, although the strongest support for the interpretation that the variations are of a quasi-static nature is obtained for the later half of this interval. The deviation from east-west alignment, α, was −30°, and the structure had a velocity perpendicular to this orientation of 3.7 km/s. The order by which the s/c encountered the current/electric field structure was 1, 4, 2 and 3, and the time differences between the crossings of the structure were approximately 14, 16 and 23 s, respectively. A large-scale downward current of Region 2 type between 300 and 400 s coincides with the downward current in this event. Starting approximately 100 s before this region, an upward current of Region 1 type is seen.
S/c 1: The electric field is small over the whole interval (Fig. 7) and so are the magnitudes of the predominantly downward Poynting flux. The FAC pattern is characterized by multiple and weak up- and down-going currents. A region of downward FAC between 340 and 380 s is correlated with enhanced upward electron flux, with characteristic energies of about 1 keV. Overlapping this region, the perpendicular potential increases during the first 400 s after 19:37 UT, and then remains constant at 7.5 kV. From inspection of the PEACE electron data for all four passages, it was concluded that the s/c were in the PSBL.
S/c 4: At the time of the crossing by s/c 4 (Fig. 8) the electric field has intensified; the magnitude of the main peak at 380 s is now 1500 mV/m, mapped to the ionosphere. The strongest variations in both the electric and magnetic fields are found to be correlated with intensifications in the upward electron flux, downward FAC and intense Poynting flux, and are identified to be clearly of a quasi-static nature. The region of electron flux enhancement around 375 s, having a characteristic energy of 2-3 keV, was not observed in the passage by s/c 1, and is more intense and corresponds to a higher energy than the region of electron flux enhancement around 350 s. The intensifications are consistent with intense down-going FAC peaks. Compared to the passage by s/c 1, a hill-structure has developed in the potential profile and there is also a larger potential difference over the whole interval, 13 kV compared to 7.5 kV. Two smaller peaks are seen at the edges of the potential hill-structure. Note that none of the well-correlated features (the electric field structure, the potential peak and the intense region of electron flux at 375 s) were observed in the passage by s/c 1, nor were the intense Poynting fluxes between 370 and 400 s observed in that passage. The downward Poynting flux peak close to 380 s, mapping to 27 mW/m2, is well correlated with the main peak in the electric field, the main downward FAC peak of 10 µA/m2 (mapped) and the most intense electron flux. Most of the Poynting flux is restricted to the region dominated by spatial variations in the electric and magnetic fields.
S/c 2: The large-scale electric field pattern is fairly unchanged (Fig. 9), but a decrease in the amplitude of the peak can be seen and the peak has split into two peaks; this is also true for the FAC pattern. The correlation between downward FAC peaks and the electric field peak is good. Significant variations compared to the passages by s/c 1 and 4 are observed in the Poynting flux. The main peak at 380 s is upward, but the integrated Poynting flux is negative, hence the net Poynting flux is downward, having contributions from a downward Poynting flux peak at 300 s and another peak around 340 s. The intense positive Poynting flux peak at 385 s is seen to be co-located with a downward FAC peak. The first of the minor peaks at the edges of the potential hill, observed in the passage by s/c 4, is not present in the data from this passage.
S/c 3: The electric field pattern is again roughly the same (the amplitude of the peak is higher), but the slope on the equatorward flank of the positive potential hill around 380 s is seen to be steeper than in the passage by s/c 2 (Fig. 10), corresponding to the increase in the electric field peak to 150 mV/m (1700 mV/m mapped). Also, the FAC pattern is fairly unchanged compared to the passages by the other s/c. The narrow region of intense electron flux at 390 s with characteristic energy of 2-3 keV is more intense than what was observed by the other s/c in this event and well correlated with the downward FAC peak, with the Poynting flux enhancement (with the clearest peak in the upward direction and with the overall Poynting flux less downward dominated) and with the intense electric field peak. The other region of enhanced electron flux has decreased significantly in intensity compared to the passages by s/c 1 and 4, and the first minor peak in the potential hill-structure is absent, as was also the case for the previous crossing. The correlations between the variations in the electric and magnetic fields, the potential and the electron flux support the view that spatial variations dominate between 340 and 400 s.
Summary of observations
Table 1 summarizes the events of 27 April 2002 and 19 May 2002, displaying Geocentric Distance (R), Corrected Geomagnetic Latitude (CGLat), Magnetic Local Time (MLT), Kp-index, substorm phase (determined from the AL-index), evolution of V⊥, corresponding V∥ value (inferred from the characteristic energy of the up-going electrons observed by the PEACE instrument), Alfvén velocity (VA) and deviation of the current sheet orientation from east-west alignment, α.
V⊥ represents the hill- (positive V⊥) and valley- (negative V⊥) shapes in the potential along the s/c trajectory. The V∥ and V⊥ in Table 1 are given for comparison; if there is a positive U-shaped potential structure, these two should be similar. Since V⊥ is negative for the 19 May 2002 event, i.e. a potential well, this cannot be compared with a V∥ inferred from up-going electrons. The calculated Alfvén velocity is presented together with the ratio between the electric field component normal to the current sheet and the magnetic field tangential component, En/Bt, these being average numbers for all the s/c.
Discussion and conclusions
The intense electric field events presented here both took place in the local time evening sector, approximately at 20 MLT, in good agreement with statistical results from lower altitudes (Karlsson and Marklund, 1996), and with the relation between intense electric fields and low ionospheric background conductivity (Marklund et al., 1997). The magnitudes of the maximum electric fields observed here are, if mapped to ionospheric altitudes, approximately 400 and 1500 mV/m for the two events, similar to reported maximum magnitudes observed at lower altitudes. Also, the two intense electric field events were encountered within the PSBL, which could be concluded from inspection of the PEACE electron data and CIS ion data (not shown here).
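The mapping of locally measured quantities to ionospheric altitude, used throughout the comparisons above, can be sketched as follows under the usual flux-tube scaling assumptions (perpendicular electric field scaling with the square root of the magnetic field ratio, current density and Poynting flux scaling linearly with it). The field strengths in the example are illustrative, chosen only to reproduce mapping factors of the order quoted in the figure captions.

```python
# Sketch: flux-tube mapping of locally measured values to ionospheric altitude.
# Assumes equipotential field lines, so that
#   E_iono = E_sc * sqrt(B_iono / B_sc)
#   j_iono = j_sc * (B_iono / B_sc)   (current conservation)
#   S_iono = S_sc * (B_iono / B_sc)   (energy-flux conservation)
# Field strengths below are illustrative, not values taken from the paper.

import math

B_SC = 4.0e-7     # |B| at the s/c [T] (illustrative)
B_IONO = 5.0e-5   # |B| at ionospheric altitude [T] (illustrative)

ratio = B_IONO / B_SC          # -> 125, cf. the FAC and S mapping factor
e_factor = math.sqrt(ratio)    # -> ~11.2, cf. the E mapping factor

E_sc_mVpm = 130.0              # measured perpendicular E at the s/c [mV/m]
print(f"mapped E ~ {E_sc_mVpm * e_factor:.0f} mV/m (factor {e_factor:.1f}, ratio {ratio:.0f})")
```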
The question whether the structures encountered are spatial or temporal, or both, is fundamental for the understanding of their origin. Using the method described in the companion paper, it was found that spatial and temporal variations dominate in different regions. This is supported by the stability of the FAC and the electron distribution between the different s/c, indicating that the most intense variations in the two events described here were predominantly spatial. However, in both events, a mix of both Alfvén waves and current sheets (FACs), sometimes well separated from each other and sometimes superposed, is observed. Alfvén waves could thus contribute to the variations also in the regions with dominating spatial structures. The observed spatial variations are quasi-static electric field structures associated with FAC current sheets and hence represent an energy transport between the magnetosphere and the ionosphere by FACs. Temporal variations correspond predominantly to Alfvén waves and energy transport by Alfvénic Poynting flux. The most intense electric fields were found to be associated with spatial variations, whereas Alfvén wave-dominated regions were found to be characterized by less intense electric fields. An alternative interpretation of what in this paper is considered to be the quasi-static regions might be that Alfvén waves travel along the spatial boundaries and that the time delays are given by the encounters by the s/c with these waves. However, these Alfvén waves must have very low frequencies.
The most intense Poynting fluxes in these two events are directed downward and they are correlated with the intense quasi-static electric fields, with downward FAC peaks associated with up-going electrons and also with the acceleration potential structures. The Alfvén wave-dominated regions were typically associated with less intense electric fields and less intense Poynting fluxes. Thus, in the events studied here, there is a clear connection between the intense electric fields, FAC current sheets and strong downward Poynting fluxes.
Cluster offers the first opportunity to observe the stability of the different parameters on various time scales. Variations seen between the satellites indicate which parameters are more stable than others on the time scale given by the separation between the s/c in the reference frame of the moving FAC sheet (typically 10-20 s for the events studied here). The electric and magnetic field patterns, and the associated FAC pattern, are qualitatively not so variable between the satellites, while the perpendicular potential is a parameter that displays some variations. The Poynting flux is seen to be concentrated to a few locations, whose positions remain the same, whereas the magnitudes show strong variations on the time scale of 10-50 s. The variations in the electron energy-time spectrograms between the s/c are generally fairly small, but they show enhancements for the up-going electrons consistent with the downward FAC.
In the event of 27 April 2002, the growth of a positive potential structure accelerating electrons away from Earth is seen. A good consistency was obtained between the characteristic energy of the up-going electrons, i.e. the acceleration potential V∥, inferred from PEACE electron spectrograms, and the calculated perpendicular potential V⊥ during this growth period, serving as evidence of quasi-static U- or S-shaped positive potential structures in the auroral return current region. The effective time lag between the crossings by s/c 1 and s/c 3 was 25 s, taking into account the motion of the structure. During this period the acceleration potential grew by 2.4 kV, but a decay was not observed. In the study by Marklund et al. (2001), where the separations between the s/c were longer, the growth and decay of an acceleration potential took place within 200 s. It was suggested by Marklund et al. (2001) that the lifetime is clearly related to the time it takes to evacuate the ionospheric electrons within the flux tube of the downward current, which depends on the FAC magnitude. Since the magnitudes of the FACs in the 27 April 2002 event are about half of that in Marklund et al. (2001), the expected lifetime of the acceleration potential would be more than 200 s. However, the separation between the s/c is small and only growth is observed.
An interesting difference from the event discussed by Marklund et al. (2001) is that the intense electric field structures observed in the two events studied here are unipolar, not bipolar. This is somewhat unexpected since observations at lower altitudes mostly have shown divergent bipolar electric field structures. An S-shaped potential structure might explain the unipolar field structures.
There can be no doubt that both quasi-static structures and Alfvénic Poynting flux are important for the energy transport between the ionosphere and the magnetosphere. Schriver et al. (2003), for example, have shown that FACs and Alfvén waves both contribute to the energy transport into the auroral region. Where and under what circumstances each of the two is present and/or dominates is a question of importance and one that is subject to on-going work. The results from Polar discussed above pointed at the Alfvén wave Poynting flux as a major contributor to the powering of the aurora (Keiling et al., 2003), and Wygant et al. (2002) have shown that the Alfvén wave Poynting flux was the larger contributor in their events. The events investigated and presented here imply that the most intense electric field structures, associated with FACs and intense downward Poynting flux, were signatures of quasi-static acceleration structures. These events occurred in the return current region, while the Polar work referred to is from the primary current region. This could be a reason for the discrepancy between the results. It has been shown (Keiling et al., 2001) that Alfvén wave aurora can be associated with aurora at the poleward edge of the PSBL. The results presented here indicate that intense electric fields near the polar cap can also be associated with quasi-static structures. The nature of the electric field structures may depend on whether they are encountered in the primary or return current region, which may explain the differences in observations.
The main conclusions drawn in this study of two intense electric field events are:
1. The energy transport between the magnetosphere and the ionosphere, in these two events (return current region), has contributions from both FACs and Alfvén wave Poynting fluxes.
2. The most intense electric fields were found to be quasi-static structures associated with moving quasi-static FAC current sheets. The Alfvén wave dominated regions were found to be associated with less intense electric fields. The most intense Poynting fluxes are downward directed and well correlated with the intense quasi-static electric fields, which implies that, for the events studied here at a geocentric distance of 5 RE and during moderate geomagnetic activity, the quasi-static FACs represented the dominant contribution to the energy transport between the magnetosphere and the ionosphere in the return current region.
3. The FACs and the structure of the associated electron distributions tend to be fairly stable, while the perpendicular potential, correlated with the characteristic energy of the up-going electrons, shows more variations on the time scale between consecutive s/c crossings, of 10-40 s. The Poynting flux peaks vary in magnitude, whereas their locations remain stable.
4. The observed growth of a positive potential structure accelerating electrons away from Earth during effectively 25 s gives a lower limit of the lifetime of an acceleration potential growing in magnitude.
5. Upward ions associated with negative potential peaks are observed, which demonstrates that at least some negative U-shaped potential structures extend up to 5-5.5 RE. However, the most significant potential structures observed at this altitude are positive and associated with up-going accelerated electron beams, as in the event reported by Marklund et al. (2001).
Fig. 1 .
Fig. 1. 19 May 2002. The direction of motion (dotted line) and configuration of the s/c in a plane perpendicular to the background magnetic field. dt is the time separation of the satellites passing through this plane, with respect to s/c 3. The orientations of the current sheets (see the text) are shown as dashed lines.
Fig. 2 .
Fig. 2. 19 May 2002, s/c 1. Electron energy-time spectrograms at pitch angles 0, 90 and 180° are shown in the first three panels. This event is in the Southern Hemisphere, hence the first panel shows electrons going upward from Earth and the downward electrons are seen in the third spectrogram panel. The following panel shows the component of the electric field normal to the current sheet (black line) and the tangential component of the magnetic field (green line). Only these two components are shown, since they are the ones where the dominating variations are seen. The next panel displays the potential along the spacecraft trajectory. A negative (positive) value in the second to last panel corresponds to downward (upward) FACs and is shown with red (blue) color. Poynting flux (black line) and the integrated Poynting flux (green line) are plotted in the bottom panel. The vertical dash-dot lines delimit the regions of dominating spatial and temporal variations (see the text). All panels are local value plots, with the ionospheric mapping factors being 11.2 for E and 125 for FAC and S.
Fig. 6 .
Fig. 6. 27 April 2002. The direction of motion (dotted line) and configuration of the s/c in a plane perpendicular to the background magnetic field. dt is the time separation of the satellites passing through this plane, with respect to s/c 3. The orientation of the current sheet (see the text) is shown as a dashed line.
Fig. 7 .
Fig. 7. 27 April 2002, s/c 1. The same panels as in Fig. 2. The vertical dash-dot lines delimit the region of dominating spatial variations (see the text).
Table 1 .
Summary of the events of 27 April and 19 May 2002. Positive (negative) α corresponds to anti-clockwise (clockwise) rotation of the current sheet from east-west alignment. The different α given for 19 May 2002 correspond to the two different regions of spatial variations, see the text. | 8,073.8 | 2004-07-14T00:00:00.000 | [
"Physics"
] |
Preparation and cutting performance of nano-scaled Al2O3-coated micro-textured cutting tool prepared by atomic layer deposition
Al2O3 nano-scaled coating was prepared on micro-textured YT5 cemented carbide cutting tools by atomic layer deposition (ALD). The effect of the Al2O3 nano-scaled coating, with and without the combined action of the texture, on the cutting performance was studied by orthogonal cutting tests. The results were compared with those of the micro-textured cutting tool and the YT5 cutting tool. They show that the micro-texture and the nano-scaled Al2O3 coated on the micro-texture can both reduce the cutting force and friction coefficient of the tool, and that the tools with nano-scaled Al2O3 coated on the micro-texture are more effective. Furthermore, the friction coefficient of the 100 nm Al2O3-coated micro-textured tool is relatively low. When the distance between the micro-pits is 0.15 mm, the friction coefficient is the lowest among the four kinds of pit-textured nanometer coating tools. The friction coefficient is the lowest when the direction of the grooves in the stripe-textured nanometer coating tool is perpendicular to the main cutting edge. The main mechanism by which the nanometer Al2O3 on the micro-textured tool reduces the cutting force and the friction coefficient is discussed. These results show that the developed tools effectively decrease the cutting force and the friction coefficient of the tool-chip interface.
Introduction
Dry cutting technology, as an environment-friendly and cost-saving machining technology [1][2][3][4], is getting more attention because of growing concern about environmental problems. However, severe friction occurs between the tool and the chips during dry cutting, which generates a large amount of heat and then causes the tool to fail prematurely. Moreover, the oxidation of tools can reduce their mechanical properties and significantly affect their service performance [5]. To improve machining accuracy and efficiency, coated cemented carbide tools have been more and more widely applied in machining [6]. Among the coating materials, Al2O3 offers great chemical inertness and oxidation resistance and enhances the wear resistance [7][8][9]. Al2O3 retains its hardness at elevated temperatures, showing high chemical and thermal stability even at temperatures above 1,000°C, at which most nitride coatings suffer from severe and rapid oxidation [10,11]. Al2O3 coatings are used as thermal barriers to protect the cemented carbide substrates from the high temperatures at the cutting edge [12]. The Al2O3 coating has a higher hardness compared with ZrO2, and the modest hardness of ZrO2 limits its use for wear applications [13]. Numerous investigations carried out by various authors showed that textured tools exhibited better performance than conventional non-textured tools under different machining conditions [14][15][16][17][18][19][20][21][22][23][24][25][26]. More recently, research has shown that combining micro-texture and coating on the tool surface performs better than a non-textured coated tool or a textured tool without coating [27][28][29][30]. Applying a nano-scaled coating gives better performance than a traditional micro-crystalline coating [31]; nano-scale coatings are isotropic and can be applied to three-dimensional objects with similar properties, which indicates that the combination of nano-scale coating and micro-texture has good prospects for improving the performance of cutting tools. However, Neves et al. [32] show that defects in the coating may work as crack initiation points and lead to coating failure; thus, it is very important to improve coating integrity and reduce coating defects.
Some researchers have used atomic layer deposition (ALD) technology to prepare coated tools for machining and obtained good results. Mohseni and Scharf [33] reported the improvement of the wear resistance of carbon-carbon composites by ALD of ZnO/Al2O3/ZrO2 coatings. A study by Giorleo et al. [34] showed that the life of a micro-drill bit was significantly improved when drilling Ti-plates with ALD-coated Al2O3. ALD is able to meet the needs for atomic layer control and conformal deposition [35]; more importantly, ALD can fabricate pinhole-free nanometer coatings. The basis of ALD thin film growth is the alternating, saturated gas-solid phase reaction. When the chemical adsorption of the surface is saturated, the number of surface-reactive precursors no longer increases with time, so only one layer of film is grown per cycle. ALD can obtain coatings of nanometer or even atomic thickness. The self-limiting growth mechanism of ALD has many advantages: good bonding strength, layer-by-layer deposition, uniform film thickness, good composition uniformity, step coverage, conformality, repeatability, and accuracy at the atomic scale. In addition, the reaction temperature of ALD is around 200°C, considerably lower than that of physical and general chemical vapor deposition (PVD and CVD), which is beneficial to the mechanical properties of the tool matrix material. ALD technology can improve coating adhesion to the substrate [36] and gives better surface integrity [34] compared to PVD technology. Unlike CVD, ALD keeps the precursors strictly separated from each other in sequenced deposition cycles, thus preventing gas phase reactions and allowing atomic layer-by-layer deposition with nearly 100% step coverage [37]. The surface-controlled nature of ALD enables extremely uniform and conformal films on virtually any complex substrate [38]; thus, the ALD technology is suitable for micro-textured surface coatings.
Based on the above analysis, this paper prepares a nanometer coating on micro-textured tools using the ALD method and studies their cutting performance, expecting to provide a new way for the development of new cutting tools.

Preparation of coated micro-textured tools

In this article, the nano-scaled Al2O3-coated micro-textured cutting tool is prepared to improve the wear resistance and friction reduction performance of the tool. Two kinds of texture, stripe and pit, were designed (named ST and PT), and the micro-textures were prepared on the rake face of a commercial YT5 (WC-10Co-5%TiC) tool by a laser marking machine. The stripe textures are parallel to, perpendicular to, and at 45° and 135° to the main cutting edge (named stripe textures 1-4, respectively). The pitch of the pit micro-textures is 0.2, 0.15, 0.1, and 0.05 mm (named pit textures 1-4, respectively). The nanometer Al2O3 coating was prepared on the textured YT5 tools by ALD. First, the YT5 tool was cleaned in an ultrasonic cleaner with acetone for 15 min and dried with high-purity nitrogen. The stop/exposure mode was used for the ALD process, and each ALD cycle consists of pulse, exposure, and purge times for each of the precursors. The pulse times of TMA and H2O in the cycle were 30 and 100 µs, respectively, while the exposure and purge times were both 10 µs. The thicknesses of the alumina thin films, measured by ellipsometer, were 50, 100, and 200 nm (the 50, 100, and 200 nm coated stripe-textured tools are named FNST, HNFT, and TNFT, and the corresponding pit-textured tools FNPT, HNPT, and TNPT). The morphology of the tools after coating is shown in Figure 1. Figure 2 shows the micro-morphology of the textured tool surfaces obtained by AFM.
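For orientation, the number of TMA/H2O cycles needed for the three film thicknesses can be estimated as in the sketch below; the growth-per-cycle value is a typical literature figure for Al2O3 ALD assumed here, since the paper does not report it.

```python
# Sketch: rough cycle-count estimate for the target Al2O3 film thicknesses.
GROWTH_PER_CYCLE_NM = 0.1   # assumed growth per TMA/H2O cycle [nm], typical literature value

for target_nm in (50, 100, 200):
    cycles = round(target_nm / GROWTH_PER_CYCLE_NM)
    print(f"{target_nm} nm film -> ~{cycles} ALD cycles")
```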
Cutting experiment
Orthogonal cutting experiments were carried out on a CNC lathe (Okuma Corp., Japan) without cutting fluid. The working tool geometry was: rake angle γ0 = 0°, clearance angle α0 = 11°, approach angle Kr = 75°, inclination angle λ0 = 0°, and tool nose radius rε = 0.5 mm. The cutting parameters were vc = 100 m/min, ap = 1 mm, and f = 0.1 mm/r, and the workpiece material was AISI 1045 steel with a hardness of 190 HB. The tools were cemented carbide (YT5), textured with a laser marker and coated by ALD; an untextured tool and uncoated textured tools were also tested for comparison. Cutting forces were measured with a three-component piezoelectric dynamometer and charge amplifier, and the forces in the X, Y, and Z directions were recorded with a data collector and acquisition software (Kistler Corp., Switzerland). A super-depth-of-field microscope (Keyence Corp., Japan) and a scanning electron microscope (SEM, JEOL Corp., Japan) were used to observe the tool surface morphology and tool wear. The friction coefficient between the tool and chip was calculated from the equation of [39,40], where γ0 is the rake angle and Fx and Fy are the principal (main cutting) force and the radial force, respectively.
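The friction-coefficient formula itself did not survive into this text; a commonly used expression for the mean tool-chip friction coefficient in orthogonal cutting, consistent with the force components defined above and offered here as a hedged reconstruction rather than a verbatim copy of the equation in [39,40], is

\[
\mu \;=\; \frac{F_y + F_x \tan\gamma_0}{F_x - F_y \tan\gamma_0},
\]

which, for the γ0 = 0° rake angle used in these tests, reduces to μ = Fy / Fx.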
Cutting force of cutting tools
Figure 3 shows how the three-direction cutting forces of YT5, PT4, and HNPT4 vary with cutting time. The comparison shows that, under the same cutting conditions, the fluctuation amplitudes of the principal and radial forces decrease in the order YT5, PT4, HNPT4, indicating that HNPT4 forms a good sliding-rolling composite friction state during cutting; vibration is therefore smaller and the cutting process more stable. Figure 4 compares the cutting forces of the conventional tool, the micro-textured tool, and the nano-coated micro-textured tool at a cutting speed of 100 m/min, using the pit-textured tool with a pit spacing of 0.05 mm. Compared with the conventional tool (YT5), the cutting force of the micro-textured tool (PT4) is reduced to a certain extent, as confirmed by many researchers [41,42], while the main cutting force and radial force of the nano-scale Al2O3-coated micro-textured tool (HNPT4) are greatly reduced. The axial force is almost zero for all three tools because the cut is orthogonal. The coated micro-textured tool further reduces the tool-chip friction coefficient compared with the micro-textured tool, and the 100 nm nano-coating gives the best reduction.
Effect of micro-texture morphology
The tool-chip friction coefficients of the nano-coated tools with the different texture shapes are shown in Figure 6; the friction coefficients of the corresponding uncoated textured tools are also given for comparison. The change in stripe texture morphology alone has little effect on the tool-chip friction coefficient, but different combinations of stripe morphology and coating affect it differently (Figure 6a). When the coating thickness is 50 nm or 200 nm, the micro-texture morphology has little effect on the tool-chip friction coefficient, whereas at 100 nm the texture morphology has a large influence. When the stripe micro-texture is perpendicular to the cutting edge, the coated tool has the smallest friction coefficient, 0.43, about 25% lower than that of the stripe texture parallel to the cutting edge. For the pit-textured tools (Figure 6b), the variation of the tool-chip friction coefficient with coating thickness is similar to that of the coated stripe-textured tools, and the 100 nm coating again gives a small friction coefficient. The lowest value, 0.31, is obtained when the pit spacing is 0.15 mm, which is lower than that of the stripe-textured tool. Figure 7 shows micrographs of the 100 nm Al2O3-coated stripe-textured tools after cutting, with the stripes parallel and perpendicular to the major cutting edge. A certain amount of adhesion is visible on the rake face, and the tool with stripes parallel to the main cutting edge shows significantly more adhesion than the tool with stripes perpendicular to it. Micrographs of the 100 nm Al2O3-coated pit-textured tools with different pit spacings are shown in Figure 8: with a pit spacing of 0.15 mm the adhesion is clearly lower than with a spacing of 0.05 mm. The magnified view of the 100 nm Al2O3-coated pit-textured tool (pit spacing 0.05 mm) shows chip material deposited in the micro-texture, indicating that the texture also collects chips. The energy-spectrum analysis in Figure 9 shows little adhesion on the coated tool surface (Figure 8c), indicating that the coating reduces adhesion.
Effect of cutting speed
The effect of cutting speed on the tool-chip friction coefficient is shown in Figure 10. When the speed is increased from 100 to 200 m/min, the friction coefficients of the pit- and stripe-textured coated tools increase from 0.31 to 0.47 and from 0.43 to 0.60, respectively. Thus, as the cutting speed increases, the friction coefficients of both the stripe- and pit-textured nano-coated tools increase to some extent, indicating that the effect of the nano-coating is more pronounced at lower speeds.
Discussion on the anti-friction mechanism of nano-coating on micro-texture
The friction coefficient of Al2O3 against steel is about 0.66 [43], while that of WC-Co-TiC cemented carbide against steel is about 0.2-0.4 [44]. Simply applying nano-alumina to the surface of a cemented carbide tool should therefore not reduce its friction coefficient against steel, nor that of a micro-textured tool. Our experimental results, however, show the opposite, which implies that applying the nano-alumina coating to the micro-textured tool fundamentally changes the friction mechanism between the tool and chip during cutting. The texture form and orientation also influence tool-chip friction, and the main friction-reduction mechanism of micro-textured tools is the reduced contact area between tool and chip on the rake face [45]. When the nano-coating is applied to the micro-textured tool, on the one hand the high strength, high hardness, and heat resistance of the coating protect the micro-texture on the rake face. On the other hand, the micro-texture collects the Fe2O3 particles formed by chipping and the nano-Al2O3 particles released as the coating detaches, and under the action of cutting force and cutting heat these nano-Al2O3 particles in the micro-texture are squeezed into the actual tool-chip contact interface. Because of their high hardness, the Al2O3 particles act like rolling elements between the tool and chip, so the direct two-body contact is transformed into a combined two-body/three-body contact and the friction at the tool-chip interface changes from sliding to sliding-rolling, as shown in Figure 11. Since the rolling friction coefficient is much lower than the sliding one, this change effectively reduces the tool-chip friction coefficient. Different coating thicknesses give different probabilities of forming sliding-rolling friction. When the coating is 100 nm thick, the friction coefficients of all four micro-textured coated tools are significantly lower than those of the uncoated and the 50 nm and 200 nm coated micro-textured tools, probably because the particles formed from the 100 nm coating readily enter the tool-chip interface and establish rolling contact. When the Al2O3 coating is 50 nm thick, too few coating particles participate in forming three-body contact during cutting to establish a rolling friction state, so the friction mechanism does not change; when the coating is 200 nm thick, although more nano-Al2O3 particles are generated, the excess particles pile up, and since the friction coefficient between Al2O3 and Al2O3 particles is higher than that between Al2O3 and cemented carbide, the friction coefficient at the tool-chip interface increases.
The probability of this transition also differs between micro-texture geometries. When the stripe texture is perpendicular to the main cutting edge, the nano-coated micro-textured tool has the lowest friction coefficient, a result that differs from the purely micro-textured tool [6]. Under this condition the nanoparticles enter the micro-texture more easily and the coating particles are more likely to roll between tool and chip; the stripe direction is then parallel to the chip flow direction, so the coating particles in the texture can be carried with the chip flow into the actual tool-chip contact zone and more nano-coating particles take part in the rolling friction. For the pit texture, the pit spacing has an important effect on the friction state: changing the spacing changes the fraction of the rake-face contact area occupied by the texture. At a pit spacing of 0.15 mm the nano-coating particles are more likely to be retained in the texture and to form rolling friction, giving a lower tool-chip friction coefficient. A larger pit spacing cannot form effective rolling friction, while a smaller spacing obviously weakens the strength of the tool surface, which then breaks easily, increasing the cutting force and impairing cutting.
As the cutting speed increases, the tool-chip friction coefficient of the nano-coated micro-textured tool increases. Low speed (100 m/min) favours the formation of the sliding-rolling state during cutting. At high speed (200 m/min), the chip flows faster, more chip material is produced per unit time, and wear of the rake face intensifies, reducing the anti-friction effect of the micro-texture. At the same time, the higher chip evacuation speed carries away the nano-Al2O3 particles produced as the coating peels off, so the rolling friction effect they create at the tool-chip interface becomes weaker.
Conclusion
Surface textures were produced on the rake face of WC-Co-TiC carbide tools, and nanometre Al2O3 was then deposited on the textures by ALD. Dry cutting tests were carried out with these tools and the friction coefficient was analysed, leading to the following conclusions: (1) Both the micro-texture and the nano-scale Al2O3 coating on the micro-texture reduce the friction coefficient of the tool, and the tools with nano-scale Al2O3 coated on the micro-texture are the more effective. (2) The friction coefficient of the 100 nm Al2O3-coated micro-textured tool is comparatively low; among the four pit-textured tools it is lowest at a pit spacing of 0.15 mm, and for the stripe-textured tools it is lowest when the grooves are perpendicular to the main cutting edge. (3) The main mechanism by which the nano-scale Al2O3 on the micro-textured tool reduces the friction coefficient is that the detached Al2O3 particles establish sliding-rolling friction between the tool and chip. | 4,426.8 | 2021-01-01T00:00:00.000 | [
"Materials Science"
] |
Analysis of Heating Energy Consumption of School Buildings in Shigatse, Tibet
This paper is set against the background of excessive heating energy consumption in China, continuing calls for sustainable development, and unsatisfactory heating energy use in primary and secondary schools in Shigatse, Xizang. Through field research on existing school buildings in Shigatse, a large amount of first-hand material was obtained; targeted summarising and classification of these data then revealed the key problems behind the high energy consumption, low clean-energy utilisation rate, and poor thermal comfort of school buildings in the region. On this basis, further analysis is conducted to address these issues and to seek ways of using clean-energy technologies, such as solar heating, to solve the heating problem in school teaching buildings.
Introduction
At present, urban schools in Shigatse, Xizang have achieved centralised heating, whereas township schools generally rely on coal-fired boilers or stoves as the main heat source. The equipment is outdated, thermal efficiency is extremely low, and environmental pollution is serious. Moreover, the coal stoves commonly used in rural schools can only keep warm the room in which the stove stands, and the stove must be tended regularly; otherwise it goes out and the indoor temperature cannot be maintained, while the other rooms never reach the required temperature. Heating efficiency is low and operation is inconvenient. If combustion is incomplete, occupants can easily be poisoned by toxic gases such as CO [1].
Winter heating of urban school buildings in the Shigatse region mostly uses centralised heating, which burns fuel relatively efficiently. In rural areas, however, schools are remote and scattered, some have few teachers and students, and centralised heating is impractical. The heating equipment in classrooms is outdated, and in winter plastic sheeting is sometimes even used to block the north-facing windows against the cold wind. The main heating methods for rural schools are stove heating and coal-fired boilers; schools using stoves account for about half of those surveyed, as shown in Figure 1. The direct-combustion thermal efficiency of stove heating is low and the heat is distributed unevenly: the temperature near the stove is relatively high, while areas far from it stay cold. Moreover, the coal smoke produced by stove heating leads to smoky rooms and poor indoor air quality, affecting students' health. The winter classrooms of Liuxiang Central Primary School in Lazi County, Shigatse City, are stove-heated; according to the students, indoor air quality in winter is poor, with a foul smell that sometimes causes discomfort. Where economic conditions are better, coal-fired boilers are also used for heating. These makeshift boilers are designed without professional guidance, which poses considerable risks; their thermal efficiency is very low, so heating energy consumption is high, as shown in Figure 2. Through on-site visits, this study surveyed the schools' energy consumption patterns, heating forms, and school types. It found that the main heating methods for rural schools in the Shigatse region are centralised heating with coal-fired boilers and separate stove heating, used in roughly equal proportions; schools housed in single-storey buildings generally heat each classroom with its own stove, while multi-storey school buildings generally use coal-fired boilers for collective heating of the whole school.
Heating Energy Consumption Analysis
From Table 1 it can be concluded that: (a) all of the surveyed schools use traditional energy sources, and the main energy source for winter heating in rural schools is still coal, used by 88.9% of the surveyed schools, with some schools using crop-straw firewood as auxiliary fuel; Shigatse village and township schools generally use centralised boiler heating or separate coal stoves in classrooms; (b) winter heating is the largest component of total school energy consumption, so reducing heating energy use while improving classroom comfort is of crucial importance for energy conservation, emission reduction, and the promotion of clean energy.
Analysis of the Enclosure Structure of the Teaching Building
The building envelope separates the external environment from the indoor space. In winter it prevents cold air from entering and heat from escaping, improving indoor comfort. The envelope mainly comprises the exterior walls, doors and windows, and the roof. The energy consumed through envelope heat transfer accounts for about 50% of heating and air-conditioning energy use [2]. Strengthening the thermal performance of the envelope therefore not only reduces winter heating energy consumption but also helps keep the indoor temperature comfortable. The survey found that the envelopes of school buildings in Shigatse villages and towns are generally built to low standards and do not meet energy-efficiency requirements.
Current Situation of Exterior Walls
The exterior wall is the most important component of the building envelope. According to statistics, heat loss through the exterior walls accounts for 40% of the total envelope energy loss, so strengthening the insulation performance of external walls plays a very important role [3].
The survey statistics show that all of the surveyed school buildings have solid brick external walls, as shown in Figure 3, and 77.8% of the school building walls have no insulation layer. Of the 27 schools surveyed, 6 teaching buildings use 240 mm solid brick walls plastered on both sides, accounting for 22.2% of the total, and 14 use 370 mm solid brick walls plastered on both sides, accounting for 51.9%, all without insulation. School buildings with an insulation layer account for 25.9% of those surveyed, all of them teaching buildings newly built after 2000. According to Table 2, the average heat transfer coefficients of the 240 mm and 370 mm solid clay brick walls are 2.03 and 1.53 W/(m²·K), respectively, far above the limits in the Design Standard for Energy Efficiency of Public Buildings. Because of the poor insulation of the external walls, cold-air infiltration is severe in winter, the indoor temperature is low and the humidity high, and condensation and mould appear on the internal wall surfaces, as shown in Figure 4.
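To put the quoted coefficients in context, the sketch below estimates the heat transfer coefficient (U-value) of an uninsulated, plastered solid brick wall from its layer thicknesses; the thermal conductivities and surface resistances are assumed textbook values rather than measurements from the surveyed schools, so the output only approximately reproduces the 2.03 and 1.53 W/(m²·K) figures cited above.

```python
# Minimal sketch: U-value of an uninsulated, plastered solid brick wall.
# Material conductivities and surface resistances are assumed textbook values,
# not data measured on the surveyed buildings.

LAMBDA_BRICK = 0.81    # W/(m.K), assumed for solid clay brick
LAMBDA_PLASTER = 0.87  # W/(m.K), assumed for cement-lime plaster
R_SI, R_SE = 0.11, 0.04  # m2.K/W, assumed internal/external surface resistances

def u_value(brick_m: float, plaster_m: float = 0.02) -> float:
    """U = 1 / (R_si + sum(d/lambda) + R_se) for a wall plastered on both sides."""
    r_total = R_SI + brick_m / LAMBDA_BRICK + 2 * plaster_m / LAMBDA_PLASTER + R_SE
    return 1.0 / r_total

for thickness in (0.24, 0.37):  # the two wall types found in the survey
    print(f"{int(thickness * 1000)} mm wall: U ~ {u_value(thickness):.2f} W/(m2.K)")
```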
Current situation of exterior doors and windows
The survey found three common forms of external window in Shigatse village and township schools: single-glazed wooden windows, single-glazed steel windows, and PVC plastic-steel windows, as shown in Figure 5. The most common form is the PVC plastic-steel window, used by 45.8% of the surveyed schools, as shown in Figure 6. Most doors are sheet-iron; there are also some plywood and glass doors, as shown in Figure 7. In recent years the windows of some renovated and newly built rural schools have been replaced with plastic-steel windows, which greatly improves the airtightness of doors and windows, reduces winter cold-air infiltration, and improves the indoor thermal environment.
Reasons for the Current Situation of the Enclosure Structure
(a) The construction level is low. The construction technology of rural schools in the Shigatse region is relatively backward, the skills of construction personnel are relatively poor, construction equipment is outdated, and the procedures for supervising construction quality are not sound. Village and township schools lack dedicated construction teams, and most are built by local farmers in their spare time; without a dedicated supervision and quality-management department, project quality is hard to guarantee. Mortar joints are often not fully filled during wall masonry, and a relatively large proportion of village and township school buildings in the Shigatse region have exposed (fair-faced, un-plastered) brick walls, so cold-air infiltration in winter makes heating energy consumption very high. In addition, the manufacturing quality of components and fittings is relatively low, and the doors and windows produced cannot meet the airtightness requirements of the specifications [4].
(b) The enclosure structure is badly deteriorated. Village and township school buildings in the Shigatse region are mostly built to low standards: the thermal resistance of the envelope walls is generally low, mortar joints on the walls fall out, cold-air infiltration is large, the insulation of floors and roofs is poor, and the doors and windows chosen have poor insulation and airtightness. Some doors and windows are also severely damaged, with cracked glazing seals and aged, deformed frames, all of which further degrade the insulation performance of the envelope [5]. Although low-standard construction saves cost at the building stage, it increases winter heating costs and overall annual energy consumption; in the long run it is very uneconomical.
Survey of Indoor Comfort in Teaching Buildings
To understand the indoor comfort of winter classrooms and the satisfaction of teachers and students with classroom heating, a questionnaire survey was carried out; the results are shown in Figure 8. 96.5% of teachers and students consider the classrooms cold, and almost all consider them coldest in the morning. 21.1% often feel cold draughts around the windows and 66.7% sometimes do, so measures to reduce cold-air infiltration should be strengthened. Only 10.5% of teachers and students are fairly satisfied with winter classroom heating; the great majority are not, and 84.2% believe the heating should be renovated. Overall, 73.7% of teachers and students are dissatisfied with the indoor comfort of the classrooms, while 77.2% consider indoor temperature, humidity, and air quality all important; these three aspects should therefore be improved to meet teachers' and students' requirements for indoor comfort.
Conclusion
Through field visits and questionnaire surveys, this study found that the heating conditions, the share of clean energy, and the current energy consumption of school buildings in the Shigatse region are far from satisfactory, with many problems: (a) The envelopes of the teaching buildings are mostly 240 mm or 370 mm solid brick walls without external insulation, and the exterior walls, doors, and windows do not meet airtightness requirements. In winter, cold-air infiltration is severe and the heat lost through the envelope is large; classroom temperatures are extremely low, the indoor walls freeze and grow mouldy, and students' hands are often red from the cold, which affects their physical and mental health and their learning efficiency.
(b) Owing to limited conditions, collective heating is not practical in rural Shigatse, and schools generally use one of two heating methods: stove heating or coal-fired boiler heating. Both have low thermal efficiency and high energy consumption; in stove-heated classrooms in particular, the doors and windows are kept tightly closed in winter, and the smoke from the stoves leads to poor indoor air quality and low comfort, which may seriously affect students' health.
(c) The energy used by the schools is mainly conventional coal, with crop-straw firewood as an auxiliary fuel; the energy mix is very narrow and the share of clean energy is extremely low.
Supported Project Name
"Investigation and Research on the Promotion and Application of New Energy in Schools in Tibet Region"
Figure 1. Winter heating methods for classrooms in Shigatse schools
Figure 2. Common heating methods for schools in the Shigatse region
Figure 3. Survey questionnaire results on the form of classroom exterior walls
Table 2. Limits on the heat transfer coefficient of the outer envelope structure: external walls (including non-transparent curtain walls) - K = 0.5 for body shape coefficient ≤ 0.3; K = 0.4 for 0.3 < body shape coefficient ≤ 0.4
Figure 6. Survey results on the forms of classroom external windows
Figure 8. Survey questionnaire results on classroom temperature
Table 1. Statistics of energy consumption in the surveyed schools in the Shigatse region | 3,003 | 2023-11-21T00:00:00.000 | [
"Environmental Science",
"Engineering"
] |
Evaluation of water states in thin proton exchange membrane manufacturing using terahertz time-domain spectroscopy
Introduction
Proton exchange membrane fuel cells (PEMFCs) are hydrogen-fuelled electrochemical devices for clean energy conversion. Given their distinctive characteristics, such as low-temperature operation, high power density, and compactness, PEMFCs are recognised as a promising zero-emission power source for portable, mobile, and stationary applications [1]. PEMFCs incorporate a solid electrolyte, the proton exchange membrane (PEM), which selectively conducts protons and water between the electrodes while preventing electron transport and reactant mixing. The most common electrolyte materials for PEMFCs are perfluorinated sulfonic-acid ionomers (PFSAs) [2], a class of synthetic ion-conducting polymers whose chemical structure consists of a chemically inert, hydrophobic polytetrafluoroethylene backbone with side chains terminated by hydrophilic sulfonate groups. Proton conduction in PFSAs depends strongly on the degree of hydration [3], and the conduction mechanisms, such as Grotthuss hopping, electro-osmosis, and back diffusion, are related to the nature of the water present [4]. Depending on the degree of hydrogen bonding to the polymer's hydrophilic sulfonic groups, three main water states are distinguished: bound water (strongly hydrogen-bonded and predominantly bound to the hydrophilic domains containing the sulfonate groups [4,5]), bulk water (weakly hydrogen-bonded, exhibiting cooperative reorganisation of hydrogen bonds and the least interaction with the polymer backbone [5,6]), and free water (not hydrogen-bonded [7,8]). An increase in bulk water is generally associated with the formation of channels of weakly hydrogen-bonded water that can link isolated hydrophilic domains containing hydrated protons and sulfonate ions into a continuous network; these channels in turn promote water diffusion and enhance proton conduction [9,10]. To reduce overall transport resistances and improve PEMFC performance, there has been a growing focus on decreasing membrane thickness while incorporating inorganic fillers or additives (e.g. SiO2 [11], TiO2 [12], ZrO2 [13], clays [14] and zeolites [15]) and reinforcements (e.g. expanded polytetrafluoroethylene (ePTFE) [16]) to produce chemically and mechanically stable composite membranes. These modifications are, however, known to affect membrane water properties [16-19], so understanding this performance-durability trade-off is crucial for optimisation.
Various techniques have been used to probe hydration in PFSA-based PEMs, for example gravimetric dynamic vapour sorption (DVS) [20], neutron scattering and imaging [21-24], microwave dielectric relaxation spectroscopy [4], Raman spectroscopy [25], Fourier-transform infrared spectroscopy and its variants [10], differential scanning calorimetry (DSC) [26-28], nuclear magnetic resonance spectroscopy [29,30], X-ray scattering [31] and, more recently, terahertz time-domain spectroscopy (THz-TDS) [6,32,33]. The terahertz portion of the electromagnetic spectrum (between 0.1 and 3 THz) is of interest because the dielectric response of water in this frequency range carries information on reorientation dynamics: the bulk water relaxation, whose peak lies at approximately 20 GHz [4], and the free/fast water relaxation [6,34]. THz-TDS is an efficient technique for the coherent generation and detection of broadband terahertz radiation: a femtosecond pulsed near-infrared laser is focused onto a terahertz emitter (a semiconductor photoconductive antenna or a nonlinear crystal), where each optical pulse excites a sub-picosecond terahertz pulse with a bandwidth spanning from several hundred GHz to a few THz. The emitted terahertz pulses interact with the sample, and the resulting terahertz electric fields are measured by a coherent detection scheme based on either photoconduction or electro-optic sampling. The advantage of this approach is that both the amplitude and the phase of the terahertz pulse are resolved with excellent signal-to-noise ratio, from which the sample's dielectric response can be extracted as a complex refractive index, an intrinsic material property that comprises both the refractive index and the absorption coefficient over the terahertz range. Advances in THz-TDS have opened up many industrial applications [35-39]; compared with the methods above for PEM testing, THz-TDS probes molecular water states non-destructively, without specialised sample preparation and without physical contact, producing data consistent with microwave dielectric relaxation spectroscopy [32] and with water retention measurements [6]. Because of Fabry-Pérot reflections, however, the acquired waveforms must be analysed with numerical optimisation techniques [40-46] for parameter extraction, and the dielectric response of hydrated PEMs is then fitted with a double Debye model to quantify the water contributions [5,6,32,47,48]. Since prior work has focused on Nafion 117 (160-180 μm thickness) [6,32], in this study we use THz-TDS to evaluate the water content of industrially relevant thin membranes (13-70 μm) prepared under different processing conditions and reinforcement loadings. In particular, we propose a parametric algorithm based on the double Debye model [6] to analyse our measurements, and we validate the analysis against the literature [33] and complementary DVS measurements. By tracking the contributions of the relaxations, we examine the bulk, free, and bound water contributions during water desorption for membranes prepared under various processing conditions.
Materials
The membranes used include commercial Nafion 117, 212, and 211 (Fuel Cell Store, TX, USA) and ionomers A and B (Johnson Matthey, UK), which were prepared under different conditions (heat application, temperature, and heat-treatment duration), as summarised in Table 1, and with different proportions of ePTFE reinforcement relative to a fixed ionomer equivalent weight, i.e. effective equivalent weight (EEW).
A small design of experiment (DoE) was performed to systematically study the effect of process parameters, as summarised in Table 2. In particular, we investigated the application of heat treatment, its duration and temperature, and the method of heat delivery; owing to the commercially sensitive nature of the treatment, only limited details can be disclosed. Membrane thicknesses were measured with a confocal microscope (Olympus LEXT OLS5000) using a 400-420 nm laser, assuming a refractive index of 1.36 [49]. After removing the protective films, the samples were thoroughly rinsed with deionised water (resistivity 18 MΩ cm) prior to measurement to remove impurities. For the desorption measurements, the JMFC samples were hydrated from water vapour inside a home-made glass hydration chamber at 100% relative humidity (RH) for 24 h; this method of hydration minimises surface water on the sample, which can cause significant measurement uncertainty [6,50]. For direct comparison with the literature [6,33], the Nafion samples were soaked in DI water for 24 h and the excess surface water was removed with lint-free wipes. With the exception of the Nafions, three repeats were measured for each of the ionomer specimens listed in Table 2.
Experimental setup
Dynamic vapour sorption
DVS measurements were performed with a commercial analyser (Q5000SA, TA Instruments). Approximately 5-10 mg of the same sample used for THz-TDS was placed in an open, metal-coated quartz pan. The sample was equilibrated at 26 °C and 90% RH and an isotherm was recorded for 300 min; the RH was then set to 80% and the change in weight was monitored for another 30 min. Weight profiles were normalised at the end of the isothermal step and their rate of change was calculated, the peak representing the maximum rate of change corresponding to the maximum driving force from 90 to 80% RH. The water adsorption isotherms were fitted with Park's multimode adsorption model [51] to extract the water populations associated with Langmuir adsorption at low water activity, non-specific adsorption following Henry's law, and clustering at high water activity. Fitting details can be found in the SI.
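As an illustration of the isotherm fitting described above, the sketch below fits the commonly used three-term form of Park's multimode model (Langmuir + Henry's law + clustering) to a water-sorption isotherm; the functional form and the placeholder data are assumptions for illustration and are not taken from this study's DVS results.

```python
# Minimal sketch: fitting Park's multimode sorption model to a water
# adsorption isotherm (uptake vs. water activity). The data are synthetic
# placeholders, not the DVS data of this study.
import numpy as np
from scipy.optimize import curve_fit

def park_model(a, A_L, b_L, k_H, K_a, n):
    """Langmuir + Henry's-law + clustering contributions to water uptake."""
    langmuir = A_L * b_L * a / (1.0 + b_L * a)
    henry = k_H * a
    clustering = K_a * n * a**n
    return langmuir + henry + clustering

# placeholder isotherm: water activity vs. uptake (g water / g dry polymer)
activity = np.linspace(0.05, 0.95, 10)
uptake = park_model(activity, 0.02, 15.0, 0.05, 0.03, 5.0)  # synthetic example

popt, _ = curve_fit(park_model, activity, uptake,
                    p0=[0.01, 10.0, 0.05, 0.01, 4.0], maxfev=10000)
print(dict(zip(["A_L", "b_L", "k_H", "K_a", "n"], popt)))
```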
Terahertz time-domain spectroscopy
We performed transmission terahertz spectroscopy using a commercial THz-TDS setup (TERA K15, Menlo Systems, Germany), as shown in Fig. 1. The hydrated Nafions and Ionomers A were measured during a 25 min water desorption process at ambient conditions (T = 26 °C, RH = 41%), while Ionomers B were measured under the same conditions for 15 min because of their reduced thickness. Terahertz waveforms were recorded at 1 min intervals from 5 averages. Oven-dried membranes (60 °C for 24 h) were also measured with THz-TDS. As a standard routine for all measurements, a reference measurement without the sample was acquired immediately before each sample measurement to remove potential baseline drift. The acquired time-domain waveforms of the terahertz electric field for both the sample and the reference were then converted to the frequency domain by fast Fourier transformation.
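A minimal sketch of this processing step, assuming equally spaced time samples of the measured electric fields, is given below; the waveform arrays are placeholders for the recorded reference and sample pulses.

```python
# Minimal sketch: converting recorded time-domain THz pulses into a
# frequency-domain transfer function H(w) = E_sample / E_reference.
import numpy as np

def transfer_function(e_sample, e_reference, dt):
    """Return frequency axis (THz), amplitude ratio and unwrapped phase of H."""
    spec_s = np.fft.rfft(e_sample)
    spec_r = np.fft.rfft(e_reference)
    freqs_thz = np.fft.rfftfreq(len(e_reference), d=dt) * 1e-12  # Hz -> THz
    H = spec_s / spec_r
    return freqs_thz, np.abs(H), np.unwrap(np.angle(H))

# placeholder pulses: a reference pulse and a delayed, attenuated "sample" pulse
dt = 50e-15                                    # assumed 50 fs sampling step
t = np.arange(0, 60e-12, dt)
ref = np.exp(-((t - 5e-12) / 0.3e-12) ** 2)
sam = 0.6 * np.exp(-((t - 6e-12) / 0.3e-12) ** 2)
freqs, amplitude, phase = transfer_function(sam, ref, dt)
```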
Analysis algorithm
To estimate the macroscopic water content of the PEMs, we use the method of Ref. [6], in which a hydrated membrane is represented by an equivalent model consisting of a dry membrane plus a water layer of uniform thickness. From the Beer-Lambert law, which relates the attenuation of light across the sample to its material properties, the equivalent water-layer thickness can be calculated according to Equation (1) [6], where E_ref(ω) and E_hyd(ω, t) are the frequency-dependent fast Fourier transforms of the terahertz time-domain pulses of the reference (free space) and the sample (hydrated membrane), respectively, α_w(ω) and α_m(ω) are the absorption coefficients of water and of the PFSA sample, respectively, and d_m is the sample thickness. The time-dependent macroscopic water content on a weight basis, WC(t), is estimated using Equation (2) [6], where ρ_w and ρ_m are the densities of water (1 g/cm³) and Nafion (1.94 g/cm³ [6]), respectively. To process measurements on thin membranes, the proposed parametric algorithm models the total electromagnetic wave E_s(ω) transmitted through a dielectric slab of complex refractive index ñ_s = n_s(ω) - ik_s(ω) at normal incidence in free space using the plane-wave approximation of Equation (3) [36], where E_s(ω) and E_r(ω) are the Fourier transforms of the sample and reference signals, respectively, Ĥ(ω) is the transfer function, ω is the angular frequency, n_0 is the refractive index of air, c is the vacuum speed of light, and d is the sample thickness. FP(ω) is the Fabry-Pérot term arising from multiple reflections inside the slab, given by Equation (4) [36]. It should be noted that when the sample is sufficiently thick, the Fabry-Pérot reflections are temporally separated from the main pulse in the time domain and can be removed with a time-windowing function, yielding an approximate solution for ñ_s [52]. Iterative methods can also extract the optical parameters by minimising the error between the modelled transfer function Ĥ(ω) and the measured transfer function [40-46]. This error, commonly known in optimisation as the objective function g(w) [53], is minimised; a simple least-squares form is reproduced after this paragraph. The modelled transfer function is evaluated at intermediate values of the complex refractive index, which start as initial guesses and gradually converge to the actual values. This approach accounts for the Fabry-Pérot term of Equation (4) and can therefore additionally determine the sample thickness. Without a priori material information, however, an iterative solver can produce solutions with discontinuities and non-physical artefacts caused by multi-modal solutions. To overcome these difficulties, assumptions are made about the material's dielectric properties.
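The expressions for Equations (3) and (4) and for the objective function did not survive into this text; the standard plane-wave forms they describe, given here as hedged reconstructions consistent with the surrounding definitions rather than verbatim copies, are

\[
\hat H(\omega) = \frac{E_s(\omega)}{E_r(\omega)}
= \frac{4\,\tilde n_s n_0}{(\tilde n_s + n_0)^2}\,
\exp\!\left[-\,i\,(\tilde n_s - n_0)\,\frac{\omega d}{c}\right] FP(\omega),
\]
\[
FP(\omega) = \left[\,1 - \left(\frac{\tilde n_s - n_0}{\tilde n_s + n_0}\right)^{\!2}
\exp\!\left(-\,2\,i\,\tilde n_s\,\frac{\omega d}{c}\right)\right]^{-1},
\qquad
g(\mathbf{w}) = \sum_{\omega}\bigl|\hat H_{\mathrm{model}}(\omega;\mathbf{w}) - \hat H_{\mathrm{meas}}(\omega)\bigr|^{2},
\]

where the objective function g(w) is written in its simplest least-squares form; the actual implementation may weight or split amplitude and phase differently.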
Specifically, the complex refractive index is assumed to follow a dispersion model such as a Lorentz or Drude model; such approaches are commonly known as parametric methods [53,54]. This is exploited here for hydrated membranes, which follow a double Debye response [32,33] given by Equation (6), where ε∞ is the infinite-frequency dielectric constant and Δε1, Δε2 and τ1, τ2 are the dielectric strengths and relaxation times of the two Debye processes. The complex refractive index is related to the complex permittivity ε(ω) = ε′(ω) - jε″(ω) via Equations (7) and (8). For the membrane in the dry state, the dielectric response follows a single Debye model [55], with the fits shown in the SI. In all fits against measurement in this work, the real and imaginary parts are fitted simultaneously by nonlinear least squares. To optimise the objective function, a derivative-free particle-swarm solver is used for bounded global optimisation, with initial values taken from the literature [33] and the thickness from the confocal microscope measurements; the search range of all variables was set to ±15% of the initial values. A flowchart of the algorithm and further details can be found in the SI. From the extracted dielectric parameters and the macroscopic water content, the time-dependent proportions of bulk (f_bulk), bound (f_bound), and free (f_free) water in the hydrated membranes during desorption [6] are estimated using Equations (9), (10), and (11) [6].
Here Δε1,bulk and Δε2,bulk are the dielectric strengths of pure water [56] and C_0 is the concentration of pure water (55 mol/L). The time-dependent density of the hydrated membrane, ρ_wm(t), and the molecular concentration of water in the hydrated membrane, C_H2O(t), are obtained from Equations (12) and (13), where M_w is the molecular weight of water (18 g/mol).
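As a compact illustration of the double Debye fitting step described above, the sketch below fits an Equation (6)-type response to a complex permittivity spectrum by simultaneous nonlinear least squares on the real and imaginary parts; a simple local least-squares solver and a synthetic spectrum stand in for the bounded particle-swarm optimiser and the measured data used in the study.

```python
# Minimal sketch: double Debye fit to a complex permittivity spectrum.
# Synthetic placeholder data; a local least-squares solver replaces the
# particle-swarm optimiser described in the text.
import numpy as np
from scipy.optimize import least_squares

def double_debye(omega, eps_inf, d_eps1, tau1, d_eps2, tau2):
    return (eps_inf
            + d_eps1 / (1.0 + 1j * omega * tau1)
            + d_eps2 / (1.0 + 1j * omega * tau2))

def residuals(p, omega, eps_meas):
    model = double_debye(omega, *p)
    # fit real and imaginary parts simultaneously
    return np.concatenate([model.real - eps_meas.real,
                           model.imag - eps_meas.imag])

freq = np.linspace(0.2e12, 1.0e12, 200)       # 0.2-1.0 THz
omega = 2 * np.pi * freq
true_p = [2.5, 2.0, 8e-12, 1.5, 0.2e-12]       # placeholder parameters
eps_meas = double_debye(omega, *true_p)

fit = least_squares(residuals, x0=[2.0, 1.0, 5e-12, 1.0, 1e-13],
                    args=(omega, eps_meas),
                    bounds=([1.0, 0.0, 1e-13, 0.0, 1e-14],
                            [5.0, 10.0, 5e-11, 10.0, 1e-12]))
print(dict(zip(["eps_inf", "d_eps1", "tau1", "d_eps2", "tau2"], fit.x)))
```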
Algorithm validation
To test the algorithm, we compared the analysis results for Nafion 117 against the literature [6]: the extracted values of Δε2, ε∞, and τ2 are in close agreement, with Δε1 being higher owing to differences in the amount of surface water at t = 0 and in the relative humidity of the environment. The extracted thickness of 172 μm agrees closely with the measured thickness of 170 μm. As THz-TDS is essentially a high-frequency extension of microwave dielectric relaxation spectroscopy, our extracted dielectric constants are also comparable with those reported for Nafion with low water content [32], which were previously validated against the low-frequency regime [4]. We then applied the algorithm to the Nafion 212 and 211 measurements, with approximate thicknesses of 50 μm and 25 μm, respectively; the fitted results are shown in Table 3. Small deviations arise from high-frequency noise at frequencies above 1 THz. It should be noted that analysis of these thin-membrane measurements using a non-parametric approach did not converge.
Water content (WC)
Fig. 2 shows the terahertz-estimated WC for all samples, where exponential decays are observed, consistent with earlier work [20,50], with the exception of Nafion 117, whose drying lies outside the measurement time window [33]. Ionomer A6, which received no heat treatment, also has the highest water uptake, above 50% relative to the ionomer equivalent weight. As expected, thinner membranes dry faster than thicker ones. Because of uncertainties in the initial water content (t = 0) arising from surface-water removal and sample mounting time, some water loss can be expected before the first measurement; the desorption profiles are therefore characterised by the desorption rate, taken as the slope of the desorption curve over the first minute. In particular, we compare against DVS data at RH 90-80% for Ionomers A1-A6 in Fig. 3, where a linear correlation is observed. The 80% point was chosen over 40% because the 80% and 40% DVS data are themselves correlated (see SI), the instrument recording with sufficient fidelity at the lower driving force, which gives a lower rate of change. It should be noted that the DVS data represent the maximum desorption rate from saturation, whereas the initial water content in the terahertz data will always be lower because of the uncertainties inherent in the measurement configuration. Owing to the time-consuming nature of each DVS measurement (~1.5 days per sample), only a subset of ionomers was tested by DVS. Further details on the DVS data are available in the SI.
Fig. 4 summarises the effect of the processing conditions investigated in the DoE on the desorption rate, where the results for Ionomers A1, A2, and A6 are consistent with DVS (Fig. 2). As expected, Ionomer A6, which received no heat treatment, has a higher desorption rate than the heat-treated Ionomer A2 because more water is present; higher water content in turn promotes the formation of water channels, increasing water diffusion across the membrane. In contrast, heat treatment improves the mechanical properties of the membrane at the expense of a morphology change that collapses the water channels, leading to decreased water sorption, retention, and ultimately diffusion [17-19,57]. A similar trend is observed for both ionomer types when the duration of heat treatment is considered. An increase in desorption is, however, observed with increasing treatment temperature for Ionomers B1-3, contrary to expectation [58]; this is possibly due to the different approach used to administer heat to Ionomers B, a difference also observed between Ionomers B4 and B3. The method of heat treatment has been shown to induce shape-memory effects in Nafion [59], changing polymer properties such as water uptake, tensile modulus, and counter-elasticity [60,61]. That THz-TDS reveals such a contrast highlights the sensitivity of the technique for future investigations.
Molecular water states
Fig. 5 shows the extracted water states for all samples without heat treatment, where the following trends can be observed: bulk water generally dominates inside the membrane, followed by bound and free water, at the start of the experiment (the Nafions in particular, with a hydration number of ~12-13, contain more bulk than bound water [2,4,6]); bulk water is the main source of water loss, with bound water becoming dominant over time, marked by a crossover point at which the bulk and bound contributions are equal; and bulk water stabilises at approximately 30-40%, sufficient for proton conduction [62]. The Nafion 117 data are a notable exception to these trends because of their comparatively longer drying time, but are generally in agreement (Table 3), albeit with a lower desorption rate owing to the higher RH and temperature of that measurement [6]. Another trend is that thinner membranes generally reach this crossover sooner, which is expected: since bulk water molecules are considered to sit at the centres of the membrane pores, it is reasonable to assume that thinner membranes have comparatively less space in which bulk-water networks can form [63,64]. This crossover is also affected by the initial water content, as thin membranes dry out more quickly than thicker ones under the ambient conditions of the measurement. Fig. 6 shows the corresponding water states for the heat-treated ionomers, where trends similar to those of the untreated counterparts in Fig. 5 are observed, the key difference being the disappearance of the crossover between the bulk and bound water desorption profiles. This could be due to a lower initial bulk water uptake resulting from a greater amount of heat-induced crystalline domains in the polymer chains, giving a structure with reduced water-cluster domains [65]. Since non-specifically adsorbed water corresponds to the water with the highest mobility, which correlates with non-freezable water [51], Fig. 7 compares the non-specific adsorbing water against the terahertz bulk water contributions at t = 0 (estimated water activity ~0.8-0.9) and at t = 25 min (water activity ~0.4) for Ionomers A1-6, where similar trends can be observed. There are some discrepancies, e.g. Ionomer A5 at t = 0 and Ionomers A1 and A6 at t = 25 min, but these are small compared with the changes at t = 0 between Ionomers A1, A2 and A3-A6. These discrepancies could be due to uncertainties in membrane water activity and to the fact that THz-TDS takes a single-point measurement, whereas DVS averages over the entire membrane.
Fig. 8 summarises the desorption profiles as a function of the processing conditions, highlighting the effect of heat treatment on the water states. In particular, comparing Ionomer A6 with the heat-treated Ionomer A2 at t = 0 shows that the bulk water has been reduced. As bulk water is related to proton conduction [66], this reduction agrees with earlier work in which proton conductivity is reduced by heat treatment as the trade-off for better mechanical properties [67,68]. The data also show an opposing trend between Ionomers A and B: for example, in the wet state (t = 0) Ionomers A generally have more bulk water than Ionomers B, but the trend is reversed at the end of the measurement (t = 15 min), indicating different bulk-water retention between the ionomer types. Furthermore, longer heat duration increases the initial bulk water more in Ionomers A than in B, while temperature has a nonlinear effect on Ionomers B, with an optimum retention seen for sample B2. Only minor differences in the water states are observed under wet conditions, suggesting that the change in water content may arise from additional effects, such as hydrophobic nanometre-thick skin layers forming at the membrane-air interface, rather than from ionomer structural changes alone [69-71]. Owing to the complexity of the process, this also highlights the need to interpret water content (e.g. Fig. 3) together with other information such as the water states and complementary techniques at different length scales, e.g. neutron scattering [72,73], in order to infer comprehensively the induced morphological changes, in a manner similar to Ref. [74].
To further explore the effect of ePTFE reinforcement across all the ionomers, Fig. 9 shows the water-state contributions as a function of ePTFE proportion for Ionomers A1-A6; in particular, the EEW = 963 g/mol data point is taken as the average of Ionomers A1 and A2. Including other ionomers at this EEW would be possible but would be unlikely to change the observed trend because of the similarity of their values; using the EEW to account for ionomer dilution ensures the trend is not an artefact. Bulk water decreases with increasing ePTFE proportion, implying that the water domains are disrupted by the reinforcement, decreasing the ability of the ionomer to accommodate water, consistent with prior observations in which proton conductivities were also reduced in membranes containing hydrophobic ePTFE [75,76]. This trend holds both initially (t = 0) and at the end of the measurement (t = 15 min), supporting the water-disruption hypothesis. The magnitude of the decrease is also accentuated by the initial-water uncertainty, which becomes more dominant for thinner ionomers. Although the presented results generally agree with the independent DVS analysis and with literature findings, practitioners of the technique should be aware of the following points in addition to those raised previously [6,33,50]: 1) manual surface-water removal during sample preparation inevitably introduces measurement uncertainty, which is especially pertinent for thin membranes, so vapour humidification is recommended; 2) the exact proportions of the water states are likely a few per cent (<5%) too high, the baseline being the proportions extracted by fitting the double Debye model to a completely dry sample, so what is presented here are the relative proportions of the water states; 3) because the sample is not in a closed chamber in which the local environment can be varied, the extracted water states correspond to constant room temperature and are not directly comparable with water states extracted under other environmental conditions, e.g. at low temperature under nitrogen flow using DSC [77]. Extending the technique by placing the THz-TDS measurement inside a closed chamber to cover a range of environmental conditions is therefore the subject of future work to enable unambiguous comparisons. The data in this study demonstrate the sensitivity of THz-TDS to the molecular water states inside membranes prepared under various treatment strategies; measurements and analysis can be performed rapidly, without physical contact, using table-top instruments, opening up opportunities for practical exploration of the manufacturing parameter space in future product optimisation.
Conclusions
In this study we have demonstrated the broad applicability of THz-TDS for extracting the water states and retention properties of industrially relevant membranes, whereas previous studies focused on thicker membranes. In particular, we developed a parametric algorithm for the data analysis and validated the results against prior work and complementary measurements. Evaluating the extracted water states for membranes prepared under various heat-treatment conditions, our results generally agree with prior understanding and literature demonstrations. Compared with other approaches, THz-TDS is a highly interesting contactless, table-top characterisation technique with clear potential to complement existing methods and to open up new opportunities for rapid membrane performance testing, enabling greater material understanding for optimising the performance-stability trade-off.
Funding sources
The authors acknowledge financial support from the EPSRC (Grant No. EP/R019460/1, H2FC Supergen Flexible Grant EP/P024807/1). Additional data sets related to this publication are available from the Lancaster University data repository: https://doi.org/10.17635/lancaster/researchdata/505
Fig. 2. Desorption of macroscopic WC in all samples. The line and shaded region refer to the mean and standard deviation of three repeats. (For interpretation of the references to colour in this figure legend, the reader is referred to the Web version of this article.)
Fig. 3. Comparison of the rate of water desorption for Ionomers A1-A6 acquired using THz-TDS and DVS. The line is plotted to guide the eye.
Fig. 5. Desorption of microscopic water states in the Nafion and non-heat-treated JMFC membranes. The line and shaded region refer to the mean and standard deviation of three repeats. (For interpretation of the references to colour in this figure legend, the reader is referred to the Web version of this article.)
Fig. 9. Water states at the 0th minute (a) and 15th minute (b) for the JMFC membranes as a function of EEW.
Table 1. Range of process parameters.
Table 2. Summary of the membranes used in the study.
Table 3. Double Debye parameters at the 0th minute for the Nafions. | 5,699.4 | 2022-02-01T00:00:00.000 | [
"Engineering",
"Materials Science",
"Physics",
"Chemistry"
] |
Chemotherapy-induced cardiotoxicity: a new perspective on the role of Digoxin, ATG7 activators, Resveratrol, and herbal drugs
Cancer is a major public health problem, and chemotherapy plays a significant role in the management of neoplastic diseases. However, chemotherapy-induced cardiotoxicity is a serious side effect secondary to cardiac damage caused by the direct and indirect toxicity of antineoplastics. Currently, there are no reliable and approved methods for preventing or treating chemotherapy-induced cardiotoxicity. Understanding its mechanisms may be vital to improving survival, and the independent risk factors for developing cardiotoxicity must be considered in order to prevent myocardial damage without decreasing the therapeutic efficacy of cancer treatment. This systematic review aimed to identify and analyze the evidence on chemotherapy-induced cardiotoxicity, its associated risk factors, and methods to decrease or prevent it. We conducted a comprehensive search of PubMed, Google Scholar, and the Directory of Open Access Journals (DOAJ) using the following keywords: "doxorubicin cardiotoxicity", "anthracycline cardiotoxicity", "chemotherapy", "digoxin decrease cardiotoxicity", and "ATG7 activators", retrieving 59 articles that fulfilled the inclusion criteria. Therapeutic schemes can be changed by choosing prolonged infusion over bolus administration. In addition, agents such as Dexrazoxane can reduce chemotherapy-induced cardiotoxicity in high-risk groups. Recent research has found that Digoxin, ATG7 activators, Resveratrol, and other medical substances or herbal compounds have an effect comparable to Dexrazoxane in anthracycline-induced cardiotoxicity.
INTRODUCTION
Cancer is a significant global public health issue, causing over 10 million deaths in 2020, and is projected to surpass cardiovascular disease as the leading cause of death by 2025 to 2030 [1]. According to the World Health Organization (WHO), in 2019 cancer was the leading cause of death in 57 countries, including the US, Canada, and Europe, and the second cause of death after cardiovascular disease in 55 other countries [2,3]. Although the age-adjusted incidence rate has decreased by about 31%, the number of cancer cases continues to grow, a trend associated with an ageing population and increased survival owing to scientific advances in the early detection of cancer [4,5]. Despite the improved survival, however, cancer therapy has revealed significant cardiovascular toxicities that were previously overlooked.
Chemotherapy and radiotherapy are two of the mainstays of treatment for several types of cancer and have allowed an increasing number of patients to survive. However, their mechanisms, doses, and frequency of use to achieve remission can generate side effects, with cardiotoxicity among the most concerning [5,6]. These side effects can manifest as symptoms of heart failure and myocardial damage from the direct and indirect toxicity of antineoplastic therapy [7,8]. Because of this, cardiac function is considered a dose-limiting variable during cancer therapy, contributing to morbidity and mortality in the exposed population [9].
Cardiotoxicity encompasses various pathological manifestations at the cardiovascular level caused by oncological treatment, with heart failure being the most frequent complication linked with a 3.5-fold increase in mortality with anthracycline therapy [10]. A study on 22,643 adult survivors of childhood cancer showed an increasing prevalence of heart conditions over time, from less than 3% at the age of 20 to 8.0% in those 35 years of age [8][9][10].
Cardiovascular evaluation of chemotherapy patients, risk assessment, mitigation of cardiac harm, and monitoring of heart function before, during, and after chemotherapy should be performed. Additionally, to ensure a comprehensive approach to patient care and promote positive outcomes, multidisciplinary efforts are needed to further develop existing pharmaceuticals and devise new strategies for preventing and treating cardiotoxicity. Cardio-oncology has become a crucial field in providing comprehensive care for chemotherapy patients [11].
The study aimed to systematically review the literature on chemotherapy-induced cardiotoxicity and compare as well as identify the chemotherapies that cause cardiotoxicity along with their mechanisms and methods for decreasing or preventing it.
MATERIAL AND METHODS
We searched for relevant articles in PubMed, Google Scholar, and Directory of Open Access Journals (DOAJ) using keywords such as "doxorubicin cardiotoxicity", "anthracycline cardiotoxicity", "chemotherapy", "digoxin decrease cardiotoxicity", and "ATG7 activators". To ensure the quality and reliability of the articles, we excluded articles that were inaccessible, outdated, unreliable, or not directly related to our study's focus (e.g., articles on gene polymorphism or other side effects of chemotherapy). The selection process is illustrated in Figure 1.
Cardiotoxicity and Chemotherapy
Cardiotoxicity is a condition that occurs when chemotherapy damages the heart. It is a serious condition that requires immediate medical attention. In order to manage cardiotoxicity, medical professionals employ a combination of strategies, such as reducing the chemotherapy dosage and administering appropriate cardiac medications to patients. Early recognition and treatment of potential heart failure are crucial to ensure positive patient outcomes [11].
The Cardiac Review and Evaluation Committee defines cardiotoxicity according to the following criteria for cardiomyopathy with compromised left ventricular function: • symptoms or signs of heart failure coupled with the presence of a third heart sound, tachycardia, or both; • a decrease of at least 5% in the ejection fraction to a value below 55% with signs or symptoms present, or a decrease of at least 10% to a value below 55% without signs or symptoms [12,13]. The American Society of Echocardiography identifies cardiotoxicity as a decrease in left ventricular ejection fraction (LVEF) of more than 10 percentage points to a value below 53% (the mean reference value for two-dimensional echocardiography) [14,15].
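To make the two definitions above concrete, the following minimal sketch encodes them as a screening helper; the function name, the input fields, and the simplified handling of symptoms are illustrative assumptions, not part of either society's official tooling.

```python
def classify_cardiotoxicity(lvef_baseline, lvef_current, symptomatic):
    """Illustrative check of the two LVEF-based definitions cited above.

    lvef_baseline, lvef_current: ejection fractions in percent.
    symptomatic: True if signs/symptoms of heart failure are present.
    Returns the list of definitions that the measurements satisfy.
    """
    drop = lvef_baseline - lvef_current  # absolute drop in percentage points
    flags = []

    # CREC-style criterion: >=5-point drop to <55% with symptoms,
    # or >=10-point drop to <55% without symptoms.
    if lvef_current < 55 and ((symptomatic and drop >= 5) or (not symptomatic and drop >= 10)):
        flags.append("CREC criterion met")

    # ASE-style criterion: >10-point drop to a value below 53%.
    if drop > 10 and lvef_current < 53:
        flags.append("ASE criterion met")

    return flags


# Example: baseline 62%, current 50%, asymptomatic patient.
print(classify_cardiotoxicity(62, 50, symptomatic=False))
# -> ['CREC criterion met', 'ASE criterion met']
```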
Chemotherapy is indicated in several phases of antineoplastic treatment as neoadjuvant, adjuvant, or palliative therapy, and patients may experience a cardiotoxic event early in treatment or up to 40 years after they finish therapy. Chemotherapy-induced cardiotoxicity is therefore classified as acute or subacute, in which the cardiac damage develops from the onset of treatment and persists for several weeks after its completion, or chronic. Chronic cardiotoxicity is further divided into two stages: early, within the first year after treatment, and late, which occurs years after therapy has ended (Figure 2) [16,17].
Different drugs and chemotherapeutic agents lead to various degrees of cardiotoxicity, which could be low, moderate, and high risk according to their potency and targeting of cardiomyocytes (Table 1) [18,19].
Types of chemotherapy-induced cardiotoxicity Type I Cardiotoxicity
Type I cardiotoxicity is characterized by sudden, unexpected, and severe changes in cardiac function. Most commonly, it manifests as changes in heart rate and blood pressure as well as disturbances of heart rate and rhythm [21]. In severe cases, these changes can result in the weakening or failure of the heart altogether. Other serious effects caused by type I cardiotoxicity include palpitations, dizziness, shortness of breath, and shock. Over time, type I cardiotoxicity can lead to heart failure, characterized by a weakening of the heart's pumping power. This type of toxicity is related to the damage produced by ROS or free radicals, in which the reduction of the quinone group in the B ring of anthracyclines leads to the formation of a semiquinone radical, which is oxidized and generates ROS (e.g., superoxide) [22,23]. The damage is caused by ROS interacting with the myocardium and producing an imbalance between antioxidant mechanisms and pro-inflammatory substances; the myocardium is predisposed to damage by the reduction of glutathione peroxidase, which is affected by the use of these drugs [24]. This process is catalyzed by the interaction of Doxorubicin with a ferric iron complex, which causes more ROS to be produced and helps convert ferrous iron into ferric iron, damaging the endoplasmic reticulum and cell membranes and lowering intracellular calcium and contractility [24][25]. Histamines, TNF-alpha (TNF-α), and interleukin 2 are then released as inflammatory cytokines. Dilated cardiomyopathy and β-adrenoceptor dysfunction are brought on by these cytokines. Topoisomerases have also been linked to the toxicity of anthracyclines in addition to oxidative stress [26]. Doxorubicin's anticancer effect is explained by the establishment of a ternary complex, Top2a-doxorubicin-DNA, with one of the topoisomerase isoenzymes. These alterations have a connection to the induction of apoptosis [25].
The key point is that this group of drugs causes early diastolic and late systolic dysfunction as well as dose-dependent (cumulative) myocyte damage [25]. The American National Cancer Institute defines anthracycline cardiotoxicity as an absolute reduction in LVEF below 50% or a 10% decline in LVEF from the original value, regardless of symptoms or indicators of heart failure. This definition takes into account the cardiotoxic effects outlined above and, together with the risk classification carried out for each patient, has shifted the official indication toward sequential, planned clinical-echocardiographic follow-up [26].
Type II Cardiotoxicity
Type II (trastuzumab-like) cardiotoxicity, also known as the "trastuzumab effect", is associated with reversible heart damage that permits functional recovery and, if necessary, a resumption of the regimen. This is possible because myocytes do not undergo any ultrastructural alterations [27,28]. Trastuzumab prevents human tumor cells that overexpress the HER2 protein from proliferating [29]. It interacts with the extracellular domain of HER2, a transmembrane receptor tyrosine kinase that functions as a proto-oncogene and is connected to the control of cell development. HER2 is a member of the epidermal growth factor receptor family; its overexpression, found in about 25% of breast cancers, is associated with a poor prognosis [30]. In the heart, it is connected to neuregulin, a peptide ligand for HER3 and HER4 that, when bound to HER4, promotes heterodimerization with HER2 and phosphorylation, as well as the activation of several signaling pathways. This boosts the survival and contractile capabilities required for the formation and survival of cardiac myocytes [30] by increasing cell contact and mechanical coupling. Trastuzumab exposure can potentially cause cardiac dysfunction through many molecular processes connected to apoptosis [30,31]. It is crucial to remember that the initial cardio-depressant impact is brief and reversible when the medicine is stopped and that left ventricular ejection fraction (LVEF) recovery takes around a year. Its occurrence varies depending on the risk variables involved; for instance, it ranges from 5 to 30% when administered alone or in combination with anthracyclines. Additionally, this incidence rises with age, a history of cardiovascular illness, and radiation therapy (RT) or chemotherapy (CTX) use in the past, all of which are substantial risk factors for cardiotoxicity. As a result of stricter risk factor monitoring and efforts to avoid the concurrent use of anthracyclines, cardiotoxicity in this group has decreased [32].
Taxoids
Taxoids such as paclitaxel and docetaxel have a variety of cardiotoxic consequences after treatment. Paclitaxel may cause bradycardia or tachyarrhythmias, atrioventricular and bundle branch blocks, cardiac ischemia, and hypotension in certain people. This appears to be a consequence of a direct chronotropic action on the Purkinje system. The formulation of paclitaxel used in clinical settings contains an adjuvant vehicle, which is also linked to hypersensitivity responses, cardiotoxicity, and nephrotoxicity. Atrial fibrillation, severe coronary artery disease, congestive heart failure, and unstable angina are risk factors for cardiac toxicity [33].
Fluorouracil
Fluorouracil (5-FU) is a widely used chemotherapy drug for cancer treatment, but it can cause severe cardiac complications, making it one of the most cardiotoxic drugs [34]. The exact mechanism that causes 5-FU-induced cardiotoxicity is not fully understood, but it is thought to be linked to the inhibition of thymidylate synthase, an enzyme that plays a vital role in DNA synthesis [34].
Cyclophosphamide
Current research on the molecular underpinnings of cyclophosphamide-mediated heart injury may help develop better preventative measures for treating cardiotoxicity. It has been demonstrated that cyclophosphamide treatments suppressed the expression of carnitine palmitoyltransferase-I and heart-type fatty acid-binding proteins in cardiac tissues. Cardiomyopathy results from blocking these pathways, which decreases the amount of adenosine triphosphate produced and causes harmful intermediates from fatty acid oxidation to accumulate. As an early indicator of chemotherapy-induced cardiotoxicity, heart-type fatty acid-binding protein may be employed. Monitoring blood and urine carnitine levels is critical because carnitine shortage can worsen cardiotoxicity. Supplemental carnitine demonstrated positive results in a number of cyclophosphamide-induced toxicities [33,34]. Myocarditis and heart failure (although less common) appear during the first weeks of treatment. A decrease in systolic function also occurs in some patients and, as with anthracyclines, intracellular concentrations of ROS increase [33][34][35].
Cisplatin
The mechanism of cisplatin-induced cardiotoxicity is not fully understood, but it is believed to involve the generation of reactive oxygen species (ROS) and the subsequent damage to cardiac cells. Cisplatin was also found to cause damage to the blood vessels in the heart, leading to heart dysfunction. Additionally, cisplatin causes an alteration in the electrolyte balance which can lead to arrhythmias and cardiac dysfunction. The incidence of cisplatin-induced cardiotoxicity varies depending on the dose and duration of treatment, as well as the patient's age and underlying cardiac conditions. The most common symptoms of cisplatin-induced cardiotoxicity are chest pain, dyspnea, and arrhythmias. In severe cases, cisplatin can cause myocardial infarction and heart failure [8]. Cisplatin can cause severe electrolyte disturbances like hypokalemia, hypocalcemia, and hypomagnesemia, which may trigger abnormal cardiac rhythm [36].
Bevacizumab
Thromboembolic events, hypertension, problems with wound healing, and other cardiovascular complications are side effects of Bevacizumab. Higher bevacizumab doses (10-15 mg/kg) are linked to an increased risk of several adverse effects, including thrombosis. According to a meta-analysis, thromboembolism was shown to occur in 11.9% (6.8%-19.9%) of patients with diverse malignant diseases, both at the higher dose (relative risk [RR], 1.33; 95% confidence interval [CI], 1.13-1.56; p=.007) and at 5 mg/kg per week (RR, 1.31; CI, 1.02-1.68; p=.04). In a separate trial, non-small cell lung cancer (NSCLC) patients receiving first-line treatment experienced fatal pulmonary hemorrhage at both dose levels. The current label prohibits its use in NSCLC with squamous cell histology. Combination therapy with Bevacizumab increased the frequency of treatment-related deaths (15 patients versus 2 patients; p=.001) compared to chemotherapy alone. Decreased left ventricular ejection fraction was another side effect of long-term Bevacizumab use, although chemotherapy may also have contributed to this [37].
Pertuzumab
Pertuzumab is a humanized antibody that complements trastuzumab by binding to the HER2 receptor at a distinct region. Two different treatment plans with and without Pertuzumab were contrasted by researchers [38]. Compared to the group getting Trastuzumab, Docetaxel, and placebo, the Pertuzumab group had a reduced risk of left ventricular dysfunction (6.6% vs. 8.6%). Pertuzumab added to trastuzumab was recently studied in the Adjuvant Pertuzumab and Trastuzumab in Early HER2-Positive Breast Cancer trial (APHINITY), which sought to assess its efficacy. Compared to the placebo group, the proportion of heart failure was reduced in the Pertuzumab group [38].
Interferons alpha
Interferons are cytokines produced from leukocytes (INF-α), fibroblasts (INF-β), and T lymphocytes (INF-γ). INF-α increases the expression of neoplastic antigens on the cell membrane surface to be recognized by the immune cells [21,39]. The use of INF-α can cause arrhythmias ranging from atrial fibrillation to ventricular fibrillation in 20% of patients, and their chronic use can lead to dilated cardiomyopathy [21,39].
Risk factors for the development of cardiotoxicity from chemotherapy
Age (children and adults over 65 years old), previous cardiovascular disease, prior radiotherapy (mainly mediastinal), metabolic alterations, and hypersensitivity increase the risk of chemotherapy-induced cardiotoxicity [39]. African Americans are also more susceptible to chemotherapy-induced cardiotoxicity even after adjusting for confounding factors [39]. Likewise, pharmacogenetics can recognize patients with a high risk for chemotherapy-induced cardiotoxicity [40].
Monitoring and diagnosis of cardiotoxicity
A baseline cardiovascular examination should be conducted to identify the cardiac risk factors of each patient. Before initiating chemotherapy, hypertension and dyslipidemia should be managed [41]. It is recommended to monitor the patient's cardiac health before, during, and after chemotherapy, mainly if anthracyclines were used, to detect any early subclinical changes. However, there are currently no regulations to establish the procedures or frequency for conducting such monitoring [42].
Transthoracic echocardiography is the most commonly used diagnostic technique in oncological clinical practice for measuring cardiotoxicity in patients receiving chemotherapy, as it enables periodic examination of cardiac function and detects any decline in left ventricular ejection fraction (LVEF) [35,42].
On the other hand, the 12-lead electrocardiogram, an early indicator of left ventricular dysfunction in patients receiving extensive anthracycline treatment, exhibits abnormalities in repolarization, decreased voltage of the QRS complex (indicative of cardiomyopathy), and prolongation of the QT interval. However, these diagnostic techniques can underestimate the severity of heart injury and are operator dependent, and early pharmacological intervention is limited since alterations suggestive of cardiotoxicity only surface after serious myocardial dysfunction [42]. In order to properly and promptly identify chemotherapy-induced cardiotoxicity, alternative approaches and procedures have been suggested [32,43].
The European Society of Oncology advises measuring LVEF at the beginning of antineoplastic therapy, after half of the total cumulative dose of anthracyclines is administered, before the next dose, and three, six, and twelve months after chemotherapeutic treatment for patients who are older than 60 or have cardiovascular risk factors [35]. Since multiple studies have indicated that changes in LVEF are related to chronic heart failure three years after chemotherapy, antineoplastic therapy should be stopped when LVEF drops by more than 10% to an absolute value of less than 50%.
The major disadvantage of this technique is the inability to detect slight differences. Advancements in technology have made it possible for new types of echocardiography to evaluate the myocardial function and detect changes that occur well before the deterioration of LVEF. One of the new echocardiography techniques that have shown promise in detecting early changes in myocardial function is the study of myocardial tissue and the measurement of global longitudinal strain. This technique is helpful in diagnosing subclinical cardiomyopathies and identifying joint bone problems and detecting cardiotoxicity caused by antineoplastic agents [44].
At present, advances in three-dimensional and tissue Doppler echocardiography, myocardial strain imaging, and cardiac magnetic resonance imaging have the potential to detect subclinical changes [31,34,40]. Endomyocardial biopsy is described in recent publications as the most sensitive and specific method for diagnosing and monitoring cardiotoxicity by anthracyclines since it allows for directly measuring the presence and extent of cardiac fibrosis produced by chemotherapy. Despite its effectiveness, the use of endomyocardial biopsy is limited due to its invasive nature and the need for a blood-based procedure [35].
Assessing LVEF before initiating chemotherapy treatment is a topic of debate regarding the cardiac monitoring of patients. Some authors recommend against performing an initial LVEF assessment in patients who have no cardiovascular risk factors, will receive less than 300 mg/m 2 of Doxorubicin, are not receiving concurrent trastuzumab treatment, or are females younger than 65 without risk factors. However, some scholars [45] disagree with these recommendations. Due to its relative simplicity, predictability, precision, and accuracy, assessing particular blood biomarkers of cardiac damage has been proposed as an appealing, legitimate, and new technique to identify and monitor cardiotoxicity in patients treated with chemotherapy [45]. Troponin, B-type natriuretic peptide (BNP), and NT-proBNP are the three main serum markers [46]. The "guardian of the genome" serum P53 shields cells from recurring cancer. There are several studies that relate cardiotoxicity and P53 levels [47].
The use of biomarkers, particularly troponin, because of its strong negative predictive value, enables stratification of patients who do not need close monitoring of cardiotoxicity and reduces the use of pointless diagnostic procedures and the expense on the health system and the patients [48].
The same procedures and diagnostic techniques utilized in patients with similar symptoms who do not receive chemotherapy may be used to identify additional types of chemotherapy-induced cardiotoxicity, such as ischemia, arrhythmias, and pericardial disease [48]. There is disagreement on the best course of action for treating these individuals, although prevention techniques for cardiotoxicity have been highlighted. This calls for fresh prospective investigations with sizable patient groups. In addition to developing treatment strategies, researchers must also focus on developing reliable and commercially available screening techniques with biomarkers that can improve risk stratification and cardiotoxicity categorization.
Anthracyclines-induced cardiotoxicity
The independent risk factors for developing cardiotoxicity must be considered to prevent myocardial damage without decreasing the therapeutic efficacy of cancer treatment. Therapeutic schemes can be changed by choosing prolonged infusion application over boluses. Schemes can range from 6-96 hours, and the bolus application's risk of developing cardiotoxicity is 4.13 times greater than prolonged infusion. Administration of drugs in long-time regimens like weekly treatment regimens shows less cardiotoxicity (0.8 vs. 2.9) [2,3,49]. Utilizing anthracyclines with liposomal coating reduces cardiotoxicity by up to 80% compared to traditional formulations while preventing access into the heart without reducing tumor penetrance [40,49]. The uses of analogs of anthracyclines, such as epirubicin and daunorubicin, have lower cardiotoxicity despite their decrease in therapeutic efficacy [50].
The iron-chelating agent Dexrazoxane received FDA approval for preventing anthracycline-induced cardiotoxicity in August 2014. Dexrazoxane acts by binding to iron in the body before it enters the cardiac cell, thus decreasing the formation of the iron-anthracycline complex and the free radicals that damage the heart through peroxidation of lipid membranes, thereby decreasing anthracycline cardiotoxicity. It is used simultaneously with these drugs or with the first and second doses at baseline when cumulative doses reach 300 mg/m2 [50].
Other chemotherapeutic or radiotherapy agents Digoxin
Several studies have shown that Digoxin is an effective treatment for chemotherapy-induced cardiotoxicities, such as anthracycline cardiotoxicity, due to its ability to suppress oxidative stress and cellular damage [51]. Digoxin is also beneficial when combined with ACE inhibitors to treat Trastuzumab-induced cardiotoxicity [52]. In addition to its cardioprotective effects, Digoxin may have an anti-cancer effect. According to Wang et al., Digoxin can suppress cancer, and its co-treatment with anthracyclines increases their activity [53]. However, the combination of Digoxin and Doxorubicin increases the level of γH2AX, which is associated with DNA changes and induction of apoptosis [53]. On the other hand, a study by Pereira et al. found that the combination of Digoxin and Paclitaxel decreased the anticancer effect [54].
Atg7-Based Autophagy Activation
Autophagy is a cellular process that removes damaged organelles and other unwanted material from the cells in the body. Autophagy helps maintain the cell's healthy functioning by removing damaged components and recycling useful materials. This process is also necessary to remove waste materials generated by the cells, such as dead proteins and waste food particles. When autophagy becomes overactive, it can lead to many conditions, including cancer, liver disease, and neurodegenerative disorders [47]. Autophagy, the breakdown of proteins and organelles by autophagic means, is a crucial process for mitophagy, which selectively degrades damaged or dysfunctional mitochondria. In addition to autophagy, mitochondrial dysfunction can also be induced by lysosomes or selective degeneration of other organelles and is considered a core mechanism for mitophagy. Mitophagy is a process in which the autophagic machinery selectively removes damaged or dysfunctional mitochondria. This targeted removal of defective mitochondria helps to reduce cellular damage and improve cellular function and is considered an important mechanism for maintaining cellular homeostasis. Inhibition of this pathway is a common cause of cardiac toxicity and mortality due to introducing a toxic insult to the heart. This effect can be exacerbated by highly toxic chemotherapeutic agents before ATG7 inhibition -especially with Bortezomib, Doxorubicin, and Cyclophosphamide. Other side effects include anemia, hyponatremia, leukopenia, thrombocytopenia, febrile neutropenia, infection, fatigue, rash, gastrointestinal bleeding, and tumor lysis syndrome [54].
ATG7 and Doxorubicin
Identifying these specific characteristics of both molecules highlights the potential for the design of improved and less toxic ATG7 and Doxorubicin analogs with reduced cardiomyopathy. However, there are currently no effective treatments to reverse the development of cardiomyopathy in mice despite ongoing research to develop such treatments. The results of this study provide a foundation for identifying new therapeutic strategies in treating cardiotoxicity induced by ATG7 and doxorubicin therapies [48,55].
ATG7 activation to decrease cardiotoxicity
There are several studies conducted to evaluate how ATG7 activation can decrease cardiotoxicity. Some studies on experimental animals illustrated that six weeks of Resveratrol with calorie restriction could induce autophagy, which offers some protection against anthracycline-induced cardiotoxicity. Another study on zebrafish found that overexpression of ATG7 at 1-week post-injection prevented and reduced cardiac dysfunction induced by anthracyclines. Additionally, the research observed that daily administration of Spironolactone and Rapamycin restored ATG7 activation, offering cardioprotection to H9c2 cardiac cell lines [48,56].
Dexrazoxane to prevent anthracyclines cardiotoxicity
One mechanism behind anthracycline-induced cardiotoxicity is the generation of free radicals, and several studies have implicated iron's role in this toxicity. To counter this, Dexrazoxane offers protection by chelating free iron and reducing cardiotoxicity [57]. The FDA approved it in 1995 to prevent cardiotoxicity in patients receiving Cyclophosphamide, Fluorouracil, and Doxorubicin. However, recent studies suggest that taking Resveratrol in patients receiving Doxorubicin significantly reduces free radicals and Doxorubicin cardiotoxicity compared to Dexrazoxane. The mechanism behind Resveratrol's reduction of cardiotoxicity not only involves decreasing free radical generation but also activating ATG7 [58].
Monitoring to decrease cardiotoxicity
One of the best ways to prevent cardiotoxicity is by monitoring. In March 2017, the American Society of Clinical Oncology (ASCO) published guidelines for controlling and monitoring cardiac dysfunction in adult cancer survivors. The purpose of this guide is to develop recommendations for the diagnosis and monitoring of heart functions in these patients [58].
The guidelines were based on a systematic review of 104 functional clinical studies published between 1996-2016 and focused on clinical methods in the hospital, such as slow IV administration and monitoring, to prevent cardiotoxicity [58].
To identify patients at risk of developing cardiac dysfunction, there should be an assessment of their exposure to anthracyclines and radiotherapy, tyrosine kinase inhibitors, and possibly cardiovascular risk factors (smoking, hypertension, diabetes, dyslipidemia, and obesity) [25,34,58]. Before starting therapy, risks can be prevented or reduced by doing a complete clinical examination of the patient, eliminating any potential cardiovascular risk factors, and avoiding or using cardiotoxic medications as little as possible [25,35,58].
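The exposure and risk-factor assessment described above can be illustrated with a minimal screening sketch; the field names and the simple "any exposure plus any risk factor" rule are assumptions for illustration, not a validated risk score.

```python
def flag_cardiotoxicity_risk(patient):
    """Flag a patient for closer cardiac monitoring based on the exposures
    and cardiovascular risk factors listed in the text (illustrative only)."""
    exposures = ("anthracyclines", "chest_radiotherapy", "tyrosine_kinase_inhibitors")
    risk_factors = ("smoking", "hypertension", "diabetes", "dyslipidemia", "obesity")

    exposed = any(patient.get(e, False) for e in exposures)
    at_risk = any(patient.get(r, False) for r in risk_factors)
    return exposed and at_risk


# Example: a patient treated with anthracyclines who also has hypertension.
print(flag_cardiotoxicity_risk({"anthracyclines": True, "hypertension": True}))  # True
```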
During the course of administering therapy, precautions are taken to reduce danger. Incorporating cardio protectors into treatment (Dexrazoxane) when giving anthracyclines in continuous or liposomal infusions, treating cardiovascular risk factors (smoking, hypertension, diabetes, dyslipidemia, and obesity), and evaluating the fields of therapy and technology to be applied to patients receiving mediastinal radiotherapy are a few examples. Clinical follow-up, imaging tests (cardiac ultrasound, nuclear medicine, and magnetic resonance), assessing blood biomarkers (troponins and natriuretic peptide), and referrals to cardiologists are all used to monitor patients throughout therapy [59]. A detailed medical history, physical exam, and early recognition of cardiotoxic signs and symptoms are required for monitoring individuals at risk for cardiac dysfunction after treatment. Nevertheless, it is suggested that even in asymptomatic patients with left ventricular dysfunction, the causative medication should be discontinued, and appropriate treatment should be initiated if cardiotoxicity is found despite preventive therapy within the predetermined parameters already evaluated by echocardiography or other diagnostic techniques [59].
When symptoms and outright heart failure arise, some studies recommend utilizing medication based on angiotensin-converting enzyme inhibitors and beta-blockers [59]. In symptomatic situations or when rhythm problems are present, appropriate therapy should be designed that incorporates diuretics, aldosterone antagonists, nitrates, and, despite the controversy, Digoxin. The importance of resuming or altering the chemotherapeutic regimen will be considered based on the development and clinical response [59].
Medical plant and herbal products that may decrease cardiotoxicity
The protective effect of grape seed extract has been shown to decrease free radicals and prevent Digoxin toxicity [60]. However, its efficacy as an antioxidant in reducing cardiotoxicity is still in question. Other studies propose the use of Costus pictus extract against anthracyclines toxicity, but when compared with vitamin E, it showed no significant effect [60]. Also, some studies have been conducted on medications such as Febuxostat and herbal plants (e.g., Panax ginseng and pomegranate) that may help to relieve or protect from Doxorubicin cardiotoxicity [4,11,23,61]. Furthermore, several studies recommend using neurohormonal axis inhibitors, such as angiotensin-converting enzyme inhibitors and Carvedilol, which protect from cardiotoxic effects when used during cancer therapy due to their antioxidant effects [4,61], as shown in Figure 4.
CONCLUSION
Chemotherapy is often the first line of treatment for many cancers and remains a cornerstone of cancer care. It can be used alone or in combination with other therapeutic techniques. The advances in chemotherapy have significantly increased cancer patients' life expectancy. However, chemotherapy treatment can also have adverse effects due to the treatment's mechanism, dosage, and frequency of administration required to achieve remission. One of the most worrisome side effects of chemotherapy is cardiotoxicity, which can significantly impact a patient's quality of life and long-term prognosis. Recent research has found several drugs and compounds that can lower heart toxicity, including Dexrazoxane, which can reduce chemotherapy-induced cardiotoxicity in high-risk groups. Moreover, Digoxin, ATG7 activators, Resveratrol, and other medical substances or herbal compounds have a comparable effect to Dexrazoxane in anthracycline-induced cardiotoxicity. These drugs have demonstrated efficacy in preventing and treating heart toxicity resulting from chemotherapeutic drugs. | 6,570.2 | 2023-04-01T00:00:00.000 | [
"Medicine",
"Chemistry"
] |
Evaluation of Hospital Laboratories Design in Ethiopia
Background: With the advancement of science and technology in laboratory medicine in the 21st century, laboratories should be organized into highly flexible zones with an open plan that supports the dynamic nature of manual, semi-automated, and fully automated testing. The most difficult issue in laboratory design is the allocation and organization of space. Even when a well-designed laboratory is in place, it will be compromised if the ergonomic workstations and laboratory workflow are not designed well. In Ethiopia there is no baseline information on hospital laboratory design. Objectives: To evaluate the existing hospital laboratory design and to propose a new laboratory design in Addis Ababa and Adama, Ethiopia. Methods: A cross-sectional study using quantitative methods and direct observation was conducted in five public and private laboratories in Addis Ababa and Adama, Ethiopia, between 2015 and 2016. Results: A floor plan was available and posted in all laboratories. Three of the laboratories were not initially designed as laboratories. The adjoining and adjacency matrix principles were not documented in the assessed laboratories. The laboratory designs did not include a proper exit door or show the direction of the evacuation plan during emergency situations, and fire extinguishers were not strategically placed and free of obstruction. The laboratories did not have any mechanical ventilation system. Laboratory and non-laboratory activities were not separated. The existing laboratory designs did not accommodate future demands. Conclusion: Whenever laboratory construction or renovation is planned, the size and nature of the laboratory tests being performed, the laboratory workflow, the number and size of laboratory equipment needed, the type and number of ergonomic caseworks and countertops, the emergency evacuation plan, and access-controlled areas should be identified first. Laboratory proximity programming and the functional relationship between laboratories and reception should be determined early in the design process, and laboratory module unit space determination and an open-plan laboratory, together with mobile casework and lean laboratory design, should be considered in the design process.
Introduction
The implications of laboratory design can be summarized as follows: the clinical lab should be organized into three flexibility zones (highly flexible, semi-flexible, and least flexible) that correspond to technological requirements, since the equipment is central to the function of the lab; a lab needs an open plan to re-evaluate and reconfigure furnishings, to support the dynamic, sequential movement of the specimen, and to remain operational as new technologies are added; plug-and-play utility systems (such as overhead service carriers) should be included, particularly in the highly automated areas; and modular furniture, adjustable-height tables, and movable furniture are recommended so that workstations can be removed or reconfigured as technological processes change [1,2].
The laboratory design process encompasses many elements such as space, casework or workstations, furniture, storage, ventilation, lighting, water, walls, floors, equipment, etc. [3]. Among the most difficult issues in laboratory design is the allocation and organization of space. Nowadays most laboratories must pass through an accreditation process to be competitive in the market, and space is therefore a major concern, since a lack of adequate space may compromise the quality of the work performed by the laboratory.
Adequacy of space can be understood by considering what must be placed in the space and physically measuring whether the space itself is limiting. Usually, when a laboratory is first built, the space may be enough, but over time more and more objects are put into the limited space, and this leads to dysfunction of the work [3].
The laboratory design and layout define traffic patterns, workflow and the functionality of a laboratory. Laboratory safety is also directly affected by decisions made about the quantity, quality, type and organization of laboratory casework and furnishings. A laboratory should have enough well-configured casework, both floor and overhead if necessary so that the functionality and safety of the laboratory is not compromised. Too much casework for the square footage of laboratory space available is a more common problem than too little casework [3].
Today's clinical laboratory is one of a hospital's largest departments and produces vital information for effective healthcare delivery. Clinicians depend daily on clinical laboratory data for patient care. The modern clinical laboratory is a sophisticated network of people, machines, and processes tightly coupled with the clinicians. Laboratory tasks often require intense eye-to-hand coordination, frequent use of the arms for precision work, manual materials handling, and high demands on visual concentration, so that in the process of laboratory design, ergonomics needs to be an integral factor [2,4].
It has been recognized that the laboratory design process should not be left only to the engineers and architects who have active roles in building construction. Laboratory managers or laboratory professionals should participate in the design of the laboratory, since the inability to obtain informed input at critical points may result in a laboratory that the staff find less than satisfactory in both form and function [1].
Methodology
A cross-sectional study design was conducted to evaluate hospital laboratory design in Addis Ababa and Adama, Ethiopia. Quantitative methods and direct observation using a checklist were utilized. The laboratory design assessment was conducted on five hospital laboratories, three public and two private, between 2015 and 2016. The objective of this study was to evaluate the existing laboratory design and to propose a new laboratory design.
A checklist was used to assess the laboratory design to measure laboratory layout plans, number of section, space, wall, flooring, sinks, chemical and waste storage, furniture design, location and exit paths, lighting, cleanness, electricity, ventilation, emergency eyewash and safety showers, doors and window, caseworks, countertops, water and equipment.
Five hospital laboratories were selected because they had various laboratory departments, processed a large volume of samples per year, had various workstations, and were located in Addis Ababa and Adama town.
The principal investigator collected data on the current laboratory design setup using the standard, pre-checked checklist. All necessary measurements and sketches of the laboratories were captured.
Functional Relationship and Proximity of Laboratory: Five laboratories were assessed, and each was located within a complex hospital building. The functional relationship diagram explains the existing laboratory situation (Figure 1). Based on their location relative to the sample collection areas, four laboratories were centrally located and found on the same floor level as the reception. The approximate average distances in meters from reception to the Chemistry, Hematology, Serology, Parasitology, and Bacteriology laboratories were 15, 20, 21, 24, and 25, respectively.
The overall adjoining and adjacency matrix principles were not observed or applied in the assessed laboratories. There was no functional relationship between the various laboratories.
Space: The laboratories did not have their own mini conference rooms, 4/5 had no separate office for laboratorians, no separate corridor for laboratory personnel access, no free space next to the walls for expanding services, not enough aisle space around laboratory equipment for air circulation, no functional relationship between laboratory sections, and no flexible space for future use through flexible casework; there were also unnecessary walls and partitions that further compromised the space. More than half of the laboratories (3/5) had three wall partitions, which further constrained the adaptability and flexibility of space usage, since the laboratory design was not open plan.
On average, the Chemistry, Hematology & CD4, Bacteriology, and Blood Bank laboratory sections had the largest shares of the total area, with 31 (23.3%), 27 (20.3%), 17 (12.8%), and 17 (12.8%) square meters, respectively, out of a gross area of 133 square meters (Figure 2).
Door: There were no lockable doors for security purposes, and doors were not kept closed at all times while experiments were in progress. The main door width was 1.08 meters, below the minimum standard of 1.2 meters, and the interior door width in all laboratories was 0.82 meters, still below the minimum of 0.92 meters. None of the doors were fitted with vision panels.
Window: The laboratory windows were not fitted with insect screens; however, all laboratories had enough windows for natural lighting. The large windows ranged in size from (W: 1.35 m, H: 1.55 m) to (W: 1.39 m, H: 1.70 m) and the small windows from (W: 0.55 m, H: 1.55 m) to (W: 0.97 m, H: 1.55 m).
Floor, Wall and Ceiling: The laboratory floors were made of non-pervious, one-piece material with coverings extending to the wall, but the floors were not finished with tiles or wooden planks. The laboratories were not completely separated from outside areas and bounded by four walls. Laboratory ceiling heights ranged between 2.75 and 2.90 meters, below the minimum standard of 3.1 meters. The colour of the adjacent walls and ceilings was a dull milky white.
Sink: A minimum of one laboratory sink was observed in each laboratory section for hand washing, almost all were located close to the egress, and each sink had a lip of 0.20 meters. There were extra sinks in some of the laboratories, which compromised the net usable space.
The laboratories had separate hand-washing sinks, with a minimum of one and a maximum of two, occupying a space ranging from (0.55 x 0.77) m2 to (0.75 x 1.00) m2. There were large sinks in some laboratory sections, ranging from (0.60 x 1.0) m2 to (0.64 x 1.40) m2. Another kind of sink available was the countertop sink, which was small, at (0.40 x 0.46) m2. Most sinks on the countertops were not functional.
Store Room: The laboratories had small underground cold-room storage for non-hazardous materials, but no window or sufficient ventilation system was in place. There was not enough chemical storage space available and no separation of chemicals by their nature. However, there was hardwood or metal chemical shelving in the mini store room.
Laboratory Furniture: As observed, almost all laboratory furniture was sturdy, but the work surfaces (bench tops and counters) were made of wood and were not impervious to chemicals.
Lighting System: The laboratories had adequate natural and artificial illumination to ensure sufficient visibility for safe operation, except that some laboratories had only one functional bulb out of five, which compromised the visibility and safety of operators working at night. The distance from the light fixtures to the work surface was 1.97 meters on average, and the luminaires were mounted in different orientations relative to the work surface. However, there were some cabinets above the work surface that impeded the luminaires. No special lighting systems were installed for emergency exit purposes.
On average, 32 bulbs were counted in each hospital laboratory; on average each section had 4 bulbs, of which at least two were functional, and the On/Off switching controls were located at the entrance and exit doors. Two laboratories out of five had double switching, so that, depending on the level of illumination desired, half of the tubes could be switched at a time.
The luminaires were mounted in three different ways: perpendicular, 41.6% (5/12); parallel, 33.3% (4/12); and mixed, 25.0% (3/12). Those luminaires mounted perpendicular to the work surface created a shadow from the person working at the bench.
Casework:
The laboratory caseworks were not built into the building; however, they could not be removed, relocated, or reused in other areas of the building. In total, two fixed (non-modular) caseworks were available across all laboratory sections, all made of wood and of an outdated style; they had the skeleton of gas, vacuum, and air outlets, but none of them were operating, and only some had functional wiring and electrical connections. Their drawers were not working well. The casework was larger than the main door, so it would not be easy to relocate it to another laboratory without separating its components.
Countertop:
The edges of the laboratory countertops were made of wood or stone core and were not rounded. There was no distinct bench type available in the laboratories; the height of the seated benches was on average 0.80 m, above the minimum standard of 0.76 m. Conversely, the height of the standing benches was on average 0.80 m, below the minimum standard of 0.91 m, but the knee clearance under sit-down workstations met the minimum standard of 0.71 m.
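For illustration only, the sketch below compares the measured dimensions reported in this assessment with the minimum standards quoted in the text; the dictionary keys and the helper function are hypothetical, and the thresholds are simply the values cited above, not an authoritative building code.

```python
# Minimum standards quoted in the text (meters), paired with measured values.
# Field names and the check function are illustrative, not from a formal standard.
STANDARDS = {
    "main_door_width":       {"measured": 1.08, "minimum": 1.20},
    "interior_door_width":   {"measured": 0.82, "minimum": 0.92},
    "ceiling_height":        {"measured": 2.75, "minimum": 3.10},
    "seated_bench_height":   {"measured": 0.80, "minimum": 0.76},
    "standing_bench_height": {"measured": 0.80, "minimum": 0.91},
    "knee_clearance":        {"measured": 0.71, "minimum": 0.71},
}

def check_compliance(items):
    """Report which measured dimensions meet the quoted minimum standards."""
    for name, dims in items.items():
        status = "OK" if dims["measured"] >= dims["minimum"] else "below standard"
        print(f"{name}: measured {dims['measured']:.2f} m, "
              f"minimum {dims['minimum']:.2f} m -> {status}")

check_compliance(STANDARDS)
```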
Water: The laboratories used two types of water, tap water and distilled water, but there was no centrally located distillation room; instead, the distiller was installed within a laboratory section and served laboratory purposes.
There was no reservoir tanker dedicated only for laboratory section.
Storage: Concerning the storage of flammable liquids, there were no separate storage cabinets, so they were placed in the mini store; however, all were kept below eye level.
Ventilation system: The laboratories had no mechanical ventilation system in place, and the laboratory sections had no system for taking in outside air and exhausting it to the outside. There was no thermostat control system in any laboratory.
Electric System: The laboratories did not have an adequate number of electrical outlets to accommodate additional current requirements and had no cover plates or grounding system in place; extension cords were used permanently. In some laboratory sections, wires were stretched across the floor, which may cause an electrical hazard. All laboratories had a backup generator and power supply.
Safety Station:
The laboratories had no plumbed eyewash for any working area for emergency situations and no distinct eyewash and safety shower station installed at the edge of the exit door. The laboratory designs did not show the direction of the evacuation plan for emergency situations, and fire extinguishers were not strategically placed and free of obstruction.
Therefore, based on the existing situation of the five hospital laboratories, the study proposed new or modified physical layouts and recommends minimum clinical laboratory standards whenever laboratory construction or renovation is needed.
Proposed Laboratory Design
Since the existing laboratories' proximity and functional relationships fell far behind current clinical laboratory standards, a new proximity functional relationship (Figure 3) and general floor plan are presented below (Figure 4). Figure 4 does not clearly show the doors or the location of equipment or casework at this stage (planning and programming phase). However, it illustrates various options for how the laboratory spaces listed in the program fit together. In addition, it shows the generalized movement of samples, staff, supplies, and waste, and the patient entrance. Based on the evaluation of the laboratory design of the five hospital laboratories, the study determined the physical layout based on the size, nature, and number of clinical tests. Determining adequate space for a particular laboratory requires an understanding of all of its functions.
Floor Space Determination: Laboratory features that are required by national regulatory and accreditation agencies need to be addressed when determining square footage. These include fixtures such as emergency eyewash stations, emergency floor showers, hand-wash sinks, and fire extinguisher cabinets. Space must be allowed for ease of manoeuvring throughout the laboratory, including the area between casework, in aisles, and around equipment.
Laboratory Module:
A single laboratory module is defined as a basic unit of space of a size commonly referred to as a two-person laboratory. A typical laboratory module is between 10 to 11 feet (3.10 to 3.35 meter) (Figure 5 & 6). The 10-foot module includes countertop of 30 inches (0.76m) on each side and an aisle of 60 inches (1.52m). The 11-foot module has the same countertops and aisle, but also incorporates the utility core that is typical for many laboratory casework systems (Figure 7).
The length of laboratory module is governed by several variables: the overall width of the building enclosure, the area allotment for a standard module. Laboratory module length is generally 20 to 30 feet for the efficient operation of the laboratory. Laboratory length in excess of 35 feet may generate egress problems.
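As a simple illustration of the module arithmetic described above, the sketch below recomputes the 10-foot and 11-foot module widths from their components; the variable names and the assumed 12-inch utility-core depth (chosen so the totals match the stated 10- and 11-foot widths) are illustrative only.

```python
# Module arithmetic from the text: two 30-inch countertops plus a 60-inch aisle
# give the 10-foot module; adding a utility core (assumed 12 inches here,
# consistent with the stated 11-foot total) gives the 11-foot module.
COUNTERTOP_IN = 30    # countertop depth on each side, inches
AISLE_IN = 60         # central aisle width, inches
UTILITY_CORE_IN = 12  # assumed utility-core depth, inches

def module_width_feet(with_utility_core: bool) -> float:
    """Return the laboratory module width in feet."""
    width_in = 2 * COUNTERTOP_IN + AISLE_IN
    if with_utility_core:
        width_in += UTILITY_CORE_IN
    return width_in / 12.0

print(module_width_feet(False))  # 10.0 ft (about 3.05 m)
print(module_width_feet(True))   # 11.0 ft (about 3.35 m)
```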
Laboratory Space Determination during the Planning and Programming Design Phase
Allocation and organization of space are among the most controversial issues in the laboratory design. Laboratory space is currently determined based on the feature of the laboratory itself. These would include the laboratory equipment, work areas, plumbing fixtures, aisles and code clearance. The necessary space for a particular laboratory requires an understanding of all its functions.
During preparation of the planning and program statement, the current space needs assessment served as the foundation for predicting future space needs. Customized space-need predictions were made based on changes in total equipment/instrumentation size and bench-top workspace. Hence, the study calculated the current space needs for the main laboratories using actual data, but for certain common spaces the study adopted the minimum standard, since there is no strict rule for determining such space; it is better that space be calculated and predicted for future needs in the country's own context.
Current Space Needs Calculation
One of the easiest methods of determining space needs is to determine how much space a laboratory should have based on what is currently done within that laboratory. For example, by measuring all equipment in the laboratory and calculating the total net square meters, it is possible to predict future space needs. The calculation is done after measuring floor-mounted and bench-mounted equipment and finally providing the desired aisle width between caseworks.
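The following minimal sketch illustrates this kind of calculation; the equipment list, footprints, aisle allowance, and growth factor are invented example numbers, not measurements from the assessed laboratories.

```python
# Illustrative current-space calculation: sum equipment footprints, add an
# aisle allowance for the casework run, and apply a growth factor for future
# needs. All numbers below are made-up examples, not data from the study.
equipment_m2 = {
    "chemistry_analyzer": 1.2,    # floor-mounted footprint, m^2
    "hematology_analyzer": 0.8,
    "centrifuge_bench": 0.3,      # bench-mounted, projected floor area
    "microscope_bench": 0.2,
}
AISLE_WIDTH_M = 1.52          # desired aisle width between caseworks (60 in)
CASEWORK_RUN_LENGTH_M = 4.0
GROWTH_FACTOR = 1.25          # assumed allowance for future expansion

def net_space_needed(equipment, aisle_width, run_length, growth):
    """Estimate net square meters: equipment plus aisle area, scaled for growth."""
    equipment_area = sum(equipment.values())
    aisle_area = aisle_width * run_length
    return (equipment_area + aisle_area) * growth

total = net_space_needed(equipment_m2, AISLE_WIDTH_M, CASEWORK_RUN_LENGTH_M, GROWTH_FACTOR)
print(f"Estimated net space: {total:.1f} m^2")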
Laboratory Lean Design Method
Laboratory workflow was studied in detail, including value-added and non-value-added activities; employee walk paths, equipment, supplies, and materials were already identified, and these offered the opportunity to develop a new laboratory layout. Therefore, for the lean laboratory design, the existing laboratory design setup was taken as a model to develop the net square space. The focus of lean in the laboratory design was to add value, remove wasteful practices involving technologists, reagents/samples, and equipment, and move the testing process toward single-piece flow. In addition, various ergonomic workstations were considered and included prior to the laboratory design: selecting and specifying highly flexible standing and seated countertops, casework, and adjustable laboratory chairs were used in developing the ideal layout, and computer-aided layout calculation was important (Figure 8).
Discussion
The assessment revealed that the functional relationships between the different laboratory sections and their proximity principles (adjacency and adjoining) were not in accordance with standards. No functional relationship was observed between the various laboratory sections. In laboratory design, functional relationships should be developed before construction or renovation begins. Appropriate relationships and adjacencies are essential to permit a smooth flow of personnel, supplies, and equipment. Workflow depends heavily on the physical layout of the laboratories and their proximity. It is important that the distances between the laboratories, reception, and common instrument rooms be as short as possible, since samples, chemicals, and flammable materials are transported between these areas [3][4][5][6][7].
Concerning the laboratory design, the study identified that almost all laboratories had no separate office for laboratory workers, no mini conference space, no flexible space for future use, no flexible casework, unnecessary walls and partitions, no free space next to the walls for expanding services, not enough aisle space around equipment, and were not completely separated from outside areas. According to clinical laboratory standards, laboratory and non-laboratory work should be separated [7,8]. The laboratory doors were not self-locking, had no vision panels, and the main door size was below the minimum standards. There was no laboratory access control. Moreover, there was no exit door for the emergency evacuation plan, operators would be forced to use the entrance door during an emergency, and all main laboratory doors swung into the laboratory. However, doors should swing out from laboratories as a means of safe egress if corridors are wide enough, although this can create hazardous conditions by blocking corridor traffic if the doors are not recessed into an alcove. For new construction or renovation of a laboratory, 1.8-meter-wide corridors and 1.2-meter door widths are appropriate, permitting larger equipment to be moved into the laboratory [7].
Almost all laboratories had adequate natural and artificial illumination to ensure sufficient visibility for safe operational procedures. However, the luminaires were not mounted in a consistent fashion; some were perpendicular, parallel, or of mixed type. In general, the more detailed the task, as in a bacteriology laboratory, the higher the illumination required to perform it accurately. Where the correct identification of colour is important, special colour-corrected lamps may be necessary [3,9].
The assessed laboratories had no mechanical ventilation system, an inadequate number of electrical outlets to accommodate additional current requirements, and no grounding system or cover plates. The laboratories did not have an emergency evacuation exit door in place [10,11].
Since the hospital laboratories were not open plan, many wall partitions together with unstandardized laboratory module types were observed. In order to achieve flexibility, the design must be planned in terms of a basic planning concept, "the laboratory module". The module establishes a dimensioned method by which building systems, partitions, and casework work well together within the new or existing building structural framework. Whenever new laboratory construction or renovation is planned, cost-saving proposals must be investigated during the planning process. Partitions used within laboratories to hang casework may be replaced by free-standing cabinets and shelving [7,12,13].
In this study, the laboratories had accumulated nonfunctional equipment, which compromised the existing space; this indicated that, over time, newly arrived equipment had been added again and again to the same room. Thus, critical information regarding equipment needs and operations is required for each laboratory unit early in the design phase of the project, and, taking the future expandability of services into consideration, the design should incorporate free space. Equipment selection and location should be finalized early in the design stage to avoid redesign and schedule delays [11].
Emergency showers are located in the hallways with a contrasting spot painted on the floor to indicate the shower location, with the number of showers per area to be based on Occupational Safety and Health Administration (OSHA) requirements [6]. | 4,963 | 2017-01-01T00:00:00.000 | [
"Engineering",
"Medicine"
] |
A Random Shuffle Method to Expand a Narrow Dataset and Overcome the Associated Challenges in a Clinical Study: A Heart Failure Cohort Example
Heart failure (HF) affects at least 26 million people worldwide, so predicting adverse events in HF patients represents a major target of clinical data science. However, achieving large sample sizes sometimes represents a challenge due to difficulties in patient recruiting and long follow-up times, increasing the problem of missing data. To overcome the issue of a narrow dataset cardinality (in a clinical dataset, the cardinality is the number of patients in that dataset), population-enhancing algorithms are therefore crucial. The aim of this study was to design a random shuffle method to enhance the cardinality of an HF dataset while it is statistically legitimate, without the need of specific hypotheses and regression models. The cardinality enhancement was validated against an established random repeated-measures method with regard to the correctness in predicting clinical conditions and endpoints. In particular, machine learning and regression models were employed to highlight the benefits of the enhanced datasets. The proposed random shuffle method was able to enhance the HF dataset cardinality (711 patients before dataset preprocessing) circa 10 times and circa 21 times when followed by a random repeated-measures approach. We believe that the random shuffle method could be used in the cardiovascular field and in other data science problems when missing data and the narrow dataset cardinality represent an issue.
INTRODUCTION
Heart failure (HF) affects at least 26 million people worldwide (1), so predicting adverse events in HF patients represents a major target of clinical data science. Common challenges in clinical studies and trials are as follows (2,3): (i) troubles in finding patients fitting the eligibility criteria (e.g., rare disease); (ii) difficulties in the enrollment because of a poorly formulated informed consent; (iii) data collection problems; (iv) time delays because of complicated study design or due to unpredictable events; and (v) financial demands of the clinical practice. All these issues could be the cause of missing data and datasets with narrow cardinality, which are relevant challenges in data science (in a clinical dataset, the cardinality is the number of patients in that dataset).
As a consequence, researchers need to produce novel hypotheses and methods to deal with these issues, which are particularly critical when the dataset is used to build risk models in the field of clinical cardiology. A successful effort to overcome the abovementioned issues is represented by the MAGGIC risk score, developed as a tool of risk stratification for both morbidity and mortality in HF patients (4,5). To build MAGGIC, Pocock et al. (5) combined 30 datasets to enlarge the patient cardinality, thereby reaching an astonishing 39,372 patients, and handled the missing patient values via multiple imputation using chained equations (6,7). In detail, to deal with missing data, regression equations are defined; the missing values are initially replaced by randomly chosen observed values of each variable, then by a random draw from the distribution defined by the regression equations, and at the end of the last iteration, the final value becomes the chosen imputed value. Hence, we can argue that a random procedure could be important to overcome not only the issue of missing data but also, at the same time, that of narrow dataset cardinality.
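For concreteness, the sketch below illustrates this kind of chained-equations imputation, approximated with scikit-learn's IterativeImputer rather than the exact procedure of (5-7); function and variable names are illustrative.

```python
# Hedged illustration of multiple imputation by chained equations (MICE-style),
# approximated with scikit-learn's IterativeImputer; not the MAGGIC authors' code.
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401 (enables the import below)
from sklearn.impute import IterativeImputer

def impute_missing(X: np.ndarray, seed: int = 0) -> np.ndarray:
    """X: patients x features, with np.nan marking missing entries."""
    # sample_posterior=True draws each imputed value from the predictive
    # distribution of a regression on the other features, iterating in rounds.
    imputer = IterativeImputer(random_state=seed, sample_posterior=True, max_iter=10)
    return imputer.fit_transform(X)
```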
The conceptual challenge of missing data is twofold: 1) missing patients (i.e., completely missing but plausible patients, as discussed later), which cause a narrow dataset cardinality, and 2) missing data in patients with a partial list of needed values. In the current work, we unify these two kinds of missing data and address both with a random method, our novel random shuffle method, without the use of specific hypotheses or regression models: we only need the original data, which we randomly shuffle while it is statistically legitimate. "Statistically legitimate" means that, to validate our random shuffle method, the new datasets with enhanced cardinality were compared to those enhanced via an established random repeated-measures method (8,9).
Indeed, the aim of this work is not to obtain a risk score but to introduce an innovative method to enlarge the dataset cardinality and boost the statistical performance. Our random shuffle method can be applied in other research fields when both missing data and a limited dataset are issues because of financial, experimental, or ethical limitations.
Original Dataset
The clinical dataset comprises a total of 711 German, Austrian, and Italian patients suffering from HF in different stages, seen in a hospital facility for either an acute hospitalization or an ambulatory visit, then discharged and followed up for a period of 6 months. Patients were enrolled in two distinct clinical studies: (i) the Aldo-DHF trial (10), a multicenter, randomized, placebo-controlled, double-blind, two-armed, parallel-group study that enrolled patients from 10 trial sites in Germany and Austria (data are available in the Supplementary Materials), and (ii) the STOP-SCO trial, a prospective, multicenter, observational study that enrolled patients from 10 hospitals in Northern Italy (unpublished data, available in the Supplementary Materials). The protocol and amendments were approved by the institutional review board at each participating center, and the trials were conducted in accordance with the principles of the Declaration of Helsinki, Good Clinical Practice guidelines, and local and national regulations. Written informed consent was provided by all patients before any study-related procedures were performed.
The studied endpoints at 6 months were a composite endpoint (all-cause hospitalization plus all-cause mortality) and all-cause hospitalization.
The dataset is organized in rows (patients) and columns (clinical parameters or features).The features are of two types: i) 13 binary features that show the presence (value = 1) or the absence (value = 0) of the following conditions: peripheral edema, composite endpoint, age >75 years, angiotensin receptor blockers intake, β-blockers intake, left ventricular ejection fraction at admission >50%, nt-proBNP >1,000 pg/mL, diabetes, chronic kidney disease with glomerular filtration rate <50 mL/min, heart rate at release ≥90 bpm, anemia (hemoglobin concentration <12 g/dL for women, <13 g/dL for men), all-cause hospitalization endpoint, more than 2 hospitalizations in the last year; and ii) 6 numerical features: age, heart rate at release, body weight at release, systolic aortic pressure at release, diastolic aortic pressure at release, left ventricular ejection fraction at admission.
To preprocess the clinical dataset for the removal of patients with missing values, two exclusion criteria were sequentially set: 1) at least one endpoint missing (composite endpoint, all-cause hospitalization endpoint) and 2) at least one feature missing (other than endpoints).
After the preceding data cleaning, the 13 binary features were used as dummy variables (11) to group the patients into classes, where the number of classes could be, at maximum, 2^13. In particular, a self-balancing (12) (also called height-balancing) was applied to the tree of the binary features, obtaining a new sorting of the dataset. In summary, the first 13 columns follow the ordered list i) above.
Moreover, because an intraclass-intrafeature random shuffling is possible if and only if the class cardinality is >1, the monoexample classes (i.e., with a lone patient) were excluded.
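As an illustration of the grouping just described, the following minimal sketch (assuming a pandas DataFrame with hypothetical column names, not the authors' MATLAB code) builds a class label from the 13 binary features and drops the monoexample classes.

```python
# Hypothetical sketch: group patients into classes defined by the 13 binary
# features, then exclude single-patient ("monoexample") classes.
import pandas as pd

def build_classes(df: pd.DataFrame, binary_cols: list) -> pd.DataFrame:
    """Label each patient with a class key built from its binary features."""
    df = df.copy()
    # Each class corresponds to one combination of the binary features
    # (at most 2**13 possible combinations for 13 features).
    df["class_label"] = df[binary_cols].astype(int).astype(str).apply("".join, axis=1)
    # Intraclass shuffling needs class cardinality > 1, so drop lone patients.
    counts = df["class_label"].value_counts()
    keep = counts[counts > 1].index
    return df[df["class_label"].isin(keep)].reset_index(drop=True)
```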
After preprocessing, the dataset is composed of 385 patients grouped into 61 classes. Conceptually, each class represents a particular clinical condition; in other words, the class label delimits a dataset subset inside which the shuffling is legitimate and not tautologic [as we show below, in a statistical manner, via the comparison to a MATLAB-implemented repeated-measures fitting followed by its "random" method (8,9); MATLAB®, The MathWorks, Inc., Natick, MA].
In Figure 1A, for demonstration purposes, we show a simplified representation of the original dataset with four patients analyzed with 3 features and grouped into 2 classes.
In Figure 2A, for the sake of example and comparison with the enhancing methods (Figures 2B,C), we plot two original numerical features for two classes (e.g., the 1st and the 3rd of 61 classes). The following sections will describe how to obtain variants of the original dataset.
Repeated-Measure Variant
In MATLAB® (Statistics and Machine Learning Toolbox™), functions such as "fitrm" (acronym for "fit repeated-measures model") are already implemented, with the associated "random" method permitting the generation of new random response values given predictor values (8,9).
In particular, in the fitrm function, the measurements (the 6 numerical features listed above) are the responses, and the class column (with the aforementioned 61 classes) is the predictor variable. The fitrm function produces a repeated-measures model onto which we can apply the random method to randomly generate new response values, that is, new numerical measurements for our 6 numerical features. We called this random generation the "repeated-measures" variant (Figure 1B), and we added it to the original dataset (Figure 1A), obtaining an enhanced dataset (Figure 2B).
Theoretically, it is possible to generate new values at will without outputting replicated ones; nevertheless, we have introduced a calculation checkpoint to delete any eventually replicated patients in the enhanced dataset.
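The paper's implementation relies on MATLAB's fitrm/random; the sketch below is only a rough Python analogue of the idea, sampling new numerical feature vectors from a Gaussian fitted per class, and is not equivalent to the MATLAB repeated-measures model.

```python
# Hedged approximation (not MATLAB's fitrm/random): generate synthetic numerical
# feature vectors per class by sampling a Gaussian fitted to that class.
import numpy as np
import pandas as pd

def repeated_measures_variant(df, class_col, num_cols, n_new_per_class=1, seed=0):
    rng = np.random.default_rng(seed)
    rows = []
    for _, grp in df.groupby(class_col):
        x = grp[num_cols].to_numpy(float)
        mean = x.mean(axis=0)
        cov = np.cov(x, rowvar=False) + 1e-6 * np.eye(len(num_cols))  # regularize small classes
        for new_row in rng.multivariate_normal(mean, cov, size=n_new_per_class):
            rec = grp.iloc[0].copy()    # keep the class's binary features
            rec[num_cols] = new_row     # replace the numerical features
            rows.append(rec)
    out = pd.concat([df, pd.DataFrame(rows)], ignore_index=True)
    # Checkpoint: delete eventually replicated patients, as in the paper.
    return out.drop_duplicates().reset_index(drop=True)
```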
Shuffle Variant
In MATLAB®, we have implemented an intraclass random exchange/shuffle of values inside each feature (i.e., each feature is independently shuffled in a random and intraclass manner). We called this random exchange/shuffle the "shuffle" variant (Figure 1C), and we added it to the original dataset (Figure 1A), obtaining an enhanced dataset (Figure 2C).
Shuffling is likely to produce replicated patients (especially inside low-cardinality classes), so we have introduced a calculation checkpoint to delete replicated patients in the enhanced dataset.
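A minimal Python sketch of the shuffle variant (the paper's implementation is in MATLAB; names here are hypothetical) could look as follows.

```python
# Sketch of the intraclass random shuffle: each feature is permuted independently
# within each class, the shuffled rows are appended to the original dataset, and
# replicated patients are removed.
import numpy as np
import pandas as pd

def shuffle_variant(df, class_col, feature_cols, seed=0):
    rng = np.random.default_rng(seed)
    shuffled = df.copy()
    for _, idx in df.groupby(class_col).groups.items():
        idx = list(idx)
        for col in feature_cols:
            # independent permutation for every (class, feature) pair
            shuffled.loc[idx, col] = df.loc[idx, col].to_numpy()[rng.permutation(len(idx))]
    enhanced = pd.concat([df, shuffled], ignore_index=True)
    # calculation checkpoint: delete replicated patients
    return enhanced.drop_duplicates().reset_index(drop=True)
```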
Hotelling t² Statistic
The Hotelling T² distribution is a multivariate distribution proportional to the F distribution; in particular, it is a generalization of the Student t distribution for multivariate purposes. The Hotelling t² statistic is a generalization of the Student t statistic used in multivariate hypothesis testing (13,14).
In our multivariate problem, we have 6 numerical features, and we would like to enhance the original dataset without generating a different population (p > 0.05). So, the original dataset gives the expected multivariate mean vector (EMMV), and against the EMMV we compare the repeated-measures enhancement vs. the shuffle enhancement at a significance level of 0.05.
In other words, for the same enhanced number of patients, we are validating the shuffle enhancement using the repeated-measures enhancement, which is an already accepted method: the shuffle enhancement is validated if and only if the p-value is not significant (i.e., the enhanced shuffled population is the same as the original dataset or as the enhanced repeated-measures one).
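A compact sketch of the corresponding one-sample Hotelling T² test, assuming the 6 numerical features of the enhanced dataset are compared against the original EMMV, is shown below (an illustration, not the authors' MATLAB code).

```python
# Hedged sketch of a one-sample Hotelling T^2 test: does the enhanced dataset's
# multivariate mean differ from the original dataset's EMMV?
import numpy as np
from scipy import stats

def hotelling_t2_one_sample(X: np.ndarray, mu0: np.ndarray) -> float:
    """Return the p-value; X is (n_patients, n_features), mu0 is the EMMV."""
    n, p = X.shape
    diff = X.mean(axis=0) - mu0
    S = np.cov(X, rowvar=False)                   # sample covariance of the enhanced data
    t2 = n * diff @ np.linalg.solve(S, diff)      # Hotelling T^2 statistic
    f_stat = (n - p) / (p * (n - 1)) * t2         # F-distributed under H0
    return stats.f.sf(f_stat, p, n - p)           # enhancement legitimate if p > 0.05
```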
Combined Approach
In a combined approach, an enhanced shuffled population was subjected to a repeated-measures processing.
Stressing the Enhanced Datasets via Machine Learning and Regression
In our specific cardiology problem (HF), the main goals of having enhanced datasets by enlarging their cardinality, while it is legitimate, are a greater classification/prediction skill (e.g., to predict the patient's class of risk) and a greater regression skill (e.g., to estimate the likelihood of two endpoints: composite endpoint, all-cause hospitalization endpoint). In other words, we are trying to overcome the issues of missing data and datasets with narrow cardinality, which are typically due to financial, experimental, or ethical limitations, without losing the statistical nature of the original dataset, boosting its statistical performance while legitimate (p > 0.05 in the t²-test).
To highlight the benefits of the enhanced datasets vs. the original one, we have compared their classification/prediction skill and regression skill.
In detail, to stress via machine learning, we used all 19 features (13 binary, 6 numerical) and the column with the class labels as the response column (the enhanced dataset had 61 classes, like the original one). A 10-fold cross-validation was applied to calculate the accuracy (%) by the MATLAB® Classification Learner application (methods: fine tree, fine KNN, weighted KNN, linear SVM; all default settings were unchanged).
To stress via regression, we used 17 features (11 binary, i.e., excluding the 2 endpoints; 6 numerical) and, as response column, a column containing a specific endpoint (composite endpoint or all-cause hospitalization endpoint). A 10-fold cross-validation was applied to calculate the root mean square error (RMSE) by the MATLAB® Regression Learner application (methods: fine tree, linear, linear SVM; all default settings were unchanged).
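For readers without MATLAB, the following hedged sketch approximates the same stress test with scikit-learn stand-ins; the MATLAB "fine KNN" and "fine tree" presets are only loosely mimicked here, and the variable names are illustrative.

```python
# Illustrative sketch: 10-fold CV accuracy for class prediction and RMSE for
# endpoint regression, using scikit-learn approximations of the Learner presets.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeRegressor

def stress_dataset(X_all, y_class, X_no_endpoints, y_endpoint):
    # Classification: predict the 61 class labels from all 19 features.
    acc = cross_val_score(KNeighborsClassifier(n_neighbors=1),   # "fine KNN"-like
                          X_all, y_class, cv=10, scoring="accuracy").mean()
    # Regression: predict one endpoint from the remaining 17 features.
    neg_mse = cross_val_score(DecisionTreeRegressor(),           # "fine tree"-like
                              X_no_endpoints, y_endpoint, cv=10,
                              scoring="neg_mean_squared_error")
    rmse = np.sqrt(-neg_mse).mean()
    return acc, rmse
```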
Hotelling t² Statistic
The two enhanced populations (repeated-measures, shuffle) were the same as the original one up to a 20× enlargement; that is, we arrived at up to 7,700 patients (including the 385 original). Further enhancements were not legitimate (p < 0.05).
In a combined approach, the preceding 20× shuffled population was subjected to a 2× repeated-measures processing, and we arrived at up to 15,199 patients (including the 385 original). Further enlargements were not legitimate (p < 0.05).
Stressing the Enhanced Datasets via Machine Learning and Regression
The comprehensive results are presented in the following tables in terms of accuracy (%) and RMSE.
Accuracy is a metric for evaluating the performance of machine learning in terms of the fraction of correct classifications. In this example dataset, high accuracy means that a sizable portion of patients was grouped into the correct classes (Table 1).
RMSE is a good estimator of the standard deviation of the prediction errors; it indicates how far off we expect the regression model to be on its next prediction. If the RMSE is very small (Tables 2, 3), the predicted value of an endpoint will practically coincide with the observed binary value in the future.
DISCUSSION
To stratify patients according to their risk of cardiovascular events in a 6-month follow-up after hospital discharge, the appropriate classification method needs to be accurately determined for the original dataset. In our case, the fine KNN algorithm implemented in MATLAB® proved to be a good choice (accuracy equal to 93.2%, Table 1). However, the enlargement or enhancement of the cardinality of the original dataset, while it is legitimate, could enable a greater classification/prediction skill. In detail, we have designed and developed a random shuffle method and validated it against the already used random repeated-measures method: the validation gave statistical legitimacy to the random shuffle method (while p > 0.05 via the Hotelling t² statistic), and we obtained a performance (accuracy up to 100%, independently of the classification method) better than that of the fine KNN applied only to the original dataset (Table 1). These results show that the strategy of using binary features to define the classes, together with our random shuffle method to enhance the dataset, can give a particularly good classification performance (Table 1).
To estimate the likelihood of the two endpoints (composite and all-cause hospitalization), a linear regression is already a good choice (Tables 2, 3). However, the enlargement of the cardinality of the original dataset via the random repeated-measures method, the random shuffle method, or the combined approach can give a better performance (RMSE down to 0), as stressed via the fine tree regression method. For example, a fatal clinical set is positive for nt-proBNP >1,000 pg/mL and heart rate ≥90 bpm, whereas a rehospitalization clinical set is positive for peripheral edema and left ventricular ejection fraction >50%, where the last parameter indicates a less compromised general health condition.
Clinicians could certainly argue that the abovementioned inferences could easily be made without mathematical methods or artificial intelligence tools (e.g., the classification/prediction or regression shown in Tables 1-3). Indeed, we consider such a provocative observation a major strength of this study, because we have validated the random shuffle method not only by statistics but also, more importantly, by clinical judgment.
Another clinical strength is that the chosen features are patients' event ratios at hospitalization and follow-up. Thus, by randomly shuffling these features between patients, we are creating in silico plausible patients with a realistic and likely combination of comorbidities and event ratios. Therefore, the enhancement of the dataset cardinality yields not only statistical but also clinical worth.
In conclusion, we have shown that our random shuffle method is validated not only by statistical comparison to an already established method (the random repeated-measures method) but also, more notably, by clinical knowledge and expertise. In addition, in comparison with the random repeated-measures method, a mathematical advantage of the random shuffle method is the absence of a fitting procedure. Consequently, we believe that our random shuffle method can also be applied in other research fields when missing data and the narrow cardinality of a dataset are issues because of financial, experimental, or ethical limitations.
Exclusion Criteria
Three exclusion criteria were sequentially set: 1) at least one endpoint missing (116 patients removed); 2) at least one feature missing, other than endpoints (another 67 patients removed); and 3) the monoexample classes (i.e., with a lone patient) were excluded (another 143 patients removed). Because the monoexample classes cannot be shuffled, one could certainly observe that exclusion criteria 1 and 2 are particularly selective. For instance, to increase the number of patients after preprocessing, only one endpoint at a time could be considered for patient exclusion; this choice is certainly possible and correct, but it implies cutting an entire feature, that is, the other endpoint, and as a consequence we would obtain a reduced stratification of the patients. In addition, the random repeated-measures method does not tolerate missing data. Summarizing, the choice was between (i) a lower number of patients but with all features, all endpoints, and full stratification or, on the contrary, (ii) a higher number of patients but with a reduced set of features and endpoints and with a reduced stratification. To stress the random shuffle method, we chose the first possibility, which is the "worst case" in terms of patient number and stratification. In any case, the meaning of the random shuffle method remains the same as described above. Moreover, this choice permitted the use of the same data for both classification and regression.
Cardinality Enhancement
The cardinality of the original dataset could be small because of two concomitant reasons: (i) a small number of classes (low stratification) and (ii) a small number of patients inside the classes. With these traits of the original database, the intraclass-intrafeature random shuffling has "suffocating borders" in which to act, and the database enhancement is also subjected to the deletion of repeated patients: in that case, we can expect the achievable enhancement factor to be limited by the small cardinality of the original dataset. On the contrary, we see the maximum possibility of enhancement when the number of classes and the number of patients per class are both high. We see intermediate possibilities when the classes are few but with many patients in each and, vice versa, when the classes are many but with few patients in each. In our original dataset, the classes were many (61 classes), and some of them had few patients (e.g., before cardinality enhancement, two, three, or four patients); for additional details, see the following discussion dedicated to oversampling.
Oversampling
The random shuffle method could also be seen as a new kind of oversampling dedicated to both minority classes (with a low number of patients) and majority classes (with a high number of patients). Oversampling is useful when there is an imbalance (in the number of patients) between majority and minority classes that can downgrade the classification performance (15,16). The imbalance can be corrected via oversampling inside minority classes and undersampling inside majority ones, e.g., via SMOTE (Synthetic Minority Oversampling Technique) along with a randomly reduced number of patients in the majority classes (15). In contrast to the approach in (15), where the information content is amplified or reduced in minority or majority classes, respectively, we have oversampled both minority and majority classes, while it is statistically legitimate; in other words, we preserve the imbalance (a hallmark of a dataset) and multiply the information content, while it is statistically legitimate, obtaining an enhanced classification and regression performance. We could also hypothesize that the reinforcement of all classes could improve the "exclusion power" of classification algorithms, permitting them to better predict patients into reinforced minority classes.
Cross-Validation for Oversampled Datasets
One could certainly observe that cross-validation, although a very common and accepted technique to avoid overfitting in classification and regression and thus to improve their prediction skill, could be prone to "overoptimism" when applied to oversampled datasets, because similar samples or exact replicas may appear in both the training and test partitions. This issue has been clearly discussed by Santos et al. (17), who found a useful combination of characteristics to obtain a not-overoptimistic oversampling: (i) use of cleaning procedures, (ii) cluster-based synthetization of samples, and (iii) adaptive weighting of minority samples. The last cannot be applied here because of the simple nature of the shuffling, but the other two are comprised in the proposed method: the random shuffle is done in an intraclass manner, and then we delete possible patient replicas before further analysis; moreover, as a third characteristic, each feature is independently shuffled, so that plausible patients are synthetized, as clinically discussed above. The combination of these three traits of the method makes us confident in the cross-validation performed.
CLINICAL LIMITATIONS
The clinical timepoint is to be considered approximately in the middle between those of the two trials used (Aldo-DHF and STOP-SCO). Although the two trials differed in terms of patients' nationality, we used them together because they represent a real-life heterogeneous set of HF patients who are commonly observed in daily clinics. A risk prediction model at 6 months and an investigation of the differences between the data of the two trials were not purposes of this study and will be addressed in another work.
FIGURE 1 | Simplified representation of the original dataset along with its variants. (A) The simplified original dataset showing four patients (P = patient), each analyzed with three features (F = feature), displayed with different symbols and colors, and grouped into two classes highlighted with the colored boxes. (B) Representation of the "repeated-measure" variant to expand the cardinality of the original dataset. (C) Same as (B), but for our proposed "shuffle" variant.
FIGURE 2 | Comparison of the simplified original dataset with its enhancements. (A) Plot of two original numerical features for two classes (the 1st and the 3rd of 61 classes). (B) Plot of two numerical features for two classes (the 1st and the 3rd of 61 classes) whose cardinality has been enhanced 2×: original plus one intraclass random generation of values inside each feature according to a fitted repeated-measures model. (C) Plot of two numerical features for two classes (the 1st and the 3rd of 61 classes) whose cardinality has been enhanced 2×: original plus one intraclass random exchange/shuffle of values inside each feature (each feature is independently shuffled in a random and intraclass manner).
TABLE 1 | Machine learning with 10-fold cross-validation to calculate the classification accuracy (%).
TABLE 3 | Regression with 10-fold cross-validation, endpoint = all-cause hospitalization, to calculate the regression RMSE (root mean square error).
| 5,384.4 | 2020-11-20T00:00:00.000 | ["Computer Science"] |
Jigsaw: Supporting Designers to Prototype Multimodal Applications by Chaining AI Foundation Models
Recent advancements in AI foundation models have made it possible for them to be utilized off-the-shelf for creative tasks, including ideating design concepts or generating visual prototypes. However, integrating these models into the creative process can be challenging as they often exist as standalone applications tailored to specific tasks. To address this challenge, we introduce Jigsaw, a prototype system that employs puzzle pieces as metaphors to represent foundation models. Jigsaw allows designers to combine different foundation model capabilities across various modalities by assembling compatible puzzle pieces. To inform the design of Jigsaw, we interviewed ten designers and distilled design goals. In a user study, we showed that Jigsaw enhanced designers’ understanding of available foundation model capabilities, provided guidance on combining capabilities across different modalities and tasks, and served as a canvas to support design exploration, prototyping, and documentation.
INTRODUCTION
The past year has seen substantial progress in the capabilities of AI foundation models [13]. These models, which are pre-trained on vast quantities of data, can perform many tasks "off the shelf" without further training. Consequently, many foundation models essentially become input-output systems, simplifying the complexities of working with AI by abstracting models to their core capability [53]. Until recently, creating an AI-enabled system necessitated users to curate their own data, train a model, and occasionally modify the model architecture to adapt to their use cases [10].
With powerful plug-and-play capabilities, many designers have begun embracing AI foundation models to enhance their creative workflows. New foundation models support a wide variety of tasks and modalities, including large language models [43] such as GPT [14] for text generation and processing, image generation models such as Stable Diffusion [36], image segmentation models such as Segment Anything [26], and models for video [42], 3D model [24], and audio [28] generation.
However, despite the variety of capabilities offered, the integration of foundation models within the creative process can be challenging. Our initial observations suggest that these models are often used for one-off tasks or as standalone applications. For instance, a designer might use ChatGPT [5] to brainstorm and generate ideas, or they might use Midjourney [7] to generate visual prototypes. To incorporate the results of these models into their broader creative process, designers manually copy and paste the results into another design tool. Moreover, despite the variety of available models, designers typically only use a small selection of highly publicized models (ChatGPT, Stable Diffusion, Midjourney) and are often unaware of the range of capabilities and modalities they could potentially utilize from lesser-known models.
To gain a deeper understanding of designers' challenges when using current AI models in their creative processes, we conducted a formative study with ten designers. From our formative study, we identified four key challenges: (1) Designers are often unaware of the full range of capabilities offered by different types of foundation models. (2) Designers struggle with the need to be "AI-friendly," which includes difficulties in forming effective prompts and selecting optimal parameters. (3) Designers find it challenging to cross-integrate foundation models that exist on different platforms and are specialized for different modalities. (4) Designers find prototyping with these models to be a slow and arduous process. Based on the findings from the formative study, we derived four design goals, which informed the development of Jigsaw, a block-based prototype system that represents foundation models as puzzle pieces and allows designers to combine the capabilities of different foundation models by assembling compatible puzzle pieces together. Jigsaw includes features that help designers discover available foundation model capabilities and find the right model for their use case. Jigsaw also includes "glue" puzzle pieces that translate design ideas into prompts for other models, clear explanations of parameters to help users make model adjustments, and an Assembly Assistant that recommends potential combinations of foundation models to accomplish a task specified by the designer. To assess the utility of Jigsaw, we invited ten designers from the formative study to test the system. We evaluated how well designers create creative AI workflows given a design brief and during free exploration. The results show that Jigsaw helps designers better understand the capabilities offered by current foundation models, provides intuitive mechanisms for using and combining models across diverse modalities, and serves as a visual canvas for design exploration, prototyping, and documentation.
This research thus contributes:
• A formative study with ten designers that identifies the challenges designers face when using AI foundation models to support their work.
• Jigsaw, a prototype system that assists designers in combining the capabilities of AI foundation models across different tasks and modalities through assembling compatible puzzle pieces.
• A user study that demonstrates the utility of Jigsaw to designers and informs areas for future block-based prototyping systems for prototyping with AI foundation models.
RELATED WORK
This work draws on prior research in AI foundation models, visual programming interfaces, and designer-AI interaction.
AI Foundation Models
The term "foundation models" characterizes an emerging family of machine learning models [13], often underpinned by the Transformer architecture [41] and trained on vast amounts of data.The researchers who introduced this term defined foundation models as "models trained on broad data (generally using self-supervision at scale) that can be adapted to a wide range of downstream tasks." [8] The strength of foundation models lies in their capacity for out-ofthe-box usage across various tasks.This signifies an improvement from the previous AI landscape, where users had to create their own datasets for custom use cases and fine-tune models [10].Prominent examples of foundation models include large language models such as GPT [14] which can perform a variety of text generation tasks and image generation models such as Stable Diffusion [36] which can generate a diverse range of images from text-based prompts.Foundation models also go beyond generative models and include models for tasks such as classification [33], detection [57], segmentation [26], spanning a range of modalities including text [14], image [36], video [42], 3D models [24], and audio [28].Many foundation models perform tasks across modalities, such as text-to-x generative models and x-to-text classification models.In turn, this allows foundation models to be treated as x-to-x inputoutput systems.Such abstraction greatly simplifies how people can use and combine such models in larger AI-enabled systems.Our research aims 1) to inform designers about the capabilities offered by foundation models that can be useful for creative tasks, and 2) to incorporate these capabilities into their creative workflows.In particular, we are interested in exploring how designers can combine the capabilities of multiple models across different tasks and modalities by connecting them together on a visual interface.
Visual Programming Interfaces
Visual programming interfaces (VPIs) have been extensively studied as tools to aid users in designing and implementing systems through graphical elements rather than text-based code [31].A key benefit of VPIs is their lower entry barrier for novice programmers [45].There are primarily two main paradigms for VPIs.The first, the dataflow paradigm, lets users specify how a program transforms data from step to step by connecting nodes in a directed graph.Pioneering work in this area includes Prograph [17] and LabVIEW [27].The second paradigm utilizes block-based function representations and lets users create programs by connecting compatible components together.Notable works in this area include Scratch [35] and Blockly [19].Many commercial creative applications have adopted VPIs, including game engines such as Unity [11], CAD tools such as Grasshopper [9], and multimedia development tools such as Max/MSP [12].
VPI concepts have been applied to machine learning applications.For example, Teachable Machine [15] uses a visual interface to help students learn to train a machine learning model.ML Blocks [46] assists developers in training, evaluating, and exporting machine learning model architectures.Very recently, researchers in both academia and industry have worked on VPIs that support the creation of AI workflows through the combination of pre-trained models.Several works have investigated node-based interfaces for building Large Language Model (LLM) pipelines, including PromptChainer [48], FlowiseAI [2], and Langflow [3].Most closely related to our work are Rapsai by Du et al. [18] and ComfyUI [1].Both tools provide a node-based interface for machine learning researchers and enthusiasts to build multimedia machine learning pipelines.These tools are catered more toward users with at least some background knowledge in AI programming, giving users the flexibility to customize the tools through programming at the expense of exposing more technical elements to the user.
Our work builds upon prior and concurrent VPI tools and research.However, we made several design choices for our tool to help better support non-technical designers (Table 1).First, our tool leverages a block-based VPI paradigm, which has been shown to be effective in supporting novice programming learners [35].Second, in the same spirit as other creative AI tools such as RunwayML [4], our tool supports AI capabilities of a diverse range of modalities.Third, our tool offers integrated AI assistance features for designers, such as the Assembly Assistant (Section 4.4), semantic search (Section 4.1.3),and glue pieces (Section 4.1.4).We build on recent advances in the reasoning capabilities of LLMs to power these features [47].To the best of our knowledge, this research is the first to study 1) supporting non-technical designers in prototyping design workflows with AI through a block-based visual interface and 2) utilizing the plug-and-play capabilities of AI foundation models that have emerged over the past year, covering a diverse range of tasks and modalities.
Designer-AI Interaction
Several works from the HCI design community have examined the ways in which designers perceive and interact with AI.Chiou et al. [16] follow a Research through Design (RtD) [58] approach and find that AI can offer designers new perspectives and avenues of design exploration.Shi et al. [39] conduct a landscape analysis of AI and suggest the opportunity to build more tools that enable co-creativity between designers and AI.Yang [49] proposes the vision of designers working with AI as a "design material".This research follows this thread of work to build a tool to help designers prototype new design workflows using AI and with the support of AI.
Subramonyam et al. [40] argue that a challenge with using AI as a design material is that the properties of AI only emerge as part of user experience design.They thus employ data probes with user data to help elicit AI properties and facilitate working with AI as a design material.Yang et al. [51] identify that designers often find designing with AI difficult due to uncertainty about the AI's capabilities and the complexity of the AI's outputs.Gmeiner et al. [20] identify the primary challenges for designers when co-creating with AI design tools as understanding and manipulating AI outputs and communicating design goals to the AI.In this research, we offer mechanisms to help designers overcome these challenges, such as conveying AI capabilities, supporting easy inspection and manipulation of AI outputs with real data, and allowing users to communicate design goals to the AI using natural language.
Liu et al. [30] find that when designers use the "right" prompts, they achieve significantly higher quality results from generative models.However, Zamfirescu et al. [54] find that people generally struggle with writing effective prompts.In this research, we introduce a puzzle piece (translation glue) to help designers automatically translate pieces of text into prompts.Yang et al. [50] find that designers are more successful when they collaborate with data scientists.Using RtD, Yildirim et al. [52] identify that designers develop boundary objects to communicate design intentions with data scientists.In this research, we let designers document their creative process on a canvas (Assembly panel), which designer participants in our user study found to be a useful boundary object for sharing and explaining ideas.
FORMATIVE STUDY
We conducted a formative interview study with ten designers to understand how designers attempt to use AI in their work and inform the development of a new tool to support creative work with AI.
Participants and Procedure
We interviewed ten designers (P1-P10, 6 male and 4 female, aged 24-39), recruited through known contacts and word of mouth.The designers come from diverse specializations, such as interior design, product design, graphic design, and video game design.All participants have more than five years of design work experience and use AI tools to support aspects of their design processes, such as ChatGPT, Midjourney, and DALL-E [34].We conducted one-hour interviews remotely over video conferencing, asking participants to describe their typical creative workflow, how they use AI to support their work, the specific AI tools they use, and the pain points they face using AI.Following the interviews, the first author conducted a thematic analysis of interviews and summarized participants' key challenges.
Findings and Discussion
We identify four key challenges designers face when using AI to support their work.
3.2.1 C1: Limited Knowledge of AI Capabilities. Despite the broad spectrum of AI foundation models available, designers felt they had limited knowledge of existing models and their capabilities. As a result, they felt like they were underutilizing the creative support these models could provide. Designers found it challenging to "understand the capabilities of various models in a crowded market (P1)", making it difficult to determine which model is most suitable for a specific task. Additionally, designers expressed a desire to "easily view a few example results from the models (P5)", which would allow them to quickly assess the model's capabilities and determine if its results align with their intended use case.
3.2.2 C2: Tedious to be AI-friendly. After deciding on an AI model to use, designers stated that it is challenging to be "AI-friendly." This includes crafting effective prompts (for generative models) and setting optimal parameter values of AI models to ensure good results. Designers stated it can be time-consuming to "master the art of prompt creation (P2)", often dedicating a significant amount of time to simply translating their design idea into a functional prompt. As P5 stated, "behind every stunning image generated by Stable Diffusion lies a designer's patience and a relentless pursuit of the right prompt." Moreover, our participants were often confused about different model parameters and how they affect the model's results, leading to "endless parameter tweaks (P10)".
3.2.3 C3: Difficult to Combine Multiple Models. Designers felt that current models predominantly cater to simple and singular functionalities. Designers commented that for realistic design workflows, which involve multiple tasks and a range of modalities, they often find themselves having to switch between distinct AI platforms. This fragmented the design process, and as P9 stated, "switching between AI platforms felt like needing a different kitchen gadget for every step in a recipe." In addition, designers often face compatibility issues between models when attempting to combine them, leading to time-consuming troubleshooting. They commented that it would be beneficial to "clearly know which models are compatible with one another (P6)."
3.2.4 C4: Slow Prototyping and Iteration. Designers noted that "seamless prototyping and iteration is crucial to the design process (P6)". However, when working with AI, designers frequently found it challenging to quickly build prototypes and view results. Setting up and switching models can be a lengthy process that inhibits rapid experimentation. Furthermore, when creating workflows that involve chaining models, designers often can only view the final result. This makes it difficult to understand how individual models affect the final result and can make it challenging to explain design decisions to clients without tangible intermediate outputs.
Design Goals
To tackle the challenges designers encounter when using AI, we distill four design goals.
3.3.1 D1: Catalog of AI Foundation Models. To help designers gain a better understanding of available AI foundation models, we aim to compile a catalog of existing models. For each model, we should provide straightforward explanations of its capabilities along with examples of their results. Furthermore, we should provide mechanisms for designers to easily find models that can accomplish the specific tasks they have in mind.
3.3.2 D2: User-friendly instead of AI-friendly. We should provide mechanisms that reduce the need for designers to adapt to the nuances of AI models. First, we should incorporate assisted prompting techniques to help designers translate design ideas into prompts. Second, we should explain model parameters in laypeople's terms, including how altering different values will impact model results.
3.3.3 D3: Intuitive Interface for Combining Models. We should provide designers with an interface that allows them to easily combine multiple task-specific foundation models across a wide range of modalities. The interface should visually present clear affordances of which models can be combined. In addition, we should provide an assistive tool for suggesting model combinations.
3.3.4 D4: Facilitate Effective Prototyping. Designers place significant importance on experimentation and iteration. We should make it effortless for designers to experiment with different model combinations and be able to easily view results. Furthermore, we should let designers view intermediate results within a chain of models to help diagnose errors and aid in communicating design ideas with clients.
JIGSAW
The following outlines Jigsaw's four major components: the (1) Catalog Panel, (2) Assembly Panel, (3) Input and Output Panels, and (4) Assembly Assistant. We then describe how a designer can use Jigsaw with an example interior design workflow.
Catalog Panel
The Catalog Panel assists designers in selecting suitable models for their tasks with a catalog of foundation model components (D1).
4.1.1 Curating a catalog of foundation models. We identified six common modalities used in creative work, namely, text, image, video, 3D, audio, and sketches. Jigsaw curates available models across all possible pairwise permutations of modalities (e.g., text-to-text, text-to-image, text-to-video, ...). Jigsaw also includes foundation models with dual input channels, such as ControlNet [56]. For tasks supported by multiple models, we prioritize models based on: 1) inference speed (ideally less than a minute to run), 2) zero-shot capability (plug-and-play use), and 3) the quality of results (models ranked highly on machine learning benchmarks).
Overall, we implemented a catalog of thirty-nine models across six modalities (see Appendix A for a full listing).
4.1.2 Representing foundation models as puzzle pieces. Considering foundation models as input/output systems, we represent them as puzzle pieces with input and output arms. There are two types of puzzle pieces: 1) the model piece represents models with customizable parameters, and 2) the input piece accepts input from the user via text, media file, or sketch (see Section 4.3). Jigsaw color-codes puzzle pieces based on their input and output modalities to signal which pieces are compatible with one another (D3). For example, a text-to-image piece would be colored green on the left and blue on the right. When a user hovers over a puzzle piece, a tooltip provides a description of its capability, typical runtime, and an example of an input and output (D1).
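Conceptually, this abstraction can be captured in a few lines; the sketch below is illustrative only and not Jigsaw's actual implementation, which is not described at code level in the paper.

```python
# Hypothetical sketch of the puzzle-piece abstraction: each piece is an
# input->output system, and two pieces snap together only when the output
# modality of the first matches an input modality of the second.
from dataclasses import dataclass, field

MODALITIES = {"text", "image", "video", "3d", "audio", "sketch"}

@dataclass
class PuzzlePiece:
    name: str
    inputs: set          # e.g. {"text"} or {"text", "image"} for dual-input pieces
    output: str          # single output modality
    params: dict = field(default_factory=dict)

def compatible(upstream: PuzzlePiece, downstream: PuzzlePiece) -> bool:
    return upstream.output in downstream.inputs

# Example: a text-to-image piece can feed an image-to-3D piece.
txt2img = PuzzlePiece("Generate image from text", {"text"}, "image")
img2mesh = PuzzlePiece("Generate 3D model from image", {"image"}, "3d")
assert compatible(txt2img, img2mesh)
```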
4.1.3 Helping users find the right piece. Jigsaw provides two mechanisms for users to find model pieces for their tasks (D1): 1) puzzle pieces are grouped by input modality, and 2) users can describe the task in the semantic search bar. The search returns model pieces with high semantic similarity to the query, scored using CLIP [33] text embeddings.
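A hedged sketch of such a semantic search, using a publicly available CLIP text encoder via the transformers library, follows; the model name and helper names are placeholders rather than what Jigsaw necessarily uses internally.

```python
# Rank catalog piece descriptions by cosine similarity to the query using
# CLIP text embeddings.
import torch
from transformers import CLIPModel, CLIPTokenizer

_model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
_tok = CLIPTokenizer.from_pretrained("openai/clip-vit-base-patch32")

def embed(texts):
    with torch.no_grad():
        feats = _model.get_text_features(**_tok(texts, padding=True, return_tensors="pt"))
    return torch.nn.functional.normalize(feats, dim=-1)

def search(query: str, piece_descriptions: list, top_k: int = 3):
    sims = embed(piece_descriptions) @ embed([query]).T          # cosine similarity
    ranked = sims.squeeze(1).argsort(descending=True)[:top_k]
    return [piece_descriptions[int(i)] for i in ranked]
```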
4.1.4 LLMs as glue.
There can be instances where models do not perfectly align, such as situations where intermediate reasoning is required. Drawing inspiration from Socratic Models by Zeng et al. [55], we utilize Large Language Models (LLMs) as connecting elements between model pieces. We refer to these instances as glue pieces. The user first attaches a model piece capable of conveying the content of a modality in text (x-to-text). Next, the user attaches the LLM glue piece for language-based reasoning (text-to-text).
Finally, the user attaches a model piece which translates text back into another modality (text-to-x). To help users connect model pieces in common use cases, Jigsaw includes three types of glue pieces (a minimal call sketch follows this list):
(1) The custom glue piece accepts any custom user instruction.
(2) The translation glue piece converts a piece of text into a prompt that better aligns with text-to-x models (e.g., Stable Diffusion) (D2) (Figure 3a). We ask GPT to transform an input into a prompt via the following prompt: Here are example prompts for a text-to-<modality> generation model: <list of example prompts>. Transform <input data> into a prompt. Answer in only the transformed prompt.
(3) The ideation glue piece accepts a design task specified by the user and generates an idea (Figure 3b). We ask GPT to generate an idea via the following prompt: Generate an idea for <task> based on <input data>. Answer in one short sentence.
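As referenced above, a minimal call sketch for the translation glue piece might look like the following; the model name, example prompts, and client setup are assumptions for illustration, not Jigsaw's actual configuration.

```python
# Hedged sketch of a "translation glue" call: wrap the paper's prompt template
# around the upstream text and ask an LLM to rewrite it as a text-to-image prompt.
from openai import OpenAI

client = OpenAI()  # assumes an API key in the environment

def translation_glue(input_text: str, modality: str = "image") -> str:
    examples = "1) 'a cozy reading nook, warm light, photorealistic' 2) 'isometric city at dusk'"
    prompt = (
        f"Here are example prompts for a text-to-{modality} generation model: {examples}. "
        f"Transform {input_text} into a prompt. Answer in only the transformed prompt."
    )
    resp = client.chat.completions.create(
        model="gpt-4",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content.strip()
```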
Assembly Panel
The Assembly Panel offers an infinite canvas for combining compatible foundation model puzzle pieces (D3) (Figure 4).When the user clicks on a model piece, a parameters sidebar allows users to customize a model's specific parameters.Jigsaw pre-populates each model's parameters with default values that generally yield good results and defines limits so that the user can experiment with different values without the concern of breaking the model (D2).Tooltips explain, in plain English, how the parameter influences model results and recommends optimal values for common scenarios.Users can build multiple chains on the canvas and can run each chain separately, allowing parallel explorations and complex workflows.
Input and Output Panels
The Input and Output Panels allow users to input, view, and download media across modalities. Users can type into the input panel (Figure 6a), upload files (Figure 6b), or draw sketches (Figure 6c). The Output Panel shows the result of the chain and lets users copy (Figure 6d) or download outputs (Figure 6e-i).
The user can select a puzzle piece to view the intermediate inputs and outputs at that specific piece. This allows the user to observe how the data is transformed at each stage (D4). Additionally, within a chain, the user can view how the inputs for a puzzle piece affect the results of a puzzle piece located several steps downstream by holding the shift key to select multiple puzzle pieces.
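One simple way to support this kind of inspection is to record every intermediate result while a chain runs; the sketch below assumes a hypothetical piece.run interface and is not Jigsaw's actual code.

```python
# Execute a chain of puzzle pieces while keeping every intermediate result so
# any piece in the mosaic can be inspected afterwards.
def run_chain(pieces, user_input):
    intermediates = []                 # (piece name, output) at each stage
    data = user_input
    for piece in pieces:
        data = piece.run(data, **piece.params)   # piece.run is an assumed interface
        intermediates.append((piece.name, data))
    return data, intermediates
```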
Assembly Assistant
The Assembly Assistant recommends a chain of puzzle pieces for a user-specified task.The designer would first provide a natural language description of a task, such as "Add sound effects for an illustration."Jigsaw then asks GPT to use a chain of models to accomplish the task via the following prompt: You are given a set of AI models to complete a user's task.There are thirty-nine models: <1.text2text() has reasoning capability.2. text2img() can generate an image from text. 3. text2video() can generate a video from text, ...> You can only use the models given.You do not have to use all the models.You will answer in a JSON format.Here is an example answer: <example combination of puzzle pieces written in a JSON format for the frontend to parse>.Your task is to <user-specified task>.
Prior work has found that asking GPT to evaluate its own results can improve the correctness of its responses [32].We thus ask GPT to evaluate its own answers based on four criteria via the following prompt: <Prompt from the previous step> <Answer from the previous step> Here are four criteria that the answer needs to satisfy.If any criteria are not satisfied, please give me the corrected answer in JSON format.1.Whether the user's task was understood and completed.2. Whether no models outside of the provided ones were used.3. Whether the output and input of each step can be connected.4. Whether it follows the correct JSON format.
Jigsaw then passes the chain of puzzle pieces provided in JSON format to the frontend and adds them onto the Assembly Panel. The designer can make further edits to the chain, just like any manually created chain.
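Put together, the Assembly Assistant can be sketched as a two-step LLM loop (plan, then self-check against the four criteria); ask_llm, the catalog string, and the JSON handling below are placeholders rather than Jigsaw's implementation.

```python
# Hedged sketch of the Assembly Assistant: ask the LLM for a JSON chain of
# puzzle pieces, then ask it to review its own answer against the four criteria.
import json

CRITERIA = (
    "1. Whether the user's task was understood and completed. "
    "2. Whether no models outside of the provided ones were used. "
    "3. Whether the output and input of each step can be connected. "
    "4. Whether it follows the correct JSON format."
)

def assemble(task: str, model_catalog: str, ask_llm) -> list:
    plan_prompt = (
        f"You are given a set of AI models to complete a user's task. {model_catalog} "
        f"You can only use the models given. You will answer in a JSON format "
        f"(a list of steps). Your task is to {task}."
    )
    answer = ask_llm(plan_prompt)
    review = ask_llm(
        f"{plan_prompt} {answer} Here are four criteria that the answer needs to satisfy. "
        f"If any criteria are not satisfied, please give me the corrected answer in JSON format. {CRITERIA}"
    )
    try:
        return json.loads(review)      # chain of puzzle pieces for the frontend
    except json.JSONDecodeError:
        return json.loads(answer)      # fall back to the first answer
```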
System walkthrough
We illustrate the interactions supported by Jigsaw using an interior design example. Figure 7 displays the final arrangement of puzzle pieces, referred to as the "model mosaic" in this paper for succinctness.
4.5.1 Ideating a design concept from client material. Zaha is an interior designer tasked with creating and presenting a redesigned interior for a client's new home. She received a photograph of the interior from the client.
To begin, Zaha plans to create a design concept using the client's photo as a reference (Figure 7a). She drags an Upload image puzzle piece from the Catalog Panel onto the Assembly Panel and uploads the client's photo in the Input Panel. Zaha first identifies existing features in the client's home, such as built-in structures, furniture, and lights. Thus, Zaha uses the semantic search bar in the Catalog Panel to find a puzzle piece that can "identify the objects inside the image". Jigsaw returns the Tag image piece as the top result. She adds Tag image after Upload image to identify the objects. Zaha then uses the Ask GPT piece in ideation mode, with "contemporary interior concept" in the Task box, to assist her in brainstorming a concept. After clicking Run, Jigsaw suggests a design concept of "An airy space with a minimalist fireplace and ladder, illuminated by low-hanging lamps."
4.5.2 Designing the 2D and 3D mockups. With a design concept in hand, Zaha proceeds to create the visual design (Figure 7b). Zaha quickly duplicates the Upload image piece to start a new chain, using the client's photo as a reference again. She notices that the reference photo includes people, whom she wants to remove, and adds the Remove people piece. Zaha is interested in experimenting with AI image generation tools but acknowledges that the room's structure must remain intact for the design to be technically feasible. She uses the semantic search bar in the Catalog Panel to find a puzzle piece that can help "preserve the structure of the room". Jigsaw returns Get edge map and Get depth map as the top results. Zaha begins by testing the Get edge map piece and adds the Generate image from text and edge map piece, which takes in both image and text inputs. For text, she inputs the design concept suggested in the previous ideation chain.
Zaha feels that the generated image fails to retain the desired room structure.Recognizing this, she drags the pieces using edge maps into the trash bin.She tries Get depth map and Generate image from text and depth map instead, which better preserves the room's structure.To try different design variations, Zaha inspects the parameters tooltip for the Generate image from text and depth map model in the parameters sidebar.She discovers that she can tinker with the seed value to generate different variations.
Zaha is now satisfied with the redesign, except for the wooden ladder. She believes replacing it with a spiral glass staircase would better fit the contemporary concept. Thus, she searches for a puzzle piece that can modify an image using text instructions and finds the Generate image from text and image piece. Zaha instructs it to "replace the wooden ladder with a glass spiral staircase." The newly generated redesign now features a contemporary glass spiral staircase.
Zaha would like to visualize a 3D mockup. She discovers the Generate 3D model from image piece and attaches it to the chain, but finds that the generated results are low resolution. Thus, Zaha searches the Image section of the Catalog Panel for a piece that can help enhance the image. She finds the Increase image resolution piece.
4.5.3 Presenting the design to the client.
To communicate the contemporary design concept, Zaha would like to incorporate a musical background to complement the design aesthetics. To achieve this, Zaha asks the Assembly Assistant to "help add music based on the image". The Assembly Assistant suggests the following chain of puzzle pieces: Caption image to understand the image, Ask GPT with ideation mode to brainstorm a fitting music description, and Generate music to generate the music (Figure 7c). The outcome is a chill electronic music piece.
USER STUDY
We conducted a user study to understand how Jigsaw could address designers' pain points in working with multiple foundation models, assess its potential to be integrated into design workflows, and identify areas for improvement.
Participants and Procedure
We invited the ten designers from our formative interviews to participate in a one-hour remote user study. They were not exposed to Jigsaw's system or concept prior to the user study. Participants accessed Jigsaw through a web browser, shared their screen, and verbally explained what they were doing and thinking (think-aloud).
Introduction (10 minutes).
Participants provided informed consent, and then received an introduction to Jigsaw's components, as described in Section 4.
Reproduction Task (15 minutes).
Participants were asked to reproduce the interior design model mosaic described in Section 4.5. Participants used the starter image shown in Figure 7a and a detailed design brief of the various steps they would need to create (see Section 4.5).
Free Creation Task (20 minutes).
Participants were asked to freely explore Jigsaw and create their own model mosaics.We encouraged participants to build workflows beyond a simple chain and try out puzzle pieces involving multiple modalities.
Post-Study Interview (15 minutes).
After the creation activities, we conducted a semi-structured interview asking about participants' experiences using Jigsaw, whether they could see Jigsaw being integrated into their design workflow, and to identify areas for improving the system.
Results, Discussion, and Work
All participants completed the reproduction and free creation tasks. Jigsaw appears to help designers discover and prototype new creative workflows. Designers suggested different future improvements.
5.2.1 Helping designers discover and utilize AI capabilities. Participants located AI capabilities via the Catalog Panel's semantic search bar or by filtering pieces by modality. Many participants mentioned that they were able to "discover new AI abilities [they were] previously unaware of (P9)". For example, P2, an illustrator, discovered the capabilities of ControlNet [56], a model that allows users to add additional control to a text-to-image model, such as a guiding sketch. Figure 8 shows the model mosaic created by P2, who used Jigsaw to create an audio-visual story. In Figure 8a, she created visuals for her story. Instead of using a text-to-image model, she discovered and used ControlNet to generate images based on a starting sketch. In total, participants used 8 model puzzle pieces on average (mean = 7.9, SD = 1.29), and all participants explored beyond well-known models such as GPT and Stable Diffusion (see Appendix B for details). As the number of foundation models continues to increase, we plan to expand our set of puzzle pieces for designers over time.
Participants commented that tooltips "gave [them] a solid understanding of the capabilities of each of the puzzle pieces, like what can be expected as output and what types of inputs are suitable (P1)". In addition, participants expressed that the assisted prompting mechanism offered by the translation glue piece "allowed [them] to achieve satisfying results without the need to laboriously rephrase and tweak prompts (P2)": "Now, it's like I speak the AI's language. (P5)".
5.2.2 Supporting intuitive prototyping. Participants expressed that "[they] enjoyed the idea of building with AI visually with tangible puzzle pieces (P10)" and found it "easy to pick up and start designing (P3)". In particular, participants appreciated the error-proof design: "Knowing which puzzle pieces can be connected expedites my prototyping. I see the same colors and receive snapping feedback. I don't spend time building a workflow and then compile it to find compatibility errors (P3)". A contribution of this research is showing how the design benefits of block-based VPIs, commonly catered towards novice programming learners [19,35,44], can be effectively applied to the realm of design prototyping for non-technical designers to work with AI capabilities. An interesting extension of Jigsaw could be the implementation of a "tutorial mode" to teach novice designers. The system would disassemble a model mosaic created by an experienced designer into pieces, allowing a novice designer to recreate it and learn from the experienced designer's design process.
Furthermore, participants mentioned that the ability to near-instantaneously see intermediate outputs in a chain "helped [them] to quickly test out ideas and make adjustments to individual steps as needed (P5)". This aligns with findings from prior research in interactive program debugging tools [22,25,37].
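The two interaction properties discussed above, error-proof snapping by modality and immediate visibility of every intermediate output, can be summarized in a small sketch. The piece definitions, field names and toy models below are illustrative assumptions rather than Jigsaw's actual data model.

```python
# Sketch: pieces connect only when output and input modalities match, and a
# chain keeps every intermediate output so the designer can inspect each step.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Piece:
    name: str
    input_modality: str      # e.g. "text", "image"
    output_modality: str
    run: Callable[[object], object]

def can_snap(upstream: Piece, downstream: Piece) -> bool:
    """Error-proof snapping: allow a connection only if modalities line up."""
    return upstream.output_modality == downstream.input_modality

def run_chain(chain: List[Piece], user_input: object) -> List[object]:
    """Check compatibility up front, then run and keep all intermediate outputs."""
    for upstream, downstream in zip(chain, chain[1:]):
        assert can_snap(upstream, downstream), f"{upstream.name} -> {downstream.name} is incompatible"
    outputs, value = [], user_input
    for piece in chain:
        value = piece.run(value)
        outputs.append(value)
    return outputs

ideate = Piece("ideation glue", "text", "text", lambda brief: f"concept for: {brief}")
paint = Piece("text-to-image", "text", "image", lambda prompt: f"<image of {prompt}>")
print(run_chain([ideate, paint], "a cozy reading nook"))
```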
5.2.3 Serving as a brainstorming and documentation canvas. We observed participants making creative uses of the Assembly Panel, including using it to test different partial workflows before combining them into more complete workflows and using it to document their design explorations. Participants expressed that the Assembly Panel provides "a playground to be messy and experimental (P3)" and "makes it easy to track the evolution of an idea (P4)." Moreover, participants commented that the ideation glue piece was "helpful for brainstorming concepts at the beginning [of the design process] (P2)". We observed that designers occasionally passed the outputs of the ideation glue directly into a generation model, as shown in Figure 8c. In other instances, designers maintained a shorter chain solely for concept generation. This is shown in the model mosaic created by P10, a game designer, in Figure 9, who created a video game character. He primarily used the short chain in Figure 9a to generate concepts for his character. We observed that designers frequently created multiple chains to organize different stages of their design process. Participants noted that since the canvas documents their creative process, it could serve as "a boundary object for sharing and explaining ideas to clients (P5)".
Moreover, participants commented that the Assembly Assistant was useful in "generating an initial configuration of puzzle pieces to start working with (P1)". This aids in combating the "blank canvas syndrome (P6)", a common occurrence at the onset of a creative activity [23]. In Figure 9b, P10 wanted his video game character to look like Superman flying. However, he initially struggled to come up with a method to accomplish this, so he sought assistance from the Assembly Assistant. The Assembly Assistant recommended a workflow of a reference pose image, a pose extraction model, and a ControlNet model that can be guided by pose. Given this workflow, P10 used a wooden mannequin to specify the pose for his character.
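To make the recommendation step concrete, the sketch below shows one way an LLM-backed assistant could be asked to propose a chain of pieces for a designer-specified task. The prompt structure, the call_llm placeholder and the catalog summary are assumptions for illustration; the only detail taken from the text is that the Assembly Assistant is GPT-based.

```python
# Sketch of an LLM-backed assistant proposing a chain of puzzle pieces.
# call_llm() is a placeholder for a call to a large language model.
import json

CATALOG_SUMMARY = [
    {"name": "pose extraction", "input": "image", "output": "pose"},
    {"name": "ControlNet (pose)", "input": "text+pose", "output": "image"},
    {"name": "text-to-image", "input": "text", "output": "image"},
]

def call_llm(prompt: str) -> str:
    """Placeholder: wire this up to an LLM provider of your choice."""
    raise NotImplementedError

def recommend_chain(task: str) -> list:
    prompt = (
        "You assemble AI models into a chain. Available pieces:\n"
        f"{json.dumps(CATALOG_SUMMARY, indent=2)}\n"
        f"Task: {task}\n"
        "Reply with a JSON list of piece names, ordered from first to last, "
        "such that each piece's output modality matches the next piece's input."
    )
    return json.loads(call_llm(prompt))

# e.g. recommend_chain("render my character flying like Superman, matching a reference pose")
```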
LIMITATIONS AND FUTURE WORK
There are several avenues for improvement that we plan to address in future work. First, we currently implemented one AI model for each design task (e.g., Stable Diffusion for text-to-image). We plan to support the capability of switching between multiple alternative models. We will provide information on the tradeoffs between them (e.g., speed vs. quality) for both the user and as context for Jigsaw's subcomponents (i.e., semantic search and the Assembly Assistant), facilitate easy side-by-side comparison, and allow users to filter models by certain criteria (e.g., text-to-image models with a typical runtime of under 10 seconds). Second, we are interested in expanding Jigsaw to let designers define custom puzzle pieces, as suggested by P10, and logic operators such as if/else statements and loops, common in other VPIs [12]. Third, we currently use LLMs as glue, using the text modality for intermediate reasoning (Section 4.1.4). We anticipate extending the glue piece to incorporate newer research on multimodal LLMs (MLLMs), such as GPT-4V [6] and LLaVA [29], to add information from additional modalities. Fourth, we plan to expand the Input and Output panels to handle real-time video and audio streams, as suggested by P9. Finally, participants noted that the Assembly Assistant was less robust to ambiguous tasks or tasks that require very complicated mosaics. As the Assembly Assistant uses GPT, we anticipate improvements to the Assembly Assistant as stronger versions of GPT are released. This is a common challenge also recognized by recent machine learning works that aim to automatically combine expert models to solve complex AI tasks [21,38,47]. A path forward, as suggested by P7, could be to improve the Assembly Assistant to support back-and-forth interactions with the designer, becoming a design co-pilot that assists designers in creating complex workflows. As more designers use Jigsaw to create model mosaics, we plan to compile them into a template gallery that other designers can modify for their own use cases, as suggested by P8. We believe that the accumulated design templates can serve as a search space for the Assembly Assistant, enhancing its capabilities as a design search engine.
CONCLUSION
This research identifies the challenges designers face when using AI foundation models to support their work. The research prototype, Jigsaw, uses a puzzle piece metaphor to represent foundation models and allows the combination of models by assembling compatible pieces. Feedback from designers using Jigsaw demonstrated that designers discovered new AI capabilities, combined multiple AI capabilities across various modalities, and flexibly explored, prototyped, and documented AI-enabled design workflows. We are interested in extending Jigsaw with more capabilities and hope that this research can help inform future research on block-based prototyping systems for prototyping with AI foundation models.
Figure 2: Users can search for model pieces by describing their task in the semantic search bar (a). Users can hover over a model piece to view a description of its capability, typical runtime, and an example input and output (b).
Figure 3: The translation glue piece converts a piece of text into a prompt format suitable for text-to-x generation models (a). The ideation glue piece generates an idea for a design task (b).
Figure 4: Users can drag puzzle pieces from the Catalog Panel onto the Assembly Panel (a), select pieces on the Assembly Panel by clicking on them (b), and remove pieces by dragging them to the trash bin or pressing the delete key (c). Users can duplicate pieces, and undo and redo actions using hotkeys (d-f).
Figure 5: When the user drags a puzzle piece close to another compatible piece, Jigsaw displays a semi-transparent preview of the potential connection. If the user releases the puzzle piece, it will snap into place (a). Conversely, if the user attempts to connect a puzzle piece to an incompatible piece, the new piece will be repelled, ensuring that users do not force a fit (b). Users can move multiple puzzle pieces simultaneously (c).
Figure 6: Text inputs can be directly typed into the Input Panel (a). Image, video, 3D, and audio inputs can be uploaded either by drag-and-drop or the file browser (b). Sketch inputs can be drawn (c). Text outputs can be viewed and copied by the user (d). Image, video, 3D, audio, and sketch outputs can be viewed in their respective media viewers and downloaded by the user (e-i).
Figure 7: An example model mosaic for interior design. The designer can use Jigsaw to ideate a design concept from client material (a), design the 2D and 3D mockups (b), and add music to enhance the presentation (c).
Figure 8: Model mosaic by P2, an illustrator, to create an audio-visual story. P2 uses Jigsaw to create an illustration based on a text description and a reference sketch (a), generate narrations through a cloned voice (b), and generate accompanying sound effects (c).
Figure 9: Model mosaic by P10, a game designer, to create a video game character. P10 uses Jigsaw to create a character concept and preview the character's visuals (a). P10 then specifies a pose for the character and generates the character's visuals in the specified pose (b). P10 then generates a line for the character to say and animates the character to deliver the line.
Table 1: Comparison of Jigsaw against related tools. Jigsaw supports non-technical designers with a beginner-friendly block editor and offers AI capabilities across multiple modalities. Jigsaw's Assembly Assistant can help automatically recommend a chain of AI models for a designer-specified task.
"Computer Science",
"Art"
] |
Efficient one- and two-qubit pulsed gates for an oscillator stabilized Josephson qubit
We present theoretical schemes for performing high-fidelity one- and two-qubit pulsed gates for a superconducting flux qubit. The "IBM qubit" consists of three Josephson junctions, three loops, and a superconducting transmission line. Assuming a fixed inductive qubit-qubit coupling, we show that the effective qubit-qubit interaction is tunable by changing the applied fluxes, and can be made negligible, allowing one to perform high-fidelity single-qubit gates. Our schemes are tailored to alleviate errors due to 1/f noise; we find gates with only 1% loss of fidelity due to this source, for pulse times in the range of 20-30 ns for one-qubit gates (Z rotations, Hadamard), and 60 ns for a two-qubit gate (controlled-Z). Our relaxation and dephasing time estimates indicate a comparable loss of fidelity from this source. The control of leakage plays an important role in the design of our shaped pulses, preventing shorter pulse times. However, we have found that imprecision in the control of the quantum phase plays the major role in limiting the fidelity of our gates.
Introduction
Superconducting circuits containing Josephson junctions [1,2] are widely recognized to be promising systems for the physical implementation of quantum bits. These systems can be made and operated using well-established experimental techniques, and they have the clear potential, in principle, for scalability. Important experimental milestones in coupled superconducting qubits, including the observation of two-qubit gates [3,4] and the measurement of entanglement [5], have already been reached. However, because superconducting qubits are condensed-matter systems, isolating them from their environment has proved difficult; as a result, these systems tend to suffer from short coherence times, which has imposed serious limitations on achieving very high fidelity gates.
In this paper, we report a theoretical study of a universal set of one- and two-qubit gates implemented using only shaped dc flux pulses for an oscillator-stabilized flux qubit. We introduce a simplified but accurate model to describe the dynamics of the lowest states of the qubit as a function of the external control parameters. This model provides a simpler way to analyze the physics of the problem, making it easier to see which operations are necessary for performing the desired quantum gates. As we shall see, there is a trade-off between the speed of a gate and the amount of leakage produced by it (leakage = evolution of states out of the 0-1 computational basis). Smart choices for the shape of the pulses are required to keep leakage at a tolerable level, such that the gate fidelities are not compromised by this process [6,7]. We have been able to keep leakage at the 0.12% level for gate times of the order of a few tens of nanoseconds.
The other important goal of the present work is to consider the effect of low-frequency noise during the gate operation and in the memory state. When designing quantum gates, we search for the best pulse paths such that the loss of fidelity due to the low-frequency noise is minimized. We find that for all gates of interest, the loss of fidelity due to low-frequency noise is never greater than 1%. Achieving this low level of infidelity requires a careful use of symmetries (so that errors accumulated in the first half of a pulse can be cancelled out in the second, for example) and of 'sweet spots' [8,9] (points in the control parameter space that are first-order insensitive to fluctuations).
The last part of this paper analyzes a pulsed two-qubit gate, the controlled phase gate. An interesting feature of the coupling used in this gate scheme is that, although the physical inductive coupling between the qubits is assumed fixed, the effective qubit-qubit interaction is tunable as a function of the control parameters on both qubits, due to the change of character of the bare qubit states as the control parameters are varied. In fact, the effective qubit-qubit interaction can be made negligible in large regions of the flux space, allowing us to perform high-fidelity one-qubit gates for the two-qubit system without extremely stringent control over electrical cross-talk.
The outline of this paper is as follows. In sections 2 and 3, we discuss the physics of our system, and present the simplified model used to simulate the dynamics of the lowest-lying levels of an oscillator-stabilized flux qubit. The appendix carries out a detailed derivation of the system Hamiltonian, and the regime of validity of the simplified model is discussed. Section 4 presents the different schemes designed to perform the following basic single-qubit operations: measurement in the standard basis (0/1), measurement in the conjugate basis (+/−), Z-rotation gates and the Hadamard gate. The fidelity of all these gates is given as a function of unwanted shifts from the optimal point of operation in flux and in the time synchronization of the pulses. In addition, we characterize the noise through the operator-sum representation of the system superoperator. In section 5, the two-qubit system is analyzed. The form of and the reasons for an effective tunable interaction are discussed. Then, a gate in the equivalence class of the controlled-Z gate is proposed, and its fidelity and a characterization of the nature of the noise are presented. Finally, section 6 gives some conclusions.
System Hamiltonian
Our analysis will be focused on the IBM qubit [10,11]. This device, shown in figure 1, consists of a bare qubit, which is a type of flux qubit containing three Josephson junctions and three loops, and a high-quality superconducting transmission line [9,11,12]. The bare qubit is subject to external control via flux lines which change the total magnetic fluxes threading the loops. As previously reported [13], and summarized in the appendix, the bare qubit has a gradiometric structure, and hence its behavior, to good approximation (see the appendix for the validity of this assumption), is only a function of the difference of the magnetic fluxes in the two large loops, which we denote as ε. Whenever ε is an integer multiple of the flux quantum Φ_0 = h/2e, the system potential has a perfectly symmetric structure. These lines in the flux space are referred to as the 'S lines' (S for symmetric).
The bare qubit has two control parameters that define its flux space: one, ε, causes departures from the parity symmetry just described, and the other, the so-called control flux Φ̃_c, changes the structure of the qubit potential, varying the height of the potential barrier between the two classically stable states. As Φ̃_c is changed, the system potential passes from a double-well to a single-well structure. This change in the form of the potential as a function of a control parameter produces an essential feature in the quantum behavior of the system: a well-defined change in the character of the qubit eigenstates. While for the double-well regime the qubit eigenstates are almost localized orbitals in the left and right wells, as the single-well regime is approached the qubit eigenbasis goes over to delocalized states, being close to states that are symmetric and antisymmetric with respect to reflection around the midpoint of the potential.
Figure 1 (caption): In order to operate the qubit, flux lines (highlighted in black tracks) are used to change the total fluxes threading the loops. Readout SQUIDs (two such structures are shown on the left-hand side of the picture) perform the measurement of the state of the qubit. (c) Proposed scheme of the two-qubit system. The qubit-qubit interaction is assumed to arise via the indicated mutual inductance between the two big loops [14]. This results in a qubit-qubit interaction Hamiltonian of the form σ_z ⊗ σ_z. Physical parameters for this qubit: the capacitances and critical currents of the junctions are assumed to be C = 10 fF and I_c = 1.3 µA; L_T = 5.6 nH, L_1 = 32 pH and L_3 = 680 pH are the transmission line, small loop and big loop inductances, respectively. The mutual inductances between qubit and transmission line, small loop and control flux line, and big loop and bias flux line are respectively 200, 0.8 and 0.5 pH. The transmission line is designed to have a fundamental mode frequency of ω_T = 2π × 3.1 GHz. Finally, the qubit-qubit mutual inductance is M = 12 pH.
This change of the character of the states happens over a very small interval of Φ̃_c, because of the exponential increase in the amplitude of tunneling between the wells as the barrier between the two wells is decreased. This transition interval is referred to as the 'portal' [10,11], and pulsing the flux parameters through this portal can create some of the elementary actions needed for the construction of quantum gates. For example, a non-adiabatic pulse through this region can create superpositions between the states |0⟩ and |1⟩.
The fundamental mode of the open-ended transmission line acts as a harmonic oscillator of frequency ω_T coupled to the bare qubit. The presence of that structure modifies the quantum behavior of the qubit when the energy splitting of the ground and first excited states of the bare qubit is comparable to or larger than ℏω_T. In this regime, the two lowest-lying states of the system both have the bare qubit in its ground state; they differ only in their transmission line quantum number. By tuning the energy splitting of the ground and first excited states, one can move information stored in the bare qubit to the transmission line and vice versa. When the qubit has been transferred to this transmission-line embodiment, we say that it is parked. Parking results in a very useful stabilization of the 0-1 frequency as a function of changes in {Φ̃_c, ε}. In addition, the quantum coherence times while parking are seen to reach several microseconds [15]. Thus, the parking regime will be used as the memory state of the qubit; it will stay in this state when it is awaiting operation. Far away from parking, when the 0-1 energy gap is much smaller than ℏω_T, the transmission line does not play any role in the dynamics of the 0-1 states, which are just those of the bare qubit. Since the coherence times here are expected to be of the order of only tens of nanoseconds, this regime should be avoided; this part of the parameter space will only be used for state measurement.
So far, we have given a qualitative description of the qubit dynamics. This description is made quantitative with the methodology introduced by Burkard, Koch and DiVincenzo (BKD) [16]. Using network graph theory, BKD developed a universal method for analyzing any lumped element electrical circuit containing Josephson junctions. The result of this theory is a mapping of the circuit dynamics to that of a massive particle in a potential, whose mass tensor and degrees of freedom are associated with the system capacitances. The system Hamiltonian in this formulation is given by equations (1) and (2). Here L_J;i^{-1} ≡ (2π/Φ_0) I_{c;i}, where I_{c;i} is the critical current of junction i. The diagonal matrix C contains the capacitances of the system. The topology of the circuit is encoded in the matrices M_0, N and S. The first term of the potential, equation (2), represents the energy due to the presence of the Josephson junctions. The second term is associated with the inductive energies of each branch of the circuit. The last two terms take into account coupling to external sources of magnetic flux Φ_x and current sources I_B. Quantization of the system is introduced by imposing the canonical commutation relation between the charge variables, Q_C, and the phase variables, ϕ. The analysis of the system potential, equation (2), for our qubit reveals that, instead of working with the real applied fluxes Φ_x, it is more convenient to introduce 'non-orthogonal' flux coordinates ε and Φ_c, where Φ_c equals the applied control flux Φ̃_c plus a small correction proportional to ε (with a coefficient involving the small-loop inductance L_1, the big-loop inductance L_3, and the mutual inductances M_15 and M_35 of the small-big loop and the two big loops, respectively). In fact, as presented in the appendix, the potential has a definite symmetry as a function of {Φ_c, ε}, rather than {Φ̃_c, ε}. Since the potential symmetry is a key feature of the system, and we will exploit it when designing our gates, an accurate calculation must take this into account, even though the term proportional to ε represents only a small correction to the real applied flux Φ̃_c. From here onwards, we will only refer to the pair of effective fluxes {Φ_c, ε}.
Since our qubit has four capacitances (three associated with the Josephson junctions and one representing the fundamental mode of the transmission line), the BKD theory leads us to a four-dimensional (4D) potential, for which direct calculations and analysis are more difficult. Thus, we follow the procedure of [13] to reduce the system dimensionality to two, one coordinate representing the bare qubit and the other representing the transmission line. The procedure involves organizing the degrees of freedom into 'fast' coordinates, in which the potential rises very steeply (such that the system dynamics are frozen into the ground state along these directions), and ones that are 'slow'. Then, using a Born-Oppenheimer approach, the fast coordinates are traced out, resulting in small modifications to the remaining slow-coordinate potential energy. Figure 2 shows the first seven levels of the system spectrum, calculated following the steps described above, as a function of Φ_c, for three different values of the bias flux ε. Note that, by convention, the ground level is always at E = 0. It is clear that the equally spaced harmonic oscillator levels cut through this spectrum for all values of Φ_c and ε; since the transmission line sees the control flux only via interaction with the bare qubit, its energies are very stable except in the vicinity of energy crossings. Observe that the lowest-lying states in the 'parking' regime, at high values of Φ_c, involve only transmission line quantum numbers (i.e. they are the states of a harmonic oscillator). This is true because the energy splitting of the eigenstates of the bare qubit becomes much larger than the ℏω_T energy splitting of the transmission line states.
In addition, because of the interaction between the bare qubit and the transmission line, another structure present for each ε is an avoided crossing gap between the bare qubit and the transmission line states, occurring close to Φ_c = 1.45 Φ_0. As one can see, the bias flux plays an important role for small values of Φ_c, where the system potential has a double-well structure and the bare qubit energy splitting is smaller than ℏω_T. The first plot presents the case on the S line, ε = 0. Since at the S line the system potential is symmetric, the qubit states are symmetric and antisymmetric superpositions of the degenerate localized orbitals of the left and right wells of the potential. As we move away from the S line (ε ≠ 0), the symmetry of the potential is broken and the localized left and right orbitals are no longer degenerate. This explains the appearance of the gap between the ground and first excited states in the second and third plots at small values of Φ_c. One important feature of the system potential is a symmetry in going from +ε to −ε (see the appendix); under this transformation, the energies of the left and right states are interchanged. Finally, in the third plot it can be seen that the first excited state stabilizes at the transmission line frequency for Φ_c < 1.43 Φ_0. This occurs because for most values of Φ_c the bias term in the Hamiltonian is large enough by itself to make the bare qubit energy splitting larger than ℏω_T (but not much larger). As a result, for small Φ_c the first excited state corresponds to one excitation of the transmission line, and the second excited state to the excited state of the bare qubit.
Figure 2 (caption): The qubit is encoded using the lowest two eigenstates. For large values of Φ_c, the energy splitting of the bare qubit states is much larger than ℏω_T, so that the lowest-lying qubit states have purely harmonic oscillator character. (a) The level structure on the S line. In this case, the system potential has a symmetric double-well structure for small values of control flux; consequently the ground and first excited states are nearly degenerate there. (b) and (c) present cases away from the S line. Here, the states |L⟩ and |R⟩ are no longer degenerate and the system has a gap between the ground and first excited states for small values of control flux.
Four-level model
Although it would be possible, with considerable computational effort, to design and simulate the desired quantum gates by direct evaluation of the time-dependent Schrödinger equation for the circuit Hamiltonian, equation (1), a considerable economy is achieved by introducing a simplified model that correctly describes the lowest eigenstates, since we will only be interested in their dynamics for our quantum gates. We start the derivation of such a simplified model by using first-order perturbation theory to treat the bare qubit coupling with the flux line and with its transmission line. The resulting Hamiltonian, equation (4), contains a tunneling term proportional to σ̂_x with amplitude Δ, a bias term proportional to σ̂_z with coefficient b, the transmission-line oscillator term ℏω_T â†â, and a qubit-transmission line coupling of the form g(Φ_c)(â + â†)σ̂_z; here σ̂_i are the Pauli matrices, σ̂_z|L⟩ ≡ −|L⟩, σ̂_z|R⟩ ≡ +|R⟩, and {â†, â} are canonical bosonic creation and annihilation operators. The states |L⟩ and |R⟩ represent the localized orbitals found in the left and right potential wells. Because of the change of the state character as a function of Φ_c, the amplitude of tunneling Δ, the bias term coefficient b and the qubit-transmission line coupling g also become control-flux-dependent. Figure 3 presents their behavior as a function of Φ_c. As one can see, the tunneling amplitude becomes negligible for small values of Φ_c. This happens because, in this regime, the very large barrier between the two wells (∼100 GHz) in the double-well potential exponentially suppresses the tunneling between their lowest states. However, as the value of Φ_c increases, the barrier decreases and the two minima become closer. Consequently, the amplitude of tunneling rapidly increases, and the left and right orbitals become more and more delocalized. Around Φ_c ≈ 1.45 Φ_0 the barrier vanishes rather abruptly and the bare qubit potential enters a single-well regime; the region around this value of Φ_c is the portal introduced above. Once the bare qubit reaches the single-well regime, those coefficients become less sensitive to changes in Φ_c, since the nature of the states barely changes as one passes through this regime.
In addition, the inset shows that the amplitude of tunneling can rapidly reach values of several tens of GHz in the parking regime, which are, in general, much larger than the typical value of the transmission line energy splitting ℏω_T; hence, the lowest-lying states in the parking regime involve only transmission line states.
Unfortunately, even though the Hamiltonian of equation (4) already represents an important simplification of the system description, since it reduces the bare qubit Hilbert space to that of a two-state system, we cannot obtain an analytical solution for its eigenstates and eigenvalues. Thus, we still have to deal with an infinite set of states due to the harmonic oscillator. As we shall see, for the purposes of our work, we must have not only the correct dynamics of the states |0⟩ and |1⟩, but also an excellent agreement for the minimal gap between the computational basis {|0⟩, |1⟩} and the rest of the spectrum of the system. Because of that, truncation of the harmonic oscillator Hilbert space at its first excited state does not work as a fair approximation of our system Hamiltonian. As presented in figure 4, this simple truncation fails to give a good description of the gap between the |1⟩ and |2⟩ states. This happens because in the regime of small values of Φ_c the function g is appreciable. Hence, the shift applied to the oscillator due to the interaction with the bare qubit becomes important. Consequently, the most adequate description of the system is obtained using the representation of shifted harmonic oscillator states, where the truncation of the harmonic oscillator Hilbert space should give better results.
In fact, a more careful inspection of the Hamiltonian of equation (4) reveals that its harmonic oscillator part has the form of a well-known shifted oscillator. Thereby, changing to the representation of the shifted harmonic oscillator states should give us a better picture of the system dynamics, and the appropriate basis in which to perform further truncations of the Hilbert space. Nevertheless, the shift applied to the harmonic oscillator depends on the bare qubit state, as can be seen from the term g(Φ_c)(â + â†)σ̂_z of equation (4). This important feature of the system leads us to introduce the unitary transformation D̂(s, σ̂_z) ≡ exp[(sâ† − s*â)σ̂_z] as the conditional displacement operator of the system. Observe that the operator D̂ does not commute with the spin part of the Hamiltonian of equation (4). Thus, changing the Hamiltonian representation to the shifted harmonic oscillator states, H → D̂†HD̂, and then performing the truncation of the harmonic oscillator Hilbert space at its first excited state, we arrive at the simplified four-level model of equation (5), in which we define the operators ĉ ≡ |0_HO⟩⟨1_HO| and ĉ† ≡ |1_HO⟩⟨0_HO|, where |0_HO⟩ and |1_HO⟩ represent the ground and first excited harmonic oscillator states, respectively. During the procedure described above, we have introduced an ad hoc parameter, the shift parameter s, which can be used to parameterize the Hamiltonian of equation (5) in order to obtain level dynamics as close as possible to those determined by the Hamiltonian of equation (1).
A natural choice for the parameter s would be the standard value s = −g(Φ_c)/ℏω_T, which leads to the cancellation of the last two terms of equation (5), and to diagonalization of the Hamiltonian when Δ ≈ 0. However, since the shift imposed on the harmonic oscillator is conditioned on the bare qubit state, and the form chosen for s does not take into account the change of character of the bare qubit state, one should expect it to fail when the regime of high tunneling amplitude is reached. Indeed, as can be seen in figure 4, this choice for the parameter s only gives the correct description when the tunneling amplitude is negligible. In fact, the analytical solution of equation (5) using the above shift reveals that, in the limit Δ → ∞, one should expect ω_01 → ∞, in complete opposition to the parking stabilization expected for the system eigenstates.
Figure 4 (caption): Comparison with the four-level model of equation (5), using the shifts s = −g/ℏω_T (triangle-symbolized curve) and s = −g/√((ℏω_T)² + Δ²) (dashed red curve), equation (6), for three different values of the bias flux ε. The simple truncation fails when g becomes appreciable (small values of Φ_c). In this regime, the shifted harmonic oscillator states are the preferred system representation. The four-level model using the shift s = −g/ℏω_T does not provide the parking harmonic stabilization, since it does not consider the change of character of the bare qubit. Finally, the four-level model using the conditional shift of equation (6) gives a very fair approximation of the lowest-lying levels for all regimes of the bias and control flux.
Therefore, a smart choice of the parameter s has to consider the fact that the harmonic oscillator shift is conditioned on the bare qubit state, and that the state character changes as a function of Φ_c. As we already know, in the limit Δ → 0 the parameter s should asymptotically reach the value s → −g(Φ_c)/ℏω_T, since it correctly decouples the spin and harmonic oscillator degrees of freedom in the Hamiltonian of equation (5). In the other limit, Δ → ∞, one should expect no shift to be imposed on the harmonic oscillator, since the system states would be frozen in the bare qubit ground state. Thus, we expect a reasonable interpolation for the parameter s between the two regimes to be s(Φ_c) = −g(Φ_c)/√((ℏω_T)² + Δ(Φ_c)²), equation (6). In fact, as presented by the dashed curve in figure 4, the form of equation (6) for the parameter s gives a very fair approximation for the lowest level dynamics, in particular the energy splittings ω_01 and ω_12. In addition, from the analytical solution of equation (5) using equation (6), we can check that ω_01 has the correct harmonic oscillator stabilization, i.e. Δ → ∞ ⇒ ω_01 → ω_T.
It is worth pointing out that, although the model of equation (5) does not give the correct parking stabilization for the system second excited state |2⟩, since this state involves the second excited state of the transmission line, this turns out not to be a limitation of the model for the purposes of our work. Indeed, the lack of parking stabilization for the second excited state |2⟩ would lead to underestimates of the transitions between the states |1⟩ and |2⟩, and consequently to wrong leakage estimates. However, as we are going to assume ω_T of the order of several GHz, the energy splitting ω_12 in the parking regime is large enough to suppress those transitions during our dc shaped pulse operations. Furthermore, since in the parking regime the ground state and the first and second excited states have the same nature (i.e. they are the states of the harmonic oscillator), observing the induced 0-1 transitions due to the dc pulse operations also gives a measure of the expected 1-2 transitions in this regime. Thus, we end up with a very controllable 4D Hilbert space model, spanned by the ground and first excited states of the bare qubit and transmission line, that accurately mimics the main features of the system dynamics determined by the exact circuit Hamiltonian model of equation (1). From now on, we will only use the Hamiltonian of equation (5) with (6) when designing and simulating the quantum gates.
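As an illustration of this kind of reduction, the following Python sketch builds a bare-qubit-plus-oscillator Hamiltonian with the structure of equation (4), applies a conditional displacement with the interpolated shift, truncates the oscillator to its two lowest levels, and compares the lowest splittings with the untruncated model. The coefficient values are placeholders (ℏ = 1, angular-frequency units), not the flux-dependent Δ, b and g of the paper.

```python
# Sketch of the four-level reduction: conditional displacement of the
# oscillator, then truncation to its two lowest levels. Parameter values
# are illustrative placeholders.
import numpy as np
from scipy.linalg import expm

N = 15                                    # oscillator levels kept in the "full" model
sx = np.array([[0, 1], [1, 0]], float)
sz = np.array([[1, 0], [0, -1]], float)
a = np.diag(np.sqrt(np.arange(1, N)), 1)  # annihilation operator
I2, IN = np.eye(2), np.eye(N)

# Placeholder coefficients (angular frequencies, hbar = 1).
omega_T, Delta, bias, g = 2 * np.pi * 3.1, 2 * np.pi * 1.0, 2 * np.pi * 0.2, 2 * np.pi * 0.5

H = (Delta * np.kron(sx, IN) + bias * np.kron(sz, IN)
     + omega_T * np.kron(I2, a.T @ a) + g * np.kron(sz, a + a.T))

# Conditional displacement D = exp[s (a^dag - a) sigma_z], with the
# interpolated shift s = -g / sqrt(omega_T^2 + Delta^2).
s = -g / np.sqrt(omega_T**2 + Delta**2)
D = expm(np.kron(sz, s * (a.T - a)))
H_disp = D.conj().T @ H @ D

# Keep only oscillator levels 0 and 1 in the displaced frame -> 4x4 model.
keep = [q * N + n for q in range(2) for n in range(2)]
H4 = H_disp[np.ix_(keep, keep)]

full = np.linalg.eigvalsh(H)[:4]
reduced = np.linalg.eigvalsh(H4)
print("lowest splittings, full model   :", np.diff(full) / (2 * np.pi))
print("lowest splittings, 4-level model:", np.diff(reduced) / (2 * np.pi))
```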
Completing the system description, figure 5 presents plots of the frequency difference between the ground and first excited states, ω_01, and the frequency difference between the first two excited states, ω_12, as a function of {Φ_c, ε}. As mentioned before, the ω_12 plot quantifies the minimal gap between the computational basis {|0⟩, |1⟩} and the rest of the spectrum of the system, giving a measure of how likely the system is to undergo leakage during the sweep of a pulsed gate. The ω_01 plot shows how fast the relative phase of the two computational basis states advances if the system is held at some flux values. We envision that there will be a very stable master clock at the transmission-line frequency, and that errors in the accumulated phase difference are unlikely if the system is held in parking, where its phase advance is synchronous with this clock. But when ω_01 departs far from ω_T, phase accumulation with respect to the reference is very fast, and we expect (and our numerical studies confirm) that the system is much more susceptible to phase errors in this regime. These plots provide a good 'map' for designing the one-qubit gates, since they indicate the regimes where one should expect an appreciable amount of leakage and very fast relative phase accumulation.
Figure 5 (caption): (a) The ω_12 plot quantifies the minimal gap between the computational basis {|0⟩, |1⟩} and the rest of the spectrum of the system, giving a limit on the rate at which the system can evolve without appreciable leakage. (b) The ω_01 plot indicates the phase accumulation rate with respect to the reference. In regions of very fast rates, a very precise control of the external applied fluxes is required in order to avoid phase noise. Illustrated with dots is the path in the flux space used to implement the Hadamard gate. The number of dots indicates the time spent when passing through that region.
One-qubit gates
Before we move on to the discussion of each gate individually, it is worth summarizing some features present in all of them (including the two-qubit gate). First, the memory and measurement points must be defined. The parking point is taken to be at Φ_c = 1.6 Φ_0, and the measurement point at Φ_c = 1.4 Φ_0. These choices are optimal in the following sense: for the parking point, the important feature to be considered is how much the ω_01 frequency changes as a function of the external controls. Since we envision the qubit spending long times at this position, it is imperative that the frequency deviation δω_01 has very small values for reasonable flux shifts due to the noise. We calculate that the frequency sensitivity is δω_01/δΦ_c ≈ 10 MHz/Φ_0 at the chosen memory point, which will give acceptably small memory error. In addition, inaccuracies in the transmission line fundamental mode frequency can be corrected using a π-pulse stabilization scheme [17]. This operation (not analyzed in detail here) consists of initially applying a π-rotation gate; the system is then left to evolve freely for the same amount of time previously spent in the memory state, and finally a new π-rotation is applied. As a result, the undesired Z-phase accumulated at the parking point ends up as a global phase of the system state.
For the measurement point, we must make sure the left and right states are experimentally distinguishable, and that, since the potential has a periodic structure [13], the potential barrier between different pairs of minima is high enough to avoid tunneling between their states. At Φ_c = 1.4 Φ_0, we found that the barrier between the two principal minima is of the order of 10 THz, and the barrier to other minima is even higher (about 20 THz).
Another common feature of all our shaped pulses is our design methodology. We shape our dc pulses using simple sums of tanh functions. As we shall see, our gates can be divided into several parts, and with each part is associated a function of the form δ·tanh((t − t_max)/τ), where δ, τ and t_max determine the flux excursion, the maximum rate and its time position, respectively. Thus, with these parameters, one can adjust the rate and the flux excursion of each part of the gate in order to optimize the gate. It turns out that our designed gates do not require a maximum flux slew rate and a bandwidth higher than 7 × 10^6 Φ_0 s^{-1} and 1 GHz, respectively. We measure the gates' fidelity using the entanglement fidelity [18], equation (7), where A_Q ≡ U†_ideal U_real(δΦ_c, δε, δt), with U_ideal and U_real(δΦ_c, δε, δt) representing the ideal gate and the final achieved transformation, respectively, and ρ_Q is an equal distribution of the computational basis states |0⟩ and |1⟩: ρ_Q ≡ (1/2)|0⟩⟨0| + (1/2)|1⟩⟨1|. Finally, we estimate the probability of leakage using the projection of an arbitrary evolved state, where P_j represents the projection operator onto the state |j⟩. Figures 6-8 show the proposed measurement gates, phase gate and Hadamard gate, whose matrix representations in the {|0⟩, |1⟩} basis are respectively given by equations (8)-(11); here θ_01 and θ_+− are arbitrary phases, whose values are not relevant for the implementation of the measurement gates of equations (8) and (9). Finally, since the states |0⟩ and e^{iθ_0}|0⟩, and |1⟩ and e^{iθ_1}|1⟩, are physically identical, the convention for the global phases θ_0 and θ_1 is chosen such that the physical implementation of the Hadamard gate has the matrix representation given by equation (11). Once fixed, the phase convention is maintained for all other gate implementations.
Figure 6 (caption): The gate consists of two different regimes: the first, t ≲ 7 ns, is designed to be an adiabatic process, thus minimizing leakage when passing through the avoided crossing. The other, t > 7 ns, evolves the system in a non-adiabatic process through the portal, leading to the desired transitions between the |0⟩ and |1⟩ states. The applied flux ε is maintained constant during the gate, and close to the S line: ε = 30 µΦ_0. Insets: the gate fidelity as a function of unwanted shifts in Φ_c and ε from the optimal point of operation. The main source of the loss of fidelity for the 0/1 measurement is related to 0/1 transitions, and for the +/− measurement it is related to phase noise. Note the difference in the scales.
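Since equation (7) is not reproduced above, the sketch below uses the standard entanglement-fidelity expression F = |Tr(ρ_Q A_Q)|² with ρ_Q an equal mixture of |0⟩ and |1⟩, together with leakage computed as the population that leaves the 0-1 subspace; this choice and the toy 4-level unitary are assumptions, not the paper's actual numerical pipeline.

```python
# Minimal sketch of the two figures of merit: entanglement fidelity of the
# error operator A_Q, and leakage out of the computational subspace.
import numpy as np

def entanglement_fidelity(U_ideal: np.ndarray, U_real: np.ndarray) -> float:
    """F = |Tr(rho_Q A_Q)|^2 with A_Q = U_ideal^dag (U_real restricted to the 0-1 block)."""
    A = U_ideal.conj().T @ U_real[:2, :2]
    rho = 0.5 * np.eye(2)
    return float(abs(np.trace(rho @ A)) ** 2)

def leakage(U_real: np.ndarray, state: np.ndarray) -> float:
    """Population left outside the 0-1 computational subspace after the evolution."""
    out = U_real @ state
    return 1.0 - float(np.sum(np.abs(out[:2]) ** 2))

# Toy 4-level evolution: an intended Hadamard with a small phase error and a
# small admixture of the second excited state when starting from |0>.
H_ideal = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
eps = 0.01
U_real = np.eye(4, dtype=complex)
U_real[:2, :2] = H_ideal @ np.diag([1.0, np.exp(1j * 0.02)])
U_real[:2, 0] *= np.sqrt(1 - eps ** 2)   # make room for the leaked amplitude
U_real[2, 0] = eps

psi0 = np.zeros(4, dtype=complex)
psi0[0] = 1.0
print(entanglement_fidelity(H_ideal, U_real), leakage(U_real, psi0))
```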
In addition, figures 6-8 also present the gate fidelities as a function of unwanted constant shifts from the optimal point of operation over the whole pulse profile. We model the effect of 1/f flux noise by an ensemble of random constant shifts of this sort. Our assumption of constancy is accurate for those components of the 1/f noise at frequencies below the inverse of the gate time, around 100 MHz. The 1/f noise at this frequency and higher is not accurately modeled in this way, but at these frequencies we believe that other sources of high-frequency noise (that is, white noise from the resistances in our circuit) become more important than 1/f noise. In our work, these noise sources are modeled separately using a quantum bath; see [13] for these calculations.
Our estimates for the T_1 and T_2 times [16] indicate that we can expect a very long coherence time in parking, O(1 s), and a very short dephasing time, 10 ns, at the measurement position. In the portal region, we calculate coherence times of the order of hundreds of microseconds. As we shall see, except in the measurement processes, our gates are designed not to go lower than the upper limit of the portal. Thus, in principle, for gates of duration of 20-30 ns, the fidelity should not be compromised by decoherence much more strongly than by the imperfections explored in these figures: unwanted shifts in the applied fluxes due to low-frequency noise, and temporal shifts between the Φ_c and ε pulses.
Those shifts are assumed uncorrelated and constant during a single gate operation, but random from one 'shot' to another. We model the noise as a normalized Gaussian probability distribution, µ(x) ≡ (1/N) e^{−x²/2σ²}, which we assume to have (at 1 Hz) 6 µΦ_0 (flux shifts) and 6 ps (time shifts) as its root mean square (rms) deviation, σ, such that approximately 90% of the distribution is found between the values ±10 µΦ_0 (flux shifts) and ±10 ps (time shifts); we believe that this quality of control will be readily achievable in the laboratory in the coming years. Indeed, recent experimental results [19] indicate that our assumption of 6 µΦ_0 as the rms deviation of the 1/f flux noise at 1 Hz is already achievable for Josephson qubits.
We note that by using the 1/f rms amplitude at 1 Hz in this model, we are implicitly assuming that the 1/f noise is cut off below this frequency. In fact, the 1/f spectrum goes much lower (at least three orders of magnitude lower in [19] and in other similar previous studies); but we may assume that in quantum computer operation, qubits are frequently taken offline and recalibrated. Assuming that this recalibration takes place once per second gives the 1 Hz cut-off. Our assumption of Gaussian statistics merely embodies the expectation that the 1/f noise arises as the summed effect of many independent fluctuators; this is also borne out by the traces of [19]. Table 1 summarizes the average fidelity obtained for the gates discussed, considering each noise channel separately: F̄ = ∫ dδx µ(δx) F(δx), where F is given by equation (7). As one can see, the minimal expected fidelity found was 99.47%, for the Hadamard gate. For the phase, Hadamard and controlled-Z gates, we present the operator-sum representation [20] of the system superoperator, E(ρ) = Σ_k E_k ρ E_k† (equation (12)), where the operators {E_k} are the operation elements for the quantum operation E. Having the set of operators {E_k} permits the characterization of the noise present during the system evolution.
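The Gaussian-weighted average just quoted can be evaluated by simple sampling, as in the sketch below. The quadratic toy fidelity is a placeholder for the simulated gate fidelity; only the 6 µΦ_0 rms assumption is taken from the text.

```python
# Sketch of F_bar = \int d(dx) mu(dx) F(dx), evaluated by Monte-Carlo
# sampling over Gaussian flux shifts. toy_fidelity() is a placeholder.
import numpy as np

rng = np.random.default_rng(0)
sigma_flux = 6e-6              # rms flux shift, in units of Phi_0

def toy_fidelity(d_flux: float) -> float:
    """Placeholder: fidelity falling off quadratically with the flux shift."""
    curvature = 1e8            # illustrative sensitivity, (fidelity loss) / Phi_0^2
    return max(0.0, 1.0 - curvature * d_flux ** 2)

samples = rng.normal(0.0, sigma_flux, size=100_000)
F_bar = np.mean([toy_fidelity(d) for d in samples])
print(f"average fidelity over the flux-noise ensemble: {F_bar:.5f}")
```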
We find that, for the noise model considered in this work, the effective noise in the physical implementation of our universal set of quantum gates is heavily biased: the effect of phase noise is at least one order of magnitude larger than that of bit-flip noise and leakage processes. New strategies for fault-tolerant computation have recently been worked out which take advantage of this sort of biasing in the noise [21].
Measurement gates
We present two distinct measurement gates, both very useful for effective quantum error correction and universal quantum computation. As one can see from their matrix representations, equations (8) and (9), by measurement gates we mean the unitary operations that are used to prepare the state for the final projection process (measurement). The first gate is a measurement in the standard 0/1 basis. That is, starting from parking, we are to distinguish whether the system is in its ground state (0 quanta in the transmission line mode) or its first excited state (1 quantum in the transmission line mode). This gate, shown in the first plot of figure 6, works by performing an adiabatic evolution of the states |0⟩ and |1⟩ from the parking point to the measurement point, maintaining the bias flux at a constant value off the S line (ε ≠ 0). So, through this adiabatic transformation, the qubit states evolve from the configuration |S, 0⟩ (ground state) and |S, 1⟩ (first excited state) to |L, 0⟩ and |R, 0⟩ (where the first label corresponds to the bare qubit states, and the second to those of the transmission line). The matrix representation, equation (8), shows that we end with an irrelevant relative phase θ_01 between the |0⟩ and |1⟩ states. The sources of loss of fidelity are leakage at the avoided crossing gap, and 0/1 transitions at the portal. For the path shown, the probabilities of observing leakage and 0/1 transitions are 3 × 10^{-4} and 6 × 10^{-3} %, respectively. As a result, the expected total net gate fidelity is ∫ dδΦ_c dδε µ(δΦ_c) µ(δε) F(δΦ_c, δε) ∼ 99.99%. The second measurement gate is in the conjugate basis. It evolves equal superpositions of the |0⟩ and |1⟩ states at the parking point, |±⟩ = (1/√2)(|0⟩ ± |1⟩), to the final states |L, 0⟩ and |R, 0⟩, respectively, at the measurement point. Since an adiabatic evolution would preserve the probability amplitude of each state in the superposition, we have to design a non-adiabatic pulse in order to implement this +/− measurement. Indeed, by passing at an appropriate rate through the portal region, we achieve the desired transformation. The second plot of figure 6 presents the proposed gate. It consists of two distinct parts: the first occurs up to t ≲ 7 ns, when an adiabatic Φ_c pulse is applied to the qubit. This part is done slowly to minimize the leakage when the system passes through the avoided crossing gap; it is also tuned so that the correct relative phase is accumulated between the |0⟩ and |1⟩ states, with respect to a reference phase. The second part of the pulse, t ≳ 7 ns, causes the qubit to undergo a non-adiabatic evolution through the portal, producing the transitions needed to implement the gate. The bias flux is maintained constant during the whole gate, and it is held as close as is practical to the S line; in our calculation we take ε = 30 µΦ_0. The matrix representation, equation (9), reveals the ideal transformation, which ends with an irrelevant relative phase between the ground and first excited states. For the path proposed, the probability of leakage is found to be 0.03%. Because the qubit has to stay for an appreciable amount of time in a region of very fast relative phase accumulation, the main type of noise is phase noise. The expected total net fidelity for this gate is ∫ dδΦ_c dδε µ(δΦ_c) µ(δε) F(δΦ_c, δε) ∼ 99.8%.
Phase gate
The phase gate, figure 7, is the simplest operation of our set. In order to accumulate the desired phase, θ_z(Δt) ≡ ∫_0^{Δt} ω_01(t) dt, it is sufficient to adiabatically bring the system from the parking point to a position where the ω_01 frequency deviates by a few hundred MHz from the reference frame; one then just has to wait the appropriate amount of time, corresponding to the desired phase accumulation (see the inset of figure 7). Since this position can be reached without passing through the avoided crossing, the leakage probability is extremely low; we calculate a leakage probability of 10^{-7} %. Since we remain essentially within a very stable parking regime, the gate, for any value of θ_z, is very insensitive to fluctuations of both applied fluxes. Thus, the gate can be performed with a very high total net fidelity: ∫ dδΦ_c dδε µ(δΦ_c) µ(δε) F(δΦ_c, δε) ∼ 99.999%. Since the operation is designed to perform an adiabatic evolution of the system, 0/1 transitions are almost completely suppressed. Consequently, the main source of (small) fidelity loss can be characterized as phase noise. Indeed, if we look at the operator-sum representation of the system superoperator, equation (12), we obtain the following decomposition: E_0 ≈ 0.99999 σ̂_z, E_1 ≈ 0.0031, E_2 = 0 and E_3 = 0, which clearly indicates that we have just one form of noise.
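The accumulated phase can be evaluated numerically from a given pulse profile, as in the sketch below. The detuning profile is a placeholder pulse expressed relative to the transmission-line reference frame, so only the deliberate excursion of ω_01 contributes; it is not the simulated ω_01 of the paper.

```python
# Sketch of theta_z(Dt) = \int_0^{Dt} omega_01(t) dt, evaluated for a
# placeholder detuning pulse (ramp up, hold, ramp back), relative to the
# reference frame.
import numpy as np

def detuning(t_ns: np.ndarray) -> np.ndarray:
    """Placeholder pulse: ramp to a 300 MHz detuning, hold, ramp back."""
    delta_max = 2 * np.pi * 0.3                     # rad/ns (300 MHz)
    return delta_max * 0.5 * (np.tanh((t_ns - 2.0) / 0.5) - np.tanh((t_ns - 8.0) / 0.5))

t = np.linspace(0.0, 10.0, 2001)                    # ns
d = detuning(t)
theta_z = float(np.sum(0.5 * (d[1:] + d[:-1]) * np.diff(t)))   # trapezoidal rule
print(f"accumulated Z phase: {theta_z:.3f} rad ({theta_z / np.pi:.2f} pi)")
```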
The Hadamard gate
Completing our universal set of one-qubit gates is the Hadamard gate, figure 8. Its implementation is more complex than the previous gates, since it requires the synchronization of the two parameters, Φ_c and ε. As one can see, the gate can basically be divided into three parts: the first, t ≲ 10 ns, evolves the qubit in an adiabatic process from the memory state to the portal. During this process, only the control flux is changed; once the start of the portal is reached, the second part of the gate is applied. It consists of a non-adiabatic pulse performed by the bias flux ε. This pulse is responsible for very quickly moving the qubit from one side of the S line to the other. This process, 10 ns ≲ t ≲ 14 ns, is designed to create the desired superpositions between the states |0⟩ and |1⟩. In order to require reasonable rise-times for this pulse, we have to be as far as possible from the S line. Nevertheless, this excursion in the bias flux is limited by the region of very small gap ω_12 seen in the flux space, see figure 5. Finally, the last step, t ≳ 14 ns, adiabatically brings the qubit back to its initial position; again, only Φ_c is changed in this regime.
As highlighted in figure 5, the path taken in the flux space due to this composition of the fluxes Φ_c(t) and ε(t) has a 'U' shape. This is a very useful feature, since by working on both sides of the S line one can expect to correct, in the second half of the pulse, some of the errors accumulated during the first. Indeed, because of the symmetry around the S line, to first order of approximation the errors due to shifts in the bias flux, δε, should cancel out, since at this order ω_01(±ε + δε) = ω_01(|ε|) ± αδε. However, for this to be true, a precise synchronization of the Φ_c(t) and ε(t) pulses is required, in order to obtain a symmetric path. For the paths in figure 5, the number of dots indicates the rate at which the system evolves: the fewer the dots, the faster the flux changes in time. One can clearly see the other strategies used to optimize the gate: in the parking regime, we can perform a very fast evolution, since ω_01 and ω_12 are very large (∼3.1 GHz); once we have reached the avoided crossing position, in order to avoid leakage, we slow down the evolution. However, this regime also coincides with a fast phase accumulation rate, so we try not to spend too much time in this region. Thus, we find a trade-off that has to be worked out during the optimization of the gate. For the path shown, we expect to observe a probability of leakage of 0.12%.
The second plot of figure 8 shows the fidelity as a function of δΦ_c, δε, and the flux desynchronization δt. It turns out that the gate proposed is very insensitive to fluctuations in δΦ_c. This occurs because we have designed the gate to exploit a 'sweet' spot in control flux. As one can see in figure 2, the region close to Φ_c = 1.447 Φ_0 presents a point that is first-order insensitive to fluctuations of Φ_c for several values of ε. Thus, since this point is at the portal, we have chosen it as the place to 'sit' the qubit while we perform the bias pulse. The average fidelity considering each noise channel separately is presented in table 1. Because the δε and δt fluctuations essentially have the same effect, changing the relative phase of the two computational basis states, their loss of fidelity is found to be very similar. Thus, we see that phase noise again plays the major role in the loss of fidelity. The complete noise characterization is obtained through the operator-sum representation of the system superoperator. The net fidelity considering the effects of all noise channels together is given by ∫ dδΦ_c dδε dδt µ(δΦ_c) µ(δε) µ(δt) F(δΦ_c, δε, δt) ∼ 99.46%.
The two-qubit system
The two-qubit system we have used to simulate the two-qubit gate is sketched in figure 1. This layout preserves the same structures present for the one-qubit system, i.e. the transmission line, the readout SQUIDs and the flux lines; in addition, it has a qubit-qubit interaction that is assumed to arise from the mutual inductance between the two big loops [14] (other qubit-qubit coupling implementations using tunable interactions are demonstrated in [22,23]). The qubit-qubit mutual inductance is considered to be a small parameter, such that first-order perturbation theory is expected to give a very fair description of the system dynamics. Thus, following the procedure adopted to derive equation (5), we obtain the two-qubit system Hamiltonian (see the appendix); H_A and H_B represent the single-qubit Hamiltonians, equation (5), of qubits A and B, respectively. The qubits are assumed to have identical bare qubits, but with different fundamental-mode transmission line frequencies, ω_T^A ≠ ω_T^B. This choice was made in order to avoid possible double excitation of the transmission lines due to the transfer of one quantum of energy from one transmission line to the other. In addition, because we would like to use the same master clock to track the dynamics of both qubits, we had to impose a rational relation between ω_T^A and ω_T^B. In our calculations, we have assumed that ω_T^A/ω_T^B = 3/4. The system interaction Hamiltonian H_I is given by equation (14). Its terms may be understood as arising from 'classical' and 'quantum' magnetic field components, in the following sense: the first two terms clearly have the form of a bias field B applied to each qubit (compare with the bias term of equation (4)). As expected, the bias field B applied to one qubit, say qubit i, depends on the control parameter of the other, qubit j. This happens because the external circulating current of qubit j creates a magnetic field, B(Φ_c^j), seen by qubit i. Nevertheless, for a given Φ_c^j, qubit i feels the same bias field B(Φ_c^j) whatever the quantum state of qubit j is. Thus, B should be seen as simply the result of Faraday's law applied to a classical circuit: the flux Φ_c^j induces a persistent current in circuit j, which, in turn, creates a magnetic field B(Φ_c^j) seen by the other qubit. Not surprisingly, the first two terms of H_I cannot generate qubit-qubit entanglement. The dot-dashed curve in figure 9 shows B as a function of the control flux. Because of the presence of the Josephson junctions (nonlinear inductances), B has nonlinear behavior for some values of Φ_c (double-well regime). As a consequence of the existence of the magnetic field B, for the two-qubit system the S line of qubit i is shifted to the lines ε_i = kΦ_0 ± B(Φ_c^j) (where k is an integer, and the choice of sign depends on which loop the flux B threads). Thus, we refer to these 'new' qubit symmetry lines as the S′ lines. This is a very important effect and must be taken into account when performing one-qubit operations on the two-qubit system.
The last term of equation (14) describes a magnetic field seen by one qubit, determined by the quantum state of the other. The result is a qubit-qubit coupling of the form σ_z^A ⊗ σ_z^B, which is responsible for generating qubit-qubit entanglement in the system. As one can see in figure 9, even though the physical qubit-qubit inductive coupling is assumed fixed, the effective interaction J is tunable as a function of both control flux parameters. In fact, as shown in the second plot of figure 9, the coupling J never reaches values higher than 450 MHz when one qubit is maintained deeply in the single-well regime (solid curve), rather than the 5 GHz observed when both qubits are at the measurement point (dashed curve). This is a direct manifestation of the change of the system state character as a function of the control parameter.
In addition, because the transmission line of qubit A sees qubit B only via the interaction with its bare qubit, and the interaction between the bare qubits is in fact small, it turns out that the transmission line states are very stable under changes of the control parameters of the other qubit. Indeed, since the qubit-qubit interaction, equation (14), is capable neither of moving the qubits from the single- to the double-well potential regime (for that, terms of the form σ_x are needed) nor of changing the quantum number of the transmission lines, once one qubit is parked the two-qubit system eigenstates stay 'frozen' in the subspace in which the parked qubit eigenstates are close to the states |S, 0⟩ and |S, 1⟩. Thus, parking one qubit imposes a selection rule on the system that leads to a further reduction, below the already small value of J, of the effective qubit-qubit coupling. In fact, our calculations of the interaction matrix elements ⟨i, j|H_I|l, m⟩ (where the states i, j, l, m are the 0-1 one-qubit states) show that they never reach values higher than 3 MHz when one of the qubits is parked.
Table 2 (caption): The expected one-qubit gate fidelities, as a function of unwanted shifts in δΦ_c, δε and δt, when performed on the two-qubit system using the same schemes described in section 4. We observe a small difference between the one- and two-qubit system fidelities only for the +/− measurement gate.
As a result, parking provides a very effective way to decouple the qubits, thus allowing one to perform one-qubit gates on the two-qubit system discussed. By simply taking into account the fact that the qubit S lines in flux space are moved to the S′ lines, we can use exactly the same shaped-pulse schemes presented previously for the one-qubit system in order to perform the universal set of one-qubit gates. Indeed, our simulations showed that one can expect the same probability of leakage previously reported, and virtually the same expected fidelities; see table 2.
Controlled-Z gate
The two-qubit gate proposed here is in the equivalence class [24] of the controlled-Z gate, i.e. it differs from the controlled-Z gate only by local operations on qubits A and B. The designed gate, shown in the first plot of figure 10, involves an adiabatic evolution of the system states in order to accumulate the correct relative phases. Both the control flux Φ_c and the bias flux of qubits A and B are used to perform the operation. In order to obtain the strongest qubit-qubit interaction, and thus the shortest gate possible, the control fluxes of both qubits are changed simultaneously, Φ_c^A(t) = Φ_c^B(t). The bias fluxes also change identically and, as in the Hadamard scheme, we work on both sides of the S line. The desired unitary transformation U_ZZ is diagonal in the two-qubit eigenstate basis {|0⟩, |1⟩, |2⟩, |3⟩}, and the relative (ZZ-type) phase θ_ZZ accumulated between these states is the pertinent parameter of this gate. Since the local invariants [24] of the gate U_ZZ are determined by θ_ZZ, and those of the controlled-Z gate are G_1 = 0 and G_2 = 1, we end up with the specific condition θ_ZZ = π(1 + 2k) for the relative phase.
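The local-invariant condition quoted above can be checked directly. The short script below computes the standard magic-basis local invariants G_1 and G_2 (the kind of invariants used in [24]; the construction here is the textbook one, not taken from this paper) and scans the diagonal gate diag(1, 1, 1, e^{iθ}), the simplest representative of a ZZ-type phase gate. It finds G_1 = 0 and G_2 = 1, i.e. the controlled-Z class, exactly when θ = π(1 + 2k).

```python
import numpy as np

# Magic (Bell) basis transformation used for the local invariants.
Q = (1 / np.sqrt(2)) * np.array([[1, 0, 0, 1j],
                                 [0, 1j, 1, 0],
                                 [0, 1j, -1, 0],
                                 [1, 0, 0, -1j]])

def local_invariants(U):
    """Return (G1, G2); two-qubit gates in the same local-equivalence
    class share the same values."""
    M = Q.conj().T @ U @ Q
    m = M.T @ M
    det_U = np.linalg.det(U)
    G1 = np.trace(m) ** 2 / (16 * det_U)
    G2 = (np.trace(m) ** 2 - np.trace(m @ m)) / (4 * det_U)
    return G1, G2

CZ = np.diag([1, 1, 1, -1]).astype(complex)
print("controlled-Z invariants:", local_invariants(CZ))   # (0, 1)

# Scan U_ZZ = diag(1, 1, 1, exp(i*theta)) and report the CZ-equivalent angles.
for theta in np.linspace(0, 4 * np.pi, 17):
    U = np.diag([1.0, 1.0, 1.0, np.exp(1j * theta)])
    G1, G2 = local_invariants(U)
    if abs(G1) < 1e-9 and abs(G2 - 1) < 1e-9:
        print(f"theta = {theta / np.pi:.2f} pi is in the controlled-Z class")
# -> theta = 1.00 pi and theta = 3.00 pi, i.e. theta_ZZ = pi * (1 + 2k)
```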
Since the evolution is adiabatic, the relative phase can be written as a time integral over the instantaneous eigenenergies E_i of the states i. Thus, it is evident that θ_ZZ has non-negligible values only when the term J σ_z^A ⊗ σ_z^B of equation (14) becomes appreciable. However, as shown in figure 10, to reach that region we have to pass through a regime with a very small gap (∼300 MHz) between the |3⟩ and |4⟩ states (the minimum gap between the computational basis {|0⟩, |1⟩, |2⟩, |3⟩} and the rest of the spectrum of the system). Because of that, our shaped dc gate has a total duration of 60 ns in order to avoid leakage during the evolution. In addition, in order to exploit 'sweet' spots of operation, the designed Φ_c(t) pulse is not completely flat when passing through the minimum-gap region (see figures 10(b) and (c)).
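A minimal numerical sketch of how θ_ZZ can be accumulated during such an adiabatic evolution is given below: the instantaneous eigenenergies of a toy two-qubit Hamiltonian are integrated over a made-up 60 ns pulse. The Hamiltonian, the pulse shape J(t) and the sign convention of the eigenenergy combination are illustrative assumptions, not the circuit model of figure 10; the peak coupling is deliberately chosen so that the accumulated phase comes out to π, i.e. a gate in the controlled-Z class.

```python
import numpy as np

sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

T = 60.0                     # total gate time (ns)
eps = 5.0                    # placeholder single-qubit splitting (GHz)
J_max = 1.0 / (4 * T)        # peak ZZ coupling, chosen so theta_ZZ comes out to pi

def H(t):
    """Toy two-qubit Hamiltonian during the coupling window (hbar = 1, GHz)."""
    J_t = J_max * np.sin(np.pi * t / T) ** 2        # smooth, made-up pulse shape
    return (-0.5 * eps * (np.kron(sz, I2) + np.kron(I2, sz))
            + J_t * np.kron(sz, sz))

ts = np.linspace(0.0, T, 2001)
E = np.array([np.sort(np.linalg.eigvalsh(H(t))) for t in ts])   # instantaneous levels

# ZZ-type relative phase accumulated adiabatically by the four computational
# levels; the factor 2*pi converts GHz to rad/ns.
integrand = 2 * np.pi * (E[:, 0] - E[:, 1] - E[:, 2] + E[:, 3])
theta_ZZ = np.trapz(integrand, ts)
print(f"theta_ZZ = {theta_ZZ / np.pi:.3f} pi")   # ~1.000 pi -> controlled-Z class
```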
The gate is performed in the following way: starting from the memory state, both control fluxes are changed to the region around Φ_c ≈ 1.453 Φ_0. Then, by changing the bias fluxes of qubits A and B, the qubits are slowly moved through the region of minimum gap ω_34, passing from one side of the S line to the other. At this stage, the accumulation rate of θ_ZZ becomes appreciable (hundreds of MHz). The gate is designed to spend the right amount of time in this region, such that the final evolution satisfies the necessary condition on θ_ZZ. Once the other side of the S line is reached, the qubits are brought back to the memory state. As observed for the Hadamard gate, the proposed two-qubit gate also has a 'U' shape in flux space.
The probability of leakage due to this process is ∼0.04%. Since the gaps between the computational states are larger than 700 MHz, unwanted transitions are strongly suppressed. Therefore, phase noise is the main source of loss of fidelity. As for the one-qubit gates, this phase noise arises from physical 1/f noise and pulse-timing variations; we assume their magnitudes to be the same as in the one-qubit case, with no correlations between the two qubits. We then calculate (assuming, again, Gaussian noise statistics) the operator-sum representation of the system superoperator. The total net fidelity, averaging the effects of all noise channels, is
∫∫∫ dδ_c dδ_ε dδt μ(δ_c) μ(δ_ε) μ(δt) F(δ_c, δ_ε, δt) ∼ 99.67%.
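The triple Gaussian average written above can be evaluated efficiently with Gauss-Hermite quadrature, as in the sketch below. The quadratic fidelity model and the noise amplitudes are placeholders standing in for the simulated F(δ_c, δ_ε, δt), so the printed number is illustrative only and is not the 99.67% quoted above.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss

# Toy quadratic model of the gate fidelity versus small shifts in control flux
# (dc), bias flux (de) and timing (dt).  The curvatures are placeholders, not
# the values extracted from the gate simulations.
def fidelity(dc, de, dt):
    return 1.0 - 4.0e3 * dc**2 - 2.0e3 * de**2 - 1.0e-4 * dt**2

# Assumed 1-sigma noise amplitudes (fluxes in units of Phi_0, time in ps).
sig_c, sig_e, sig_t = 1e-3, 1e-3, 20.0

# Gauss-Hermite quadrature for the weight exp(-x^2/2); normalizing the weights
# by sqrt(2*pi) turns the sum into an average over a standard normal variable.
x, w = hermegauss(15)
w = w / np.sqrt(2 * np.pi)

F_avg = sum(wi * wj * wk * fidelity(sig_c * xi, sig_e * xj, sig_t * xk)
            for wi, xi in zip(w, x)
            for wj, xj in zip(w, x)
            for wk, xk in zip(w, x))

print(f"noise-averaged fidelity ~ {100 * F_avg:.2f}%")
```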
Figure 10. The energy splitting ω_34 gives a limit on the rate at which the system can evolve without appreciable leakage. The θ_ZZ plot gives the phase-accumulation rate of the principal parameter of the gate, equation (16). The path (a) taken in flux space is illustrated with dots in (b) and (c); the number of dots indicates the time spent when passing through that region.
Conclusion
Obviously, it is hoped that the results of this paper will be of direct relevance for experiments on qubits being performed at IBM, as well as being of general relevance to the problem of precision quantum control in any flux qubit system. The noise mechanisms analyzed here-magnetic 1/ f noise, Johnson noise of circuit resistances and timing errors in pulse channels-are a mixture of fundamental and practical sources of error that are currently understood in the laboratory. Thus, the quantitative fact that the values of the gate infidelity (one minus the fidelity) are at the 1% level and below, is the major result of this paper.
One might ask, given that this paper's main claim to importance is quantitative, whether the approximations made in the quantum model are actually adequate to give sub-1% accuracy. Perhaps the fundamental circuit Hamiltonian may be accepted as being very accurate; but can it be that the sequence of subsequent approximations, namely (i) the representation of the transmission line by a single oscillator mode, (ii) the dimension-reduction resulting from the Born-Oppenheimer approximation, and (iii) the final spectral truncation, involving an interpolated oscillator-displacement parameter, resulting in our four-level model, have the desired sub-1% accuracy? We can see, for example, in figure 4, that the spectral truncation results in changes in the absolute value of the energy eigenvalues, in some places by as much as 7%.
Despite this, we believe that our infidelity numbers are nevertheless quite sound, so that if we calculate an infidelity here of 1%, it may be, in a more accurate calculation, actually equal to 0.9 or 1.1%, but not much different from that. We have a few reasons for saying this. First, for our gate designs, it is generally not necessary that an energy gap be exactly a certain value at a particular point on the flux axis; it is more important that, somewhere in the near vicinity of a particular flux value, the energy gap has a certain size. This will be true even for a model Hamiltonian of moderate accuracy.
Traced to a deeper level, the successful functioning of our gates depends primarily on the following general features of the model. (i) Since many of the evolutions we consider are adiabatic, it is important merely that some integrated properties of the energy eigenvalues over some paths in parameter space be correct. This is fairly easily achieved by a model of moderate accuracy. (ii) In a few cases, basis-changing, non-adiabatic evolutions are important. These are achieved in just three ways: by passing through the portal, by crossing the S line, and (unintentionally) by moving the system in and out of parking. Our model obviously contains all of these features, with trends in the size of gaps that certainly track those of an exact calculation very closely. Furthermore, the general structure of the low-lying eigenvectors is captured in our truncation, meaning that trends in the matrix elements that determine the magnitude of the Landau-Zener tunneling effects are also well represented.
So, we conclude that our qubit, as currently understood, should be capable of gate operation at the 1% noise level. Unfortunately, in the lab, in any given day (or month) there are 'bugs' in the experiment that cause the qubit, often for unexplained reasons, to have fidelities much worse than what we estimate here. But we nevertheless hope that these calculations will have significant practical value. Our gate set is universal, in one of two different ways, in fact: the 0/1 measurement/preparation gates, Hadamard, controlled-Z , and one-qubit phase gates form one well-known universal set [20]; it is less well-known that, with more overhead, Hadamard can be replaced by the +/− measurement/preparation gates [25].
So, can a 'debugged' IBM qubit be used soon for universal quantum computation? The answer is, in our opinion, ultimately yes. The answer would certainly be no if the noise threshold for fault-tolerant quantum computation were in the neighborhood of the oft-quoted value of 10 −5 [26]- [28]. It is not inconceivable for the experiment to get to these values someday, since we find that the infidelities decrease much faster than linearly with the assumed noise levels.
(To get to 10^-5 we would need to reach the very daunting levels of 100 nΦ_0 at 1 Hz for the 1/f noise amplitudes and 100 fs for timing accuracies; there is optimism that both of these numbers are ultimately attainable.) Fortunately, while 10^-5 was the threshold as it was understood 10 years ago, much recent work shows that with good designs, much higher thresholds are possible [29,30]. According to [30], 1% is in fact on the high end of the noise levels for which fault tolerance may be possible.
But the final answer to the question of whether fault tolerance is possible will require resolving a number of further uncertainties. One effect not included here is the fact that noise will be correlated between qubits that are physically nearby, due to ordinary electrical cross-talk, say. This is certainly deleterious for noise thresholds. However, a great virtue of our parking scheme is that, while parked, qubits are quite immune to electrical noise from nearby sources. Another point that might count in our favor is that, as seen in our operator-sum expansions, the noise has a definite structure (i.e. it is mostly phase noise in most cases), while most analyses of noise thresholds have assumed worst-case, structureless noise. On the other hand, these special noise sources, to the extent that they arise from the physical 1/ f noise, are substantially correlated in time. While fault tolerance is known to survive in the presence of such 'non-Markovian' sources [31], it is not clear what such an effect does to the numerical values of the threshold. Finally, there are purely 'architectural' considerations, e.g. the theoretical analyses assumed that a qubit can be moved to the proximity of any other one without cost. This will surely not be true in any real Josephson architecture.
To summarize, we have demonstrated that using only low-bandwidth electrical pulses, a universal set of high-fidelity quantum gates can be achieved for an oscillator-stabilized Josephson qubit, with a required 'clock time' around 60 ns, and achievable fidelities above 99% per gate operation. Essential to these results are the existence of 'parking' made possible by coupling of the flux qubit to a transmission line, the possibility of mostly adiabatic control, and the availability of two robust basis-changing effects obtained by crossing the 'portal' and the 'S line'. Time will tell if these physical elements make it possible, in the face of the many sources of noise that are present in the solid-state environment, to do large-scale quantum computation.
Appendix A. Hamiltonian derivation
In this appendix, we supply derivations of the Hamiltonians presented in the main text, equations (4) and (14). The main assumption in the derivation is the use of the first-order perturbation theory to treat terms of the system potential. As presented in figure 4 and previously discussed, performing the procedure described here, we could arrive at a very good description for the lowest states of the IBM qubit.
A.1. Brief review of BKD
BKD [16] have introduced a universal method for analyzing any electrical circuit containing Josephson junctions, provided only that its elements can be represented by lumped elements. The methodology can be summarized in a few steps: a network graph is written for the circuit. A network graph is simply a drawing of the circuit where each two-terminal element (inductor, capacitor, Josephson junction, etc) is represented as an oriented labeled branch connecting two nodes. Then, a tree of the network graph (a subgraph that does not contain any loops) is chosen using the following criteria: the tree has to contain all of the capacitors in the system, no resistors or external impedances, no current sources, no Josephson junctions and as few linear inductors as possible. The branches that do not belong to the tree are called chords.
Associated with the chosen tree there are the so-called sub-loop matrices F_XY, where the label X represents the tree elements (X = C for capacitors and X = K for the tree inductors) and Y the chord branches (Y = J for Josephson junctions, Y = L for linear inductors, Y = R for shunt resistors, Y = Z for external impedances and Y = B for bias current sources). The sub-loop matrices have entries -1, 0 or 1, and give information about the interconnections in the circuit, determining which tree branches X are present in which loop defined by the chords Y. The entries of F_XY are found as follows: if the corresponding tree element X_i does not belong to the loop defined by the corresponding chord Y_j, its entry F_XY^(i,j) is zero. If it belongs to the loop but has orientation opposite to Y_j, then F_XY^(i,j) = +1. Finally, if it belongs and has the same orientation, F_XY^(i,j) = -1. These steps give an algorithm to encode the topology of the system in a matrix representation. In addition, the formalism assumes that all capacitors should be considered to be in parallel with a Josephson junction, even if it is one with zero critical current.
The physics of the circuit is introduced by imposing the Kirchhoff laws at each node of the network and defining the electrical characteristics of each branch type (equations (A.1)-(A.5)). Here, the diagonal matrix I_c contains the critical currents of the junctions, and sin ϕ is the vector (sin ϕ_1, sin ϕ_2, ..., sin ϕ_{N_J}). The phase ϕ_i represents the superconducting phase difference across junction i. The linear capacitors are described by equation (A.2), with their capacitance values given by the entries of the diagonal matrix C. The junction resistors are assumed to follow Ohm's law, equation (A.3), where R is the real and diagonal shunt-resistance matrix. The impedances are described by equation (A.4), which relates the Fourier transforms of the current and voltage; Z(ω) is the diagonal impedance matrix. Since linear inductors can be tree branches as well as chords, we have to distinguish them to apply the network graph theory. Therefore, the inductance matrix must be organized in the block form shown in equation (A.5), where L is the inductance matrix of the tree inductors, L_K is that of the chord inductors, and L_{LK} represents the mutual inductances between the tree and chord inductors. BKD arrive at the system Hamiltonian of equation (66) in [16], in which the externally applied fluxes and bias currents appear as parameters. Thus, BKD map the circuit dynamics onto that of a massive particle in a potential, whose masses and degrees of freedom are associated with the system capacitances. The quantization of the system is introduced by imposing the canonical commutation relation for the charge and phase variables, [(Φ_0/2π) ϕ_i, Q_{C;j}] = iħ δ_{ij}.
A.2. IBM qubit network graph
The L matrices are denoted: In addition, the capacitances and critical currents of the junctions are taken to be C = 10 fF and I c = 1.3 µA.
A.3. Bare qubit Hamiltonian
Our aim in this section is to present the steps used to derive the approximate Hamiltonian for the bare qubit. In order to reach the appropriate conditions to perform a first-order perturbation theory to treat the coupling between the qubit and the bias flux line, we firstly have to identify the 'slow' and 'fast' coordinates of the system. The slow coordinate is that which joins the two minima when the system presents a double-well structure, or that which has the slowest curvature in the single-well regime. The fast coordinates are those in which the potential rises very steeply, such that the system dynamics is frozen into the ground state along these directions.
The procedure to identify the slow and fast degrees of freedom starts from the exact system potential equation (2) and a unitary transformation R that diagonalizes M 0 (M 0 is a real and symmetric matrix [16]). Using R we can decouple the quadratic form of the potential by transforming the system coordinate into the new coordinates n ≡ Rϕ. Thus, from equation (2), we obtain (ignoring the term due to external sources of current) where we have defined the diagonal matrix λ ≡ RM 0 R T , whose entries are the exact eigenvalues Because of L 1 −2M 15 2(L 3 +M 35 −M 15 ) ≈ 0.024 and the fact that we expect to work with magnetic fluxes of the order of only a few 0 s, we have that, to good approximation, the system potential is only a function of the difference of the magnetic flux in the two large loops, defined as . This property is a direct manifestation of fact that the designed bare qubit has a 'gradiometer' structure. Even though this correction is expected to be very small, we shall consider it during the whole procedure we are going to describe here. The new coordinate system allows us to easily identify, by inspection of each term of equation (A.13), the following symmetry in the potential U n,˜ c , , p = U (n 1 , n 2 , n 3 , c , ) = U (−n 1 , −n 2 , n 3 , c , − ) .
(A. 16) This shows that the system potential presents a definite symmetry in the plane determined by the directions {n 1 , n 2 }, when the system goes from + to − (if we simultaneously adjust the control flux˜ c to maintain the new flux coordinate c , equation (A.15), constant). In addition, equation (A.16) also permits us to determine the conditions for which the system presents the same physical potential U (n 1 , n 2 , n 3 , c , ) = U n 1 , n 2 , n 3 , c , + γ , where γ is an irrelevant constant. In fact, we find that whenever the bias flux difference = − is an integer multiple of the flux quantum, let us say = (k 1 − k 2 − 2k 3 ) 0 (each k i is any integer), equation However, since λ 2 /λ 3 ≈ 0.059 and L −1 j;i /λ 3 ≈ 0.061 (i.e. the potential is much steeper in the direction s 3 than in the others), we have the global minimum of the potential occurring at the position s 3 ≈ 0. As a result, the bare qubit low-level dynamics can be considered frozen at the plane s 3 ≈ 0.
Another important result derives from the fact that the low-level system dynamics is governed only by the degrees of freedom s_1 and s_2: the system potential has a symmetry line (S line) at zero bias flux in the flux space. Indeed, as one can see, when the bias flux is zero, the potential U(s_1, s_2, Φ_c) has perfect symmetry around the origin in the {s_1, s_2} plane. Because of that, the low-level system wavefunctions have definite parity at zero bias flux. In addition, because the same physical potential is found when the bias flux is changed by an integer multiple of Φ_0, an S line occurs whenever the bias flux is an integer multiple of Φ_0.
Unfortunately, because λ_2 ∼ L^-1_{J;i}, it is not as simple as it was for the direction s_3 to determine the soft and fast directions in the plane defined by s_3 ≈ 0. However, from the symmetry of the S line one can see that, if the potential has a minimum at s_1^min ≠ 0 and s_2^min ≠ 0, the potential presents a symmetric double-well structure (with the maximum at s_1 = s_2 = 0) in the direction connecting the minimum points. This direction is expected to be the slow coordinate of the system, while the perpendicular direction should determine the fast degree of freedom. Consequently, by finding the positions of the minima in the plane {s_1, s_2}, one can determine those directions. Following this procedure, we perform one more rotation, q_1 = s_1 cos θ - s_2 sin θ, q_2 = s_1 sin θ + s_2 cos θ and q_3 = s_3, such that the direction q_1 connects the minima. When the potential does not have a double-well structure, we define this last rotation so that the potential curvature in the direction q_1 is smallest.
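The rotation angle θ used above can also be found numerically by diagonalizing the Hessian of the potential at the relevant point and taking the smallest-curvature eigenvector as the slow direction q_1. The sketch below does this for a made-up two-dimensional double-well potential, which only stands in for the actual circuit potential U(s_1, s_2).

```python
import numpy as np

# Made-up two-dimensional potential with one soft (double-well) direction and
# one steep direction, standing in for U(s1, s2) at fixed control flux.
ALPHA = 0.3   # the rotation angle hidden in the toy potential

def U(s):
    s1, s2 = s
    soft = s1 * np.cos(ALPHA) + s2 * np.sin(ALPHA)    # slow combination
    hard = -s1 * np.sin(ALPHA) + s2 * np.cos(ALPHA)   # fast combination
    return (soft**2 - 0.2)**2 + 25.0 * hard**2        # double well + steep wall

def hessian(f, x0, h=1e-4):
    """Finite-difference Hessian of a scalar function of n variables."""
    n = len(x0)
    Hm = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            ei, ej = np.eye(n)[i] * h, np.eye(n)[j] * h
            Hm[i, j] = (f(x0 + ei + ej) - f(x0 + ei - ej)
                        - f(x0 - ei + ej) + f(x0 - ei - ej)) / (4 * h * h)
    return Hm

Hm = hessian(U, np.array([0.0, 0.0]))
evals, evecs = np.linalg.eigh(Hm)          # eigenvalues in ascending order
q1_direction = evecs[:, 0]                 # smallest-curvature (slow) direction
theta = np.arctan2(q1_direction[1], q1_direction[0])
print(f"recovered rotation angle: {theta % np.pi:.3f} rad (expected {ALPHA})")
print(f"curvatures along q1, q2: {evals}")
```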
Thus, after appropriate transformations, we end with phase and flux coordinate systems in which the symmetries presented by the system are much more clearly stated. In addition, the slow, q 1 , and fast, {q 2 , q 3 }, degrees of freedom of the system are also identified. As already mentioned, because the potential is very steep in the fast directions, the low-level system dynamics is frozen into the ground state along these directions. Following the Born-Oppenheimer approach developed in [13], these directions can be traced out and their effects incorporated as small corrections to the remaining slow-coordinate potential energy.
At last, following all the steps described above, we are now in a position to construct, term by term, the Hamiltonian of equation (5): • No bias flux line (zero bias flux): As the system potential has a perfect symmetry around the origin, the bare-qubit wavefunctions also have a parity symmetry. Thus, the ground and first excited states are expected to be symmetric and antisymmetric with respect to reflection around the origin. As described in the main text, the system potential has a double-well structure for small values of Φ_c. There, the bare-qubit eigenstates can be understood as symmetric and antisymmetric equal superpositions of the classical orbital states of the left and right wells. For the single-well regime, Φ_c ≳ 1.5 Φ_0, the bare-qubit states are associated with those of a harmonic oscillator. Thus, in this representation, the Hamiltonian of the bare qubit without the bias flux line can be written as -(1/2) Δ(Φ_c) σ_x, where Δ(Φ_c) (shown in figure 3) is the energy splitting between the ground and first excited states.
• The bias flux term: As previously discussed, the bias flux is responsible for breaking the perfect potential symmetry observed at the S line. However, as stated in equation (A.16), the system presents another important symmetry in going from one side of the S line to the other (i.e. passing from negative to positive bias flux): the system potential at negative bias flux is identical to that at positive bias flux under a mirror reflection (a π-rotation for the full multi-dimensional potential) at the origin. In the double-well regime this corresponds to the interchange of the left and right states under the mentioned transformation. The bias term of the Hamiltonian of equation (5) is obtained by applying first-order perturbation theory to the coupling between the bare qubit and the bias flux line. Indeed, because the potential term representing their coupling, U_q ≈ q_1 π √(2/3) λ_2 sin θ (proportional to the applied bias flux), is very small for the typical bias flux values used in this work (of the order of mΦ_0) compared with the other potential terms, the first-order approximation can be made in a very controlled way. Because of the symmetry of the bare-qubit eigenstates, the interaction matrix elements ⟨ψ_i|U_q|ψ_j⟩ (where the states run over the symmetric (S) and antisymmetric (A) bare-qubit eigenstates) are nonzero only when they connect the ground and excited states. Thus, the bias term takes the σ_z form appearing in equation (5), with a coefficient proportional to the applied bias flux.
A.4. Bare qubit-transmission line coupling
Another structure present in the IBM qubit is a high-quality superconducting transmission line (figure 1). We model the fundamental mode of the transmission line as a simple LC circuit of definite frequency ω_T coupled to the bare qubit. We assume that its characteristic impedance and inductance are given by Z_0 = √(L_T/C_T) = 110 Ω and L_T = 5.6 nH, respectively, and that the bare qubit-transmission line mutual inductance equals M_qT = 200 pH. In order to find the slow coordinate directions, the pertinent terms of the interaction potential are written out in equation (A.23). As one can see, the linear terms in equation (A.23) are due to the 'frozen' values of the fastest degrees of freedom of each qubit. Moreover, the linear term in the slow coordinate of qubit i arises from the frozen coordinate of qubit j and vice versa. The interaction matrix elements due to these terms are exactly those of the one-qubit bias term, leading to a Hamiltonian contribution of the same bias-like form. Finally, the term q_1^A q_1^B of equation (A.23) only connects states of different parity in both qubits; this term is responsible for the qubit-qubit coupling form of equation (14). | 18,200 | 2007-09-10T00:00:00.000 | [
"Physics"
] |
First Report of CTNS Mutations in a Chinese Family with Infantile Cystinosis
Infantile cystinosis (IC) is a rare autosomal recessive disorder characterized by a defect in the lysosomal-membrane transport protein cystinosin. It serves as a prototype for lysosomal transport disorders. To date, several CTNS mutations have been identified as the cause of this prototypic disease across different ethnic populations worldwide. In Asia, however, CTNS mutations are very rarely reported, and for the Chinese population no literature on CTNS mutation screening for IC has been available to date. In this paper, using whole exome sequencing and Sanger sequencing, we identified two novel CTNS splicing deletions in a Chinese IC family, one at the donor site of exon 6 of CTNS (IVS6+1, del G) and the other at the acceptor site of exon 8 (IVS8-1, del GT). These data provide information for the genetic counseling of IC in the Chinese population.
Introduction
Cystinosis is a rare autosomal recessive disorder characterized by the intralysosomal accumulation of the disulphide amino acid cystine, which is the consequence of a defect in the membrane transport protein, cystinosin [1,2].
Cystinosin is encoded by the CTNS gene, which consists of 12 exons, is located on chromosome 17p13.3, and spans 23 kb of genomic DNA [1,3].
Cystinosis serves as the prototype of inborn errors for a small group of lysosomal transport disorders, as it was the first lysosomal storage disease to be described, by the distinguished European pediatrician Guido Fanconi in the early 1930s [3].
Three subtypes of cystinosis have been described according to the age of onset and the severity of the clinical symptoms: infantile cystinosis (IC, the classical and the severest form, OMIM #219800), juvenile cystinosis (the intermediate form, OMIM #219900), and ocular nonnephropathic cystinosis (OMIM #219750) [3].
Since the CTNS gene was cloned as the cause of cystinosis [1], a great number of CTNS mutations spreading throughout the entire gene, including small insertion, deletion, duplication, point mutation, splice-site mutations, promoter mutations, and genomic rearrangements, have been reported, mostly in European-and American-based subpopulations [3][4][5].
Specifically, in the Chinese population, the world's most populous, only one Taiwanese family, with two sisters affected by intermediate cystinosis, has been reported; the disease was the consequence of a homozygous missense mutation (N323K) of the CTNS gene [7,10].
Here we describe, for the first time, a mainland Chinese family with two brothers affected by IC diagnosed by exome sequencing. Both novel CTNS mutations were deletions at splice sites of CTNS (IVS6+1, del G and IVS8-1, del GT).
Materials and Methods
A Chinese Han family (from Hunan Province, Central China) with two males affected by renal Fanconi syndrome (Figure 1(a)) and 80 unrelated ethnically matched healthy controls (41 male and 39 female) were recruited in this study. All adult individuals and the parents of the minors who participated in this study gave written informed consent, which was approved by the Ethics Committee of the Hunan Children's Hospital, Changsha City, China. The procedures of the Committee conformed to the principles of the declaration of Helsinki, 2008 edition.
For exome capture, genomic DNA was extracted from peripheral blood (4 mL in heparin sodium tubes) using the standard phenol/trichloromethane protocol. A total of 3 micrograms (μg) of genomic DNA (from one patient, II:1, Figure 1(a)) was sheared by sonication and hybridized to the NimbleGen SeqCap EZ Library for enrichment, according to the manufacturer's protocol. The library enriched for target regions was sequenced on the HiSeq 2000 platform to obtain paired-end reads with a read length of 90 bp [11]. A mean exome coverage of 73.66x was obtained, providing sufficient depth to accurately call variants at 99% of each targeted exome. For read mapping and variant analysis, the human reference (genome version hg19, build 37.1) was obtained from the UCSC database (http://genome.ucsc.edu/). Sequence alignment was performed using the program SOAPaligner. SNPs were called using SOAPsnp with the default parameters after the duplicated reads (produced mainly in the PCR step) were ignored [12]. Short insertions or deletions (indels) affecting coding sequences or splice sites were identified. The thresholds for calling SNPs and short indels were that the number of uniquely mapped reads supporting a SNP had to be ≥4 and the consensus quality score had to be ≥20 (the quality score is a Phred score, generated by the program SOAPsnp1.03; a quality score of 20 represents 99% accuracy of a base call). The variants obtained for the patient were filtered against dbSNP137, the 1000 Genomes Project, the HapMap Project, the Exome Sequencing Project (ESP) and our in-house databases, removing those with a frequency >0.005.
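A minimal sketch of the depth/quality/frequency filter described above is given below. The record layout, field names and example variants are hypothetical and do not reproduce the actual SOAPsnp output or the coordinates of the mutations reported here.

```python
# Minimal sketch of the variant filter described above: keep calls supported by
# at least 4 uniquely mapped reads with consensus quality >= 20 (Phred), and
# drop anything seen at a frequency > 0.005 in the reference databases.  The
# record fields and the example variants are hypothetical.

def passes_filters(variant, population_freq, min_reads=4, min_qual=20, max_freq=0.005):
    if variant["unique_reads"] < min_reads:
        return False
    if variant["consensus_quality"] < min_qual:
        return False
    # population_freq maps (chrom, pos, alt) to the highest frequency seen in
    # dbSNP / 1000 Genomes / HapMap / ESP / in-house databases.
    freq = population_freq.get((variant["chrom"], variant["pos"], variant["alt"]), 0.0)
    return freq <= max_freq

variants = [
    {"chrom": "17", "pos": 1001, "alt": "delG", "unique_reads": 12, "consensus_quality": 35},
    {"chrom": "17", "pos": 2002, "alt": "A",    "unique_reads": 3,  "consensus_quality": 40},
    {"chrom": "17", "pos": 3003, "alt": "T",    "unique_reads": 20, "consensus_quality": 45},
]
known_freqs = {("17", 3003, "T"): 0.12}   # a common polymorphism, filtered out

kept = [v for v in variants if passes_filters(v, known_freqs)]
print(f"{len(kept)} of {len(variants)} candidate variants retained")
```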
Due to the absence of consanguineous marriage identified in the family and two family members exhibiting a similar phenotype of renal Fanconi syndrome, we firstly focused on the candidate-genes of compound-heterozygous mutations. Sanger sequencing was employed to validate the identified potential disease-causing variants with ABI3500 Genetic Analyzer (Applied Biosystems, Foster City, CA, USA) [13]. The primers and PCR conditions were available on request.
Clinical Data.
The propositus (II:1, Figure 1(a)) was the first child of nonconsanguineous parents. He weighed 3050 g and measured 47 cm in length at birth, after a full-term pregnancy and normal vaginal delivery. The pregnancy was uncomplicated (no exposure to alcohol, tobacco, or drugs), but the mother was affected by intrahepatic cholestasis.
The initial symptom of the propositus was persistent vomiting at the age of 7 months. At that time, his gastrointestinal B-ultrasound, upper gastrointestinal barium meal, gastrointestinal endoscopy, brain MRI, and EEG examinations were all within the normal range. His chromosome G-band analysis (550-band level) gave a result of 46, XY.
At the age of 1 year and 7 months, polyuria appeared (he needed to drink 2000-2500 mL of water daily). His hemoglobin was 87 g/L (normal: 110-180 g/L); his blood amino acid analysis showed nutritional disorders, secondary carnitine deficiency, and ketosis. Subsequent initiation of a lactose-free diet, supplemented with carnitine, vitamin B1, and vitamin B12, alleviated his vomiting, but his polydipsia and polyuria remained unchanged. At the age of 3 years and 6 months, he was diagnosed with renal Fanconi syndrome, severe metabolic acidosis, moderate anemia, renal rickets, hypokalemia, and generalized aminoaciduria.
At the age of 7 years and 6 months, he was diagnosed to be affected by IC by exome sequencing. His height was 100.5 cm (<5 centile, normal: 120.7 cm); he showed severe Corneal crystals (Figure 1(b)) and severe renal rickets (Figure 1(c)).
The patient II:2 (Figure 1(a)), the younger brother of the propositus II:1, was 3 years and 4 months old. His initial symptom was glycosuria at the age of 7 months. Urinalysis revealed glucose +++ and urine protein ++. The blood chemistry results were pH 6.0 (normal: 7.35-7.45), potassium 2.7 mmol/L (normal: 3.5-5.5 mmol/L), and bicarbonate 10.74 mmol/L (normal: 20-29 mmol/L). From the age of 1 year and 5 months, he exhibited polydipsia and polyuria (water intake 3000 mL and urine output 3300 mL daily). He began to suffer from constipation and his appetite was very poor. Anthropometric evaluation revealed a short stature (height of 81 cm, normal: 92.5 cm), but his intelligence was in the normal range. He was diagnosed with renal Fanconi syndrome, metabolic acidosis, hypokalemia, iron deficiency anemia, secondary carnitine deficiency, and vitamin D-dependent rickets at the age of 9 months.
Exome Sequencing and Sanger-Sequencing.
We performed exome sequencing of one patient (II:1, Figure 1(a)) in the Chinese family with the symptoms of renal Fanconi syndrome. We generated 6.5 billion bases of 90 bp pairedend read sequence for the patient. About 6.2 billion (96.7%) of the bases passed the quality assessment, 5.5 billion (89.2%) aligned to the human reference sequence, and 3.0 billion bases (54.2%) mapped to the targeted bases with a mean coverage of 64.1-fold. We excluded known variants identified in 1,000 genome project, HapMap, dbSNP132, or YH1 [11].
In the family, due to the nonconsanguineous marriage and two brothers being affected by the same severe syndrome, we mainly focused on the candidate-gene that must meet the following three criteria: (1) the candidate must contain at least two nonhomozygous variants (for the criteria, we obtained only 17 candidate genes, Table 1); (2) the candidate genes must contain one truncated or splice-site mutated allele (for the criteria, we obtained only 2 candidate genes, CTNS and MYH15, Table 1); (3) the candidate variants must be confirmed by Sanger sequencing and must be cosegregated with the renal Fanconi symptoms in the family; for this criteria, we discovered that only two splice-site deletions of CTNS were cosegregated with the syndrome in the family with the autosomal recessive mode (Figure 1(a)), while the MYH15 variants shared no cosegregation in the family as none of the MYH15-variants was detected in the patient II:2.
Further Sanger sequencing of the CTNS gene was performed on 80 ethnically matched healthy controls and revealed that none of the controls carried the CTNS mutations.
Both CTNS mutations detected in this family were splice-site deletions, one at the donor site of exon 6 of CTNS (IVS6+1, del G) (Figure 2(a)) and the other at the acceptor site of exon 8 (IVS8-1, del GT); the latter deletion strikes twice, removing both the acceptor-site base (IVS8-1, del G) and the adjacent exonic base (c.462delT) (Figure 2(b)).
Discussion
The majority of diagnosed cystinosis cases were reported in the European and American population with an incidence of one in 100,000-200,000 live births [3].
About 76% of cystinosis patients in European and American populations carry the common 57 kb multi-exon deletion, while in Brittany, France, which has an especially high incidence of cystinosis, founder effects of the 898-900+24 del27 and W183X mutations have been documented [14].
Interestingly, in Asia population, very few cystinosis patients and CTNS mutations have been reported. To date, only one case in Japan [8], six cases (four families) in Thailand [6], two cases in India [15] (Tang et al., 2009), one family (two affected sisters with intermediate cystinosis) in Taiwan (which represents Chinese population) [7], and several cases in Iran [9] have been reported.
Since the availability of cystine-depleting medical therapy and the introduction of kidney transplants, the previously fatal disease was recently transformed into a treatable disorder [9,16].
A recent study has pointed out that a huge gap in outcomes for nephropathic cystinosis patients exists between developed and developing countries because of limited access to cysteamine [17]. The authors noted that the main obstacle to IC therapy in China was that IC patients were not being diagnosed. Although China is the most populous developing country in the world, to our knowledge, except for one family in Taiwan with intermediate cystinosis (two affected sisters carrying the homozygous CTNS mutation N323K) [7], no IC patient had been reported in this country.
In this study, because of the limited information available on CTNS genotype-phenotype correlations in the Chinese population, we first carried out mutation screening of SLC34A3 [18] and SLC34A1 [19] (which independently cause recessive renal Fanconi syndrome) in the family, with negative results (data not shown). Despite these negative results, we carried out whole exome sequencing in the family. By exome sequencing, we successfully diagnosed infantile cystinosis in the family and identified two novel CTNS splice-site deletions. This is the first report of IC in the Chinese population and the first time CTNS splice-site deletions have been detected in it. These data provide information for the molecular diagnosis of cystinosis in the Chinese population and give clues to the distribution and classification of CTNS mutations worldwide. | 2,700.6 | 2015-03-17T00:00:00.000 | [
"Biology"
] |
Narrow and Broad Beam Attenuation of Diagnostic X Ray Beam Across Lead Sample Found in Ebonyi State, South East Nigeria
Nigeria has over ten million tons of lead, predominantly situated in the Abakaliki field in the lower Benue trough. The authors decided to embark on a study to document its shielding strength for X ray beams in the diagnostic energy range. Measurements of narrow and broad beam attenuation were made across different cut-out thicknesses. Our results show that the linear attenuation coefficient is lower when compared to the value in the NIST table: by 42% for N100 KeV and 18% for N120 KeV. The authors believe this might be due to impurities, whose added weight undermines the shielding capacity, or to the X ray beam factors. However, the material could still give adequate shielding performance with a minimum thickness of 4 mm according to the broad beam attenuation measurements. Received date: 19/08/2018 Accepted date: 10/09/2018 Published date: 20/09/2018
INTRODUCTION
Typical materials for shielding walls, floors and ceilings are lead, concrete and barite [1]. Lead is widely distributed all over the world in the form of its sulphide (PbS), known as galena. It ranks 36th in natural abundance among elements in the earth's crust. Nigeria's most important lead-zinc deposit is the Abakaliki field, which is made up of four lodes: Ishiagu, Enyigba, Ameki and Ameri in the lower Benue trough [2]. A large deposit of over 20 million tons of complex sulphide minerals is available in Nigeria, especially in Ishiagu, Ebonyi state [3]. It is estimated that Nigeria has over ten million tons of lead ore deposits, predominantly found in Zamfara, Niger and Ebonyi states [4]. Lead has several properties that make it advantageous to use, alongside its commonness: high density, low melting point, ductility, and relative inertness against oxygen attack. Lead minerals are easy to mine and are easier to extract from their ores than many other metals.
In Nigeria, there have been many challenges in complying with radiation standards in diagnostic health centers. Imported lead shields and ordinary thick concrete seams are used to ameliorate the problem, but this takes its toll on the economy. Locally mined lead, which could be used for wall designs, is at varying stages of characterization. Olubambi [3] carried out a mineralogical characterization of the complex sulphide ore of Ishiagu, Ebonyi state, Nigeria; their study provided relevant mineralogical information with which processing of the ore could be readily achieved. Onyedika [5] made a detailed mineralogical and elemental characterization of this deposit using XRD, SEM-EDX and ICP-OES based mapping techniques; their results show that the most dominant and valuable metal is lead (Pb = 95.02% mass fraction). Egwuonwu [6] combined galena samples from Ishiagu, Ebonyi state with concrete, molded into different densities, to ascertain its attenuating capacity to EM radiation. Their results show that a typical Ishiagu galena concrete of about 2.80 g/cm3 has the capacity of shielding visible blue light with about 2.51 mm TVL and 0.81 mm HVL. Their result, however, did not identify whether the attenuating strength was due to the galena or the concrete. The works of Doyema [7] and Geoffrey [8] reported on the severe hazards of lead deposits in Northern Nigeria due to illegal mining. Simba [9] made a case for environmental remediation to address childhood lead poisoning in Northern Nigeria.
No literature has provided the attenuating strength of this deposit in a relatively purified form in the diagnostic energy range, and this is the gap we intend to fill. We took a survey of all the radiotherapy centers across the federation and found that the lead plaster used for their wall designs was all imported from South Africa. In the wall designs of some of the diagnostic centers where our locally mined lead was used, its shielding strength was based on mere assumptions. The use of kilovoltage X-ray beams is increasing for radiation therapy in some countries and forms an integral part of radiation treatment due to its low cost, relatively trouble-free operation and superior outcomes [10,11]. Currently, the country intends to build more radiotherapy centers across the federation. Available broad beam data are found to be unsatisfactory, and according to Hoff [12], adequate characterization of a shielding material is important for supporting efficient radiation shielding design processes. The authors believe that the results of this study will give basic information on the radiation shielding capacity of the lead sample deposit found in Ebonyi state. Radiation shielding and protection barrier calculations for members of the public and radiation workers are directly related to the accuracy of published and available data [13].
The site comprises sand, lenses of sandstone and limestone, and shales with fine-grained micaceous sandstones and mudstones that are Albian in age. Galena, which often occurs as fine aggregates, and sphalerite are the dominant constituents of the veins [6,14]. The distribution of lead, zinc, and other heavy elements in Ishiagu is due to their occurrence in veins and veinlets. Galena occurs in mineralized veins, mine dumps, folded shales, the Asu River shales and the minor basic intrusives [6,15].
MATERIALS AND METHODS
Samples of lead were collected from the mining site at Ishiagu, Ebonyi state. The samples were purified via a blast furnace process before being molded into slabs for onward X ray beam transmission under the narrow and broad beam geometries.
Narrow and Broad Beam Attenuation
These measurements were made with a Comet MXR-320 X-ray tube installed at the Institute of Radiation Protection, University of Ibadan. The set-up sketch is shown in Figure 2. The Medium Exposure Standards (MEES) was used to determine the relative attenuation of a variety of thicknesses of attenuating material in the filter position. Attenuation is defined by the ratio of intensity with and without attenuating material, I/I_0 = exp(-µx), where I and I_0 are the intensities with and without an attenuating material of thickness x in the beam, and µ is the linear attenuation coefficient. Details of the beam qualities and half-value layers are shown in Table 1, while the effective transmission of the radiation qualities is shown in Table 2. In our experimental set-up, µ is determined from the narrow beam geometry. The X ray beam from the source was collimated by a cylindrical lead block with a central hole of diameter 1 cm. The transmitted beam was also collimated using a similar lead cylinder. We considered both narrow and broad beam attenuation measurements because, according to Das [10], narrow beam data should not be used for organ shielding. BBA data are desired for clinical use, though they depend on the measuring conditions. The lead sample was checked for the presence of radioactivity and no contributory signal above background radiation was detected. Dosimetry codes do not provide any guidelines on clinical matters such as broad beam attenuation (BBA) data [10]. In an ideal broad beam geometry, every scattered or secondary uncharged particle/photon, in addition to the transmitted photons, strikes the detector, but only if generated in the attenuator by a primary particle on its way to the detector or by a secondary charged particle resulting from such a primary one [16]. When detected by our chamber at a field size of 10 cm × 10 cm, this approximates the real-life picture of the BBA strength of our material, the transmission being I(X,t,E)/I_0(X,0,E), where I(X,t,E) and I_0(X,0,E) are the intensity transmitted through the barrier thickness and the incident intensity, respectively. The beam energies used in this study were 100 KeV and 120 KeV. Purified lead attenuators of 10 × 10 cm2, with precise thicknesses of 2 mm to 6 mm, were fabricated using a fully automated lead sheet rolling mill (87113) and weighed. The detector in the set-up is a 1¾" × 2" NaI(Tl) detector coupled to a ten-stage photomultiplier tube in an integral assembly supplied by Broch Technologies. The output pulses from the PMT anode were fed to the main linear amplifier through a pre-amplifier. Spectra were recorded using a Nucleonix 4k MCA. For each attenuator thickness and energy, five measurements were made and averaged. In the narrow beam attenuation, the total counts in the various energy lines were determined and plotted on a semilogarithmic plot as a function of attenuator thickness. The slopes of the resulting straight lines were evaluated by linear fitting. For the BBA, exponential transmission curves were plotted to give a better picture of the attenuating strength of each attenuator.
Half Value Layer (HVL) and Tenth Value Layer (TVL)
Half Value Layer (HVL) is the thickness of our lead sample that will reduce the intensity of the beam to half of its initial value.
Tenth Value Layer (TVL) is analogous to HVL except that it is the thickness of our lead sample that will reduce the intensity of the beam to one tenth of its initial value. For each energy these are determined from the fitted linear attenuation coefficient as HVL = ln 2/µ and TVL = ln 10/µ. The TVL is often used in X ray room shielding design calculations [17].
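The narrow-beam analysis described above amounts to a straight-line fit of ln(I/I_0) against thickness. The sketch below illustrates this with made-up transmission data generated around µ = 40 cm^-1; it is not the measured data of this study, but with that µ it reproduces the HVL and TVL quoted in the results.

```python
import numpy as np

# Made-up narrow-beam transmission data (not the measured values of this study):
# thickness x in cm and relative transmitted intensity I/I0 = exp(-mu*x).
x = np.array([2.0, 3.0, 4.0, 5.0, 6.0]) / 10.0   # 2-6 mm expressed in cm
true_mu = 40.0                                    # cm^-1, used to generate toy data
rng = np.random.default_rng(1)
transmission = np.exp(-true_mu * x) * (1.0 + 0.02 * rng.standard_normal(x.size))

# Semilogarithmic fit: ln(I/I0) = -mu * x, so the slope of the line is -mu.
slope, intercept = np.polyfit(x, np.log(transmission), 1)
mu = -slope
hvl = np.log(2) / mu      # half value layer
tvl = np.log(10) / mu     # tenth value layer
print(f"mu  = {mu:.1f} cm^-1")
print(f"HVL = {hvl:.3f} cm, TVL = {tvl:.3f} cm")   # ~0.017 cm and ~0.058 cm
```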
Lead Thickness
The lead cut-out is placed on the skin surface to shape the field size. The thickness of the lead pieces is either taken from published data or from direct measurement [10]. In our own case, we made a direct measurement. This thickness could also be used in wall design for shielding purposes. These values of air kerma transmission were used for determining the linear attenuation coefficient (Figures 3 and 4) and the attenuating capacity of our lead sample under the BBA. The density of the sample is 12.5 g/cm3. The N100 KeV beam gave a linear attenuation coefficient of 40 cm-1 and the N120 KeV beam one of 39 cm-1. The mass attenuation coefficients given by the National Institute of Standards and Technology (NIST) are 5.549 cm2/g for N100 KeV and 3.8 cm2/g for N120 KeV [18]. Our sample gave values of 3.2 cm2/g and 3.12 cm2/g for N100 KeV and N120 KeV respectively, giving percentage deviations of 42% and 18% respectively (Tables 3 and 4). This deviation will perhaps require that our sample undergo further physical and chemical analysis. Alternatively, Tsalafoutas [19] and Rossi [20] are of the view that primary transmissions and HVLs from different authors do not generally agree well. As stated by Rossi [20], these differences could be due to generator waveforms, X-ray beam HVL, irradiation field size and the segment of the attenuation curve used to calculate the high-attenuation HVL. So, comparison of the attenuating properties of different materials from different authors could lead to misleading conclusions. The corresponding HVL and Tenth Value Layer (TVL) are 0.017 cm and 0.058 cm respectively (from eqns. (3) and (4)) for the N100 KeV beam, while those of the N120 KeV beam are 0.018 cm and 0.059 cm (Figures 5 and 6). The broad beam attenuation is used to determine the attenuating capacity of the different cut-out thicknesses. The sample thicknesses seem to have a similar capacity at 100 KeV and 120 KeV. Thicknesses of 2 and 3 mm were not adequate for ideal shielding, while thicknesses of 4 mm and above give better shielding performance.
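The percentage deviations quoted above follow directly from the measured linear attenuation coefficients, the sample density and the NIST reference values; the few lines below reproduce the arithmetic (all numbers are taken from the text).

```python
# Reproduce the mass attenuation coefficients and their deviation from the NIST
# reference values quoted above (density and coefficients are taken from the text).
density = 12.5                                            # g/cm^3, measured sample density
measured_mu = {"N100 KeV": 40.0, "N120 KeV": 39.0}        # cm^-1
nist_mass_atten = {"N100 KeV": 5.549, "N120 KeV": 3.8}    # cm^2/g

for quality, mu in measured_mu.items():
    mass_atten = mu / density                             # cm^2/g
    deviation = 100.0 * (nist_mass_atten[quality] - mass_atten) / nist_mass_atten[quality]
    print(f"{quality}: mu/rho = {mass_atten:.2f} cm^2/g, "
          f"deviation from NIST = {deviation:.0f}%")
# -> 3.20 cm^2/g (42% below NIST) and 3.12 cm^2/g (18% below NIST)
```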
CONCLUSION
The lead ore sample deposit in Ishiagu, Ebonyi state, South Eastern Nigeria has a weaker attenuating capacity when compared with the standard theoretical value, but it could still be used for radiation shielding in the diagnostic imaging range. It should undergo further physical and chemical characterization to account for the deviation of its linear attenuation coefficient values from the theoretical values given in the NIST table, or from experiments by other authors performed using the same X ray factors. The authors believe that certain impurities or other compounds might be present within the sample that tend to undermine its shielding strength. For both 100 KeV and 120 KeV, a thickness of 4 mm is adequate to provide the required shielding. Increasing the thickness further would only add unnecessary cost. This thickness could also be used for shielding purposes in the diagnostic imaging energy range, which is common to most diagnostic centers in Nigeria. The values of µ, TVL, and HVL in our study should serve as a database for the use of this sample.
Figure 2: Experimental set-up of our narrow and broad beam attenuation measurements. A collimator was present during the broad beam measurement.
Table 1: Radiation qualities used in the study: N for narrow beam and B for broad beam.
Table 4: Mass attenuation coefficient (cm2/g) in comparison with NIST values. | 2,839.2 | 2018-01-01T00:00:00.000 | [
"Physics"
] |
Retraction: A Brief Analysis of the Application of English Language in British and American Literature Based on Artificial Intelligence
Artificial intelligence (AI) is the research and development of new technologies to simulate and extend the application of human intelligence in machines. The ultimate goal of artificial intelligence is to produce robots that are indistinguishable from human beings yet obey only human instructions. To achieve this, the most important requirement is the ability to analyze and study the substantive operation of human intelligence. The most ideal way of language learning is to visit the language environment in person, feel the atmosphere of the language culture, and communicate in depth with native speakers. In reality, however, such an idealized state rarely exists, so literary works have become more practical language and cultural materials. Both British and American writers use English to communicate; although the two bodies of literature share the same language, cultural differences lead to different ways of expression. This paper first gives a simple analysis of artificial intelligence systems, then analyzes the differences between British and American literary works, and finally interprets and analyzes in detail the application of artificial intelligence systems to British and American literary works.
I. Introduction
With the continuous advancement of science and technology, information technology is constantly reforming and innovating teaching in the colleges and universities of our country. The emergence of artificial intelligence has also transformed the teaching model. Compared with traditional teaching, the artificial intelligence model can greatly improve the level of English teaching and stimulate students' interest in English learning.
Artificial Intelligence, abbreviated as AI, was a conceptual term first proposed in 1956. It is the study and development of new technologies to simulate and extend the application of human intelligence to machines. The ultimate goal of artificial intelligence is to produce robots that are indistinguishable from human beings and obey only human instructions. To achieve this technology, the most important thing is to be able to analyze and study the substantive operation of human intelligence. Up to now, the application of artificial intelligence has become quite mature. It can not only imitate some basic human skills but also analyze and deal with some basic problems like humans through recognition and analysis [1,2]. To put it bluntly, artificial intelligence is to make machines imitate all human behaviors, including the ability to act and to think logically (Figure 1: Artificial intelligence model). An artificial intelligence system can be based on core technologies such as voice recognition, voice evaluation and natural language processing, and then developed into an intelligent speech system and an intelligent writing-correction system. Through the establishment of an artificial intelligence system and the simulation analysis of data, artificial intelligence can read and understand like human beings and enter into deep creation.
Artificial intelligence system can be divided into several basic modules for research and analysis [3].First of all, artificial intelligence needs to act like human beings, that is to say, the intelligent robot created by artificial intelligence should be able to have the ability to act, which is the basic point of the development of artificial intelligence, but also the first stage of the development of artificial intelligence --let the intelligent robot can move freely.
After the intelligent robot has the ability to move, it is necessary to analyze the thinking activities of human beings, so as to develop new technology and apply it to the intelligent robot to make the intelligent robot have the wisdom.At this stage, no major breakthrough has been made so far. The intelligent robots produced at this stage have a fixed mode of intelligence, but they do not have the ability to understand and analyze.
In the traditional literary creation, people need to read and understand a large number of literary works and extend them, so that they can have the foundation of self-creation.Judging from the long history of literary creation, this process cannot be simplified or replaced [4].Therefore, how to apply artificial intelligence technology to traditional literary creation has become a big problem.
The characteristics of British and American literature in the traditional sense
The importance of British and American literature in China's current higher education system can be seen from its status as a required course for English majors. However, in the study of British and American literature, due to historical factors, its position and role have long been neglected. In the current education system, the requirement for English proficiency is simply to be able to write, read and speak; but in the study of literature, being able to read alone is not nearly enough to grasp the essence of the works. The most ideal way of language learning is to visit the language environment in person, feel the atmosphere of the language culture, and communicate in depth with native speakers [5]. However, in reality, such an idealized state rarely exists, so literary works have become more practical language and cultural materials. Literary works, as the most practical learning materials, play an important role in daily learning.
The educational function of literature has a positive influence on the ideological and moral education of learners.
The essence of literature is to convey a kind of self to the life and the feeling of the real world, while in Britain and the United States work, most of the work is based on education readers, promote western humanitarian is given priority to, shows in that era, people in western countries in order to express the lofty ideal of self and all sorts of problems and contradictions in the realistic society.
Understand the local conditions and customs of western countries.
Although most of the British and American literature works are based on the creation of a self-vent as the theme, we can find that in a large number of British and American literature works are mixed with the description of the local conditions and customs.In the real world, not every one of us is able to personally visit the local cultural environment, but through the comprehensive interpretation of British and American literature works, we can know the local cultural customs and social relations in a more comprehensive way.
Improve the communicative ability of English communicators
In the existing large number of British and American literature works, the highly distinctive literary level enables us to learn the expression with British and American characteristics in the process of reading through the whole text.In the traditional English learning, learners can only by the British and American literature works in Chinese mode, but in fact this way of interpretation completely against the original intention of the creator, because we do not understand the local culture and traditional learning mode makes our work in reading, to understand works for conventional thinking [6][7].The indepth interpretation of British and American literature can improve the way we communicate with local people in practical communication, especially in the aspect of oral expression, it can put us in the middle of it.
Differences between British and American literary works
In our daily life, we all use a Chinese model of English to communicate across cultures. Although the other party can understand what we want to express, we cannot communicate at a deeper level in idiomatic English. Even some of the English articles we usually read are translated and interpreted with a grammar suited to Chinese readers. In fact there are large differences within English itself: British and American English are both used for communication, but cultural differences shape their modes of expression, and because the cultures of the two countries differ, the same word can be understood differently in the same kind of context [8,9]. This mode of expression formed by cultural differences is reflected not only in British and American literature but also in daily life. Among British literary works, one local characteristic is the prominence of female writers and female-centered works, whose natural feminine sensibility inclines the overall style of British literature toward realism. American literary works are more strongly coloured by local culture, mainly taking traditional American culture as their theme [8][9]; they lean toward the American "tough guy" style and mainly display the American spirit. Therefore, when interpreting British and American literary works, we must distinguish these differences so as to bring ourselves into the local culture with its cultural background and thus reach a deep understanding of the works (Figure 2).
The significance of artificial intelligence in British and American literature
The interpretation of British and American literary works must be grounded in their distinctive local culture, which most readers cannot grasp. The reason lies in the huge difference between traditional Chinese culture and British and American culture: readers cannot step outside their own traditional culture and immerse themselves in the local one, so they cannot deeply interpret the deeper meaning of British and American literary works.
The maturing of artificial intelligence could solve this problem to a large extent. An artificial intelligence system can collect a large number of cultural resources into its own knowledge base, forming a literature system with the local culture as its background. When interpreting British and American literature, it can carry out a detailed analysis on the basis of this system, extract the deeper meaning expressed by the works, and then present this meaning to readers in the frame of the local culture [10]; a simplified sketch of this idea is given below. This way of interpreting British and American literature not only solves the problem that ordinary readers have no opportunity to immerse themselves in the local culture, but also interprets literary works in depth and lets readers read and understand them in the form of the local culture. This mode greatly improves the efficiency and quality of readers' interpretation of the works and also makes the exchange between the two cultures more in-depth.
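The following is only a minimal illustrative sketch of the retrieval step described above, not the system used in any cited work. It assumes a hypothetical knowledge base of cultural-background notes (`background_notes`) and a hypothetical helper `most_relevant_note`, and matches a literary excerpt to the closest note with a simple bag-of-words cosine similarity; a real system would rely on a far larger knowledge base and stronger language models.

```python
# Illustrative sketch only: match a literary excerpt to the closest
# cultural-background note by bag-of-words cosine similarity.
import math
import re
from collections import Counter

def bag_of_words(text: str) -> Counter:
    """Lowercase the text and count word occurrences."""
    return Counter(re.findall(r"[a-z']+", text.lower()))

def cosine_similarity(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Hypothetical knowledge base of cultural-background notes.
background_notes = [
    "Victorian Britain: social class, manners and the marriage market shape everyday life.",
    "The American frontier: self-reliance, rugged individualism and the tough guy spirit.",
    "Puritan New England: religious duty, guilt and public reputation dominate the community.",
]

def most_relevant_note(excerpt: str) -> str:
    """Return the background note most similar to the excerpt."""
    excerpt_vec = bag_of_words(excerpt)
    return max(background_notes, key=lambda note: cosine_similarity(excerpt_vec, bag_of_words(note)))

if __name__ == "__main__":
    excerpt = "The old man rode alone across the frontier, trusting nothing but his own two hands."
    print(most_relevant_note(excerpt))  # prints the frontier note
```

In a fuller system, the retrieved note would then condition a generation or annotation step that restates the excerpt's deeper meaning for the reader against that cultural background; the sketch stops at retrieval because that is the part the paragraph describes concretely.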
Conclusion
The cultural difference embodied in British and American literary works is the reason ordinary readers cannot deeply understand them, and it is unrealistic to expect readers first to gain a deep understanding of the local culture and only then to interpret British and American literary works against that background. The application of an artificial intelligence system provides great help in this regard: it is not only a system with local characteristics but also a tool that can express cultural differences in a locally traditional way. Applying an artificial intelligence system to British and American literature greatly improves the exchange between British, American and Chinese culture; it lets ordinary readers understand the local culture more deeply and obtain information about an era's local cultural background from its literary works, and it also makes it easier for China's traditional culture to travel abroad. Taking the English expressions in British and American literary works as its basis, this paper has interpreted and analyzed the expression of British and American culture in literary works, with cultural characteristics as the background and artificial intelligence as the medium.
"Linguistics",
"Computer Science"
] |